
Comments (8)

WuNein commented on August 9, 2024

By the way, with your modification, does the original l2_loss just disappear? Is the final loss = loss + orthogonal_loss * lamda_1?

Just add it back yourself; the two don't conflict... I was simply too lazy to write it out.

Do you compute it directly over matched_modules?

l2_loss = 0.
for name, param in matched_modules:
    l2_loss += torch.norm(param, p=2)

That's not right at all. The original code is:

# l2-normalization for loranew_A/B
l2_loss = 0.
for name, param in self.model.named_parameters():
    if "loranew_" in name:
        l2_loss += torch.norm(param, p=2)

What the original code regularizes is the newly added loranew_ parameters, so after simplifying the code the target becomes:

# l2-normalization for lora_A/B (the current task's adapters)
l2_loss = 0.
for name, param in self.model.named_parameters():
    if "lora_" in name:
        l2_loss += torch.norm(param, p=2)

Here lora_ is exactly what loranew_ used to be; the L2 regularization is of course applied to the current task's parameters.

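To spell out how the pieces fit together, below is a minimal sketch of the full regularized loss, closely following the loss construction discussed above; the helper name regularized_loss and its standalone packaging are mine, lamda_1 and lamda_2 are the weighting hyperparameters from the thread, and the parameter names (lora_A for the frozen past-task adapters, loranew_A/B for the current task) follow the original, un-simplified code:

import torch

def regularized_loss(task_loss, model, lamda_1, lamda_2):
    # Orthogonality between each frozen past-task lora_A and the
    # current task's loranew_A in the same module.
    orthogonal_loss = 0.
    for name, param in model.named_parameters():
        if "lora_A" in name:
            for name_, param_ in model.named_parameters():
                if ("loranew_A" in name_ and
                        name.split("lora_A")[0] == name_.split("loranew_A")[0]):
                    # sum of |A_old @ A_new^T|: zero exactly when every row of
                    # A_old is orthogonal to every row of A_new
                    orthogonal_loss += torch.abs(torch.mm(param, param_.T)).sum()
                    break

    # l2-normalization for loranew_A/B (current task only)
    l2_loss = 0.
    for name, param in model.named_parameters():
        if "loranew_" in name:
            l2_loss += torch.norm(param, p=2)

    return task_loss + orthogonal_loss * lamda_1 + l2_loss * lamda_2

In the simplified variant discussed above, the inner match would instead pair the stored previous-task lora_A matrices against the live lora_ parameters, but the structure of the loss stays the same.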

cmnfriend commented on August 9, 2024

Yes, that works! 👍


WuNein commented on August 9, 2024

Oh right, there's something I don't understand, so let me just ask :) I'm too lazy to dig back through your modified PEFT code (kidding).
Since the current LoRA is said to be updated in directions orthogonal to the previous LoRA, the current LoRA is presumably trained on top of a model into which the previous LoRA has already been merged, right? Did I understand that correctly?


DumoeDss commented on August 9, 2024

Oh right, there's something I don't understand, so let me just ask :) I'm too lazy to dig back through your modified PEFT code (kidding). Since the current LoRA is said to be updated in directions orthogonal to the previous LoRA, the current LoRA is presumably trained on top of a model into which the previous LoRA has already been merged, right? Did I understand that correctly?

The merge is performed after training finishes.
#5 (comment)


WuNein commented on August 9, 2024

Oh right, there's something I don't understand, so let me just ask :) I'm too lazy to dig back through your modified PEFT code (kidding). Since the current LoRA is said to be updated in directions orthogonal to the previous LoRA, the current LoRA is presumably trained on top of a model into which the previous LoRA has already been merged, right? Did I understand that correctly?

The merge is performed after training finishes. #5 (comment)

My remaining question is about how the LoRA for a new task is initialized. Since the merge only happens at the very end, I'll assume it is randomly initialized; after all, the loss in the code is there to keep the two lora_A matrices orthogonal.

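Since this merge-then-reinitialize step is the crux of the question, here is a minimal sketch for a single linear layer, under the assumption (consistent with PEFT's defaults) that the fresh adapter gets a Kaiming-uniform lora_A and a zero lora_B; merge_and_reinit and scaling are illustrative names, not the actual PEFT API:

import math
import torch
import torch.nn as nn

@torch.no_grad()
def merge_and_reinit(base: nn.Linear, lora_A: torch.Tensor,
                     lora_B: torch.Tensor, scaling: float):
    # Merge the finished task's adapter into the frozen weight:
    # W <- W + scaling * B @ A, with A: (r, in), B: (out, r).
    base.weight += scaling * (lora_B @ lora_A)

    # Re-initialize a fresh adapter for the next task. With B = 0 the
    # merged model's outputs are unchanged at step 0, while A starts
    # random, exactly as the questioner assumes.
    nn.init.kaiming_uniform_(lora_A, a=math.sqrt(5))
    nn.init.zeros_(lora_B)

Because lora_B starts at zero, the new task begins from exactly the merged model's behavior, and the orthogonality loss then pushes the randomly initialized lora_A away from the stored past-task subspaces during training.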

DumoeDss commented on August 9, 2024

By the way, with your modification, does the original l2_loss just disappear?
Is the final loss = loss + orthogonal_loss * lamda_1?


WuNein commented on August 9, 2024

By the way, with your modification, does the original l2_loss just disappear? Is the final loss = loss + orthogonal_loss * lamda_1?

Just add it back yourself; the two don't conflict... I was simply too lazy to write it out.


DumoeDss commented on August 9, 2024

By the way, with your modification, does the original l2_loss just disappear? Is the final loss = loss + orthogonal_loss * lamda_1?

Just add it back yourself; the two don't conflict... I was simply too lazy to write it out.

Do you compute it directly over matched_modules?

l2_loss = 0.
for name, param in matched_modules:
    l2_loss += torch.norm(param, p=2)

