
Comments (3)

yxli2123 avatar yxli2123 commented on July 27, 2024

Thanks for your interest in our work. Setting either full_matrices=True or full_matrices=False will not affect the results. For square matrices, they are the same. For non-square matrices, reduced SVD is also the same as full SVD but removes the singular vectors paired with zero singular values. In our work, we are actually using truncated SVD, which only keeps the first r singular vectors.
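To illustrate the distinction above with NumPy (this snippet is my own sketch, not code from the LoftQ repo): full and reduced SVD both reconstruct the matrix exactly, while truncated SVD keeps only the first r singular triplets.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))  # non-square weight matrix

# Full vs. reduced SVD: both reconstruct W exactly; the reduced form
# simply drops the singular vectors paired with zero singular values.
U_full, S_full, Vt_full = np.linalg.svd(W, full_matrices=True)
U_red, S_red, Vt_red = np.linalg.svd(W, full_matrices=False)
assert np.allclose(U_red @ np.diag(S_red) @ Vt_red, W)

# Truncated SVD: keep only the first r singular vectors,
# giving the best rank-r approximation of W.
r = 2
W_r = U_red[:, :r] @ np.diag(S_red[:r]) @ Vt_red[:r, :]
```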

from loftq.

MarsJacobs avatar MarsJacobs commented on July 27, 2024

Thank you for your kind response. I have some additional questions regarding the LoftQ algorithm.

I am struggling to intuitively understand how repeatedly performing quantization and SVD approximation leads to a progressively better initialization of the adapter weights.

If we rewrite LoftQ Algorithm 1 with an added error term (where $\epsilon$ is the error term), it looks as follows:

[Screenshot: LoftQ Algorithm 1 rewritten with added error terms]

As in Equation 3, when we approximate the difference between $W_{FP}$ and $Q_t + A_t B_t$ using SVD, let's call the SVD approximation error $\epsilon^{svd}_t$.

I have personally measured how this error term ($\epsilon^{svd}_t$) changes across layers with each iteration.
The results show that in all layers, this SVD error term decreases as the number of iteration steps increases.

In summary, as the number of LoftQ steps increases, the SVD approximation becomes more accurate, effectively minimizing the main objective stated in Eq. 6 of the paper. However, I am not entirely clear on why this error is minimized by repeating these two steps ((1) quantization, (2) SVD). Could you please explain this once more?

I conceptually understand how the initializations of the quantized weights and adapter weights are jointly optimized, but it is not clear to me why this process minimizes $W_{FP} - Q - AB^T$ analytically (maybe I am missing something).
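For reference, here is a rough NumPy sketch of how I read the two alternating steps (this is my own illustration: `fake_quantize` is a stand-in uniform rounder, not the actual low-bit quantizer used in the paper, and `loftq_init` is a hypothetical name):

```python
import numpy as np

def fake_quantize(W, n_levels=16):
    # Placeholder for the real low-bit quantizer: uniform rounding
    # of each entry to one of n_levels grid points over [min, max].
    lo, hi = W.min(), W.max()
    scale = (hi - lo) / (n_levels - 1)
    return lo + np.round((W - lo) / scale) * scale

def loftq_init(W, r=4, steps=5):
    # Alternating sketch of the two steps being discussed:
    #   (1) Q_t = quantize(W - A_{t-1} B_{t-1})
    #   (2) A_t, B_t = rank-r truncated SVD of (W - Q_t)
    M, N = W.shape
    A, B = np.zeros((M, r)), np.zeros((r, N))
    errs = []
    for _ in range(steps):
        Q = fake_quantize(W - A @ B)
        U, S, Vt = np.linalg.svd(W - Q, full_matrices=False)
        A = U[:, :r] * S[:r]   # absorb singular values into A
        B = Vt[:r, :]
        errs.append(np.linalg.norm(W - Q - A @ B))
    return Q, A, B, errs
```

Tracking `errs` per step is how I measured the error term above; as noted in the reply below this comment, there is no analytic guarantee it decreases monotonically.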
I would greatly appreciate additional clarification on this, as it would help me deeply understand the core idea of this excellent paper.


yxli2123 avatar yxli2123 commented on July 27, 2024

Hi @MarsJacobs, the error decreasing as the steps increase is not guaranteed. This algorithm is heuristic. For some models, such as some layers in DeBERTa-v3-base, the error fluctuates as the steps increase.

