
Comments (5)

koenhelwegen commented on July 4, 2024

Hi, great to hear you find larq & larq-zoo useful!

You are correct, the default implementation doesn't use scaling of the inputs. In our experiments, we found that scaling the inputs did not improve accuracy, so we decided to leave it out.
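
For reference, the unscaled binary convolution used by default looks roughly like this in larq (a sketch; the filter count and kernel size are placeholders, not the exact zoo configuration):

import larq as lq

# Plain binary conv: sign-quantized inputs and weights, no scaling factors.
conv = lq.layers.QuantConv2D(64, 3, padding="same",
                             input_quantizer="ste_sign",
                             kernel_quantizer="ste_sign",
                             kernel_constraint="weight_clip")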

As a side note, in our recent work on BNN optimization (paper, code), we found we could improve accuracy without using any scaling factors (neither for activations nor for weights).


koenhelwegen commented on July 4, 2024

The best way to implement this is probably to follow equation 11/figure 2 in the paper, i.e. something like:

import larq as lq
import tensorflow as tf

def xnor_conv(x, kernel_size=3):
    # A: per-pixel mean of |x| over channels; K: spatial scaling map (eq. 11).
    A = tf.reduce_mean(tf.abs(x), axis=[3], keepdims=True)
    k = tf.ones([kernel_size, kernel_size, 1, 1]) / kernel_size**2
    K = tf.nn.conv2d(A, k, strides=1, padding="SAME")
    # Binary convolution with sign-quantized inputs and weights, rescaled by K.
    x = lq.layers.QuantConv2D(32, kernel_size, padding="same",
                              input_quantizer="ste_sign",
                              kernel_quantizer="ste_sign",
                              kernel_constraint="weight_clip")(x)
    return x * K
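
As a quick sanity check (just a sketch; the input shape below is an arbitrary example), the block preserves the spatial dimensions and maps the input channels to the 32 filters hard-coded above:

x = tf.random.normal([4, 32, 32, 3])  # hypothetical batch of four 32x32 RGB images
y = xnor_conv(x)
print(y.shape)  # (4, 32, 32, 32): stride 1 and "same" padding keep H and W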

We would have to revisit this before sharing results, as it has been a while. At this point I don't think we will do that anytime soon; XNOR-Net was a bit of a dead end for us, and this type of scaling does add complexity to the model. But I would be curious to hear if you have better luck using input scaling.


18kiran12 commented on July 4, 2024

Thanks for the quick response.

I have a couple of questions about this.

I have read in other research papers that input scaling is expensive, but I couldn't find any reference saying that scaling the inputs does not improve accuracy. Would it be possible for your team to share the results of the experiments that led you to drop the input scaling factor?

Also, how would one recreate the exact XNOR-Net, with input scaling, using the Larq framework?


18kiran12 commented on July 4, 2024

Thanks a lot for the clarification.
I have already read the BNN optimization work (paper, code) and found it a really interesting perspective on BNNs.


bogdankjastrzebski commented on July 4, 2024

> You are correct, the default implementation doesn't use scaling of the inputs. In our experiments, we found that scaling the inputs did not improve accuracy, so we decided to leave it out.

Hi!
Is there a comparison of scaling factors I could cite? I have a similar result and wonder whether this is a known fact.

