Comments (5)
Hi, great to hear you find larq & larq-zoo useful!
You are correct, the default implementation doesn't use scaling of the inputs. In our experiments, we found that scaling the inputs did not improve accuracy, so we decided to leave it out.
As a side note, in our recent work on BNN optimization (paper, code) we found we could improve accuracies without using any scaling factors (neither for activations nor for weights).
from zoo.
The best way to implement this is probably to follow equation 11/figure 2 in the paper, i.e. something like:
import tensorflow as tf
import larq as lq

def xnor_conv(x, kernel_size=3):
    # Per-pixel mean of |x| over the channel axis (NHWC input).
    A = tf.reduce_mean(tf.abs(x), axis=3, keepdims=True)
    # Box filter that averages A over a kernel_size x kernel_size window.
    k = tf.ones([kernel_size, kernel_size, 1, 1]) / kernel_size**2
    K = tf.nn.conv2d(A, k, strides=1, padding="SAME")
    # Binary convolution with binarized inputs and weights.
    x = lq.layers.QuantConv2D(
        32, kernel_size, padding="same",
        input_quantizer="ste_sign", kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
    )(x)
    # Rescale the binary convolution output by the spatial scaling factor K.
    return x * K
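To make the K factor concrete: it is simply the per-pixel mean of |x| over the channel axis, box-filtered over the convolution window with zero padding. A minimal NumPy sketch of that computation for a single channels-last example (the function name `scaling_factor_K` and the (H, W, C) shape are illustrative assumptions, not larq API):

```python
import numpy as np

def scaling_factor_K(x, kernel_size=3):
    """Spatial scaling factor K for one example x of shape (H, W, C).

    Matches the TF snippet above: mean of |x| over channels, then a
    kernel_size x kernel_size box filter with zero ("SAME") padding.
    """
    A = np.abs(x).mean(axis=-1)          # (H, W) mean absolute activation
    p = kernel_size // 2
    Ap = np.pad(A, p, mode="constant")   # zero padding, like conv2d "SAME"
    H, W = A.shape
    K = np.empty_like(A)
    for i in range(H):
        for j in range(W):
            # Window mean over the padded array = sum / kernel_size**2.
            K[i, j] = Ap[i:i + kernel_size, j:j + kernel_size].mean()
    return K[..., None]                  # (H, W, 1), broadcastable over channels
```

For a constant input the interior of K is just that constant, while border values shrink because the zero padding leaks into the window, e.g. the corner of a 3x3 window sees only 4 of 9 real pixels.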
We would have to revisit this before sharing results, since it has been a while. At this point I don't think we will do that anytime soon: XNOR-Net was a bit of a dead end for us, and indeed this type of scaling adds complexity to the model. But I would be curious to hear if you have better luck using input scaling.
Thanks for the quick response.
I have a couple of questions regarding the same.
I have read in other research papers that input scaling is expensive, but I couldn't find any reference saying that scaling inputs does not improve accuracy. Would it be possible for your team to share the results of the experiment that led you to drop the input scaling factor?
Also, how would one recreate the exact XNOR network, with input scaling, using the Larq framework?
Thanks a lot for the clarification.
I have already read the BNN optimization work (paper, code) and found it to be a really interesting view on BNNs.
Hi!
Is there a comparison of scaling factors I could cite? I have a similar result, I wonder if it is a known fact.