Comments (7)
Is it possible to put the model somewhere, or paste a script that creates a simple untrained model with the same issue?
from hls4ml.
from tensorflow import keras
from tensorflow.keras.layers import Input, Activation, Dropout, Flatten, MaxPooling1D
from qkeras import QConv1D, QDense, QBatchNormalization, QActivation

rf_in = Input(shape=(1024, 2), name='rf_input')
x = QConv1D(64, 5, kernel_quantizer="quantized_bits(16,6)", padding='same', use_bias=False)(rf_in)
x = QBatchNormalization()(x)
x = QActivation("quantized_relu(16,6)")(x)
x = MaxPooling1D(2, strides=2, padding='same')(x)
x = QConv1D(32, 5, kernel_quantizer="quantized_bits(16,6)", padding='same', use_bias=False)(x)
x = QBatchNormalization()(x)
x = QActivation("quantized_relu(16,6)")(x)
x = MaxPooling1D(2, strides=2, padding='same')(x)
x = QConv1D(16, 5, kernel_quantizer="quantized_bits(16,6)", padding='same', use_bias=False)(x)
x = QBatchNormalization()(x)
x = QActivation("quantized_relu(16,6)")(x)
x = MaxPooling1D(2, strides=2, padding='same')(x)
x = Flatten()(x)
# Note: these two QDense layers are built without a kernel_quantizer
dense_1 = QDense(128, activation="quantized_relu(16,6)", use_bias=False)(x)
dropout_1 = Dropout(0.25)(dense_1)
dense_2 = QDense(128, activation="quantized_relu(16,6)", use_bias=False)(dropout_1)
dropout_2 = Dropout(0.5)(dense_2)
softmax = QDense(7, kernel_quantizer="quantized_bits(16,6)", use_bias=False)(dropout_2)
softmax = Activation('softmax')(softmax)

opt = keras.optimizers.Adam(learning_rate=0.0001)
model = keras.Model(rf_in, softmax)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=["accuracy"])
model.summary()
from hls4ml.
I think I understand the problem. The hls4ml software assumes that QDense will always have kernel_quantizer defined, but that is not the case here. I will add a check for it, but in the meantime, here is a workaround. Replace:
dense_1 = QDense(128, activation="quantized_relu(16,6)", use_bias=False)(x)
by
dense_1_noact = Dense(128, use_bias=False)(x)
dense_1 = QActivation(activation="quantized_relu(16,6)")(dense_1_noact)
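For intuition, quantized_relu(16,6) keeps 16 total bits with 6 integer bits, i.e. values in [0, 64) on a 2^-10 grid, so a plain Dense followed by this QActivation still quantizes the activations. A minimal numpy sketch (assuming round-to-nearest; QKeras's exact rounding and saturation details may differ):

```python
import numpy as np

def quantized_relu_16_6(x):
    # Sketch of quantized_relu(16, 6): 16 total bits, 6 integer bits,
    # hence 10 fractional bits. Rounding mode is an assumption here.
    step = 2.0 ** -10              # resolution: 2^(6 - 16)
    max_val = 2.0 ** 6 - step      # largest representable value, just below 64
    x = np.maximum(x, 0.0)         # ReLU part
    return np.clip(np.round(x / step) * step, 0.0, max_val)
```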
from hls4ml.
Try using https://github.com/fastmachinelearning/hls4ml/tree/weight_quantizer_none
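Judging by the branch name, the fix presumably guards against a layer whose kernel_quantizer is missing. A hypothetical sketch of such a None-safe lookup (the function name and regex are illustrative only, not hls4ml's actual code):

```python
import re

def quantizer_bits(quantizer):
    # Illustrative helper: extract (bits, integer_bits) from a QKeras
    # quantizer string such as "quantized_bits(16,6)", and return None
    # when the layer has no kernel_quantizer (the case that crashed here).
    if quantizer is None:
        return None
    m = re.match(r"\w+\((\d+)\s*,\s*(\d+)", quantizer)
    return (int(m.group(1)), int(m.group(2))) if m else None
```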
from hls4ml.
I think I understand the problem. The hls4ml software assumes that QDense will always have kernel_quantizer defined, but that is not the case here. I will add a check for it, but in the meantime, here is a workaround. Replace:
dense_1 = QDense(128, activation="quantized_relu(16,6)", use_bias=False)(x)
by
dense_1_noact = Dense(128, use_bias=False)(x)
dense_1 = QActivation(activation="quantized_relu(16,6)")(dense_1_noact)
I added the two code lines to my code, but I still get the same error; nothing changed.
dense_1 = Dense(128, use_bias=False)(x)
dense_1 = QActivation("quantized_relu(16,6)")(dense_1)
dropout_1 = Dropout(0.25)(dense_1)
dense_2 = Dense(128, use_bias=False)(dropout_1)
dense_2 = QActivation("quantized_relu(16,6)")(dense_2)
dropout_2 = Dropout(0.5)(dense_2)
softmax = Dense(7, use_bias=False)(dropout_2)
softmax = QActivation("quantized_relu(16,6)")(softmax)
output = Activation('softmax')(softmax)
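One thing to note about this version: replacing every QDense with a plain Dense means the weights themselves are no longer quantized during training, only the activations. For intuition, a rough numpy sketch of what kernel_quantizer="quantized_bits(16,6)" applies to the weights (step size and rounding mode are assumptions and may differ by a bit from QKeras's exact convention):

```python
import numpy as np

def quantized_bits_16_6(w):
    # Rough sketch of quantized_bits(16, 6) on weights: signed fixed point
    # with 6 integer bits. Resolution and saturation range are assumptions.
    step = 2.0 ** (6 - 16 + 1)           # assumed resolution for signed values
    lo, hi = -2.0 ** 6, 2.0 ** 6 - step  # assumed saturation range
    return np.clip(np.round(w / step) * step, lo, hi)
```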
from hls4ml.
I even reinstalled the hls4ml library, and I still get the same error.
from hls4ml.
+1 Same issue when using QKeras layers (QLSTM)
from hls4ml.