hossein1387 / barvinn
BARVINN: A Barrel RISC-V Neural Network Accelerator: https://barvinn.readthedocs.io/en/latest/
License: MIT License
There should be a proper way to read and write the MVU BRAMs from the verification environment:
1- A method to read/write input data rams
2- A method to write into weight rams
3- A method to read/write output data rams
According to your paper on arXiv, it seems that:
As far as I understand, this repository has been checked with these settings, right? I'm not sure, as the README mentions a different Vivado version (2019.1).
Hello, thank you for maintaining this project. I hit this 'Fail' and I don't know how to fix it. It seems like I used the wrong RISC-V target to generate the code.
[ INFO] 1709600.00 ns Test Fail [0x0f013087: !unknown instruction!] Unknown Instruction, pc=200cc8
[ INFO] 1710400.00 ns Test Fail [0x0e813107: !unknown instruction!] Unknown Instruction, pc=200ccc
[ INFO] 1711200.00 ns Test Fail [0x0e013187: !unknown instruction!] Unknown Instruction, pc=200cd0
[ INFO] 1712000.00 ns Test Fail [0x0d813207: !unknown instruction!] Unknown Instruction, pc=200cd4
[ INFO] 1712800.00 ns Test Fail [0x0d013287: !unknown instruction!] Unknown Instruction, pc=200cd8
[ INFO] 1713600.00 ns Test Fail [0x0c813307: !unknown instruction!] Unknown Instruction, pc=200cdc
This is my Makefile in conv; I changed the march flag:
LDFLAGS = -nostdlib -T$(ROOT_DIR)/common/link.ld -Map=$(OUT_DIR)/$(PROJ).map
CCFLAGS = -march=rv32imafd -std=gnu99 -ffast-math -fno-builtin-printf
To generate the RISC-V firmware for BARVINN simulation, I am trying to compile the convolution C code in csrc/conv/. So I installed the rv32 RISC-V compilation toolchain as per the Picorv32 instructions. When I do make all in the conv folder, I get the following error:
I am actually not able to find the definitions for these macros anywhere in the repo (including in csrc/conv/common/pito.h):
Could you confirm these?
Allow the input word to be received as a pack of nprecision words, where nprecision is a power of 2.
Hi! Thank you for maintaining this project. I am interested in exploring the bit-serial architecture of the BARVINN project. I am trying to compile the simulation flow on a Windows PC to look at some example tests for convolution. However, I am currently encountering some issues with compilation.
I had to run
vivado -mode batch -nolog -nojournal -source gen_xilinx_ip.tcl
a first time to generate the needed IPs (this would be a good thing to add to the main BARVINN readme). The default part is
set xilinxpart xcku115-flva1517-2-e
in MVU/.tclscripts/common.tcl. I am using a free Vivado 2019.1 WebPack installation, so I changed it to xcku5p-ffvd900-3-e, one of the devices supported in the free WebPack version. I am able to generate the MVU IPs for this different part (this should be fine, I assume?). When I then run
fusesoc run --target=sim barvinn
(after setting the mvu and pito RISC-V libraries in fusesoc), I get the following error:
Since the last release, the BARVINN documentation has not been updated. The following needs to be added:
Connect MVU to data transposer and issue writes to MVU from a linear memory layout (e.g PITO mem) and use data transposer to write to MVU in transposed packed format.
Hi, thanks for your brilliant work! The design of the MVU and hart control is impressive. I'm trying to get the right simulation output.hex via fusesoc run --target=sim barvinn. But when I read the waves, I find that some signals related to the scaler, bias, or other configs left unset in the C code are at x state, resulting in a final all-zero quant_out.
So I tried to set these configs in conv2d.c based on my understanding of the project:
SET_CSR(CSR_MVUSCALER, 1); // scaler_b=1
SET_CSR(CSR_MVUUSESCALER_MEM,0);
SET_CSR(CSR_MVUUSEBIAS_MEM,0);
Now I'm confused about CSR_MVUCONFIG1. The docs indicate that it covers the shift/accumulator load on jump select and the zig-zag step select, each 8 bits wide. This matches the comment in BARVINN/deps/MVU/verification/lib/mvu/mvu_pkg.sv (branch 72b5413).
While in mvutop_wrapper.sv, CSR_MVUCONFIG1 seems to control shacc_load_sel and zigzag_step_sel, each 5 bits wide.
These two configs seem to be important in computing, but MVU_Code_gen doesn't export them. I also found them in some test files like MVU/c/conv2d_jumps.c and mvutop_tester.sv, but I still don't quite get the relationship between them and other model parameters. If I just want to run the sample conv2d (1x64x32x32 input, 64x64x3x3 weight, and 2-bit precision), how can I calculate shacc_load_sel and zigzag_step_sel?
I also have a question about quantized model computation. At the end of a layer's computation, the scaler module can rescale a quantized output, and the next layer's input should then be quantized again. But it seems that the quantser module is not able to quantize inputs by an arbitrary scale. So how does BARVINN deal with this step? I'd appreciate it if you could offer an example of a multi-layer quantized model!
For instance, in the matmul test:
https://github.com/hossein1387/BARVINN/blob/master/verification/tests/conv/conv_tester.sv#L17-L18
weight.hex and input.hex are not present. I assume these files should be generated by some command, but which one?
@wagnersj :
It looks like it's outputting the LSB word first instead of MSB first, which is what the MVUs expect. It also seems to take the first input word and transpose it to the NUM_WORDS-1 bit position in the output words, whereas I think it should be at the 0th bit position.
Hi!
I went through the sim for BARVINN (current master branch). When I run the sim with the code compiled from csrc/conv/, the sim keeps running forever, as you can see below:
There are a few observations from inspecting the APB bus traffic:
1. The mvucommand register being written to triggers a job with start[0].
2. wait_for_mvu_irq() doesn't seem to complete, and instead somehow seems to reset the processor to execute the code again from the start (?). The second group of APB transactions that you see on the right are the exact same SET_CSR commands of the conv2d code from the beginning, BUT this time with all APB commands addressed to just one MVU. Here is a zoomed-in view of this aspect:
3. This seems to repeat indefinitely: every time wait_for_mvu_irq() is hit in the first iteration of the for loop, execution breaks out of that loop, a reset of some sort happens, and commands are issued from the beginning again. Is there some other sim setup needed for handling interrupts properly?
To confirm that it's potentially an issue with IRQ handling, I replaced wait_for_mvu_irq() with a simple counter in the C code that counts up to 50, to provide some delay and let the MVU job complete in the meantime. When I do this, the simulation completes successfully, showing exactly 8 start signals (main.c calls conv3x3_64 twice, and the conv3x3_64 function has a for loop running 4 times, each iteration triggering a job). I also see that all APB commands in this scenario are addressed to all 8 MVUs:
Do you have any pointers on what's going wrong here with the interrupt handling?
Hi!
I'm using Python 3.12.2, FuseSoC 2.3, and Vivado 2021.1.
I tried running these commands:
fusesoc run --target=sim barvinn
fusesoc run --target=synth barvinn
but I got these errors, respectively:
ERROR: Setup failed : Cannot find deps/MVU/ip/build/xilinx/bram64k_64x1024_xilinx/simulation/blk_mem_gen_v8_4.v in /home/fauri/Documentos/BARVINN
source barvinn_0.tcl -notrace
ERROR: [Common 17-69] Command failed: Part 'xcvu9p-flgb2104-2-e' not found
INFO: [Common 17-206] Exiting Vivado at Sun Mar 17 20:49:39 2024...
make: *** [Makefile:8: barvinn_0.xpr] Error 1
ERROR: Failed to build ::barvinn:0 : '['make', 'synth']' exited with an error: 2
Can anyone help me?
There are several hardcoded paths in this repository:
Excuse me, is /csrc/conv/common damaged?