olofk / edalize
An abstraction library for interfacing EDA tools
License: BSD 2-Clause "Simplified" License
In the vivado backend source code it says:
- IP: Supply the IP core xci file with file_type=xci and other files (like .prj)
as file_type=data
However, this is the only reference to file_type=data that I found. Is this a mistake?
See m-labs/migen#153
Can you please clarify from which commit of VUnit this can be used ?
Currently, the generated run.py uses the type 'VHDL.standard', and I am getting the error:
Traceback (most recent call last):
  File "/run.py", line 25, in <module>
    lib.add_source_files("/sourcefile.vhd", vhdl_standard=VHDL.standard("2008"))
  File "/vunit/vunit/ui/library.py", line 216, in add_source_files
    for file_name in file_names
  File "/vunit/vunit/ui/library.py", line 216, in <listcomp>
    for file_name in file_names
  File "/vunit/vunit/ui/library.py", line 270, in add_source_file
    vhdl_standard=self._which_vhdl_standard(vhdl_standard),
  File "/vunit/vunit/ui/library.py", line 368, in _which_vhdl_standard
    return VHDL.standard(vhdl_standard)
  File "/vunit/vunit/vhdl_standard.py", line 80, in standard
    return VHDLStandard(name)
  File "/vunit/vunit/vhdl_standard.py", line 27, in __init__
    if standard.endswith(standard_name) and len(standard_name) == 2:
TypeError: endswith first arg must be str or a tuple of str, not VHDLStandard
Does this need to be corrected, or am I missing something here?
The way the Makefile.j2 is written in the VCS backend requires users to have "." (dot) in their PATH:
run: {{ name }}
{{ name }} -l vcs.log {% for plusarg in plusargs %} {{ plusarg }} {% endfor %}
-----------^
Currently, the Riviera backend calls vlog once per Verilog file, so each file becomes a separate compilation unit. This is inconsistent with the other tools supported by Edalize, which pass the whole file list in a single invocation, placing all files in the same compilation unit.
The main user-visible difference is the scope of defines: a define is only visible within its compilation unit, so when each file is compiled separately, defines do not carry across files.
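To make the difference concrete, here is a rough Python sketch (function names hypothetical, not Edalize's actual API) of the two ways a backend can build the vlog invocations:

```python
def per_file_commands(files):
    # One vlog invocation per file: each file becomes its own
    # compilation unit, so a `define made in one file is not
    # visible when the next file is compiled.
    return [["vlog", f] for f in files]

def single_command(files):
    # One vlog invocation with the whole file list: all files share
    # a compilation unit, so `defines carry across file boundaries.
    return ["vlog"] + list(files)
```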
I've been experimenting with the Quartus (quartus_pow) and Vivado (report_power) power analysis tools and would like to add support to Edalize/FuseSoC.
For the most accurate results these tools require post-place-and-route simulation to generate VCD (Quartus) or SAIF (Vivado) files with information on signal activity. Therefore, I need to add generation of a timing simulation netlist and SDF using quartus_eda or write_verilog/write_sdf. Is this low enough overhead that I should just add it to the existing default flows? If this is an uncommon enough use case that it shouldn't be a default, would adding a netlist option for pnr, a separate option like timing_netlist, or something else make the most sense? Is there a better integration strategy, like a post-build hook?
It doesn't appear that Edalize (or IP-XACT) has file types for SDF, SAIF, or VCD, but I don't think Edalize does much with these and it's up to the tool to check file_type.
The next (and longest) step is running the post-place-and-route simulation. I'm not sure whether this will require any changes for ModelSim or other simulators. The main simulation challenges are the large set of plusargs recommended (at least for the Quartus flow) and locating the device libraries. This is more of a FuseSoC issue, but it would be nice to be able to chain together targets/tools to be able to do the build, netlisting, simulation, and power analysis with a single command.
Finally, the best way to integrate the power analysis step isn't clear. Would yet another pnr option (power_analysis?) make the most sense?
Thanks for any thoughts!
Edalize currently uses ISE's Tcl interface to create a project and set project options via the project set group of commands. It would be helpful if a user could add their own settings in YAML, perhaps under tool_options. Some properties apply to multiple steps in the process and require the -process argument. For example:
tool_options:
  device: ...
  project_properties:
    "Allow Logic Optimization Across Hierarchy":
      value: false
    LUT Combining:
      value: true
      process: "Synthesize - XST"
ISE project file generation is currently done with Python string formatting and file writes. Presumably step one would be to convert this to Jinja templating, like several other backends. After that, this addition should be pretty straightforward.
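As a rough sketch of what the generation step might emit, assuming the YAML above has been parsed into a dict (the function name and exact Tcl formatting are hypothetical, not the actual ISE backend code):

```python
def ise_property_tcl(project_properties):
    # project_properties mirrors the proposed YAML: a mapping from
    # property name to {'value': ..., 'process': optional step name}.
    lines = []
    for name, opts in project_properties.items():
        value = opts["value"]
        # ISE's Tcl interface expects true/false for boolean properties
        if isinstance(value, bool):
            value = "true" if value else "false"
        cmd = 'project set "{}" "{}"'.format(name, value)
        process = opts.get("process")
        if process is not None:
            # Properties that apply to multiple steps need -process
            cmd += ' -process "{}"'.format(process)
        lines.append(cmd)
    return lines
```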
It'd be useful to be able to call a tool and get easy access to stdout and stderr from that tool.
Often I'll have a script that processes these to check that everything went OK.
I'd like to modify Edatool._run_tool so that it returns a string containing the output from the tool (combined stdout and stderr). _run_tool would also take optional arguments to save stdout and stderr to files, and/or to pass them through to the stdout and stderr of the calling process.
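A minimal sketch of the proposed behaviour (this is not the current _run_tool; the function and argument names are hypothetical):

```python
import subprocess
import sys

def run_tool(cmd, save_to=None, tee=False):
    # Run the tool, merging stderr into stdout so the caller gets one
    # string with everything the tool printed, in order.
    result = subprocess.run(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    output = result.stdout
    if save_to is not None:
        # Optionally save the combined output to a file
        with open(save_to, "w") as f:
            f.write(output)
    if tee:
        # Optionally echo the output to the calling process's stdout
        sys.stdout.write(output)
    if result.returncode != 0:
        raise RuntimeError("{} exited with code {}".format(cmd[0],
                                                           result.returncode))
    return output
```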
Based on Fusesoc + Vivado experience, but it is probably the same for other back ends.
Names of generated project files follow this pattern: {CORE_NAME}_{VERSION}.extension. When working with multiple targets for the same core, it is not always clear which target you are working with. For example, you open Vivado and, based on the project name, you do not know which target it is. The same goes for programming a device and choosing a .bit file. I think it might be helpful if these names also included the target name, if it is different from default. These names would be a combination of core name, version and target name. @olofk what do you think?
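The proposed scheme could be as simple as this sketch (function and argument names hypothetical):

```python
def project_file_base(core_name, version, target, default_target="default"):
    # {CORE_NAME}_{VERSION}, plus the target name when it is not the default
    name = "{}_{}".format(core_name, version)
    if target != default_target:
        name += "_{}".format(target)
    return name
```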
Vivado (and other tools) support setting the contents of on-chip memories in an FPGA image without rebuilding. This would be good to support, and doing it in the run stage probably makes the most sense.
This is a fix for a common issue on the ULX3S.
Include in your bitstream:
assign wifi_gpio0 = btn[0]; // btn0 reads 0 when pressed (the exception); btn1-6 read 1 when pressed
Without this line, the ESP32 firmware by default looks for wifi_gpio0 = 0 held for a few seconds, and will then JTAG-passthrough a bitstream to take control of the board.
With the above line, the ESP32 will take control only if you hold btn0 for a long time; otherwise your bitstream keeps running normally.
I think it might be interesting to integrate hlsclt ( https://github.com/benjmarshall/hlsclt ) to have support for Vivado HLS.
I use DSE quite a bit to handle multiple P&R runs in parallel for some complex designs targeting Altera FPGAs.
It looked like it should be reasonably straightforward to use the pnr tool option for Quartus to allow DSE to be used instead of the standard Quartus P&R flow, so I had a go at implementing it for my requirements (a seed sweep with limits on parallel runs). It worked a treat, taking options from a FuseSoC core and producing exactly the result I was expecting.
I'm hoping to tidy this up over the next few days and make it completely configurable for other DSE flow types (hopefully with sensible defaults), then submit a PR in case it is of interest to anyone else. Any comments or suggestions are welcome, of course!
In the VCS backend, the vcs_options field is never used.
The --elab-run command of ghdl requires the analyze-options to work correctly. So line 54 in ghdl.py should be changed to
ghdl --elab-run $(ANALYZE_OPTIONS) $(STD) $(TOPLEVEL) $(RUN_OPTIONS) $(EXTRA_OPTIONS)
I think I reported this already a while ago to fusesoc, but not sure anymore, maybe I only intended to report it but never did.
A bit of a niche circumstance, I admit, but I sometimes do some work on a Windows desktop via a Cygwin environment with Verilator and GTKWave installed as described here: https://zipcpu.com/blog/2017/07/28/cygwin-fpga.html
Now when building with Verilator it gives this error. At first glance it's pretty obvious that the paths are messed up, namely the backslashes are not valid path separators for the makefile.
$ fusesoc --cores-root . run --target=sim pawc
INFO: Preparing ::elf-loader:1.0.2
INFO: Preparing ::hyperram:0
INFO: Preparing ::verilator_tb_utils:0
INFO: Preparing ::picorv32:0
INFO: Preparing ::pawc:0.1
verilator -f pawc_0.1.vc
%Error: Cannot find file containing module: ..srchyperram_0hyper_xface.v
... Looked in:
..srcelf-loader_1.0.2/..srchyperram_0hyper_xface.v
..srcelf-loader_1.0.2/..srchyperram_0hyper_xface.v.v
..srcelf-loader_1.0.2/..srchyperram_0hyper_xface.v.sv
..srcverilator_tb_utils_0/..srchyperram_0hyper_xface.v
..srcverilator_tb_utils_0/..srchyperram_0hyper_xface.v.v
..srcverilator_tb_utils_0/..srchyperram_0hyper_xface.v.sv
..srchyperram_0hyper_xface.v
..srchyperram_0hyper_xface.v.v
..srchyperram_0hyper_xface.v.sv
%Error: Cannot find file containing module: ..srchyperram_0hyper_dword.v
%Error: Cannot find file containing module: ..srcpicorv32_0axi4_memory.v
%Error: Cannot find file containing module: ..srcpicorv32_0picorv32.v
%Error: Cannot find file containing module: ..srcpicorv32_0picorv32_top.v
%Error: Cannot find file containing module: ..srcpawc_0.1rtl/c10lp_baseline_pinout.v
%Error: Cannot find file containing module: ..srcpawc_0.1rtl/pawc_top.sv
%Error: Cannot find file containing module: ..srcpawc_0.1tb/pawc_tb.sv
%Error: Exiting due to 8 error(s)
make: *** [Makefile:16: Vpawc_tb.mk] Error 1
ERROR: Failed to build ::pawc:0.1 : 'make' exited with an error code
Here is what the generated .vc file looks like:
--Mdir .
--cc
-LDFLAGS -lelf
+incdir+..\src\elf-loader_1.0.2
-CFLAGS -I..\src\elf-loader_1.0.2
+incdir+..\src\verilator_tb_utils_0
-CFLAGS -I..\src\verilator_tb_utils_0
..\src\hyperram_0\hyper_xface.v
..\src\hyperram_0\hyper_dword.v
..\src\picorv32_0\axi4_memory.v
..\src\picorv32_0\picorv32.v
..\src\picorv32_0\picorv32_top.v
..\src\pawc_0.1\rtl/c10lp_baseline_pinout.v
..\src\pawc_0.1\rtl/pawc_top.sv
..\src\pawc_0.1\tb/pawc_tb.sv
--top-module pawc_tb
--exe
..\src\elf-loader_1.0.2\elf-loader.c
..\src\verilator_tb_utils_0\verilator_tb_utils.cpp
..\src\verilator_tb_utils_0\jtagServer.cpp
..\src\picorv32_0\picorv32_tb.cpp
And to fix it, a find+replace of '\' with '/' and then calling make seems to do the job, but obviously a manual step like that is a bit of a hack. I'll be honest, this might be more of a leakage on the part of Cygwin (purposefully) not being a perfect abstraction, so feel free to close if it's not Edalize's area :P
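If the fix were to live in the Verilator backend when it writes the .vc file on Windows/Cygwin, it could be as small as this sketch (not the actual Edalize code):

```python
def posixify(path):
    # In a .vc file the backslash has no meaning other than as a
    # Windows path separator, so a blanket replacement is safe here
    # (it would not be safe for arbitrary shell text, where a
    # backslash can be an escape character).
    return path.replace("\\", "/")
```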
I've run into a bit of an issue creating a suitable core file for a project I'm working on that uses Altera's DDR3 controller IP for Arria 10. Synthesis is great - you can supply the QSys and IP files in the core file, Edalize creates appropriate TCL for adding them to the project and Quartus processes them fine. The issue comes with simulation, which I'm attempting to perform in ModelSim.
The problem here is that the DDR simulation models don't work if you just run your testbench as normal. They provide a TCL script declaring procedures that get run from within ModelSim, taking parameters including the name of the actual testbench to run. These TCL procedures seem to be tied to the exact version of the DDR controller IP being used, so, for example, upgrading Quartus produces one that, whilst having a compatible interface, does some quite different things when you run it to set up the simulation.
The running of these procedures replaces the vsim call that Edalize uses to start a simulation. I was wondering how best to handle situations like this. It looks like it should be possible to include a TCL file that runs the DDR simulation procedures to get the testbench set up and run, but then I think it would still try to run the top level using vsim afterwards, which would fail.
It's not clear to me what the best way to handle this is in a general way - obviously we don't want to end up with a special case for Altera DDR and any other core that has unusual simulation requirements like this. However, I wondered if there is merit in maybe having an option to avoid the vsim call, if the simulation is handled in some other way (such as a TCL file being included in the fileset)? Or is there some better way that might handle this in the general case, do you think?
I would like to be able to define strategy and flow for synthesis and implementation in .core files the same way as I can define part.
For Vivado backend tool_options is very simple and looks as follows:
tool_options = {'members' : {'part' : 'String'}}
I have also checked files for other tools and for Quartus tools_options looks like this:
tool_options = {'members' : {'family' : 'String',
                             'device' : 'String'},
                'lists' : {'quartus_options' : 'String'}}
I think I don't fully get the idea behind these members and lists. What is the difference between them? Should I add strategy and flow options to Vivado as members, or maybe I should create lists, the same way as it is done for Quartus?
Please....
Here is a simple example for Vivado.
import re
import asciitable  # third-party table parser used to read the report tables

def vivado_resources(self):
    report_path = self.out_dir + "/" + self.project_name + ".runs/impl_1/top_utilization_placed.rpt"
    with open(report_path, 'r') as fp:
        report_data = fp.read()
    report_data = report_data.split('\n\n')
    report = dict()
    section = None
    for d in report_data:
        match = re.search(r'\n-+$', d)
        if match is not None:
            match = re.search(r'\n?[0-9\.]+ (.*)', d)
            if match is not None:
                section = match.groups()[0]
        if d.startswith('+--'):
            if section is not None:
                # cleanup the table
                d = re.sub(r'\+-.*-\+\n', '', d)
                d = re.sub(r'\+-.*-\+$', '', d)
                d = re.sub(r'^\|\s+', '', d, flags=re.M)
                d = re.sub(r'\s\|\n', '\n', d)
                report[section.lower()] = asciitable.read(
                    d, delimiter='|', guess=False,
                    comment=r'(\+.*)|(\*.*)', numpy=False)
    return report

def resources(self):
    lut = 0
    dff = 0
    carry = 0
    iob = 0
    pll = 0
    bram = 0
    report = self.vivado_resources()
    for prim in report['primitives']:
        if prim[2] == 'Flop & Latch':
            dff += int(prim[1])
        if prim[2] == 'CarryLogic':
            carry += int(prim[1])
        if prim[2] == 'IO':
            iob += int(prim[1])
        if prim[2] == 'LUT':
            lut += int(prim[1])
    for prim in report['clocking']:
        if prim[0] == 'MMCME2_ADV' or prim[0] == 'PLLE2_ADV':
            pll += prim[1]
    for prim in report['memory']:
        if prim[0] == 'Block RAM Tile':
            bram += prim[1]
    ret = {
        "LUT" : str(lut),
        "DFF" : str(dff),
        "BRAM" : str(bram),
        "CARRY" : str(carry),
        "GLB" : "unsupported",
        "PLL" : str(pll),
        "IOB" : str(iob),
    }
    return ret
May I add support for RTLvision PRO, a commercial tool for visually/interactively debugging/exploring RTL designs?
The tool reads files/defines/parameters directly from the command line and is also able to import standard .f files, so an integration into edalize should be straightforward.
Many EDA tools support digesting file lists in *.f files. Having a (pseudo-)backend which generates these files would serve as stopgap for EDA tools which we don't support natively so far, and opens the door to more exploration.
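As a rough illustration of such a pseudo-backend (all names hypothetical), generating a .f file is little more than:

```python
def write_f_file(path, incdirs, defines, files):
    # Emit a standard .f filelist: include dirs, defines, then sources.
    lines = ["+incdir+{}".format(d) for d in incdirs]
    lines += ["+define+{}={}".format(k, v) for k, v in defines.items()]
    lines += list(files)
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```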
Feature list (to be discussed):
Design considerations:
As part of integrating FuseSoC & Edalize with our continuous integration setup, I've noticed that ModelSim always has an exit code of 0 when tests fail with either:
It appears that ModelSim treats the completion of a simulation, regardless of errors/warnings, as a success.
In a previous CI setup, I remember using the 'onbreak' macro to set a flag that could be passed to the 'quit' function's '-code' argument, which allowed CI to detect a non-zero exit code successfully. I've made an example patch on my fork which implements something similar.
Do you think this is a reasonable way to support returning error codes for easier CI integration? I'll raise a pull request if this seems sensible.
I was just experimenting with FuseSoC's ability to pass arguments to the edalize backends for overriding top level parameters / generics. It appears that at present only verilog params and defines work, as none of the edalize backends include 'generic' in their argtypes list (e.g. edalize/modelsim.py line 65), or include the code that could make use of them.
As a quick test, I've written a simple VHDL counter entity whose number of bits is defined via a generic. I was hoping to be able to vary this in my modelsim simulation by using:
fusesoc sim simplecounter --bits=10
The relevant part of the associated core file:
parameters:
  bits:
    datatype : int
    description : num bits in count
    paramtype : generic
Doing this currently results in the following error:
usage: fusesoc run simplecounter_0.0.1 [-h]
fusesoc run simplecounter_0.0.1: error: unrecognised arguments: --bits=31
It looks like this happens when the parse_args() function in edalize/edatool.py is supplied with parameter values that don't meet the argtypes list for the relevant backend. I'm happy to submit an update to the modelsim backend (and possibly the Quartus one as I'm familiar with that), but I've no experience with several of the other backends which may benefit from similar code changes.
Would you like a patch that adds a limited form of 'generic' support, and possibly adds a warning in the parse_args() function when it encounters a parameter name it recognises but whose type isn't present in paramtypes for now?
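The warning part could look roughly like this sketch (names hypothetical; parse_args in edalize/edatool.py is structured differently):

```python
import warnings

def filter_parameters(parameters, argtypes):
    # Keep the parameters whose paramtype the backend supports and warn
    # about the rest, instead of letting argparse error out on them.
    accepted = {}
    for name, param in parameters.items():
        ptype = param["paramtype"]
        if ptype in argtypes:
            accepted[name] = param
        else:
            warnings.warn("Parameter '{}' has paramtype '{}', which this "
                          "backend does not support; ignoring it".format(name, ptype))
    return accepted
```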
Verilog to Routing supports using Quartus as a frontend for synthesis and then VtR for doing the place and route. You can find out more about this flow @ http://www.eecg.utoronto.ca/~kmurray/titan/fpl_13_demo.pdf
It would be really great if there was inbuilt support in FuseSoC / edalize for this flow. It would mean we could get more real world designs into the VtR benchmarks (and thus enable more research that is applicable to real world stuff!).
The process seems to be:
3. VQM GENERATION
To generate BLIF, a Verilog Quartus Map (VQM) file must first be produced by Quartus II.
To enable VQM generation, hidden variables must be added to the Quartus settings file (.qsf) before synthesis is performed. This is shown in Listing 2 for the Stratix IV family.
Listing 2: Assignments added to the .qsf for VQM Generation
set_global_assignment -name INI_VARS "qatm_force_vqm=on;vqmo_gen_sivgx_vqm=on"
The VQM is generated by first synthesizing and merging the design, and finally writing out the VQM. The commands are shown in Listing 3.
Listing 3: Synthesizing & Generating the VQM
$ quartus_map bitcoin_small
$ quartus_cdb bitcoin_small --merge
$ quartus_cdb bitcoin_small --vqm=bc_small.vqm
4. VQM TO BLIF CONVERSION
The plain-text VQM can now be converted to BLIF, using the vqm2blif tool, as shown in Listing 5. Advanced usage is described in vqm2blif's documentation.
Listing 5: VQM to BLIF conversion
$ ../vqm2blif/vqm2blif.exe -vqm bc_small.vqm -arch ../test_arch.xml -out bc_small.blif
5. RUNNING VPR
The generated BLIF file can now be used in academic CAD tools that read BLIF. Figure 1 shows the placement generated by VPR [2], created with the command shown in Listing 6.
Listing 6: VPR command to generate Figure 1
$ vpr ../test_arch.xml bc_small.blif --timing_analysis off
Trellis wrapper is missing tests. We should add it.
It appears the GHDL backend expects VHDL generics to be supplied as vlogparam arguments, from the days before support for the generic argument type was added (#7). Since GHDL doesn't support Verilog, should vlogparam be replaced with generic, or should both be supported for backward compatibility?
We are using the FuseSoC AscentLint target in OpenTitan, and run into issues with our license queue when running batch regressions. In particular, without passing the -wait_license switch to the tool, it immediately errors out with a FlexNet license error, which makes runs unstable.
Would it be possible to either add a simple binary flag for this, or, even better, a runtime parameter string that could optionally be passed to the tool here:
After aa30b9e Verilator is called with "make -s", suppressing all compiler calls (e.g. gcc ...). That makes it very hard to debug compiler warnings, as we have no idea what options are passed to it. I currently have a failing CI run because of a GCC warning, and just finding out where the "warnings as error" flag is coming from is a pain.
Please revert this commit. Especially in CI we want to have as much information as possible to debug failing runs.
We need support for Synopsys Design Compiler for synthesis. This will most likely be a rather minimal backend, where only the filelist is coming from edalize, all other configuration is done by specifying a TCL file with the appropriate flow.
I would really like to be able to synthesise my RTL without being forced to do a full P&R. I see this option is available for Vivado as a tool option.
Could we have this option for Quartus as well?
Quartus DSE allows users to specify their own algorithm for determining a 'quality of fit' score. The algorithm is expressed in a Python file, the path to which must be included in the .dse file (or provided through the DSE GUI, which isn't appropriate for FuseSoC).
Support an additional tool option so that this information can be written to the .dse file. Note that if no path is provided, DSE will use a default algorithm (the path to which does not need to exist in the .dse file).
Synopsys VCS-MX supports not only (System)Verilog as is the case with the backend right now but also VHDL, allowing mixed-language simulation.
Some pointers:
If no device is connected, we get this error message. We should tidy it up to be more helpful and less of a stacktrace.
vivado -quiet -nolog -notrace -mode batch -source fusesoc_utils_blinky_0_pgm.tcl -tclargs xc7a100tcsg324-1 fusesoc_utils_blinky_0.bit
FuseSoC Xilinx FPGA Programming Tool
====================================
INFO: Programming part xc7a100tcsg324-1 with bitstream fusesoc_utils_blinky_0.bit
INFO: [Labtools 27-2285] Connecting to hw_server url TCP:localhost:3121
INFO: [Labtools 27-2222] Launching hw_server...
INFO: [Labtools 27-2221] Launch Output:
****** Xilinx hw_server v2018.2
**** Build date : Jun 14 2018-20:18:37
** Copyright 1986-2018 Xilinx, Inc. All Rights Reserved.
ERROR: [Labtoolstcl 44-199] No matching targets found on connected servers: localhost
Resolution: If needed connect the desired target to a server and use command refresh_hw_server. Then rerun the get_hw_targets command.
ERROR: [Common 17-39] 'get_hw_targets' failed due to earlier errors.
while executing
"get_hw_targets"
invoked from within
"foreach { hw_target } [get_hw_targets] {
puts "INFO: Trying to use hardware target $hw_target"
current_hw_target $hw_target
# Open hardw..."
(file "fusesoc_utils_blinky_0_pgm.tcl" line 17)
make: *** [Makefile:22: pgm] Error 1
ERROR: Failed to run fusesoc:utils:blinky:0 : 'make' exited with an error code
I thought -notrace should have taken care of the while executing... part. But perhaps this is because it's a nested error? Pulling this in anyway, and perhaps there are some more improvements to be done later on.
Originally posted by @olofk in #65 (comment)
blinky is not working; some files are generated, then an error comes with a "template not found" exception :(
The various tools can produce a lot of messages about the design. Extract them and report in a useful common format.
It would be great if we had support for JasperGold as well. This can also be in the form of a minimal backend that accepts a tcl file.
Let me know if you need assistance with this.
The opentitan repository has an example flow in place; see the fpv and fpv.tcl files here:
https://github.com/lowRISC/opentitan/tree/master/hw/formal
Just putting down a note here that it would be great if we can eventually add support for Verible, an open-source style linter/formatter (see: https://github.com/google/verible).
I will add some examples of how this tool can be invoked later.
Following on from this PR comment I'm starting this discussion to find the best way forward:
Desire:
Remove all DSE specific options and replace with one core file parameter such as dse_options.
Options:
The data provided by dse_options is written directly to a DSE file which is generated on-the-fly.
E.g.
targets:
  build:
    default_tool: quartus
    filesets: [stuff]
    tools:
      quartus:
        device: abc
        family: xyz
        dse_options:
          - 'seeds="1,2,3"'
          - 'num_concurrent=4'
          - 'num_parallel_processors=0'
Would produce a .dse file with the contents:
# Auto generated by Edalize
seeds="1,2,3"
num_concurrent=4
num_parallel_processors=0
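Generating the file from dse_options would then be trivial; a sketch (function name hypothetical):

```python
def write_dse_file(path, dse_options):
    # dse_options is the raw list of option strings from the core
    # file; they are written verbatim below the generated header.
    with open(path, "w") as f:
        f.write("# Auto generated by Edalize\n")
        for opt in dse_options:
            f.write(opt + "\n")
```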
Another option might be to natively support a dse file type.
E.g.
filesets:
  synth:
    files:
      - /path/to/dse/file: {file_type: dse}
The file would be supplied as-required by DSE. This option will probably require some resolution logic to determine what to do if zero or more than one .dse files are provided.
There are probably other options. Would be grateful for thoughts and opinions.
VCS can compile C files into DPI modules. Until we have real DPI support, we should at least pass C and header files to VCS.
Hello,
I was working on similar project.
My problem was that Vivado has a very slow start, and it was much more effective to run it as a backend server with a client sending jobs to it. It turned out to be a good idea, because it then became easy to build a build grid server.
Also, real-time communication with the Tcl interpreter in Vivado is much better, because it shows errors exactly when they happen and allows exception handling.
Do you think that code for this would be useful for you?
Also, it would be great if we had an abstraction library for XDC and other constraint file formats.
What are you planning to do in this library? Maybe I can help.
Quartus Pro 19.4.0 (and possibly others after 19.1.0) returns a slightly different string than the current regex expects, so it fails to match and we incorrectly assume we are running Quartus Standard.
The main issue is that the letters SJ have changed to SC.
I don't know what meaning they have and unless it's important I suggest matching on any string of letters instead.
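A hedged sketch of that suggestion (this is not the actual Edalize regex, and the banner strings in the comments are illustrative):

```python
import re

# Accept any run of capital letters where the old pattern hard-coded
# "SJ" (pre-19.4) / "SC" (19.4), since their meaning is unknown.
# Assumed banner shape: "Version 19.4.0 Build 64 12/04/2019 SC Pro Edition"
VERSION_RE = re.compile(
    r"Version\s+(?P<version>\d+\.\d+\.\d+).*\s[A-Z]+\s+"
    r"(?P<edition>Pro|Standard|Lite)\s+Edition")

def parse_quartus_version(banner):
    m = VERSION_RE.search(banner)
    return (m.group("version"), m.group("edition")) if m else None
```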
I need to provide an option to the vcom command in ModelSim, specifically -O0, because ModelSim optimisation is doing bad things to a test bench.
I see that there is support for vlog_options, so following convention I guess there should be support for vcom_options.
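Following the vlog_options convention, the backend change would amount to something like this sketch (not the actual ModelSim backend code):

```python
def vcom_command(vhdl_file, vcom_options=None):
    # Insert user-supplied options (e.g. -O0) before the file name,
    # mirroring how vlog_options is handled for Verilog files.
    return ["vcom"] + list(vcom_options or []) + [vhdl_file]
```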
When running FuseSoC with a simple core file that includes one VHDL source file and one VHDL test bench, with a sim target that uses ModelSim, FuseSoC doesn't exit when the simulation comes to an end. Instead, I am presented with a ModelSim prompt, for example:
VSIM 2>
The ModelSim prompt is unresponsive. Thus the only way to stop FuseSoC and regain my terminal is to kill it, for example with Ctrl-C, and FuseSoC will report that it aborted.
INFO: ****************************
INFO: **** FuseSoC aborted ****
INFO: ****************************
I don't know if there is an intention to have a working ModelSim prompt after a simulation finishes or whether FuseSoC should recognise that the simulation has finished and it should exit accordingly. From my perspective I expected the latter, but I am trying to integrate FuseSoC into an automated build flow so a ModelSim prompt is not helpful.
I have found a reasonable workaround by adding the following to my FuseSoC core file, but suspect it isn't a good workaround because it has implications when starting the ModelSim GUI:
vsim_options: [-do, exit]
Looking at the FuseSoC output I see the following:
vsim -c -do "do edalize_main.tcl; exit"
I see that there is a clear exit included here.
But later on, when invoking vsim for my test bench, the following is used:
vsim -do "run -all" -c <library>.<test-bench-entity>
Should the last vsim invocation also include an exit?
I suggest:
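One hedged way to build that final invocation (names illustrative; the real backend assembles the command differently):

```python
def vsim_run_command(toplevel, gui=False):
    # In batch mode (-c), append an explicit exit so vsim returns to
    # the shell instead of stopping at the "VSIM 2>" prompt. In GUI
    # mode the prompt is desirable, so no exit is added.
    do_cmds = "run -all"
    if not gui:
        do_cmds += "; exit"
    cmd = ["vsim", "-do", do_cmds]
    if not gui:
        cmd.insert(1, "-c")
    cmd.append(toplevel)
    return cmd
```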
Python Library for interacting with EDA tools.
Currently you take like 5 paragraphs to describe EDAlize and I get bored before actually knowing what it does....
I believe it should be explicitly stated that you need the vlog_tb_utils.v file/module in order to get the blinky example to run. I also had to explicitly state the path for this verilog file.
Since Vivado is a supported backend, I did expect some kind of project management features. For example, having some default IP-core assets that are updated. When top-level generics/parameters are updated in the EDAM file, these would be applied to the bd/bd.tcl, component.xml and xgui/*.tcl files. Is this supported at all? Or is the user expected to update generics in VHDL, in edalize and in the Vivado files too? In fusesoc/blinky/blob/master/nexys_a7/blinky.xdc IO constraints are defined, but I could not find any other Vivado-specific source.
From a wider perspective, I wonder how edalize would fit in a project for Zynq. Just a 'simple' design of a single accelerator with an AXI Stream input and an AXI Stream output. This would be an IP-core, instantiated in a project with the PS and a DMA. The point would be whether edalize can update the files that correspond to the IP-core, rebuild it, and rebuild the whole project.
When I worked with edalize I was able to get a backend with:
backend = edalize.get_edatool(tool)(eda_api_file=edam, work_root=work_root)
Now it expects an eda_api_file argument. This one is not a dictionary anymore; the code expects YAML format.
What is more, git blame shows that edatool constructor was always like that.
@olofk did you change this and force pushed the changes?