prusa3d / libbgcode
Prusa Block & Binary G-code reader / writer / converter
License: GNU Affero General Public License v3.0
Right now the G-code data block spec just gives the encoding header format and lists the possible values for the header field:
0 = No encoding
1 = MeatPack algorithm
2 = MeatPack algorithm modified to keep comment lines
without defining what the MeatPack or modified MeatPack algorithms actually are. It would be good if those encoding types were documented (presumably "no encoding" just means plain-text G-code). While you can read the source code, documenting the actual specifics of the encoding formats would make the specification much more useful for anyone trying to interoperate with the format.
According to the spec, the size of the block parameters is variable, but it is not recorded anywhere in the block itself.
This is a problem: a generic reader has no way to skip past block types that may be defined in the future.
Two possible solutions are suggested:
You should also be explicit about whether the block size includes the size of the checksum.
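To show what an explicit size buys a generic reader, here is a sketch over a hypothetical block layout (type as u16, payload size as u32; field names and widths are my assumptions, not the spec's): unknown block types can be walked past without understanding them.

```python
import io
import struct

def skip_blocks(stream) -> list:
    """Walk a stream of blocks laid out as (type: u16, payload_size: u32,
    payload bytes) -- a hypothetical layout assuming the spec stored an
    explicit payload size. Returns the block types seen."""
    types = []
    while True:
        header = stream.read(6)
        if len(header) < 6:
            break                        # end of stream
        btype, size = struct.unpack('<HI', header)
        types.append(btype)
        # Skip the payload without parsing it. Whether `size` covers the
        # checksum is exactly the ambiguity the issue above points out.
        stream.seek(size, io.SEEK_CUR)
    return types
```

Without the size field, this loop cannot exist: the reader must understand every block type ever defined.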
It would be great to have a standard set of keys for the metadata.
For instance:
Something sorely needed in the CNC world is security. G-code isn't just used for 3D printing; it drives CNC equipment moving tons of material at high speed, and there are real-world instances of state actors corrupting code to destroy machinery.
Making space for a digital-signature wrapper around the G-code would provide the option of signing it. Basically, a preamble announces that the file is signed and which digest is used, then comes the data, and the computed signature goes at the end.
While 3D printers don't move massive amounts of material the way a CNC lathe does, signatures could still be very useful for making copyright claims, and in a production environment they can ensure that printers are building the correct part rather than a modified one.
There are standard packaging formats and support libraries that can be used.
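The preamble/data/signature layout described above can be sketched as follows. The wrapper format and marker string are invented for illustration, and HMAC-SHA256 stands in for a real asymmetric signature scheme (a production design would use public-key signatures so printers only need the verification key):

```python
import hashlib
import hmac

# Hypothetical wrapper: preamble line, payload, raw signature at the end.
PREAMBLE = b"SIGNED-GCODE digest=sha256\n"   # invented marker, not in any spec
SIG_LEN = 32                                 # SHA-256 digest size

def wrap(gcode: bytes, key: bytes) -> bytes:
    sig = hmac.new(key, PREAMBLE + gcode, hashlib.sha256).digest()
    return PREAMBLE + gcode + sig

def unwrap(blob: bytes, key: bytes) -> bytes:
    if not blob.startswith(PREAMBLE):
        raise ValueError("missing signature preamble")
    payload, sig = blob[len(PREAMBLE):-SIG_LEN], blob[-SIG_LEN:]
    expected = hmac.new(key, PREAMBLE + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("signature mismatch: refuse to run this G-code")
    return payload
```

A tampered payload fails verification before any motion command reaches the machine.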
Hi! I want to add this package to Gentoo Linux (as well as update PrusaSlicer to 2.7-alpha). But it seems I can only add a live build for this repo, since it doesn't have any tags/releases.
Could you add a tag corresponding to the version that builds with the PrusaSlicer alpha, since it is a mandatory dependency for it?
It was really easy to add bgcode parsing support to my project. Thank you to whoever put the time into adding the wasm/emscripten build.
If I had one suggestion, it would be to not name your global Module. Name it BGcodeModule or something unique. When you get releases sorted out, we can write TypeScript bindings for it and publish an NPM module.
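Emscripten's MODULARIZE and EXPORT_NAME settings do exactly this; the input and output file names below are placeholders, not the repo's actual build setup:

```shell
# Build the wasm bindings as a factory named BGcodeModule instead of the
# global `Module` (file names here are placeholders).
emcc bindings.cpp -sMODULARIZE=1 -sEXPORT_NAME=BGcodeModule -o bgcode.js
```

Consumers then call `BGcodeModule()` to obtain a promise of the module instance, which avoids clashes with any other emscripten output on the page.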
Hi, I encountered the following error:
Could NOT find ZLIB (missing: ZLIB_LIBRARY ZLIB_INCLUDE_DIR) (Required is at least version "1.0")
and tried fixing it by adding to CMakeLists.txt:
find_package(ZLIB REQUIRED "1.0")
as suggested in this link: https://askubuntu.com/questions/1244299/cmake-can-not-find-zlib-on-ubuntu-20-04
but it did not work.
Could you please look into making this work? The use case is that I install https://plugins.octoprint.org/plugins/bgcode/ on Octo4a.
I also needed to install:
apk add git
apk add zlib
I have this ticket also open:
jneilliii/OctoPrint-BGCode#8
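A likely cause, assuming an Alpine-based environment like Octo4a: the runtime `zlib` package does not ship headers, so CMake's FindZLIB cannot populate ZLIB_INCLUDE_DIR. The headers live in the -dev package:

```shell
# zlib's headers live in the -dev package on Alpine; the plain `zlib`
# package only provides the runtime library.
apk add zlib-dev
```

As an aside, CMake expects the version before the REQUIRED keyword, i.e. `find_package(ZLIB 1.0 REQUIRED)`, which may be why the quoted snippet had no effect.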
Is it possible to create a PyPI release please? Currently, the wheels are being built as part of the GH Actions build, but not uploaded anywhere.
As the title says: can we add to the readme how, after converting the G-code to binary, we can send it to the printer, and how the printer will use it?
The specification describes a binary format, but does not specify its endianness. What is it?
For instance:
The nice features of a schema-driven format would be:
a) It would be easier to generate and parse from different languages
b) It would simplify testing both encoders and decoders. For instance, protobuf3 can be represented as JSON, so it might be useful to have "golden" test inputs/outputs stored as human-readable JSON files, and the test frameworks could use the same JSONs converted to binary
c) It would reduce errors when parsing the files. Since the actual parsers and encoders would be autogenerated, it would be impossible to set a non-existent field or the like.
d) It would simplify schema evolution as the tools have evolution covered by default.
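The golden-file idea in (b) can be sketched even without protobuf: store the human-readable form and the expected bytes side by side in JSON, and drive both directions of any codec from it. The codec below is purely illustrative (a comma-joined ASCII list), not anything from the bgcode format:

```python
import json

def run_golden_case(case, encode, decode):
    """Check an encoder/decoder pair against a human-readable golden case
    of the form {"decoded": ..., "encoded_hex": ...}."""
    blob = bytes.fromhex(case["encoded_hex"])
    assert decode(blob) == case["decoded"], "decoder disagrees with golden"
    assert encode(case["decoded"]) == blob, "encoder disagrees with golden"

# Purely illustrative codec standing in for a real bgcode block codec.
encode = lambda items: ",".join(items).encode()
decode = lambda blob: blob.decode().split(",")

golden = json.loads(
    '{"decoded": ["G28", "G1 X0"], "encoded_hex": "4732382c4731205830"}')
run_golden_case(golden, encode, decode)
```

The same JSON file can then exercise independent implementations in other languages, which is exactly the interoperability the schema-driven approach is after.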
Could you make the library request the underlying OS to install these necessary packages?
Originally posted by @kdobrev in #39 (comment)
I wrote my own binary G-code parser, ThumbnailBinaryGCode.m, because your sample code core.cpp had too many dependencies for me to get it to compile. In core.cpp, block_parameters_size() claims that the block header for a Thumbnail block is longer than the others by two sizeof(uint16_t). This matches what the thumbnail spec says, but it isn't actually true.
In reality, a thumbnail block is the same length as all the other blocks. The following data is either a PNG file, a QOI file, or a JPG file. After the compressed data come 4 bytes that I can't identify, and then the next block starts. (Both PNG and QOI start with a 4-byte magic number, then the width and height.)
Since core.cpp doesn't actually try to show the thumbnail, that code doesn't detect the error.
Please correct the thumbnail spec to say that there is a single sizeof(uint16_t) parameter at the end of the block header, then the compressed data of the thumbnail, then 4 bytes of ?what?, and then the next block.
I'm assuming the library is responsible, as I used the conversion function from the latest 2.7.0 PrusaSlicer to convert a bgcode file to a gcode file for the XL.
The absorbing heat step (G29 G) is converted without a space, to G29G, which I believe is invalid G-code formatting in general. I have attached the file I did the conversion on, showing the before and after.
Even if the printer can handle the code, it really should be formatted properly.
I believe a structured encoding for key-value might be easier to parse.
For instance:
uint16_t keySize;
char* key;
uint16_t valueSize;
char* value;
The specification could also impose a limit, say 128 bytes, on the maximum key and value length, so consumers can reject files where a key or value exceeds it.
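A sketch of that proposal, using the field names from the struct above and enforcing the 128-byte cap on both the writing and reading side (the little-endian '<H' here is itself an assumption, which ties into the endianness question raised elsewhere in these issues):

```python
import struct

MAX_LEN = 128   # proposed cap for both key and value

def encode_pair(key: bytes, value: bytes) -> bytes:
    """keySize (u16) | key | valueSize (u16) | value, as proposed above."""
    if len(key) > MAX_LEN or len(value) > MAX_LEN:
        raise ValueError("key/value longer than 128 bytes")
    return (struct.pack('<H', len(key)) + key +
            struct.pack('<H', len(value)) + value)

def decode_pair(blob: bytes, pos: int = 0):
    fields = []
    for _ in range(2):
        n, = struct.unpack_from('<H', blob, pos)
        pos += 2
        if n > MAX_LEN:
            raise ValueError("length field exceeds 128: reject the file")
        fields.append(blob[pos:pos + n])
        pos += n
    return fields[0], fields[1], pos   # pos = offset of the next pair
```

Because every field is length-prefixed, a reader never has to guess at delimiters, line endings, or whitespace conventions the way it does with an INI table.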
The following tests fail in big-endian architectures:
The following tests FAILED:
1 - File transversal (Failed)
2 - Search for GCode blocks (Failed)
3 - Convert from binary to ascii (Failed)
4 - Convert from ascii to binary (Failed)
Downstream bug report in Debian: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1062014
Some debugging reveals that this error happens due to mismatched endianness when reading data files.
diff --git a/src/LibBGCode/core/core.cpp b/src/LibBGCode/core/core.cpp
@@ -126,8 +127,10 @@ EResult FileHeader::read(FILE& file, const uint32_t* const max_version)
 {
     if (!read_from_file(file, &magic, sizeof(magic)))
         return EResult::ReadError;
+    printf("data=0x%08x MAGIC=0x%08x\n", magic, MAGICi32);
     if (magic != MAGICi32)
         return EResult::InvalidMagicNumber;

-> File: /build/libbgcode/tests/data/mini_cube_b.bgcode data=0x47434445 MAGIC=0x45444347
It's likely that there are many more similar problems lurking in the source code.
If this issue can't be fixed, the package should probably be disabled on big endian architectures.
Sample build logs:
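The general fix pattern, sketched in Python rather than the project's C++: decode file integers through an explicit little-endian view instead of type-punning bytes into a host-order integer. The magic bytes and expected value below are taken from the debug output above; the names are mine:

```python
import struct

MAGIC_BYTES = b"GCDE"    # first four bytes of the file, per the log above
MAGIC = 0x45444347       # the constant core.cpp compares against

def read_magic(raw: bytes) -> int:
    # '<I' decodes little-endian on every host. Reading the same bytes
    # into a native uint32_t (as the memcpy-style read does) yields
    # 0x47434445 on a big-endian CPU -- exactly the observed failure.
    value, = struct.unpack('<I', raw[:4])
    return value

assert read_magic(MAGIC_BYTES) == MAGIC
```

In C++ the equivalent is reading bytes and assembling the value with shifts, or byte-swapping after the read when the host is big-endian.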
https://github.com/prusa3d/libbgcode/blob/bc390aab4427589a6402b4c7f65cf4d0a8f987ec/doc/specifications.md is a bit unclear in my opinion: it says that several of the blocks are "tables of key-value pairs", but nowhere is the format of that table defined.
I'm guessing, based on the encoding "0 = INI encoding", that it is encoded the way an .ini file would be. But what are the valid keys and values? What are the valid section headers? Are keys before the first section allowed? Are sections allowed at all? What line ending should be used (LF, CRLF)? Should lines be formatted as key=value or key = value? How is non-ASCII text encoded (UTF-8)?
There is probably a ton more ambiguity beyond the questions listed above.
The specification is clearly somewhat underspecified here.
What license is the binarized G-code specification released under?
The license of this project is AGPL but that isn't very suitable for a standard, especially not a standard that wants to achieve wide adoption.
Something like the CC0 + OWFa combination that Mozilla suggests, or a similar alternative, would be preferable.