
quick-protobuf's Introduction

quick-protobuf

A pure Rust library to serialize/deserialize protobuf files.

Documentation

Description

This library intends to provide a simple yet fast (minimal allocations) protobuf parser implementation.

It provides both:

  • pb-rs, a code generation tool:
    • each .proto file will generate a minimal rust module (one function to read, one to write, and one to compute the size of the messages)

    • each message will generate a rust struct where:

      Proto                         Rust
      bytes                         Cow<'a, [u8]>
      string                        Cow<'a, str>
      other scalars                 rust primitive
      repeated                      Vec
      repeated, packed, fixed size  Cow<'a, [M]>
      optional                      Option
      message                       struct
      enum                          enum
      map                           HashMap
      oneof Name                    OneOfName enum
      nested m1                     mod_m1 module
      package a.b                   mod_a::mod_b modules
      import file_a.proto           use super::file_a::*
    • no need to use google protoc tool to generate the modules

  • quick-protobuf, a protobuf file parser:
    • this is the crate that you will typically refer to in your library. The generated modules will assume it has been imported.
    • it acts like an event parser; the logic to convert events into structs is handled by pb-rs

Example: protobuf_example project

    1. Install pb-rs binary to convert your proto file into a quick-protobuf compatible source code
cargo install pb-rs
pb-rs /path/to/your/protobuf/file.proto
# will generate a 
# /path/to/your/protobuf/file.rs
    2. Add a dependency to quick-protobuf
# Cargo.toml
[dependencies]
quick-protobuf = "0.8.0"
    3. Have fun
extern crate quick_protobuf;

mod foo_bar; // (see 1.)

use quick_protobuf::Reader;

// We will suppose here that Foo and Bar are two messages defined in the .proto file
// and converted into rust structs
//
// FooBar is the root message defined like this:
// message FooBar {
//     repeated Foo foos = 1;
//     repeated Bar bars = 2;
// }
// FooBar is a message generated from a proto file
// in particular it contains a `from_reader` function
use foo_bar::FooBar;
use quick_protobuf::{MessageRead, BytesReader};

fn main() {
    // bytes is a buffer on the data we want to deserialize
    // typically bytes is read from a `Read`:
    // r.read_to_end(&mut bytes).expect("cannot read bytes");
    let mut bytes: Vec<u8> = Vec::new();

    // we can build a bytes reader directly out of the bytes
    let mut reader = BytesReader::from_bytes(&bytes);

    // now using the generated module decoding is as easy as:
    let foobar = FooBar::from_reader(&mut reader, &bytes).expect("Cannot read FooBar");

    // if instead the buffer contains a length delimited stream of message we could use:
    // while !r.is_eof() {
    //     let foobar: FooBar = r.read_message(&bytes).expect(...);
    //     ...
    // }
    println!("Found {} foos and {} bars", foobar.foos.len(), foobar.bars.len());
}
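Writing works symmetrically through the generated MessageWrite implementation and quick_protobuf::Writer. Below is a minimal sketch, assuming pb-rs also generated the write side for FooBar (the default) and that Writer::new accepts a &mut Vec<u8> as in recent releases:

extern crate quick_protobuf;

mod foo_bar; // same generated module as above

use quick_protobuf::Writer;
use foo_bar::FooBar;

fn main() {
    // start from a default (empty) message; real code would fill in the fields
    let foobar = FooBar::default();

    // serialize into a plain Vec<u8>
    let mut out = Vec::new();
    {
        let mut writer = Writer::new(&mut out);
        // write_message length-prefixes the message, matching the
        // read_message loop shown in the comments above
        writer.write_message(&foobar).expect("Cannot write FooBar");
    }
    println!("Wrote {} bytes", out.len());
}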

Examples directory

You can find basic examples in the examples directory.

Message <-> struct

The best way to see all kinds of generated code is to look at the codegen_example data:

Proto definition

enum FooEnum {
    FIRST_VALUE = 1;
    SECOND_VALUE = 2;
}
    
message BarMessage {
    required int32 b_required_int32 = 1;
}

message FooMessage {
    optional int32 f_int32 = 1;
    optional int64 f_int64 = 2;
    optional uint32 f_uint32 = 3;
    optional uint64 f_uint64 = 4;
    optional sint32 f_sint32 = 5;
    optional sint64 f_sint64 = 6;
    optional bool f_bool = 7;
    optional FooEnum f_FooEnum = 8;
    optional fixed64 f_fixed64 = 9;
    optional sfixed64 f_sfixed64 = 10;
    optional fixed32 f_fixed32 = 11;
    optional sfixed32 f_sfixed32 = 12;
    optional double f_double = 13;
    optional float f_float = 14;
    optional bytes f_bytes = 15;
    optional string f_string = 16;
    optional FooMessage f_self_message = 17;
    optional BarMessage f_bar_message = 18;
    repeated int32 f_repeated_int32 = 19;
    repeated int32 f_repeated_packed_int32 = 20 [ packed = true ];
}

Generated structs

#[derive(Debug, PartialEq, Eq, Clone, Copy)]
pub enum FooEnum {
    FIRST_VALUE = 1,
    SECOND_VALUE = 2,
}

#[derive(Debug, Default, PartialEq, Clone)]
pub struct BarMessage {                                 // all fields are owned: no lifetime parameter
    pub b_required_int32: i32,
}

#[derive(Debug, Default, PartialEq, Clone)]
pub struct FooMessage<'a> {                             // has borrowed fields: lifetime parameter
    pub f_int32: Option<i32>,
    pub f_int64: Option<i64>,
    pub f_uint32: Option<u32>,
    pub f_uint64: Option<u64>,
    pub f_sint32: Option<i32>,
    pub f_sint64: Option<i64>,
    pub f_bool: Option<bool>,
    pub f_FooEnum: Option<FooEnum>,
    pub f_fixed64: Option<u64>,
    pub f_sfixed64: Option<i64>,
    pub f_fixed32: Option<u32>,
    pub f_sfixed32: Option<i32>,
    pub f_double: Option<f64>,
    pub f_float: Option<f32>,
    pub f_bytes: Option<Cow<'a, [u8]>>,                 // bytes  -> Cow<[u8]>
    pub f_string: Option<Cow<'a, str>>,                 // string -> Cow<str>
    pub f_self_message: Option<Box<FooMessage<'a>>>,    // reference cycle -> Boxed message
    pub f_bar_message: Option<BarMessage>,
    pub f_repeated_int32: Vec<i32>,                     // repeated: Vec
    pub f_repeated_packed_int32: Vec<i32>,              // repeated packed: Vec
}
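Since the generated struct derives Default, a message can be filled in with struct-update syntax. A minimal sketch using the struct above (field values are invented):

use std::borrow::Cow;

let msg = FooMessage {
    f_int32: Some(42),
    f_string: Some(Cow::Borrowed("hello")),       // borrows an existing &str
    f_bytes: Some(Cow::Owned(vec![0x01, 0x02])),  // owns its buffer
    f_repeated_int32: vec![1, 2, 3],
    ..FooMessage::default()
};
assert_eq!(msg.f_int32, Some(42));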

Leverage rust module system

Nested Messages

message A {
    message B {
        // ...
    }
}

As rust does not allow a struct and a module to share the same name, we use mod_Name for the nested messages.

pub struct A {
    //...
}

pub mod mod_A {
    pub struct B {
        // ...
    }
}

Package

package a.b;

Here we could have used the same name, but for consistency with nested messages, modules are prefixed with mod_ as well.

pub mod mod_a {
    pub mod mod_b {
        // ...
    }
}
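The naming rule can be seen in isolation with a hand-written sketch of what pb-rs would emit for package a.b; containing a message C (the field is invented):

pub mod mod_a {
    pub mod mod_b {
        #[derive(Debug, Default, PartialEq, Clone)]
        pub struct C {
            pub value: i32,
        }
    }
}

fn main() {
    // the Rust path mirrors the proto package a.b
    let c = mod_a::mod_b::C { value: 1 };
    println!("{:?}", c);
}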

Why not rust-protobuf

This library is an alternative to the widely used rust-protobuf.

Pros / Cons

  • Pros

    • Much faster, in particular when working with string, bytes and repeated packed fixed size fields (no extra allocation)
    • No need to install protoc on your machine
    • No trait objects: faster/simpler parser
    • Very simple generated modules (~10x smaller) so you can easily understand what is happening
  • Cons

    • Less popular
      • most rust-protobuf tests have been migrated here (see v2 and v3)
      • quick-protobuf is being used by many people now and is very reliable
      • some missing functionalities
    • Not a drop-in replacement of rust-protobuf
      • everything being explicit, you have to handle more things yourself (e.g. Option unwrapping, Cow management; see the sketch below)
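For instance, getting at an optional string field means unwrapping the Option and deciding what to do with the Cow yourself. A standalone sketch (the message type here is hand-written, not generated):

use std::borrow::Cow;

#[derive(Default)]
struct Example<'a> {
    name: Option<Cow<'a, str>>,
}

fn main() {
    let msg = Example { name: Some(Cow::Borrowed("alice")) };

    // Option unwrapping: pick a fallback explicitly
    println!("name = {}", msg.name.as_deref().unwrap_or("<unset>"));

    // Cow management: allocate only when owned data is really needed
    let owned: String = msg.name.map(Cow::into_owned).unwrap_or_default();
    println!("owned = {}", owned);
}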

Codegen

Have a look at the different generated modules for the same .proto file.

Benchmarks

See perftest, an adaptation of rust protobuf's perftest. Depending on your scenario each crate has its merit. quick-protobuf is particularly good at reading large bytes.

Contribution

Any help is welcome! (Pull requests of course, bug reports, missing functionality etc...)

Licence

MIT


quick-protobuf's Issues

Codegen: add `into_owned` for messages with Cow fields

Generating these will be much easier than having users write such code themselves:

fn into_owned(self) -> TypeName<'static> {
    TypeName {
        plain_cow: Cow::Owned(self.plain_cow.into_owned()),
        opt_cow: self.opt_cow.map(|c| Cow::Owned(c.into_owned())),
        copy_field: self.copy_field,
        message_field: self.message_field.into_owned(),
    }
}

Would it support service?

Will it support service one day? It does not seem to work for the following situation:

syntax = "proto3";
package grpc;

service HelloService {
  rpc SayHello (HelloRequest) returns (HelloResponse);
}

message HelloRequest {
  string greeting = 1;
}

message HelloResponse {
  string reply = 1;
}

No messages or enums were read; either there was no input or there were only unsupported structures

I'm getting the following error:

Error: Could not convert vector_tile.proto into vector_tile.rs
Caused by: No messages or enums were read; either there was no input or there were only unsupported structures

When trying to parse this not so complicated proto file: https://github.com/mapbox/vector-tile-spec/blob/master/2.1/vector_tile.proto

Is there something in that proto file that's not supported?

Codegen: `required` fields are not required

Testcases can be found at end of gist: https://gist.github.com/koivunej/6efeecef4f251685b0e032245c1dc7dc

It seems that the currently emitted code does not validate that required fields are set, either when deserializing or when serializing. I would assume required means "must be found" in the protobuf sense? With the current implementation, validating required fields becomes the user's responsibility.

The preferred solution would be to generate structs that ensure that required fields are in fact there. This means dropping the use of Option<T> for required fields, and also dropping the derived Default. It makes the (de)serialization code more complicated: all fields first have to be defined as local variables (Option<T> or T, depending on whether they implement Default) and then finally moved into the struct (see the sketch below).
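A standalone sketch of what that pattern could look like for the BarMessage example above (hand-written here, not what pb-rs currently generates):

#[derive(Debug)]
pub struct BarMessage {
    pub b_required_int32: i32,
}

// collect fields as Options while parsing, then refuse to build the
// struct if a required field was never seen
fn finish(b_required_int32: Option<i32>) -> Result<BarMessage, String> {
    let b_required_int32 = b_required_int32
        .ok_or_else(|| "missing required field b_required_int32".to_string())?;
    Ok(BarMessage { b_required_int32 })
}

fn main() {
    assert!(finish(None).is_err());
    assert_eq!(finish(Some(1)).unwrap().b_required_int32, 1);
}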

The next thing that comes to mind is: can the same tag appear multiple times while deserializing? I would expect that to be an error, but the current while loop will not reject it.

Need help: read messages of unknown size from stream

I might be missing something, and need some help.
The Reader type has 3 constructors, each working only with data of a predefined size.

impl Reader {
    /// Creates a new `Reader`
    pub fn from_reader<R: Read>(mut r: R, capacity: usize) -> Result<Reader> {
        let mut buf = Vec::with_capacity(capacity);
        unsafe {
            buf.set_len(capacity);
        }
        buf.shrink_to_fit();
        r.read_exact(&mut buf)?;
        Ok(Reader::from_bytes(buf))
    }

    /// Creates a new `Reader` out of a file path
    pub fn from_file<P: AsRef<Path>>(src: P) -> Result<Reader> {
        let len = src.as_ref().metadata().unwrap().len() as usize;
        let f = File::open(src)?;
        Reader::from_reader(f, len)
    }

    /// Creates a new reader consuming the bytes
    pub fn from_bytes(bytes: Vec<u8>) -> Reader {
        let reader = BytesReader {
            start: 0,
            end: bytes.len(),
        };
        Reader {
            buffer: bytes,
            inner: reader,
        }
    }
}

Please advise: for a situation where there is a stream and I have to read different messages and raw data from it. ( http://wiki.openstreetmap.org/wiki/PBF_Format#File_format )

How to use quick-protobuf correctly?

- read message 1 of unknown size
- read raw data
- read message 2 of unknown size

Possibility of a MessageRead trait?

This isn't a big problem, but I was wondering if there's any technical reason why from_reader is generated as an independent method rather than on some MessageRead trait.

If it were a trait, it would be possible to simplify reading code - and not have to specify TypeName::from_reader as an argument to BytesReader.read_message. It would also allow consumers to write generic utility methods useful for all of their messages.

Is there a specific advantage to the way quick-protobuf does this now that I'm not seeing?
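For context, the trait has since been added (the example at the top of this page imports MessageRead), which makes exactly this kind of generic helper possible. A minimal sketch, assuming Result is re-exported at the quick_protobuf crate root:

use quick_protobuf::{BytesReader, MessageRead, Result};

// generic helper: parse any generated message type from a byte slice
pub fn parse<'a, M: MessageRead<'a>>(bytes: &'a [u8]) -> Result<M> {
    let mut reader = BytesReader::from_bytes(bytes);
    M::from_reader(&mut reader, bytes)
}

// usage: let foobar: FooBar = parse(&bytes)?;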

Not supported

List of items not supported for the moment (that I found, it may not be exhaustive)

  • imports (#24)
  • package (#25)
  • option (ignored when parsed)
  • nested types (#26, #30, #32)
  • map (#28)
  • google common messages (empty etc ...)
  • groups ignored (deprecated by google so I guess there is very little incentive to support it)
  • reserved fields (#22)
  • extensions
  • nested extensions
  • oneofs (#33)
  • service (ignored when parsed: 6cea56b)
  • deprecated (88007b5)
  • aliases
  • any message
  • json conversion
  • packed should work only for primitive numeric fields (#111 )

Add more options for pb-rs

Example:

  • generate read-only, read-write or write-only
  • generate Cow or owned version
  • nested types and imports: use mod_Message, or eventually a rust-protobuf-like solution replacing all hierarchical paths a.b.c with a_b_c
  • be verbose
  • work on a directory
  • change destination filename

pb-rs: Does it support reserved?

Hi,
I tried to use the pb-rs tool to generate rust code for a message with a reserved field, but ran into the following behaviour:

version

pb-rs 0.2.0

input1

cat test.proto                                                                                                                                              
syntax = "proto3";
package test;

import "gogoproto/gogo.proto";

option (gogoproto.marshaler_all) = true;
option (gogoproto.sizer_all) = true;
option (gogoproto.unmarshaler_all) = true;

message Sample {
   reserved 4;
   uint64 age =1;
   bytes name =2;
}

input2

cat test2.proto                                                                                                                                              
syntax = "proto3";
package test;

import "gogoproto/gogo.proto";

option (gogoproto.marshaler_all) = true;
option (gogoproto.sizer_all) = true;
option (gogoproto.unmarshaler_all) = true;

message Sample {
   uint64 age =1;
   bytes name =2;
}

input1 outputs an empty file while input2 outputs what I expected. It seems reserved could not be recognized?

Is there anything wrong?

Also, I ran into the following errors in my tests:

  1. Param --include: it seems we cannot include more than one directory (from -I, --include <INCLUDE_PATH>: path to search for imported protobufs).
  2. Param --output does not seem to work (-o, --output <OUTPUT>: generated file name, defaults to INPUT with the 'rs' extension).

Wish for your help, Thank you.

An enum field used as a default won't be checked

to_be_import.proto

enum Enum1 {
  a = 0;
  b = 1;
  c = 2;
}
import "to_be_import.proto"
message Msg1 {
  optional Enum1 x = 9 [default = a]; // a field exists
  optional Enum1 y = 9 [default = does_not_exist]; // a field doesn't exist
}

I also checked google protoc. It doesn't compile when this happens.

Codegen: proto3 fields shouldn't have Option<> wrappers

As you might know, proto3 drops the concepts of default values and even of a field being present or absent at all. You can see the C++ API no longer has a has_foo accessor. The idea is that stuff can be represented with a simple struct.

I'd love for quick-protobuf to fully support this. In tests/rust_protobuf/v3/test_basic_pb.proto, this message:

message Test1 {
    int32 a = 1;
}

currently maps to this generated struct:

#[derive(Debug, Default, PartialEq, Clone)]
pub struct Test1 {
    pub a: Option<i32>,
}

the best proto3 representation leaves out that Option:

#[derive(Debug, Default, PartialEq, Clone)]
pub struct Test1 {
    pub a: i32,
}

with MessageRead and MessageWrite implementations simplified to match. A few reasons I prefer this representation:

  • it better matches intended semantics / improves interoperability. Any time you're using the equivalent of has_*, you're assigning a meaning that can't be represented in other languages, so you're losing portability.
  • it's simpler to deal with.
  • it's more memory-compact. IIUC, even with Rust's fancy field reordering, the whole Option has to be together, so I think it basically doubles the size of the in-memory representation.

fwiw, rust-protobuf drops the Option<> as I've suggested (but doesn't have Cow support, which is what led me to quick-rs).

#[derive(PartialEq,Clone,Default)]
pub struct Test1 {
    // message fields
    pub a: i32,
    // special fields
    unknown_fields: ::protobuf::UnknownFields,
    cached_size: ::protobuf::CachedSize,
}

(I also prefer having unknown_fields, but that's a separate issue.)
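On the memory-compactness point above, the doubling is easy to confirm with a standalone check (not specific to quick-protobuf):

fn main() {
    // the Option discriminant cannot be squeezed into an i32,
    // so the optional field takes roughly twice the space
    println!("{}", std::mem::size_of::<i32>());         // 4
    println!("{}", std::mem::size_of::<Option<i32>>()); // 8
}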

Documentation

  • Disallow missing docs ...
  • better reader
  • examples all over the place

Add library crate version of pb-rs

Right now this is one of the main issues preventing me from using quick-protobuf (along with #68 and a couple of things in #12). It would be very useful to abstract pb-rs into its own library usable in build.rs scripts.

You can sort of do this now by installing pb-rs and executing it as a command-line tool, but it's clunky and more failure-prone from a user perspective.

Upgrading to v0.5.0 gives an error

error[E0277]: the trait bound `example::dto::Person<'_>: quick_protobuf::MessageWrite` is not satisfied
  --> src/main.rs:99:20
   |
99 |             writer.write_message(&person).expect("Cannot write message!");
   |                    ^^^^^^^^^^^^^ the trait `quick_protobuf::MessageWrite` is not implemented for `example::dto::Person<'_>`

version 0.4.0 is ok!

unnecessary boxing for enums

Consider:

enum City {
    LONDON = 0;
    PARIS = 1;
}

message Address {
    City city = 1;
}

message Person {
    Address address = 2;
}

Generates:

pub struct Person {
    pub address: Option<Box<Address>>,
}

Expected code:

pub struct Person {
    pub address: Option<Address>,
}

Also consider:

message Address {
    enum City {
        LONDON = 0;
        PARIS = 1;
    }
    required City city = 1;
}

message Person {
    oneof specification {
        Address address = 1;
    }
}

Generates:

#[derive(Debug, Default, PartialEq, Clone)]
pub struct Person {
    pub specification: mod_Person::OneOfspecification,
}

impl Person {
    pub fn from_reader(r: &mut BytesReader, bytes: &[u8]) -> Result<Self> {
        let mut msg = Self::default();
        while !r.is_eof() {
            match r.next_tag(bytes) {
                Ok(10) => msg.specification = mod_Person::OneOfspecification::address(Box::new(r.read_message(bytes, Address::from_reader)?)),
                Ok(t) => { r.read_unknown(bytes, t)?; }
                Err(e) => return Err(e),
            }
        }
        Ok(msg)
    }
}

Which doesn't compile: the specification field of Person doesn't contain a Box, but the line starting with Ok(10) attempts to box something.

Tests

Try to look at rust-protobuf and see which tests can be migrated here

Length prefix is 8 bits, but shouldn't it be 32?

Not a big deal; I have yet to hit an issue because the device I am working with has not yet needed more than 8 bits for a message length. However, the device uses a 32-bit length for sending and receiving, and I can only assume that is how google's protobuf works.

This is an example write:
[63, 35, 35, 0, 29, 0, 0, 0, 12, 18, 8, 69, 116, 104, 101, 114, 101, 117, 109, 24, 1]

Notice, [0, 29] is the code and [0, 0, 0, 12] is the send... so I am adding [0, 0, 0] prior to the 12.

I'll try to look into this more to ensure that google protobuf requires this.

Perf

A probably easy perf boost could be achieved for packed fields with wire_type 1 or 5 (fixed size): we could return, for instance, a Cow<[f64]> instead of a Vec<f64>.

Codegen: unknown fields should be retained

If I parse a message with unknown fields and then serialize it, I'd like them to be written back out. This is a big safety improvement; it means that if there are still older clients accessing a shared database, they don't break things they don't understand.

This matches the behavior of the official C++ implementation for both proto2 and proto3. (proto3 originally dropped support for this, but it was added back after a huge backlash.)

problem finding types with the "_pb" suffix

I really wanted to use this library because of the speed and the succinctness of the generated code, but I had a couple of problems generating the protocol buffers in this repo.

  1. There's code in there to add the "_pb" suffix to any Rust keywords in case they come up, but the code that then finds those names seems to be flawed. The StarCraft II protos have an enum named Result, which conflicts with Rust's Result. This is fine because it should just add the "_pb" suffix, but when I generate this simplified example:
syntax = "proto2";

enum Result {
    Worked = 1;
}

message Msg {
    optional Result result = 1;
}

I get the following error:

     Running `target/debug/pb-rs -d ../examples/ ../examples/result.proto`
Found 1 messages, and 1 enums
Writing enum Result_pb
Writing message Msg
Error: Could not convert ../examples/result.proto into ../examples/result.rs
Caused by: Could not find enum Result in [Enumerator { name: "Result_pb", fields: [("Worked", 1)], imported: false, package: "", module: "result" }]

I'll attach a small pull request to fix the bug. I'm pretty new to Rust, though, so it's probably not an ideal fix, but it'll at least point you guys in the right direction.

Use genio::{Read, Write}

I suggest using genio for reading and writing. That allows abstracting over bytes and io. It should also help with no_std.

Simplify codegen for MessageWrite

This is not a big deal but some lines tend to be very long (in particular when working with packed fields).

As a start, we could rely on some new sizeofs.

Lifetime design problem for Read object.

Hi,
In my case I have a stream and need to read protobuf messages along with raw data.

|message|data|message|data|...|

Once I create a Read object for my stream and call
Reader::from_reader<R: Read>(mut r: R, capacity: usize)
it moves ownership into the protobuf Reader object, so the stream's Read object can't be used anymore.

The protobuf Reader can't release ownership in any way, and drops the stream Reader when it is done.

Could you please change the protobuf Reader to borrow the reader?

fn from_reader<R: Read>(r: &mut R, capacity: usize)
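Note that std::io::Read is also implemented for &mut R, so a reader can already be lent by mutable reference instead of moved. A standalone sketch of that pattern, using a Cursor as the stream (independent of quick-protobuf's API):

use std::io::{Cursor, Read};

fn main() {
    let mut stream = Cursor::new(vec![0u8; 16]);

    // lend the stream: &mut Cursor implements Read, so ownership stays here
    let mut first = vec![0u8; 8];
    (&mut stream).read_exact(&mut first).unwrap();

    // the stream is still usable afterwards
    let mut rest = Vec::new();
    stream.read_to_end(&mut rest).unwrap();
    assert_eq!(rest.len(), 8);
}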

no_std support

It has already been mentioned on reddit, but there wasn't an issue yet: no_std would naturally be great for embedded and driver development!

Custom Cow type support

I've been successfully using quick-protobuf with tokio. However there are a few problems with tokio-io codecs, which only allow the use of 'static data. This means I need to convert all parsed structs into owned versions, either with much manual labour or with my new derive-into-owned.

The optimal way to parse protobuf messages with tokio would be to allow reading straight from BytesMut and to store any values as some kind of wrapper over BytesMut (or even a downgraded read-only version, it does not really matter), or even as BytesMut or Vec on-stack handles (both are something like 3 usizes, i.e. 12 bytes on 32-bit or more on 64-bit architectures).

This would however require side-stepping the plain std::borrow::Cow a lot. Since there might be even more interesting user specific needs (for example, using some mmap resolved slices) in addition to tokio/bytes usage, there should probably be an option to use user provided Cow-type.

Quickly looking at the generated code, the "custom Cow type" would have to be something like:

  • fn len(&self) -> usize for get_size
  • AsRef<[u8]> for write_message
  • std::default::Default
  • probably have some fn read<B: BytesReader>(len: usize, reader: &mut B) for creating

These could probably all be just duck-typed by specifying the "CowType" as a command line option but BytesReader would have to be enhanced to split_to or at least allow access to the underlying container. It might already have a get_ref or something like that.

Filing this issue here to discuss this more. Would changes supporting this kind of option be welcome?
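A rough sketch of what the bound on such a user-provided type might look like, based on the list above (the trait name and method set are hypothetical, not an existing quick-protobuf API):

use std::borrow::Cow;

// hypothetical bound for a pluggable byte-container type
pub trait PbBytes<'a>: AsRef<[u8]> + Default + Sized {
    // needed by get_size
    fn len(&self) -> usize;
    // build the value from a length-delimited slice of the input buffer
    fn from_slice(slice: &'a [u8]) -> Self;
}

// std::borrow::Cow already fits the shape of such a bound
impl<'a> PbBytes<'a> for Cow<'a, [u8]> {
    fn len(&self) -> usize {
        AsRef::<[u8]>::as_ref(self).len()
    }
    fn from_slice(slice: &'a [u8]) -> Self {
        Cow::Borrowed(slice)
    }
}

fn main() {
    let data = vec![1u8, 2, 3];
    let cow: Cow<[u8]> = PbBytes::from_slice(data.as_slice());
    assert_eq!(PbBytes::len(&cow), 3);
}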

Codegen: do not use super::*

Since #59 has been merged, codegen now imports everything via super::*. It also writes into the parent module many times using a trick (checking for a special HEADER line).

There may be a way to avoid all of this by doing a smarter analysis up front and writing just the necessary data once.

Use benchcmp to compare with rust-protobuf

Not very urgent, but it would be better to leverage benchcmp to compare quick-protobuf and rust-protobuf (using cargo features to run with either crate).

It would result in much cleaner code.

Add examples

Like an openstreetmap parser? (@astro, I'm curious about your opinion on this if you don't mind)

generated oneof structs must be unique

I have some proto definitions which use oneof but re-use the same name within different messages. So the generated structs need to be namespaced in some way, ideally with the name of the enclosing message.

Problem with Multiple Proto Files

Looking good so far, but there's a hurdle to be jumped: when multiple proto files are defined, the lookup is ambiguous. Consider package foo.one (by convention the filename would be one.proto, but not necessarily). Say there are also packages foo.two and foo.three, and foo.one refers to messages in them. They are in a directory test; the full absolute crate path of anything in package foo.one is ::test::foo::mod_foo::mod_one. For references in one.rs the strategy is to bring in mod_foo from two and mod_foo from three. But mod_foo is a nested module of ::test::foo::two and a different nested module of ::test::foo::three. Hence the ambiguity!

One solution is always to emit full paths, like ::test::two::mod_foo::mod_two::Two, and that would work. But the paths are starting to get ugly for human programmers! So another solution is not to create those nested mod_NAME modules and instead rely on the usual rules. That is, the .rs files are created in a directory foo, and there's a generated mod.rs that refers to them. The full path would now be simply ::foo::two::Two. (Without the mod_ but with usual escaping rules for keywords) That's simple enough to be used directly in generated code.

I'm happy to work on a PR but this is a disruptive change, so I thought that some discussion is needed on the way forward.

Please add a README

Please write some usage instructions. I would like to try your code with OpenStreetMap dumps.

codegen issue for repeated strings

They are indeed vectors of Cows, but when reading them we don't do the .map(Cow::Borrowed) necessary to get the returned &str from read_string into the correct type.

Why add msg size first?

self.write_varint(len as u64)?;

but in the pb-rs generated code the message size is not read first, so the message cannot be read back.

impl<'a> MessageRead<'a> for Job<'a> {
    fn from_reader(r: &mut BytesReader, bytes: &'a [u8]) -> Result<Self> {
        let mut msg = Self::default();
// todo: need to read the varint u64 length first
// or remove the line at https://github.com/tafia/quick-protobuf/blob/90232e79978c512cf87113d1ee247196cb34f746/src/writer.rs#L212
        while !r.is_eof() {
            match r.next_tag(bytes) {
                Ok(8) => msg.job_type = r.read_int32(bytes)?,
                Ok(16) => msg.device_id = r.read_int32(bytes)?,
                Ok(26) => {
                    let (key, value) = r.read_map(bytes, |r, bytes| r.read_string(bytes).map(Cow::Borrowed), |r, bytes| r.read_string(bytes).map(Cow::Borrowed))?;
                    msg.values.insert(key, value);
                }
                Ok(t) => { r.read_unknown(bytes, t)?; }
                Err(e) => return Err(e),
            }
        }
        Ok(msg)
    }
}
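For context, the length prefix written by write_message is meant to be consumed by BytesReader::read_message rather than by from_reader itself. A minimal sketch of reading such a length-prefixed message generically (assuming Result is re-exported at the crate root):

use quick_protobuf::{BytesReader, MessageRead, Result};

// read one length-prefixed message, as produced by Writer::write_message
pub fn read_prefixed<'a, M: MessageRead<'a>>(bytes: &'a [u8]) -> Result<M> {
    let mut r = BytesReader::from_bytes(bytes);
    // read_message consumes the leading varint length before delegating
    // to M::from_reader on the delimited slice
    r.read_message(bytes)
}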

Clearer error messages for pb-rs

At the moment, detecting where errors occur in .proto files is not easy. I'm happy to work towards a PR implementing basic error handling in the parser, particularly line info.
