jhelovuo / rustdds
Rust implementation of Data Distribution Service
License: Apache License 2.0
You can copy a shapes demo implementation almost directly from here: https://github.com/jhelovuo/dds-rtps/tree/master/RustDDS
The ROS2 turtle example should be completely rewritten with more or less the same functionality. The shapes demo could serve as a starting point. There should be no need for several modules, multiple threads, or internal communication channels. The current implementation is unnecessarily convoluted because of earlier design mistakes (now fixed) in RustDDS and other obscure reasons.
See the failing examples by running cargo test --doc.
Fix the example code to work with the current API.
The PL_CDR format (de)serializer could be simplified. Now it is a large mass of copy-paste code.
These are from DDS Spec 2.2.4.1 Communication Status (table):
- Topic
- Subscriber
- DataReader
- DataWriter
The implementation is poll()-based anyway.
This API call should cause a liveliness message to be sent over RTPS.
See RTPS spec
DDS Spec 2.2.3.11 LIVELINESS
There already exists a mechanism for asserting liveliness from DataWriter. Need to check that this is correctly implemented, e.g. that it actually sends data updates and not only a heartbeat.
See #4 and
DDS spec Section 2.2.2.2.3 DomainParticipantListener Interface.
Love the project. I hope to contribute.
I ran the shapes demo with cargo run --example shapes_demo
and nothing happened. Is it functional? I wanted to use it to start understanding the crate, in case there were opportunities where I could help. Thanks in advance.
Apps may sometimes be required to work collaboratively with respect to a common frame of time, which would be useful for making timestamps on individual machines meaningful.
To my understanding, this mechanism might be like an improved NTP protocol, maintaining an inner clock in the middleware level rather than modifying the machine's OS clock. I personally think it would be more advantageous than using traditional NTP because traditional NTP requires direct socket level programming.
Would you agree with that and plan to implement such a time synchronization mechanism on RustDDS?
Either replace with more graceful error-handling or write comments to justify why panic cannot occur.
The same RustDDS application, which runs well on Linux, oddly panicked on Win10 at the very beginning, saying
...panicked at UDPSender construction fail: Os { code: 10049, kind: AddrNotAvailable, message [The requested address is not valid in its context error]}
generated at
rustdds-0.4.13\src\dds\dp_event_loop.rs :: 177:8
rustdds-0.4.13\src\network\udp_sender.rs :: 25:5
It appears that the IP address 0.0.0.0 was not working well on Windows.
In my tests, one computer (A) using Windows panicked, and the other one (B) using Windows ran without panics.
However, if I run multiple domain_participants on computer B, they won't communicate with each other.
In src/dds/pubsub.rs,
from commit 3117e31b1f1ecbc24383357b02b7372c26e1f629 the size of the mio channels is set to 4.
// Data samples from DataWriter to HistoryCache
let (dwcc_upload, hccc_download) = mio_channel::sync_channel::<WriterCommand>(4);
// Status reports back from Writer to DataWriter
let (status_sender, status_receiver) = mio_channel::sync_channel(4);
It was previously set to 100. Is there a reason for this to be limited to something so small? I am trying to create more than 4 subscribers and this is causing an error:
2021-04-12 20:01:40,834 WARN [rustdds::dds::with_key::datawriter] Failed to write new data. Full
2021-04-12 20:01:40,834 ERROR [rustdds::discovery::discovery] Unable to write new topic info.
Large parts of Topic discovery (independent of remote Reader/Writer discovery) are missing.
E.g. local Topic creation is not broadcast at all, unless we have a Reader or Writer on it.
There are a bunch of things that seem completely useless:
These should be removed.
Are you targeting a minimum compiler version?
Looks like the current MSRV is 1.48.0.
The panic!() macro should be replaced by something catchable, such as returning a Result, or documenting in comments why the panic is unreachable (and replacing it with the unreachable!() macro).
The maximum object size is currently limited by the UDP payload size minus RTPS headers.
See RTPS spec
Line 587 in 37fdd1c
Problem is exactly the same as described in this issue: libp2p/rust-libp2p#1447
The problem is the dependency crate get_if_addrs, which uses #[link(name = "Iphlpapi")] but not #[link(name = "iphlpapi")].
One of the ways to mitigate this problem would be to replace get_if_addrs with some other dependency, such as if-addrs, which does not have this problem.
As in #4.
Spec Section 2.2.2.4.4 DataWriterListener Interface.
There is no need to notify on data available, but on various status changes.
Are all of the quality of service configuration settings fully supported?
Unit tests at the end of code modules are out of sync w.r.t. current implementation, so tests do not even compile. They should be updated to be compatible with current code.
The Bytes library can be used to implement zero-copy data manipulation, and it should be used where appropriate. It should be possible to use Bytes to manage received data from the UDP socket all the way to the data deserialization stage.
There are a number of locations where the wrong 'self' convention is used.
There's a reference here: https://rust-lang.github.io/rust-clippy/master/index.html#wrong_self_convention
For example, the Key trait has the method fn into_hash_key(&self) -> KeyHash;.
In order to follow convention, this should either move and consume self (i.e. fn into_hash_key(self) -> KeyHash) or use a different name, such as fn as_hash_key(&self) -> KeyHash or simply fn hash_key(&self) -> KeyHash.
Similarly for KeyHash::into_cdr_bytes, Timestamp::to_ticks, and a few others.
See #4 and
DDS Spec Section "2.2.2.5.7 DataReaderListener Interface".
DDS Spec 2.2.2.4.1.12 wait_for_acknowledgments
Line 285 in 37fdd1c
and
DDS Spec 2.2.2.4.2.15 wait_for_acknowledgments
RustDDS/src/dds/with_key/datawriter.rs
Line 333 in 37fdd1c
Serialisation in this crate uses a mixture of Speedy and Serde. Ideally, you'd want to converge on a single solution.
Another goal is to decouple the types from their "on-the-wire" representations. I want to help resolve this (if I can!).
Since this is a big task, requiring a lot of iteration, I would suggest creating a new branch for experimentation. Have you got any thoughts about how to approach this @jhelovuo?
#![allow(dead_code)]
#![allow(non_camel_case_types)]
#![allow(non_snake_case)]
#![allow(non_upper_case_globals)]
These warning-suppressing attributes in lib.rs should be moved to a smaller scope.
Especially dead code should be handled wherever it occurs. There are good reasons to have dead code around, for example as placeholders for DDS features that are not yet implemented, but that should be marked wherever such declarations are made, not globally.
I think a similar refactor is possible for Reliability as for Locator. It doesn't make sense to have both Reliability and ReliabilityKind enums; this is just a hangover from the spec being written for languages that don't have discriminated unions. It's not totally obvious to me how to refactor the 'serializer' to do this, though.
Are there any plans to make async versions of the read, take, and write methods of DataReader and DataWriter?
Design and implement API to notify the application about topic events.
Specified in "2.2.2.3.5 TopicListener Interface"
The mechanics of the interface should be implemented using mio (0.6.x) Evented interface and e.g. mio channels to implement asynchronous reads from DDS.
DDS Spec 2.2.3.13 PARTITION
RustDDS/src/dds/message_receiver.rs
Line 74 in 474c55a
Just hoping for some clarification: is there a reason the default receiver lists are populated with an 'invalid' address, as opposed to an empty Vec? Is this part of the spec?
The BuiltInDataSerializer is incredibly verbose. It's also quite an unusual construction.
What is this object doing exactly? You've opted for an imperative style; is there a reason why the usual declarative serde approach doesn't work here? It looks like most of the fields are None, most of the time. Maybe the variants should be different objects?
The implementation of SequenceNumberSet in this crate allows for a variable offset range, defined by a u32. The specification calls for a maximum offset range of 256 (256 is significantly smaller than u32::MAX!).
Can you talk me through this implementation?
pub type SequenceNumberSet = NumberSet<SequenceNumber>;
pub type FragmentNumberSet = NumberSet<FragmentNumber>;
// ---------------------------------------------------------------
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct NumberSet<N>
where N: Clone + Debug + Hash + PartialEq + Eq + NumOps + From<i64>
{
bitmap_base: N,
num_bits: u32,
bitmap: Vec<u32>, // .len() == (numBits+31)/32
// bitmap bits are numbered from MSB to LSB. Bit 0 (MSB of bitmap[0])
// represents SequenceNumber bitmap_base. Bit 31 (LSB of bitmap[0])
// represents SequenceNumber (bitmap_base + 31).
// When num_bits == 0 , bitmap.len() == 0
}
Since the maximum offset range is 256, it seems to me that you could encode this in the type system with something like
pub struct SequenceNumberSet {
base: NonZeroU64,
offsets: BTreeSet<u8>,
}
Here the offset range can never be greater than 256, since u8::MAX == 255.
Would that work?
(Yes, I know a Vec is probably faster than a BTreeSet here for small collections due to cache misses.)
src/dds/participant.rs
Sorry, this is more a question than anything else. I wanted to understand the rationale behind using such a low-level crate as mio, rather than the easier-to-consume, higher-level async APIs provided by tokio/async-std. Is the intention to run this crate in a no-std environment? If so, does it make sense to have a higher-level wrapper around this crate, similar to the relationship between mio and tokio, or hyper and reqwest?
The ROS2 turtle example is pretty intimidating. I feel this could be dramatically simplified with async.
The Key trait needs to know if the CDR-serialized representation of the key always fits into 16 bytes (128 bits) or not. The RTPS specification says this is used to select how the key hash is encoded over the network. Types fitting into 16 bytes are sent as-is. Larger ones are MD5-hashed. Note that this does not depend on individual key instances; the decision is made at the data type level.
Currently, there is a placeholder implementation (method may_exceed_128_bits()) that is hardwired to assume the key always fits into 16 bytes, which is obviously wrong.
Implementation alternatives:
- Change may_exceed_128_bits() from a provided to a required method. This places the burden of knowing this on the user.
- Analyze the serialized size of Key and then decide if it always fits into 16 bytes or not. The size rules can be read from the OMG CDR encoding spec. Note: This must still be overridable by the user (application), because someone may want to use another encoding in place of CDR. In such a case the decision can be left to the user.

I am messing around with the Shapes demo. I am able to publish a square from the eProsima GUI, and the shapes demo that you wrote is able to receive it. However, if I have two shapes demo processes running in the same domain to receive squares, only one is able to receive. The other process, which is unable to receive, prints the following error. Does the current RustDDS implementation support this topology?
ERROR rustdds::dds::participant] Cannot join multicast, possibly another instance running on this machine.
What it says on the tin: cargo test fails to compile. I suspect the tests are no longer aligned with the code? Did they ever work?
Since I may want to use this in my project, I am reviewing the code and trying to make it more stable.
I found that the code base in the master branch cannot pass the test cases.
Moreover, it would be great to add some guidelines about how to contribute (which toolchain version? the roadmap? etc.)
failures:
---- dds::reader::tests::rtpsreader_handle_gap stdout ----
thread 'dds::reader::tests::rtpsreader_handle_gap' panicked at 'called `Option::unwrap()` on a `None` value', src/dds/reader.rs:303:53
---- dds::reader::tests::rtpsreader_handle_data stdout ----
thread 'dds::reader::tests::rtpsreader_handle_data' panicked at 'assertion failed: rec.try_recv().is_ok()', src/dds/reader.rs:1083:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
---- dds::reader::tests::rtpsreader_notification stdout ----
thread 'dds::reader::tests::rtpsreader_notification' panicked at 'assertion failed: rec.try_recv().is_ok()', src/dds/reader.rs:1024:5
---- structure::dds_cache::tests::create_dds_cache stdout ----
thread 'structure::dds_cache::tests::create_dds_cache' panicked at 'assertion failed: `(left == right)`
left: `2`,
right: `3`', src/structure/dds_cache.rs:355:5
---- structure::inline_qos::tests::inline_qos_key_hash stdout ----
thread 'structure::inline_qos::tests::inline_qos_key_hash' panicked at 'called `Result::unwrap()` on an `Err` value: Eof', src/structure/inline_qos.rs:143:51
failures:
dds::reader::tests::rtpsreader_handle_data
dds::reader::tests::rtpsreader_handle_gap
dds::reader::tests::rtpsreader_notification
structure::dds_cache::tests::create_dds_cache
structure::inline_qos::tests::inline_qos_key_hash
I need to implement a message on a keyed topic where multiple (nested) data fields are keys.
My struct has a key field of another self-defined type that consists of two primitive-type numbers. I have implemented the Keyed trait for both structs, but the problem occurs with implementing the Keyed trait for multiple fields.
Here is an example:
#[derive(Serialize, Deserialize)]
pub struct MyMeasurment {
pub my_measurment_id: MyIdType, // key
pub my_timestamp: MyTimeType,
pub my_measurment_value: MyMeasurmentType,
}
impl Keyed for MyMeasurment {
type K = MyIdType;
fn get_key(&self) -> Self::K {
self.my_measurment_id
}
}
#[derive(Serialize, Deserialize, Clone, Copy)]
pub struct MyIdType {
pub sender_id: i32,
pub instance_id: i32,
}
impl Keyed for MyIdType {
type K = MyIdType;
fn get_key(&self) -> Self::K {
MyIdType {
sender_id: self.sender_id,
instance_id: self.instance_id,
}
}
}
Now, when I go to
publisher
.create_datawriter::<MyMeasurment, CDRSerializerAdapter<MyMeasurment>>(
self.topic.clone(),
None,
)
.unwrap();
I get
the trait bound `MyIdType: rustdds::dds::traits::Key` is not satisfied
the trait `rustdds::dds::traits::Key` is not implemented for `MyIdType` (rustc E0277)
The same happens if the Keyed trait implementation is done through self.clone():
impl Keyed for MyIdType {
type K = MyIdType;
fn get_key(&self) -> Self::K {
self.clone()
}
}
This probably happens because of the recursive reference to MyIdType in the Keyed implementation.
When the final key returned is a primitive type, then everything works:
impl Keyed for MyIdType {
type K = i32;
fn get_key(&self) -> Self::K {
self.instance_id
}
}
How should one approach implementing multiple fields of a struct as keys?
In the shapes example there's also only one key, of type String.
May I ask a question please? Suppose two machines (say M1, M2) both have multiple NICs, and the two have already established connections in three ways concurrently, as
   |--(network A)--|
 M1|--(network B)--|M2
   |--(network C)--|
Is it needed/possible to manually designate just one of them in RustDDS? What would happen when working without configuration?
Thank you.
These are suspend_publications and resume_publications in
Line 261 in 37fdd1c