
zenoh-plugin-ros2dds's Introduction


Eclipse Zenoh

The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.

Zenoh (pronounced /zeno/) unifies data in motion, data at rest and computations. It carefully blends traditional pub/sub with geo-distributed storage, queries and computations, while retaining a level of time and space efficiency that is well beyond any of the mainstream stacks.

Check the website zenoh.io and the roadmap for more detailed information.


A Zenoh bridge for ROS 2 over DDS

ROS (the Robot Operating System) is a set of software libraries and tools for building robotic applications. In its version 2, ROS 2 relies mostly on O.M.G. DDS as the middleware for communications. This plugin bridges all ROS 2 communications that use DDS over Zenoh.

While a Zenoh bridge for DDS already exists and has helped many robotic use cases overcome wireless connectivity, bandwidth and integration issues, using a bridge dedicated to ROS 2 brings the following advantages:

  • Better integration of the ROS graph (all ROS topics/services/actions can be seen across bridges)
  • Better support for ROS tooling (ros2, rviz2...)
  • Configuration of a ROS namespace on the bridge, instead of on each ROS Node
  • Easier integration with Zenoh-native applications (services and actions are mapped to Zenoh Queryables)
  • More compact exchanges of discovery information between the bridges

Plugin or bridge?

This software is built in two flavors to choose from:

  • zenoh-plugin-ros2dds: a Zenoh plugin - a dynamic library that can be loaded by a Zenoh router
  • zenoh-bridge-ros2dds: a standalone executable

The features and configurations described in this document apply to both, so the words "plugin" and "bridge" are interchangeable in the rest of this document.

How to install it

To install the latest release of either the plugin for the Zenoh router or the zenoh-bridge-ros2dds standalone executable, proceed as follows:

Manual installation (all platforms)

All release packages can be downloaded from https://download.eclipse.org/zenoh/zenoh-plugin-ros2dds/

Each subdirectory is named after a Rust target. See which platform each target corresponds to at https://doc.rust-lang.org/stable/rustc/platform-support.html

Choose your platform and download:

  • the zenoh-plugin-ros2dds-<version>-<platform>.zip file for the plugin.
    Then unzip it in the same directory as zenohd, or in any directory where the router can find the plugin library (e.g. /usr/lib)
  • the zenoh-bridge-ros2dds-<version>-<platform>.zip file for the standalone executable.
    Then unzip it wherever you want and run the extracted zenoh-bridge-ros2dds binary.
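
For example, for the standalone executable on Linux x86_64 (file and directory names are illustrative; -h is the bridge's help option mentioned later in this document):

unzip zenoh-bridge-ros2dds-<version>-x86_64-unknown-linux-gnu.zip -d zenoh-bridge-ros2dds
cd zenoh-bridge-ros2dds
./zenoh-bridge-ros2dds -h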

Linux Debian

Add Eclipse Zenoh private repository to the sources list:

echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" | sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt update

Then either:

  • install the plugin with: sudo apt install zenoh-plugin-ros2dds.
  • install the standalone executable with: sudo apt install zenoh-bridge-ros2dds.

Docker images

The zenoh-bridge-ros2dds standalone executable is also available as a Docker image for both amd64 and arm64. To get it, do:

  • docker pull eclipse/zenoh-bridge-ros2dds:latest for the latest release
  • docker pull eclipse/zenoh-bridge-ros2dds:nightly for the main branch version (nightly build)
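
The image can then be run with host networking so that the bridge can reach the local DDS traffic (a minimal sketch; the --init and --net host options mirror the compose examples quoted later in this page, and any bridge command line arguments can be appended after the image name):

docker run --init --net host eclipse/zenoh-bridge-ros2dds:latest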

Nightly builds

The "Release" action builds packages for most most of OSes. You can download those from the "Artifacts" section in each build.
Just download the package for your OS and unzip it. You'll get 3 zips: 1 for the plugin, 1 for the plugin as debian package and 1 for the bridge. Unzip the zenoh-bridge-ros2dds-<platform>.zip file, and you can run ./zenoh-bridge-ros2dds

How to build it

⚠️ WARNING ⚠️ : Zenoh and its ecosystem are under active development. When you build from git, make sure you also build from git any other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.). It may happen that some changes in git are not compatible with the most recent packaged Zenoh release (e.g. deb, docker, pip). We put particular effort into maintaining compatibility between the various git repositories in the Zenoh project.

⚠️ WARNING ⚠️ : As Rust doesn't have a stable ABI, the plugin should be built with the exact same Rust version as zenohd, and using the same version (or commit number) of the zenoh dependency as zenohd. Otherwise, incompatibilities in the memory layout of shared types between zenohd and the library can lead to a SIGSEGV crash.

In order to build the Zenoh bridge for DDS, you first need to install the following dependencies:

  • Rust. If you already have the Rust toolchain installed, make sure it is up-to-date with:

    $ rustup update
  • On Linux, make sure the llvm and clang development packages are installed:

    • on Debians do: sudo apt install llvm-dev libclang-dev
    • on CentOS or RHEL do: sudo yum install llvm-devel clang-devel
    • on Alpine do: apk add llvm11-dev clang-dev
  • CMake (to build CycloneDDS which is a native dependency)

Once these dependencies are in place, you may clone the repository on your machine:

$ git clone https://github.com/eclipse-zenoh/zenoh-plugin-ros2dds.git
$ cd zenoh-plugin-ros2dds
$ cargo build --release

The standalone executable binary zenoh-bridge-ros2dds and a plugin shared library (*.so on Linux, *.dylib on macOS, *.dll on Windows) to be dynamically loaded by the Zenoh router zenohd will be generated in the target/release subdirectory.

ROS 2 package

You can also build zenoh-bridge-ros2dds as a ROS package by running:

rosdep install --from-paths . --ignore-src -r -y
colcon build --packages-select zenoh_bridge_ros2dds --cmake-args -DCMAKE_BUILD_TYPE=Release

The rosdep command will automatically install Rust and clang as build dependencies.

If you want to cross-compile the package on an x86 device for another target, you can use the following commands:

rosdep install --from-paths . --ignore-src -r -y
colcon build --packages-select zenoh_bridge_ros2dds --cmake-args -DCMAKE_BUILD_TYPE=Release -DCROSS_ARCH=<target>

where <target> is the target architecture (e.g. aarch64-unknown-linux-gnu). The architecture list can be found here.

The cross-compilation uses zig as a linker; you can install it following the instructions here. Also, the zigbuild package is required to be installed on the target device; you can install it following the instructions here.


Usage

A typical usage is to run one bridge on a robot and one bridge on another host that monitors and operates the robot.

⚠️ The bridge relies on CycloneDDS and has been tested with RMW_IMPLEMENTATION=rmw_cyclonedds_cpp. While DDS implementations are interoperable over UDP multicast and unicast, some specific and non-standard features of other DDS implementations (e.g. shared memory) might cause issues.

It's important to make sure that NO DDS communication can occur between two hosts that are bridged by zenoh-bridge-ros2dds; otherwise, duplicate or looping traffic can occur.
To make sure of this, you can either:

  • define ROS_LOCALHOST_ONLY=1.
    Preferably, also enable MULTICAST on the loopback interface with this command (on Linux): sudo ip l set lo multicast on
  • use a different ROS_DOMAIN_ID on each host
  • use a CYCLONEDDS_URI that configures CycloneDDS to only use interfaces internal to the robot. This configuration has to be used for all ROS Nodes as well as for the bridge.
    For instance for a Turtlebot4, which embeds 2 hosts interconnected via USB:
    <CycloneDDS>
      <Domain>
        <General>
          <Interfaces>
            <NetworkInterface name="usb0"/>
            <!-- For less traffic, force multicast usage on loopback even if not configured.         -->
            <!-- All ROS Nodes and bridges must have this same config, otherwise they won't discover -->
            <NetworkInterface address="127.0.0.1" multicast="true"/>
          </Interfaces>
          <DontRoute>true</DontRoute>
        </General>
      </Domain>
    </CycloneDDS>
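
The bridge and all ROS Nodes can then be pointed at this file via the CYCLONEDDS_URI environment variable, following the pattern used in the systemd unit quoted later in this page (the path is illustrative):

export CYCLONEDDS_URI=file:///etc/cyclonedds/turtlebot4.xml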

On the robot, run:

  • zenoh-bridge-ros2dds

On the operating host run:

  • zenoh-bridge-ros2dds -e tcp/<robot-ip>:7447
  • check if the robot's ROS interfaces are accessible via:
    • ros2 topic list
    • ros2 service list
    • ros2 action list

Other interconnectivity between the two bridges can be configured (e.g. automatic discovery via UDP multicast, interconnection via one or more Zenoh routers...). See the Zenoh documentation to learn more about the possible deployments allowed by Zenoh.

Configuration

zenoh-bridge-ros2dds can be configured via a JSON5 file passed via the -c argument. You can see a commented and exhaustive example of such a configuration file: DEFAULT_CONFIG.json5.

The "ros2dds" part of this same configuration file can also be used in the configuration file for the zenoh router (within its "plugins" part). The router will automatically try to load the plugin library (zenoh-plugin_dds) at startup and apply its configuration.

zenoh-bridge-ros2dds also allows some of those configuration values to be set via command line arguments. Run this command to see which ones:

  • zenoh-bridge-ros2dds -h

Command line arguments override the equivalent keys configured in a configuration file.
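
For example (illustrative; the -d option, used elsewhere in this document, sets the DDS Domain ID), the following overrides whatever domain is configured in conf.json5:

zenoh-bridge-ros2dds -c conf.json5 -d 1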

Connectivity configurations

DDS communications

The bridge discovers all ROS 2 Nodes and their topics/services/actions running on the same Domain ID (set via ROS_DOMAIN_ID, 0 by default) via UDP multicast, as per the DDS specification.

As the bridge relies on CycloneDDS, its DDS communications can be configured via a CycloneDDS XML configuration file, as explained here.

Zenoh communications

Starting from v0.11.0, zenoh-bridge-ros2dds is started in router mode by default (see the difference between modes in the Zenoh documentation: https://zenoh.io/docs/getting-started/deployment/).
This means it listens for incoming TCP connections from remote bridges or any Zenoh application on port 7447, on any network interface. It performs discovery via scouting over UDP multicast and the gossip protocol, but doesn't auto-connect to anything.
As a consequence, the connectivity between bridges has to be statically configured, with one bridge connecting to the other bridge (or several other bridges) via the -e command line option, or via the connect section in the configuration file.
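
For instance, the configuration-file equivalent of the -e tcp/<robot-ip>:7447 option would be (address illustrative):

  connect: {
    endpoints: ["tcp/192.168.1.1:7447"]
  },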

If required, automatic connection to other discovered bridges (also running in router mode) can be enabled by adding the following configuration:

  scouting: {
    multicast: {
      autoconnect: { router: "router" }
    },
    gossip: {
      autoconnect: { router: "router" }
    }
  },

Prior to v0.11.0, zenoh-bridge-ros2dds was started in peer mode by default.
It listened for incoming TCP connections on a random port (chosen by the OS) and automatically connected to any discovered bridge, router or peer.

Easy multi-robots via Namespace configuration

Deploying a zenoh-bridge-ros2dds on each robot and configuring each with its own namespace brings several benefits:

  1. No need to configure each ROS Node with a namespace. As the DDS traffic between all Nodes of a single robot remains internal to the robot, no namespace needs to be configured.
  2. Configuring each zenoh-bridge-ros2dds with namespace: "/botX" (where 'X' is a unique id), each topic/service/action name routed to Zenoh is prefixed with "/botX", so the robots' messages don't conflict with each other.
  3. On a monitoring/controlling host, you have 2 options:
    • Run a zenoh-bridge-ros2dds with namespace: "/botX" corresponding to 1 robot. Then, to monitor/operate that specific robot, just run any ROS Node without a namespace.
      E.g.: rviz2
    • Run a zenoh-bridge-ros2dds without a namespace. Then you can monitor/operate any robot by remapping the namespace to the robot's one, or by prefixing each topic/service/action name you want to use with the robot's namespace.
      E.g.: rviz2 --ros-args -r /tf:=/botX/tf -r /tf_static:=/botX/tf_static

NOTE: the bridge prefixes ALL topic/service/action names with the configured namespace, including /rosout, /parameter_events, /tf and /tf_static.
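
As an illustration, the bridge of a robot "bot1" could be started with zenoh-bridge-ros2dds -c bot1-conf.json5, where bot1-conf.json5 is a sketch like this (the namespace key is the relevant part; the surrounding structure follows DEFAULT_CONFIG.json5):

  {
    plugins: {
      ros2dds: {
        namespace: "/bot1",
      }
    }
  }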

Admin space

The bridge exposes some internal states via a Zenoh admin space under @ros2/<id>/**, where <id> is the unique id of the bridge (configurable).
This admin space can be queried via the Zenoh get() operation. If the REST plugin is configured for the bridge, for instance via the --rest-http-port 8000 argument, those admin space keys can also be queried as HTTP URLs.
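
For example (a sketch, assuming the bridge runs locally with the REST plugin on port 8000; the ** wildcard selects all keys of all bridges):

curl 'http://localhost:8000/@ros2/**'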


zenoh-plugin-ros2dds's Issues

[Bug] --queries-timeout 60.0 invalid type: floating point

Describe the bug

Running the bridge, I get the following error with the queries timeout arg:

ros2 run zenoh_bridge_ros2dds zenoh_bridge_ros2dds \
        -l tcp/0.0.0.0:7447 \
        --no-multicast-scouting \
        --queries-timeout 60.0
[2023-12-10T13:34:21Z INFO  zenoh_bridge_ros2dds] zenoh-bridge-ros2dds veb66be5 built with rustc 1.70.0 (90c541806 2023-05-31) (built from a source tarball)
[2023-12-10T13:34:21Z INFO  zenoh::net::runtime] Using PID: 6b8adfe3b7ac3d9f50bb8cce3dcdc2c4
[2023-12-10T13:34:21Z INFO  zenoh::net::runtime::orchestrator] Zenoh can be reached at: tcp/192.168.178.32:7447
[2023-12-10T13:34:21Z INFO  zenoh::net::runtime::orchestrator] Zenoh can be reached at: tcp/172.17.0.1:7447
[2023-12-10T13:34:21Z INFO  zenoh::net::runtime::orchestrator] Zenoh can be reached at: tcp/172.22.0.1:7447
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Plugin `ros2dds` configuration error: invalid type: floating point `60`, expected struct QueriesTimeouts at zenoh-plugin-ros2dds/src/lib.rs:130.', zenoh-bridge-ros2dds/src/main.rs:220:66
stack backtrace:
   0:     0x61183eabcef7 - std::backtrace_rs::backtrace::libunwind::trace::hc8c748621e1717dc
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93:5
   1:     0x61183eabcef7 - std::backtrace_rs::backtrace::trace_unsynchronized::h1657ffb548bcd1c0
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
   2:     0x61183eabcef7 - std::sys_common::backtrace::_print_fmt::ha3920362e42412b1
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/std/src/sys_common/backtrace.rs:65:5
   3:     0x61183eabcef7 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h55a64e3639141d5c
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/std/src/sys_common/backtrace.rs:44:22
   4:     0x61183e8c9bdf - core::fmt::write::h8a6836b78d5c6e17
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/core/src/fmt/mod.rs:1254:17
   5:     0x61183eaca8f4 - std::io::Write::write_fmt::h74083ba874a8bcaf
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/std/src/io/mod.rs:1698:15
   6:     0x61183eabcb9f - std::sys_common::backtrace::_print::he2a2a37b05e7e764
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/std/src/sys_common/backtrace.rs:47:5
   7:     0x61183eabcb9f - std::sys_common::backtrace::print::hf1818e0fa9ba5bd4
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/std/src/sys_common/backtrace.rs:34:9
   8:     0x61183eac80ee - std::panicking::default_hook::{{closure}}::h36f8b00cf3f70373
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/std/src/panicking.rs:269:22
   9:     0x61183eac8c41 - std::panicking::default_hook::h0999e9f322268afa
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/std/src/panicking.rs:288:9
  10:     0x61183eac8c41 - std::panicking::rust_panic_with_hook::hf0f1736a09665bd3
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/std/src/panicking.rs:691:13
  11:     0x61183eabd304 - std::panicking::begin_panic_handler::{{closure}}::h9c059812511f285d
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/std/src/panicking.rs:582:13
  12:     0x61183eabd266 - std::sys_common::backtrace::__rust_end_short_backtrace::haeaaa791abdbfc5f
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/std/src/sys_common/backtrace.rs:150:18
  13:     0x61183eac84f1 - rust_begin_unwind
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/std/src/panicking.rs:578:5
  14:     0x61183e773fc2 - core::panicking::panic_fmt::h42c1b10c64151f1e
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/core/src/panicking.rs:67:14
  15:     0x61183e773e72 - core::result::unwrap_failed::h4b24ec377bba078a
                               at /build/rustc-wAuwbs/rustc-1.70.0+dfsg0ubuntu1~bpo2/library/core/src/result.rs:1687:5
  16:     0x61183e7af5d1 - <async_std::task::builder::SupportTaskLocals<F> as core::future::future::Future>::poll::h31d76bd20aab43e1
  17:     0x61183e8330d4 - zenoh_bridge_ros2dds::main::h0c723d0e67799cff
  18:     0x61183e7f15b3 - std::sys_common::backtrace::__rust_begin_short_backtrace::h1418525716f9748f
  19:     0x61183e8343d3 - main
  20:     0x71ab3632ad90 - __libc_start_call_main
                               at ./csu/../sysdeps/nptl/libc_start_call_main.h:58:16
  21:     0x71ab3632ae40 - __libc_start_main_impl
                               at ./csu/../csu/libc-start.c:392:3
  22:     0x61183e7a85b5 - _start
  23:                0x0 - <unknown>
[ros2run]: Aborted
make: *** [Makefile:54: zenoh] Error 250

To reproduce

Run

ros2 run zenoh_bridge_ros2dds zenoh_bridge_ros2dds \
		-l tcp/0.0.0.0:7447 \
		--no-multicast-scouting \
		--queries-timeout 60.0

System info

  • Ubuntu 22.04 in docker image based on osrf/ros:humble-desktop-full
  • zenoh-plugin-ros2dds hash: eb66be5

[Bug] remap node ID with namespace causes introspection issues

Describe the bug

I am interested in running the zenoh-bridge-ros2dds docker image with the node ID set to /robot3/cc_router, but without zenoh changing the namespace prefix of the actions/services/topics that are sent through the zenoh-bridge, as the namespacing is already preconfigured internally.
e.g.

--ros-args -r __node:=cc_router -r __ns:=/robot3
ROS:/robot3/SearchArea <-> Zenoh:robot3/robot3/SearchArea/*

is undesirable.

I tried using the ros2 __node remap, e.g. --ros-args -r __node:=robot3/cc_router, which shows the desired node name when calling ros2 node list, and the zenoh behaviour works without changing the prefixes, as desired.
However, issuing ros2 node info /robot3/cc_router no longer works:

root@tom:~# ros2 node list
/foxglove_bridge
/foxglove_bridge_component_manager
/robot3/cc_router

root@tom:~# ros2 node info /robot3/cc_router 
/robot3/cc_router
Traceback (most recent call last):
  File "/opt/ros/humble/bin/ros2", line 33, in <module>
    sys.exit(load_entry_point('ros2cli==0.18.9', 'console_scripts', 'ros2')())
  File "/opt/ros/humble/lib/python3.10/site-packages/ros2cli/cli.py", line 91, in main
    rc = extension.main(parser=parser, args=args)
  File "/opt/ros/humble/lib/python3.10/site-packages/ros2node/command/node.py", line 37, in main
    return extension.main(args=args)
  File "/opt/ros/humble/lib/python3.10/site-packages/ros2node/verb/info.py", line 59, in main
    subscribers = get_subscriber_info(
  File "/opt/ros/humble/lib/python3.10/site-packages/ros2node/api/__init__.py", line 85, in get_subscriber_info
    return get_topics(
  File "/opt/ros/humble/lib/python3.10/site-packages/ros2node/api/__init__.py", line 76, in get_topics
    names_and_types = func(node.name, node.namespace)
  File "/usr/lib/python3.10/xmlrpc/client.py", line 1122, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib/python3.10/xmlrpc/client.py", line 1464, in __request
    response = self.__transport.request(
  File "/usr/lib/python3.10/xmlrpc/client.py", line 1166, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib/python3.10/xmlrpc/client.py", line 1182, in single_request
    return self.parse_response(resp)
  File "/usr/lib/python3.10/xmlrpc/client.py", line 1354, in parse_response
    return u.close()
  File "/usr/lib/python3.10/xmlrpc/client.py", line 668, in close
    raise Fault(**self._stack[0])
xmlrpc.client.Fault: <Fault 1: "<class 'rclpy._rclpy_pybind11.NodeNameNonExistentError'>:cannot get subscriber names and types for nonexistent node: error not set">

To reproduce

services:
  zenoh_namespace:
    image: eclipse/zenoh-bridge-ros2dds:latest
    init: true
    networks:
      - rosnet-1
    environment:
      - ROS_DISTRO=humble
      - ROS_DOMAIN_ID=0
      - RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
    command: >
      --mode router
      --id cc_router
      --ros-args -r __node:=robot3/cc_router

  zenoh:
    image: eclipse/zenoh-bridge-ros2dds:latest
    init: true
    networks:
      - rosnet-1
    environment:
      - ROS_DISTRO=humble
      - ROS_DOMAIN_ID=0
      - RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
    command: >
      --mode router
      --id cc_router_no_namespace
      --ros-args -r __node:=cc_router_no_namespace

  introspector:
    image: osrf/ros:humble-desktop
    networks:
      - rosnet-1
    restart: always
    tty: true
    command: 
      - bash
      - -c
      - |
        source /opt/ros/humble/setup.bash
        ros2 node list
        sleep 2
        ros2 node info /robot3/cc_router
        ros2 node info /cc_router_no_namespace


networks:
  rosnet-1:

System info

  • Platform: Ubuntu 22.04 AMD64 & Raspberry Pi 5
  • CPU: AMD Ryzen 5 5600
  • Zenoh Image eclipse/zenoh-bridge-ros2dds:0.11.0-rc.3

Improve documentation on integration

Improve documentation on integration

I want to build a Zenoh-native application that uses this plugin for compatibility with ROS 2, but I could not find any documentation on how exactly the ROS 2 services and actions are mapped to Zenoh Queryables.

More specifically, I plan to build a Python library for using ROS actions from "outside" of a ROS environment. AFAIK this plugin does exactly what I need for that purpose (as mentioned in "Easier integration with Zenoh native applications (services and actions are mapped to Zenoh Queryables)"), but unfortunately I am having difficulties finding the required details in the code.

e.g. are parameters (such as goal_id) of the actions mapped to the Zenoh Selector, or are they just appended to the query as a value?

[Bug] With `mode: "client"` in configuration `zenoh-bridge-ros2dds` remains in peer mode

Describe the bug

Using such a config file:

{ mode: "client" }

zenoh-bridge-ros2dds is not using the client mode, but the peer mode.

Bug introduced in #49

To reproduce

  1. create a conf.json5 file as such:
    { mode: "client" }
  2. Start a Zenoh router: zenohd
  3. Start a bridge with the config file: zenoh-bridge-ros2dds -c conf.json5
  4. Check the bridge mode as discovered by the bridge at http://localhost:8000/@/router/local (see "whatami" value)

System info

[Bug] mismatched types when cross compile

Describe the bug

zenoh-plugin-ros2dds reports 6 errors when I try to cross-compile for aarch64-linux-android

error[E0308]: mismatched types
   --> zenoh-plugin-ros2dds/src/dds_types.rs:135:40
    |
135 |                 ddsrt_iov_len_to_usize(self.data.iov_len).unwrap(),
    |                 ---------------------- ^^^^^^^^^^^^^^^^^ expected `usize`, found `u64`
    |                 |
    |                 arguments to this function are incorrect
    |
note: function defined here
   --> zenoh-plugin-ros2dds/src/dds_utils.rs:40:8
    |
40  | pub fn ddsrt_iov_len_to_usize(len: ddsrt_iov_len_t) -> Result<usize, String> {
    |        ^^^^^^^^^^^^^^^^^^^^^^ --------------------
help: you can convert a `u64` to a `usize` and panic if the converted value doesn't fit
    |
135 |                 ddsrt_iov_len_to_usize(self.data.iov_len.try_into().unwrap()).unwrap(),
    |                                                         ++++++++++++++++++++

error[E0308]: mismatched types
   --> zenoh-plugin-ros2dds/src/dds_types.rs:150:40
    |
150 |                 ddsrt_iov_len_to_usize(self.data.iov_len).unwrap(),
    |                 ---------------------- ^^^^^^^^^^^^^^^^^ expected `usize`, found `u64`
    |                 |
    |                 arguments to this function are incorrect
    |
note: function defined here
   --> zenoh-plugin-ros2dds/src/dds_utils.rs:40:8
    |
40  | pub fn ddsrt_iov_len_to_usize(len: ddsrt_iov_len_t) -> Result<usize, String> {
    |        ^^^^^^^^^^^^^^^^^^^^^^ --------------------
help: you can convert a `u64` to a `usize` and panic if the converted value doesn't fit
    |
150 |                 ddsrt_iov_len_to_usize(self.data.iov_len.try_into().unwrap()).unwrap(),
    |                                                         ++++++++++++++++++++

error[E0308]: mismatched types
   --> zenoh-plugin-ros2dds/src/dds_types.rs:179:32
    |
179 |         ddsrt_iov_len_to_usize(self.data.iov_len).unwrap()
    |         ---------------------- ^^^^^^^^^^^^^^^^^ expected `usize`, found `u64`
    |         |
    |         arguments to this function are incorrect
    |
note: function defined here
   --> zenoh-plugin-ros2dds/src/dds_utils.rs:40:8
    |
40  | pub fn ddsrt_iov_len_to_usize(len: ddsrt_iov_len_t) -> Result<usize, String> {
    |        ^^^^^^^^^^^^^^^^^^^^^^ --------------------
help: you can convert a `u64` to a `usize` and panic if the converted value doesn't fit
    |
179 |         ddsrt_iov_len_to_usize(self.data.iov_len.try_into().unwrap()).unwrap()
    |                                                 ++++++++++++++++++++

error[E0308]: mismatched types
   --> zenoh-plugin-ros2dds/src/dds_utils.rs:191:22
    |
191 |             iov_len: size,
    |                      ^^^^ expected `u64`, found `usize`

error[E0308]: mismatched types
   --> zenoh-plugin-ros2dds/src/ros_discovery.rs:341:26
    |
341 |                 iov_len: size,
    |                          ^^^^ expected `u64`, found `usize`

error[E0308]: mismatched types
   --> zenoh-plugin-ros2dds/src/route_subscriber.rs:377:22
    |
377 |             iov_len: size,
    |                      ^^^^ expected `u64`, found `usize`

For more information about this error, try `rustc --explain E0308`.
error: could not compile `zenoh-plugin-ros2dds` (lib) due to 6 previous errors
warning: build failed, waiting for other jobs to finish...

To reproduce

  1. rustup target add aarch64-linux-android

  2. download Android NDK r23c and unzip it to the /opt directory

  3. create aarch64-linux-android-ar link

cd /opt/android-ndk-r23c/toolchains/llvm/prebuilt/linux-x86_64/bin
sudo ln -s llvm-ar  aarch64-linux-android-ar 
  4. add cargo config
     [target.aarch64-linux-android]
     linker = "aarch64-linux-android27-clang"
  5. export ANDROID_NDK
     export ANDROID_NDK=/opt/android-ndk-r23c
  6. cargo build
     cargo build --target aarch64-linux-android --release -p zenoh-bridge-ros2dds

System info

  • Platform: Ubuntu 20.04
  • CPU: AMD-R7-6800u

[Bug] Multiple network interfaces connection issues

Describe the bug

This is my hardware setup:
zenoh-plugin-ros2dds

I have tried 2 methods:

# Robot
./zenoh-bridge-ros2dds -l tcp/0.0.0.0:7447

# Station
./zenoh-bridge-ros2dds -e tcp/192.168.1.1:7447 -l tcp/192.168.2.1:7448

# PC
./zenoh-bridge-ros2dds -e tcp/192.168.2.1:7448

and

# Robot
./zenoh-bridge-ros2dds -e tcp/192.168.1.5:7447

# Station
./zenoh-bridge-ros2dds -l tcp/0.0.0.0:7447

# PC
./zenoh-bridge-ros2dds -e tcp/192.168.2.1:7447

But both setups report warning messages that the PC and Robot can't connect to each other:

# PC
[WARN  zenoh::net::runtime::orchestrator] Unable to connect to any locator of scouted peer 576a28576abe5550b3c150c051e9e046:[tcp/192.168.1.1:33642]

# Robot
[WARN  zenoh::net::runtime::orchestrator] Unable to connect to any locator of scouted peer 970647dbcbbccaf605e3f6c9604ce5e1:[tcp/192.168.2.10:2133]

To reproduce

  1. Start zenoh-plugin-ros2dds as above

System info

  • Robot: arm64
  • Station: arm64
  • PC: x86-64

[Bug] Deb install does not find systemctl even if it's present

Describe the bug

The installation of the zenoh-bridge-ros2dds 0.11.0-rc2 fails to find the systemctl command, even though the command is present

To reproduce

command -v systemctl
/usr/bin/systemctl
echo $?
0
sudo dpkg -i zenoh-plugin-ros2dds_0.11.0-rc.2_amd64.deb
....
Setting up zenoh-bridge-ros2dds (0.11.0-rc.2) ...
WARNING: 'systemctl' not found - cannot install zenoh-bridge-ros2dds as a service.
/usr/bin/systemctl

System info

  • Platform: ubuntu 22.04.3 server
  • CPU: Intel(R) Core(TM) i7-10710U
  • Zenoh version: 0.11.0-rc2 (deb)

[Bug] ROS 2 node crashes while querying admin space

Describe the bug

While running ROS 2 node, using curl to query admin space will cause the node to crash.

To reproduce

  1. In terminal 1, run talker
     ros2 run demo_nodes_cpp talker
  2. In terminal 2, run zenoh-bridge-ros2dds
     ./target/release/zenoh-bridge-ros2dds --rest-http-port 8000
  3. Query the admin space
     curl http://127.0.0.1:8000/\*\*
  4. Terminal 1 will crash
     terminate called after throwing an instance of 'std::bad_alloc'
       what():  std::bad_alloc

System info

Platform: Ubuntu 22.04
ROS version: Humble
Bridge version: main branch

[Bug] Not able to configure zenoh-bridge to only listen for incoming connections

Describe the bug

I'm not sure if this is a bug, or if it is just not clear what configuration we need to achieve what we want.

We have a wifi network with multiple robots, and each robot runs its own zenoh-bridge-ros2dds process. We would like to be able to run a zenoh bridge locally on a development laptop, and then connect that bridge to a single robot. This works fine when only one robot is online. However, when multiple robots are online in the same network, all robot bridges connect to each other, so they can all access each other's data. We want to prevent this from happening: the bridges on the robots should not actively look for other bridges, but instead should just wait for another bridge (from a development laptop) to connect to one of their open ports.

We've tried multiple configuration options, but we can't seem to get to a configuration where the robots ignore each other without adding extra namespaces on all bridges.

Thank you in advance

To reproduce

We are running the following docker compose service:

zenoh:
  image: 'eclipse/zenoh-bridge-ros2dds:release-0.10.1-rc2'
  container_name: zenoh
  network_mode: host
  init: true
  privileged: true
  environment:
    ROS_DISTRO: humble
    RMW_IMPLEMENTATION: rmw_cyclonedds_cpp
    CYCLONEDDS_URI: "/etc/avular/dds/cyclone_loopback.xml"
    ROS_DOMAIN_ID: ${ROS_DOMAIN_ID}
  command: '-d ${ROS_DOMAIN_ID} -l tcp/192.168.0.1:7447'

We can see the bridge connects to another bridge, which is running a similar configuration:
[2024-02-29T14:57:22Z INFO zenoh_plugin_ros2dds] Remote bridge 4315b9ff5a51266446e4cd2bb11b13d2 announces Service Server ros_bridge/get_parameter_types
[2024-02-29T14:57:22Z INFO zenoh_plugin_ros2dds::routes_mgr] Route Service Client (ROS:/ros_bridge/get_parameter_types <-> Zenoh:ros_bridge/get_parameter_types) created

And afterwards a lot more topics and services from the other robot follow.

System info

  • Platform: ubuntu 22.04
  • Hardware: Jetson Xavier NX (arm64)
  • eclipse/zenoh-bridge-ros2dds:release-0.10.1-rc2

[Bug] Upgrading the debian/ubuntu package removes the zenoh-bridge-ros2dds user and /etc/zenoh-bridge-ros2dds directory.

Describe the bug

Using apt to install an update of zenoh-bridge-ros2dds triggers a removal of the zenoh-bridge-ros2dds user and the /etc/zenoh-bridge-ros2dds directory.

The upgrade fails if the service is in use by the zenoh-bridge-ros2dds user.

The upgrade should not remove the existing conf.json5 file. (It contained hard-earned configuration settings :/ ).

The default config file could possibly be installed in /lib/share/zenoh-bridge-ros2dds/default_conf.json5

If a /etc/zenoh-bridge-ros2dds/conf.json5 does not exist, then it might fall back to the installed /lib/share/zenoh-bridge-ros2dds/default_conf.json5. I am making this up; there is probably a proper Debian way to handle configuration files and service restarts during upgrades.

To reproduce

  1. Attempt to upgrade a running zenoh-bridge-ros2dds service with a customised config.
  2. The upgrade will fail until the service is stopped.
  3. Any previous config will be replaced with a default config.

System info

  • Ubuntu 22.04.3 LTS
  • [2024-02-02T03:05:46Z INFO zenoh_bridge_ros2dds] zenoh-bridge-ros2dds v0.10.1-rc.2 built with rustc 1.72.0 (5680fa18f 2023-08-23

[Bug] Iron and Rolling: errors on ros_discovery_info messages

Describe the bug

Following ros2/rmw_dds_common#68 the Gid type used in ros_discovery_info topic changed from char[24] data to char[16] data.

This leads the bridge to be incompatible with Iron and latest Rolling.
The incompatibility can be seen with such logs in the bridge:
[2023-11-29T14:51:27Z WARN zenoh_plugin_ros2dds::ros_discovery] Error receiving ParticipantEntitiesInfo on ros_discovery_info: invalid utf-8 sequence of 1 bytes from index 4

Or in a ROS Node:
[WARN] [1701269487.404903002] [rmw_cyclonedds_cpp]: Failed to parse type hash for topic 'ros_discovery_info' with type 'rmw_dds_common::msg::dds_::ParticipantEntitiesInfo_' from USER_DATA '(null)'.

To reproduce

In Iron, run zenoh-bridge-ros2dds with any ROS Node and see the logs.

System info

  • commit: a485c63
  • ROS distro: Iron and Rolling
  • Platform: all

Support `--ros-args -r __node:=` command line arg

Describe the feature

zenoh-bridge-ros2dds shall support the ROS Command Line Args, logging a "not supported ROS argument ... - ignored" warning for each one, until it's actually supported.

In a first step, the -r __node:=node_name argument should be supported and mapped to the ros2dds.nodename configuration.

[Bug] Debian/Ubuntu zenoh-bridge-ros2dds.service panic when interface not yet up during boot.

Describe the bug

I am using the zenoh-bridge-ros2dds service (which otherwise works) with a conf.json5 containing an entry specifying an endpoint that is not yet available when zenoh attempts to bind to it. Maybe zenoh could retry, possibly with a timeout, rather than panic?

listen: {
  endpoints: ["tcp/192.168.0.241:7447"]
},

It appears that the After/Wants = network-online.target is not enough to make sure the interface is up and ready to be bound to.

head -5 /etc/systemd/system/zenoh-bridge-ros2dds.service
[Unit]
Description = Eclipse Zenoh Bridge for ROS2 with a DDS RMW
Documentation=https://github.com/eclipse-zenoh/zenoh-plugin-ros2dds
After=network-online.target
Wants=network-online.target

I can restart the service successfully after the interface is actually up.

Feb 02 11:54:54 iwdbase systemd[1]: Started Eclipse Zenoh Bridge for ROS2 with a DDS RMW.
Feb 02 11:54:54 iwdbase zenoh-bridge-ros2dds[9499]: [2024-02-02T00:54:54Z INFO  zenoh_bridge_ros2dds] zenoh-bridge-ros2dds v0.10.1-rc.2 built with rustc 1.72.0 (5680fa18f 2023-08-23)
Feb 02 11:54:54 iwdbase zenoh-bridge-ros2dds[9499]: [2024-02-02T00:54:54Z DEBUG zenoh::net::runtime] Zenoh Rust API v0.10.1-rc
Feb 02 11:54:54 iwdbase zenoh-bridge-ros2dds[9499]: [2024-02-02T00:54:54Z INFO  zenoh::net::runtime] Using PID: d391bd5b8b9524caa7d489bae271cff0
Feb 02 11:54:54 iwdbase zenoh-bridge-ros2dds[9499]: [2024-02-02T00:54:54Z DEBUG zenoh::net::routing::network] [Routers network] Add node (self) d391bd5b8b9524caa7d489bae271cff0
Feb 02 11:54:54 iwdbase zenoh-bridge-ros2dds[9499]: [2024-02-02T00:54:54Z DEBUG zenoh::net::routing::network] [Peers network] Add node (self) d391bd5b8b9524caa7d489bae271cff0
Feb 02 11:54:54 iwdbase zenoh-bridge-ros2dds[9499]: [2024-02-02T00:54:54Z ERROR zenoh::net::runtime::orchestrator] Unable to open listener tcp/192.168.0.241:7447: Can not create a new TCP listener bound to tcp/192.168.0.241:7447: [192.168.0.241:7447: Cannot assign requested address (os error 99) at /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/zenoh-link-tcp-0.10.1-rc/src/unicast.rs:245.] at /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/zenoh-link-tcp-0.10.1-rc/src/unicast.rs:333.
Feb 02 11:54:54 iwdbase zenoh-bridge-ros2dds[9499]: thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Can not create a new TCP listener bound to tcp/192.168.0.241:7447: [192.168.0.241:7447: Cannot assign requested address (os error 99) at /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/zenoh-link-tcp-0.10.1-rc/src/unicast.rs:245.] at /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/zenoh-link-tcp-0.10.1-rc/src/unicast.rs:333.', zenoh-bridge-ros2dds/src/main.rs:77:62
Feb 02 11:54:54 iwdbase zenoh-bridge-ros2dds[9499]: note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Feb 02 11:54:55 iwdbase systemd[1]: zenoh-bridge-ros2dds.service: Main process exited, code=dumped, status=6/ABRT
Feb 02 11:54:55 iwdbase systemd[1]: zenoh-bridge-ros2dds.service: Failed with result 'core-dump'.

To reproduce

  1. Start a zenoh-bridge-ros2dds.service with an endpoint address that does not yet exist.
  2. panic

System info

  • Ubuntu 22.04.3 LTS
  • zenoh-bridge-ros2dds v0.10.1-rc.2 built with rustc 1.72.0 (5680fa18f 2023-08-23)
cat /etc/systemd/system/zenoh-bridge-ros2dds.service 
[Unit]
Description = Eclipse Zenoh Bridge for ROS2 with a DDS RMW
Documentation=https://github.com/eclipse-zenoh/zenoh-plugin-ros2dds
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
Environment=RUST_LOG=debug
#Environment=RUST_BACKTRACE=1
Environment=ROS_DISTRO=humble
Environment=RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
Environment=CYCLONEDDS_URI="file:///etc/aos/zenoh/zenoh-bridge-cyclonedds-config.xml"
Environment=ROS_DOMAIN_ID="10"
ExecStart = /usr/bin/zenoh-bridge-ros2dds -c /etc/aos/zenoh/conf.json5
KillMode=mixed
KillSignal=SIGINT
RestartKillSignal=SIGINT
Restart=on-failure
PermissionsStartOnly=true
User=zenoh-bridge-ros2dds
StandardOutput=journal
StandardError=journal
SyslogIdentifier=zenoh-bridge-ros2dds

[Install]
WantedBy=multi-user.target
cat /etc/aos/zenoh/zenoh-bridge-cyclonedds-config.xml 
<?xml version="1.0" encoding="utf-8"?>
<CycloneDDS
    xmlns="https:://cdds.io/config"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema=instance"
    xsi:schemaLocation="https://cdds.io/config https://raw.githubusercontent.com/eclipse-cyclonedds/cyclonedds/master/etc/cyclonedds.xsd"
>
    <Domain Id="any">
        <General>
            <Interfaces>
                <NetworkInterface address="127.0.0.1" multicast="true"/>
            </Interfaces>
            <DontRoute>true</DontRoute>
        </General>
    </Domain>
</CycloneDDS>

[Bug] Three agents duplicate incoming messages

Describe the bug

Thanks for your amazing work. I was trying the zenoh_bridge_ros2dds with three remote PCs connected to the same Wi-Fi. In this demo, robot 0 publishes messages on a topic, and this message is listened to by the other two robots (1 and 2). The number refers to ROS_DOMAIN_ID. Communication sometimes works and sometimes exhibits a bug in the number of messages read by one agent: always the last agent, which duplicates the messages. Is there a way to avoid this, maybe using namespaces? Ideally I would like every agent to be able to publish/subscribe to the same topic without this bug. If this is not a bug, could you provide an example with more than 2 remote PCs?

This is the final output

Robot 0:

[INFO] [1706202282.021806288] [talker]: Publishing: 'Hello World: 730'
[INFO] [1706202283.021789337] [talker]: Publishing: 'Hello World: 731'
[INFO] [1706202284.021752267] [talker]: Publishing: 'Hello World: 732'
[INFO] [1706202285.021799607] [talker]: Publishing: 'Hello World: 733'
[INFO] [1706202286.021835910] [talker]: Publishing: 'Hello World: 734'
[INFO] [1706202287.021864647] [talker]: Publishing: 'Hello World: 735'

Robot 1:

[INFO] [1706202281.939232374] [listener]: I heard: [Hello World: 730]
[INFO] [1706202281.939652915] [listener]: I heard: [Hello World: 730]
[INFO] [1706202282.939507519] [listener]: I heard: [Hello World: 731]
[INFO] [1706202282.939790751] [listener]: I heard: [Hello World: 731]
[INFO] [1706202283.938842816] [listener]: I heard: [Hello World: 732]
[INFO] [1706202283.942020988] [listener]: I heard: [Hello World: 732]
[INFO] [1706202284.939364677] [listener]: I heard: [Hello World: 733]
[INFO] [1706202284.941937116] [listener]: I heard: [Hello World: 733]
[INFO] [1706202285.939902659] [listener]: I heard: [Hello World: 734]
[INFO] [1706202285.940282093] [listener]: I heard: [Hello World: 734]
[INFO] [1706202286.938968271] [listener]: I heard: [Hello World: 735]
[INFO] [1706202286.939437045] [listener]: I heard: [Hello World: 735]

Robot 2:

[INFO] [1706202282.020367529] [listener]: I heard: [Hello World: 730]
[INFO] [1706202283.019923871] [listener]: I heard: [Hello World: 731]
[INFO] [1706202284.020355711] [listener]: I heard: [Hello World: 732]
[INFO] [1706202285.020997782] [listener]: I heard: [Hello World: 733]
[INFO] [1706202286.021133424] [listener]: I heard: [Hello World: 734]
[INFO] [1706202287.020533780] [listener]: I heard: [Hello World: 735]

To reproduce

Robot 0 (IP 192.168.0.10)

Terminal 1

ROS_DOMAIN_ID=0 RMW_IMPLEMENTATION=rmw_cyclonedds_cpp CYCLONEDDS_URI=/cyclonedds_0.xml \
zenoh_bridge_ros2dds -l udp/0.0.0.0:7447 -e udp/192.168.0.22:7447 -e udp/192.168.0.23:7447 --no-multicast-scouting

Terminal 2

ROS_DOMAIN_ID=0 RMW_IMPLEMENTATION=rmw_cyclonedds_cpp CYCLONEDDS_URI=/cyclonedds_0.xml \
ros2 run demo_nodes_cpp talker
[INFO] [1706202282.021806288] [talker]: Publishing: 'Hello World: 730'

Robot 1 (IP 192.168.0.22)

Terminal 1

ROS_DOMAIN_ID=1 RMW_IMPLEMENTATION=rmw_cyclonedds_cpp CYCLONEDDS_URI=/cyclonedds_1.xml \
zenoh_bridge_ros2dds -l udp/0.0.0.0:7447  --no-multicast-scouting

Terminal 2

ROS_DOMAIN_ID=1 RMW_IMPLEMENTATION=rmw_cyclonedds_cpp CYCLONEDDS_URI=/cyclonedds_1.xml \
ros2 run demo_nodes_cpp listener
[INFO] [1706202281.939232374] [listener]: I heard: [Hello World: 730]
[INFO] [1706202281.939652915] [listener]: I heard: [Hello World: 730]

Robot 2 (IP 192.168.0.23)

Terminal 1

ROS_DOMAIN_ID=2 RMW_IMPLEMENTATION=rmw_cyclonedds_cpp CYCLONEDDS_URI=/cyclonedds_2.xml \
zenoh_bridge_ros2dds -l udp/0.0.0.0:7447  --no-multicast-scouting

Terminal 2

ROS_DOMAIN_ID=2 RMW_IMPLEMENTATION=rmw_cyclonedds_cpp CYCLONEDDS_URI=/cyclonedds_2.xml \
ros2 run demo_nodes_cpp listener
[INFO] [1706202282.020367529] [listener]: I heard: [Hello World: 730]

Robot 1 is the last robot that listens to the topic.
The .xml configurations are all the same, with the only difference being in the NetworkInterface field:

<CycloneDDS>
  <Domain id="any">
    <General>
      <Interfaces>
        <NetworkInterface name="wlp****"/>
        <!-- For less traffic, force multicast usage on loopback even if not configured.         -->
        <!-- All ROS Nodes and bridges must have this same config, otherwise they won't discover -->
        <!-- <NetworkInterface address="127.0.0.1" multicast="true"/>  -->
      </Interfaces>
      <DontRoute>true</DontRoute>
    </General>
  </Domain>
</CycloneDDS>

System info

  • Platform Docker, under image osrf/ros:humble-desktop
  • Architecture x86_64
  • Zenoh Version: zenoh-bridge-ros2dds v0.10.1-rc built with rustc 1.70.0 (90c541806 2023-05-31) (built from a source tarball)
    zenoh bridge for DDS v0.10.1-rc
  • Zenoh was built as a ROS 2 package

ros2cli full support

Describe the feature

Is your feature request related to a problem?
I have successfully set up a communication between two ROS2 systems via zenoh-bridge-ros2dds. I can see correctly topics, services and actions using the corresponding ros2cli commands. The problem is that when i try to run cli parameters command (i.e. ros2 param dump /my_node) the cli respond Node not found. I think that this is due to the fact that the cli command checks for the node in the graph before trying to contact the corresponding parameter service, unfortunately the node /my_node is not available on both side but only on the side that creates it making the ros2 parameters cli unusable with this bridge.

Describe the solution you'd like
ros2 node list works as expected by reporting all nodes of both sides

Additional context
Note that before the release of this ROS 2-specific bridge, I was using the generic zenoh-bridge-dds and I never faced this issue (ros2 node list returned all node names, not only the bridge node). The motivation for me to use this rather than the generic bridge is that with this version this issue is solved.

[Bug] Unable to communicate while using ROS_LOCALHOST_ONLY

Describe the bug

While using CycloneDDS + ROS_LOCALHOST_ONLY + disabled multicast on localhost, the talker & listener can't talk to each other.

To reproduce

  1. Terminal 1
     ROS_DOMAIN_ID=1 RMW_IMPLEMENTATION=rmw_cyclonedds_cpp ROS_LOCALHOST_ONLY=1 ros2 run demo_nodes_cpp talker
  2. Terminal 2
     ROS_DOMAIN_ID=2 RMW_IMPLEMENTATION=rmw_cyclonedds_cpp ROS_LOCALHOST_ONLY=1 ros2 run demo_nodes_cpp listener
  3. Terminal 3
     ./target/release/zenoh-bridge-ros2dds -d 1 --ros-localhost-only
  4. Terminal 4
     ./target/release/zenoh-bridge-ros2dds -d 2 --ros-localhost-only
  5. Talker and listener can't talk to each other, but we can enable multicast to fix this.
     sudo ip link set lo multicast on
  6. Also, if we use FastDDS, then this issue also disappears.

Extra information

I also tried the same thing on zenoh-bridge-dds, and found that the issue also appeared in 0.10.0 but disappeared in 0.7.0.
Therefore, I guess this is related to the Zenoh version.
Here is my environment for zenoh-bridge-dds: https://github.com/evshary/zenoh_demo_docker_env/tree/main/ros_with_zenoh_bridge_dds

System info

Platform: Ubuntu 22.04
ROS version: Humble
Bridge version: main branch

[Bug] Unable to open listener tcp/[::]:0, even when specifying an IPv4 address

Describe the bug

Hey!

I'm trying to run the zenoh-bridge-ros2dds binary, but the binary won't start because the TCP listener can't be bound. The strange thing is that the listener is trying to bind to the auto-interface [::]:0 local address, even when specifying an IPv4 address.
I'm running an unmodified Ubuntu 23.10. When using Ubuntu 22.04 or EndeavourOS, Zenoh starts successfully, so I suspect a network interface misbehaviour (maybe due to misconfiguration or an Ubuntu-specific issue with resolving the network interface).

Nevertheless, maybe someone is able to reproduce this issue.

To reproduce

  1. Get Ubuntu 23.10
  2. Get zenoh-bridge-ros2dds
  3. Start the zenoh-bridge-ros2dds bridge with or without parameters
  4. Crash with the following error message:
INFO  zenoh_bridge_ros2dds] zenoh-bridge-ros2dds v0.11.0-dev-39-g9547868
INFO  zenoh::net::runtime] Using PID: 81592944a3adf6c4bcbc2efa2d045072
ERROR zenoh::net::runtime::orchestrator] Unable to open listener tcp/[::]:0: 
Can not create a new TCP listener bound to tcp/[::]:0: [[::]:0: Address family not supported by protocol (os error 97) 
at /home/user/.cargo/git/checkouts/zenoh-cc237f2570fab813/42f9384/io/zenoh-links/zenoh-link-tcp/src/unicast.rs:222.] 
at /home/user/.cargo/git/checkouts/zenoh-cc237f2570fab813/42f9384/io/zenoh-links/zenoh-link-tcp/src/unicast.rs:305.

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: 
Can not create a new TCP listener bound to tcp/[::]:0: [[::]:0: Address family not supported by protocol (os error 97) 
at /home/user/.cargo/git/checkouts/zenoh-cc237f2570fab813/42f9384/io/zenoh-links/zenoh-link-tcp/src/unicast.rs:222.]
at /home/user/.cargo/git/checkouts/zenoh-cc237f2570fab813/42f9384/io/zenoh-links/zenoh-link-tcp/src/unicast.rs:305.', zenoh-bridge-ros2dds/src/main.rs:77:62

System info

  • Platform: Ubuntu 23.10
  • CPU: Intel i7-1255U (x86_64)
  • Zenoh version: main source build and CI binary

[Bug] Remote bridge constantly retires writers on UDP

Describe the bug

No issues with TCP, but with the exact same configuration with UDP I get about a second or two of streaming, followed by a barrage of messages saying "Remote bridge {GUID} retires {Publisher/Service/Action/etc.}" and then "Route Publisher (ROS:/{TOPIC} -> Zenoh:{TOPIC}) removed"

Connectivity isn't an issue, since just by replacing udp with tcp in the command argument everything works fine.

Using CycloneDDS configured on the localhost only, and loopback multicast force-enabled.

For extra context, running around 60 nodes with 130 topics on a single PC, a lot from the Nav2 stack. WiFi bandwidth at least 150 Mbit/s. When streaming over TCP, around 80 Mbit/s down. Running in Podman OCI containers for convenience, but previously reproduced outside of containers too. Devices both on the same LAN.

To reproduce

  1. Start Zenoh bridge with config -l udp/0.0.0.0:7447
  2. Run client/peer with config -e udp/{ BRIDGE_IP}:7447

System info

  • Platform: Official Open Source Robotics Foundation Docker image (Humble) running on Debian 12 host.
  • CPU: AMD Ryzen Embedded V2718 with Radeon Graphics
  • Zenoh version/commit: 2c52d0b

[Bug] Gossip discovery fails

Describe the bug

I'm trying to set up a peer-to-peer topology with gossip discovery via a central router: gossip discovery between 2 robots via a predefined router (robot1 -> router <- robot2). All of them are running zenoh_bridge_ros2dds locally in the respective mode.

I found 2 issues.

  1. Discovery for "scan" topics works only if I set multicast discovery to true. Gossip discovery fails for some reason.
  2. When multicast discovery is enabled, I can find the respective topic. However, the command ros2 topic echo works only the first time. When stopping the echo command and rerunning it, no data arrives anymore. After restarting the local zenoh bridge, the echo command works again exactly one time.

zenoh-config.json

However, for some reason multicast discovery works, while gossip discovery does not.

To reproduce

  1. ros2 run zenoh_bridge_ros2dds zenoh_bridge_ros2dds -c zenoh_config.json5 -e tcp/192.168.0.153:7447
  2. ros2 topic echo <yourscan>
  3. Router has the same settings, except: mode=router

zenoh-config.json

System info

  • Ubuntu 22.04
  • zenoh peer runs directly on the OS
  • zenoh router runs withing a devcontainer with --net=host setting.

Subscriber-side throttling

Describe the feature

DDS is so problematic over WiFi that even for working on the robot in our lab sitting just meters away, we use a Zenoh bridge to communicate.

We have a fairly unconstrained configuration, only limiting uncompressed image/video topics, with 150 Mbps being the norm for bandwidth downstream from the robot.

This works quite well, until someone needs to bridge into the robot remotely (WFH etc.). We could use a different, more conservative configuration, drastically throttling all high-bandwidth topics, but this unnecessarily degrades the experience of those adjacent to the robot.

I've tried running multiple bridges on different ports, the idea being that each would have a different configuration with different rates of throttling, so that a remote user could choose which one to connect to based on their circumstances. But this seems to cause a weird issue where frames are duplicated slightly out of timesync and sent together, presenting on the video feeds as rewinding and fast-forwarding type glitches.

Ideally, a remote client/subscriber could inform the Zenoh bridge/server how to throttle the publishing of topics, but for that client/subscriber alone, instead of via a single global setting. This would be much more flexible, allow much greater granularity, and reduce overhead compared to running multiple bridges.

[Bug] the plugin sometimes does not discover ROS 2 Nodes

Describe the bug

Hi! The bug is quite similar to this. We just switched from zenoh-plugin-dds to zenoh-plugin-ros2dds to bridge ROS 2 messages. This time the ROS distribution Humble is used, with CycloneDDS as the RMW_IMPLEMENTATION. However, the issue persists, the same as the one that occurred in this. One bridge is randomly unable to discover ROS nodes which are set up within the same virtual network.

What's more, even when switching back to zenoh-plugin-dds, using Humble and CycloneDDS, the issue still persists. I think the issue has never been resolved :(

To reproduce

Docker Compose file:

services:
  talker:
    image: osrf/ros:humble-desktop
    command:
      - /bin/sh
      - -c
      - |
        apt update
        apt install -y ros-iron-rmw-cyclonedds-cpp
        ros2 run demo_nodes_cpp talker
    environment:
      - RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
    networks:
      - rosnet-1
  listener:
    image: osrf/ros:humble-desktop
    command:
      - /bin/sh
      - -c
      - |
        apt update
        apt install -y ros-iron-rmw-cyclonedds-cpp
        ros2 run demo_nodes_cpp listener
    environment:
      - RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
    depends_on:
      - talker
    networks:
      - rosnet-2
  zenoh-router:
    image: eclipse/zenoh
    ports:
      - 7447:7447
      - 8000:8000
    networks:
      - protocolnet-1
      - protocolnet-2
  zenoh-bridge-dds-1:
    image: eclipse/zenoh-bridge-ros2dds:latest
    command: -m client -e tcp/zenoh-router:7447
    networks:
      - rosnet-1
      - protocolnet-1
    environment:
      - ROS_DISTRO=humble
    depends_on:
      - zenoh-router
  zenoh-bridge-dds-2:
    image: eclipse/zenoh-bridge-dds
    command: -m client -e tcp/zenoh-router:7447
    environment:
      - ROS_DISTRO=humble
    networks:
      - rosnet-2
      - protocolnet-2
    depends_on:
      - zenoh-router

networks:
  rosnet-1:
  rosnet-2:
  protocolnet-1:
  protocolnet-2:

System info

Platform: Ubuntu 20.04 64-bit
Zenoh: 0.10.0-rc
ROS2: humble
Docker: Docker Engine - Community(version 24.0.5 )

[Bug] Deny and allow list on publisher is ignored

Describe the bug

Perhaps I am totally in the wrong here, as I started out with Zenoh a couple of days ago, but I have encountered the following inconsistency. I set up one ROS 2 node that publishes and one that subscribes. The first node publishes two topics, /topic and /topic_denied, and the second subscribes to them. The first is on ROS_DOMAIN_ID=1 and the second on ROS_DOMAIN_ID=2.

If any of these topics is in the allow or deny list of the publisher in its zenoh .json5 configuration file (mutually exclusively), then its messages still appear on the subscriber, contrary to what is intended.

If any of the topics is in the allow or deny list of the subscriber, then they do/do not appear on the subscriber, exactly as intended.

To reproduce

Follow the instructions of this repo. Both .json5 files do not allow or deny anything in this state. Denying or allowing the topics /topic or /topic_denied on the publisher results in no difference.

In the sample output of the terminal where the publisher's Zenoh bridge runs, you can see that the topic topic_denied does end up in the deny list:

[2024-02-14T14:09:38Z INFO  zenoh_bridge_ros2dds] zenoh-bridge-ros2dds v0.11.0-dev-26-gfdf9a1a built with rustc 1.72.1 (d5c2e9c34 2023-09-13) (built from a source tarball)
[2024-02-14T14:09:38Z INFO  zenoh::net::runtime] Using PID: e0458ce77601d0a72358779565ae335
[2024-02-14T14:09:38Z INFO  zenoh::net::runtime::orchestrator] Zenoh can be reached at: tcp/127.0.0.1:7448
[2024-02-14T14:09:38Z INFO  zenoh::net::runtime::orchestrator] zenohd listening scout messages on 224.0.0.224:7446
[2024-02-14T14:09:38Z WARN  zenoh::net::runtime::orchestrator] Unable to connect to any locator of scouted peer e95b712d1b1db65a48c6d36e3bcad460: [tcp/127.0.0.1:7447]
[2024-02-14T14:09:38Z INFO  zenoh_plugin_ros2dds] ROS2 plugin Config { id: Some("id_publisher"), namespace: "/", nodename: "zenoh_bridge_ros2dds_id_publisher", domain: 2, ros_localhost_only: false, allowance: Some(Deny(ROS2InterfacesRegex { publishers: Some(Regex("^/topic_denied$")), subscribers: None, service_servers: None, service_clients: None, action_servers: None, action_clients: None })), pub_max_frequencies: [], transient_local_cache_multiplier: 10, queries_timeout: None, reliable_routes_blocking: true, pub_priorities: [], __required__: None, __path__: None }
[2024-02-14T14:09:38Z INFO  zenoh_plugin_ros2dds] New ROS 2 bridge detected: id_subscriber

BTW I can see that there is a warning in the output, but is it relevant to this issue? It does not seem to obstruct message delivery.

System info

Platform: Ubuntu 22.04, 64 bit
zenoh-plugin-ros2dds commit fdf9a1a

[Bug] dpkg error installing zenoh-bridge-ros2dds in Docker

Describe the bug

Similarly to eclipse-zenoh/zenoh#415, in a Docker container the postinstall script fails because there is neither sudo nor systemctl.

To reproduce

  1. docker run -it --init ubuntu
  2. apt update && apt install -y ca-certificates
  3. echo "deb [trusted=yes] https://download.eclipse.org/zenoh/debian-repo/ /" | tee -a /etc/apt/sources.list > /dev/null
  4. apt update && apt install -y zenoh-bridge-ros2dds
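A common container workaround (my assumption, not part of the original report) is to stub out systemctl before installing, so the postinstall service setup becomes a no-op:

# hypothetical workaround: make systemctl a no-op inside the container
ln -s /bin/true /usr/bin/systemctl
apt update && apt install -y zenoh-bridge-ros2dds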

System info

  • Docker ubuntu:latest
  • Version: 0.10.1-rc

Expose all ROS Nodes of a remote system instead of just a `zenoh_bridge_ros2dds` Node

Describe the feature

It was a deliberate choice for the first implementation to expose only 1 zenoh_bridge_ros2dds Node in place of all the remote Nodes this bridge is serving. My fear is that in a system with tens of robots, each with thousands of Nodes, the ROS graph within a fleet manager host would be overwhelmed with >10000 Nodes.

However, this causes trouble for Parameters support (#72) and prevents some other use cases where a granular remote view of a robot's ROS graph is required.

The plugin should provide an option to choose whether a bridge exposes all the Nodes of a remote system, or only 1 zenoh_bridge_ros2dds Node for all its topics/services/actions, with the drawback that parameters are not supported (later we might consider having the bridge Node itself re-expose all the parameters of remote Nodes).

The question of the default behaviour remains open. Any opinion from the ROS community?
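Concretely, this could be a boolean in the plugin configuration. A sketch with a purely illustrative key name (no such option exists yet):

{
  plugins: {
    ros2dds: {
      // hypothetical: expose every remote Node instead of
      // a single zenoh_bridge_ros2dds Node
      expose_remote_nodes: true,
    },
  },
}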

[Bug] Cannot write ROS 2 Launch file as mandatory "--ros-args" not handled by Zenoh

Describe the bug

I want to run Zenoh as a ROS 2 package in a container. However, ros2 run does not install an explicit signal handler for SIGTERM, and when it runs as PID 1 in a container, a special case applies where the default disposition of those signals is changed from Terminate to Ignore. Further, if the container is initialized with a lightweight init manager such as tini, ros2 run will explicitly set the disposition of SIGINT to Ignore because it doesn't detect a controlling terminal.

I therefore want to create a simple launchfile that correctly handles these signals (notably SIGINT) and forwards them to its children. The launchfile will be configured explicitly with "emulate_tty=True" to trick Zenoh into installing the SIGINT handler. The problem is that, as far as I can tell, ROS 2 Launch will always append --ros-args to the argv of the node it's launching, and Zenoh doesn't like that.

Could Zenoh be modified to detect --ros-args and harmlessly ignore it? At least until there's possibly more ROS-specific functionality integrated, in which case reading the args might be useful.
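One possible approach, sketched here in Rust (a hypothetical sketch, not the bridge's actual code): pre-filter argv so that the ROS-specific section, which by convention runs from --ros-args up to a -- delimiter or the end of the arguments, is dropped before the CLI parser sees it.

// Hypothetical sketch: drop the `--ros-args ... [--]` section from argv
// before handing the remaining arguments to the usual CLI parsing.
fn filter_ros_args(args: impl Iterator<Item = String>) -> Vec<String> {
    let mut filtered = Vec::new();
    let mut in_ros_args = false;
    for arg in args {
        if arg == "--ros-args" {
            in_ros_args = true; // start of the ROS-specific section
        } else if in_ros_args {
            if arg == "--" {
                in_ros_args = false; // `--` closes the ROS-specific section
            }
            // remappings, params, etc. inside the section are dropped
        } else {
            filtered.push(arg);
        }
    }
    filtered
}

fn main() {
    // Parse the filtered arguments instead of std::env::args() directly.
    let args = filter_ros_args(std::env::args());
    println!("remaining args: {:?}", args);
}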

To reproduce

zenoh_launch.py

from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='zenoh_bridge_ros2dds',
            executable='zenoh_bridge_ros2dds',
            name='zenoh_bridge_ros2dds',
            emulate_tty=True,
        )
    ])

Error log:

[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [zenoh_bridge_ros2dds-1]: process started with pid [21470]
[zenoh_bridge_ros2dds-1] [2023-12-20T19:34:39Z INFO  zenoh_bridge_ros2dds] zenoh-bridge-ros2dds v0.1.0-dev built with rustc 1.70.0 (90c541806 2023-05-31) (built from a source tarball)
[zenoh_bridge_ros2dds-1] error: Found argument '--ros-args' which wasn't expected, or isn't valid in this context
[zenoh_bridge_ros2dds-1] 
[zenoh_bridge_ros2dds-1] 	If you tried to supply `--ros-args` as a value rather than a flag, use `-- --ros-args`
[zenoh_bridge_ros2dds-1] 
[zenoh_bridge_ros2dds-1] USAGE:
[zenoh_bridge_ros2dds-1]     zenoh_bridge_ros2dds [OPTIONS]
[zenoh_bridge_ros2dds-1] 
[zenoh_bridge_ros2dds-1] For more information try --help
[ERROR] [zenoh_bridge_ros2dds-1]: process has died [pid 21470, exit code 2, cmd '/opt/ros/overlay_ws/install/zenoh_bridge_ros2dds/lib/zenoh_bridge_ros2dds/zenoh_bridge_ros2dds -c /root/.zenoh/lab.json5 -- --ros-args'].
[zenoh_bridge_ros2dds-1] 

System info

  • Platform: Official Open Source Robotics Foundation Docker image (Humble) running on Debian 12 host.
  • CPU: AMD Ryzen Embedded V2718 with Radeon Graphics
  • Zenoh version/commit: 2c52d0b

[Bug] zenoh-bridge-ros2dds reports "Plugin load failure: Library file 'libzenoh_plugin_ros2dds.dylib' not found"

Describe the bug

zenoh-bridge-ros2dds is supposed to be a standalone executable with the zenoh-plugin-ros2dds library linked statically.

However, after #116, if the library is not built, it reports this error at startup:
Plugin load failure: Library file 'libzenoh_plugin_ros2dds.dylib' not found
And the bridge is not functional.

To reproduce

  1. build the bridge only: cargo build -p zenoh-bridge-ros2dds
  2. start the bridge: ./target/debug/zenoh-bridge-ros2dds

System info

Support `--ros-args --remap` for topics/services/actions names

Describe the feature

#48 added support for --ros-args --remap, but only for __ns and __node.
It would be nice to also allow remapping of topic/service/action names when routed to Zenoh.
I.e., for --ros-args --remap X:=Y: X is the name over DDS and Y is the name over Zenoh.
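With the proposed extension, an invocation could look like this (hypothetical, for illustration only; today only __ns and __node are handled):

# hypothetical: expose the DDS topic 'chatter' as 'zenoh_chatter' on Zenoh
zenoh-bridge-ros2dds --ros-args --remap chatter:=zenoh_chatter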

[Bug] Unable to connect to any locator of scouted peer

Describe the bug

When I try to make three nodes communicate (1 x86 PC + 2 ARM64 boards), the message "unable to connect to any locator of scouted peer" appears on my PC, showing the IP of one of the ARM boards. After this, no data is exchanged between the two affected endpoints.
Each machine uses CycloneDDS with the ROS_LOCALHOST_ONLY=1 environment variable.
Moreover, sudo ip link set lo multicast on is run before launching the bridge.

In my case, it is important to keep "peer-to-peer" topology instead of router-client.

Not really sure if this is caused by an incorrect configuration or an actual bug. Any feedback about this is greatly appreciated.

To reproduce

  1. sudo ip link set lo multicast on on every machine.
  2. Execute zenoh_bridge_ros2dds -i "n1" -c zenoh_config.json5 on ARM machine 1.
  3. Execute zenoh_bridge_ros2dds -i "n2" -c zenoh_config.json5 on ARM machine 2.
  4. Execute zenoh_bridge_ros2dds -i "pc" -c zenoh_config.json5 on x86 PC.
    My JSON5 configuration is attached (changed extension due to GitHub extension policies)
    zenoh_config.json

System info

PC
Platform: Ubuntu 22.04 with kernel 6.0.2.37
ROS version: Humble with ros-humble-cyclonedds 0.10.3-1jammy.20231117.175619 and ros-humble-rmw-cyclonedds-cpp 1.3.4-1jammy.20231117.183821
Bridge version: main branch according to commit: 83ba7e4

ARM64
Platform: Ubuntu with kernel 5.15.0
ROS version: Humble with ros-humble-cyclonedds 0.10.3-1jammy.20231117.170100 and ros-humble-rmw-cyclonedds-cpp 1.3.4-1jammy.20231118.090403
Bridge version: main branch according to commit: 83ba7e4

Hi @boyu-hang,

I think the issue is that the osrf/ros:iron-desktop image uses rmw_fastrtps_cpp as RMW_IMPLEMENTATION by default.
zenoh-bridge-dds relies on CycloneDDS. I'm not sure why, but it seems FastDDS has by default a strange behaviour leading to discovery issues with CycloneDDS. See the comment here.

Testing this docker-compose.yaml, which installs rmw_cyclonedds_cpp and sets RMW_IMPLEMENTATION=rmw_cyclonedds_cpp, it works for me:

services:
  talker:
    image: osrf/ros:iron-desktop
    command:
      - /bin/sh
      - -c
      - |
        apt update
        apt install -y ros-iron-rmw-cyclonedds-cpp
        ros2 run demo_nodes_cpp talker
    environment:
      - RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
    networks:
      - rosnet-1
  listener:
    image: osrf/ros:iron-desktop
    command:
      - /bin/sh
      - -c
      - |
        apt update
        apt install -y ros-iron-rmw-cyclonedds-cpp
        ros2 run demo_nodes_cpp listener
    environment:
      - RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
    depends_on:
      - talker
    networks:
      - rosnet-2
  zenoh-router:
    image: eclipse/zenoh
    ports:
      - 7447:7447
      - 8000:8000
    networks:
      - protocolnet-1
      - protocolnet-2
  zenoh-bridge-dds-1:
    image: eclipse/zenoh-bridge-dds
    command: -m client -e tcp/zenoh-router:7447
    networks:
      - rosnet-1
      - protocolnet-1
    depends_on:
      - zenoh-router
  zenoh-bridge-dds-2:
    image: eclipse/zenoh-bridge-dds
    command: -m client -e tcp/zenoh-router:7447
    networks:
      - rosnet-2
      - protocolnet-2
    depends_on:
      - zenoh-router

networks:
  rosnet-1:
  rosnet-2:
  protocolnet-1:
  protocolnet-2:

Unfortunately the osrf/ros:iron-desktop image doesn't include rmw_cyclonedds_cpp. You'll probably want to write your own Dockerfile to add it, rather than installing it at each run.

Note that on the zenoh-router container I exposed port 8000, which gives access to the Zenoh admin space so you can see what the bridges are discovering from DDS and routing to/from Zenoh, with a command such as:
curl 'http://localhost:8000/@/service/*/dds/**'

Originally posted by @JEnoch in eclipse-zenoh/zenoh-plugin-dds#161 (comment)

[Bug] Issue with nav2 when using the bridge

Describe the bug

When the zenoh bridge is running and I launch navigation2, the lifecycle transitions seem to be acting up. Also, sending navgoals fails.

An example output for the lifecycle issue is:

[controller_server-1] [WARN] [1702648966.041332898] [rcl_lifecycle]: No transition matching 1 found for current state inactive
[controller_server-1] [ERROR] [1702648966.041406110] [controller_server]: Unable to start transition 1 from current state inactive: Transition is not registered., at ./src/rcl_lifecycle.c:355

As far as I can tell, the nodes are activated despite the error.

However, when the bridge is running in one of my gazebo simulations, bt_navigator dies after I send a navgoal. Sending navgoals works fine when I don't run the bridge.

[rviz2-16] [INFO] [1702651216.046634453] [rviz2]: Setting goal pose: Frame:robot2/map, Position(-0.481345, 2.39804, 0), Orientation(0, 0, 0.865052, 0.501682) = Angle: 2.09051
[bt_navigator-12] terminate called after throwing an instance of 'std::runtime_error'
[bt_navigator-12]   what():  Failed to accept new goal
[bt_navigator-12] 
[bt_navigator-12] [INFO] [1702651216.047840482] [robot2.bt_navigator]: Begin navigating from current location (-0.00, 0.00) to (-0.48, 2.40)
[ERROR] [bt_navigator-12]: process has died [pid 11670, exit code -6, cmd '/opt/ros/humble/lib/nav2_bt_navigator/bt_navigator --ros-args -r __node:=bt_navigator -r __ns:=/robot2 --params-file /tmp/tmpsue952wd -r odom:=odometry/global -r /tf:=tf -r /tf_static:=tf_static'].

I tried starting the bridge after I had already started the simulation with nav2 and made sure that sending navgoals worked.
Sending another navgoal after I started the bridge resulted in:

[rviz2-16] [INFO] [1702651950.991863971] [rviz2]: Setting goal pose: Frame:robot1/map, Position(-1.25979, 1.01845, 0), Orientation(0, 0, 0.999542, 0.0302465) = Angle: 3.08109
[bt_navigator-12] [INFO] [1702651950.992806330] [robot1.bt_navigator]: Begin navigating from current location (0.00, 0.71) to (-1.26, 1.02)
[bt_navigator-12] terminate called after throwing an instance of 'std::runtime_error'
[bt_navigator-12]   what():  Failed to accept new goal
[bt_navigator-12] 
[planner_server-10] terminate called after throwing an instance of 'std::runtime_error'
[planner_server-10]   what():  Failed to accept new goal
[planner_server-10] 
[ERROR] [bt_navigator-12]: process has died [pid 22391, exit code -6, cmd '/opt/ros/humble/lib/nav2_bt_navigator/bt_navigator --ros-args -r __node:=bt_navigator -r __ns:=/leo1 --params-file /tmp/tmp1k_whwny -r odom:=odometry/global -r /tf:=tf -r /tf_static:=tf_static'].
[ERROR] [planner_server-10]: process has died [pid 22386, exit code -6, cmd '/opt/ros/humble/lib/nav2_planner/planner_server --ros-args -r __node:=planner_server -r __ns:=/leo1 --params-file /tmp/tmp3be7p5hf -r /tf:=tf -r /tf_static:=tf_static'].

To reproduce

To reproduce the lifecycle errors:

  1. Clone https://github.com/ros-planning/navigation2
  2. Open navigation2 as a Dev Container
  3. export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp in the terminals you use in the container
  4. Download the latest zenoh-bridge-ros2dds from https://github.com/eclipse-zenoh/zenoh-plugin-ros2dds/actions
  5. Run ./zenoh-bridge-ros2dds -l tcp/0.0.0.0:7447 in the Dev Container
  6. ros2 launch nav2_bringup navigation_launch.py

I have only seen the crashing nodes when I send a navgoal in my own simulation, but I suspect the issue is general.
If you don't have a simulated robot environment to test this in, I can set up a Dev Container with turtlebots or something.

System info

  • Platform: Ubuntu 22.04 in docker
  • Processor: AMD Ryzen 5 PRO 7530U

Add no_timestamp config option

Describe the feature

In some cases, the robot may run in an environment without Internet access at all, so it cannot obtain the current accurate time. This makes the system time on the PC and on the robot inconsistent, and a large number of timestamp errors occur.

[2023-12-07T16:04:59Z ERROR zenoh::net::routing::pubsub] Error treating timestamp for received Data (incoming timestamp from d113f0e6e37936bdf028f3f7467f89ea exceeding delta 500ms is rejected: 2023-12-26T11:43:21.115716297Z vs. now: 2023-12-07T16:04:59.434372112Z). Replace timestamp: Some(6571ed2b6f338330/c4f374afae09051a4804b053c0270376)

And in most cases, on the host PC we only need the latest data, so it would be better if we could turn off timestamps.
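As requested, this could be a simple switch in the plugin configuration. A sketch with an invented key name (no such option exists today):

{
  plugins: {
    ros2dds: {
      // hypothetical: don't attach Zenoh timestamps to routed publications,
      // so clock skew between hosts cannot cause messages to be rejected
      no_timestamp: true,
    },
  },
}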

[Bug] Listener will receive duplicate data

Describe the bug

Run the talker and listener with different domain IDs and they can talk to each other through two bridges.
However, if I close the talker and listener without closing the bridges and try again, the listener receives duplicate data.

To reproduce

Terminal 1:

./target/release/zenoh-bridge-ros2dds -d 1

Terminal 2:

./target/release/zenoh-bridge-ros2dds -d 2

Terminal 3:

ROS_DOMAIN_ID=1 ros2 run demo_nodes_cpp talker

Terminal 4:

ROS_DOMAIN_ID=2 ros2 run demo_nodes_cpp listener

Then stop and rerun Terminals 3 & 4: the listener will receive duplicate data.

System info

  • Platform: Ubuntu 22.04
  • ROS version: Humble
  • Bridge version: main branch

[Bug] Topic subscription failure with multiple zenoh-bridge-ros2dds peers

Describe the bug

This may be related to this and this.

We have a setup that involves a central server connected to multiple robots, all running ROS 2 Iron in Docker containers. The central server provides some topics to all robots. We also need some robot-to-robot ROS 2 communication.

We initially used zenoh-bridge-ros2dds in peer mode at the server and all robots, but experienced non-obvious failures of data transmission on topics between server and robots.

A simplified setup that exhibits the problem is:

server: docker(talker)
server: docker(listener)
server: docker(zenoh-bridge-ros2dds -m peer -l tcp/0.0.0.0:7447)

robot1: docker(zenoh-bridge-ros2dds -m peer -l tcp/0.0.0.0:7447)
robot1: docker(listener)

robot2: docker(zenoh-bridge-ros2dds -m peer -l tcp/0.0.0.0:7447)
robot2: docker(listener)

The server containers are started, then the zenoh containers on the robots. Robot1 listener is started, correctly shows received data, then stopped. Robot2 listener is started, may show data, then is stopped. Robot2 listener is started again, does not show any data.

If the robot zenoh containers are changed to clients, connecting to the server ip address, the failure does not occur.

If the listener on the server is not started, the failure seems to occur very rarely.

To reproduce

We have not managed to reproduce this with composed containers on a single host. Server is running Ubuntu 20.04, robot1 and robot2 are running Ubuntu 22.04. The robots are connected over WiFi. The container simonj23/dots_core:iron is a ROS2 iron distribution with CycloneDDS installed.

Run in all cases with config files in the current directory.

cyclonedds.xml:

<?xml version="1.0" encoding="UTF-8" ?>
<CycloneDDS xmlns="https://cdds.io/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://cdds.io/config https://raw.githubusercontent.com/eclipse-cyclonedds/cyclonedds/master/etc/cyclonedds.xsd">
    <Domain id="any">
        <General>
            <Interfaces>
                <NetworkInterface name='lo' multicast='true' />
            </Interfaces>
            <DontRoute>true</DontRoute>
        </General>
    </Domain>
</CycloneDDS>

minimal.json5:

{
  plugins: {
    ros2dds: {
      allow: {
          publishers: [ "/chatter", ],
          subscribers: [ "/chatter", ],
      }
    },
  },
}

compose.zenoh_peer.yaml

services:
  zenoh-peer-ros2dds:
    image: eclipse/zenoh-bridge-ros2dds:0.10.1-rc.2
    environment:
      - ROS_DISTRO=iron
      - CYCLONEDDS_URI=file:///config/cyclonedds.xml
      - RUST_LOG=debug,zenoh::net=trace,zenoh_plugin_ros2dds=trace
    network_mode: "host"
    init: true
    volumes:
      - .:/config
    command:
      - -m peer
      - -l tcp/0.0.0.0:7447
      - -c /config/minimal.json5

On server:
start talker

docker run -it --rm --network=host \
-e "RMW_IMPLEMENTATION=rmw_cyclonedds_cpp" \
-e "CYCLONEDDS_URI=file:///config/cyclonedds.xml" \
-v .:/config simonj23/dots_core:iron \
bash -c 'source /opt/ros/iron/setup.bash && ros2 run demo_nodes_cpp talker'

start listener

docker run -it --rm --network=host \
-e "RMW_IMPLEMENTATION=rmw_cyclonedds_cpp" \
-e "CYCLONEDDS_URI=file:///config/cyclonedds.xml" \
-v .:/config simonj23/dots_core:iron \
bash -c 'source /opt/ros/iron/setup.bash && ros2 run demo_nodes_cpp listener'

start zenoh

docker compose -f compose.zenoh_peer.yaml up

On robot1 start zenoh

docker compose -f compose.zenoh_peer.yaml up

On robot2 start zenoh

docker compose -f compose.zenoh_peer.yaml up

On robot1 start then stop listener:

docker run -it --rm --network=host -e "RMW_IMPLEMENTATION=rmw_cyclonedds_cpp" -e "CYCLONEDDS_URI=file:///config/cyclonedds.xml" -v .:/config simonj23/dots_core:iron bash -c 'source /opt/ros/iron/setup.bash && ros2 run demo_nodes_cpp listener'
[INFO] [1709553347.923254612] [listener]: I heard: [Hello World: 236]
[INFO] [1709553348.928124335] [listener]: I heard: [Hello World: 237]
[INFO] [1709553349.925678933] [listener]: I heard: [Hello World: 238]
^C[INFO] [1709553350.409051386] [rclcpp]: signal_handler(signum=2)

On robot2 start then stop listener:

docker run -it --rm --network=host -e "RMW_IMPLEMENTATION=rmw_cyclonedds_cpp" -e "CYCLONEDDS_URI=file:///config/cyclonedds.xml" -v .:/config simonj23/dots_core:iron bash -c 'source /opt/ros/iron/setup.bash && ros2 run demo_nodes_cpp listener'
[INFO] [1709553358.925248190] [listener]: I heard: [Hello World: 247]
[INFO] [1709553359.927279191] [listener]: I heard: [Hello World: 248]
^C[INFO] [1709553360.630004049] [rclcpp]: signal_handler(signum=2)

On robot2 start listener:

docker run -it --rm --network=host -e "RMW_IMPLEMENTATION=rmw_cyclonedds_cpp" -e "CYCLONEDDS_URI=file:///config/cyclonedds.xml" -v .:/config simonj23/dots_core:iron bash -c 'source /opt/ros/iron/setup.bash && ros2 run demo_nodes_cpp listener'

At this point, robot2 no longer gets any data on the chatter topic. The situation can be recovered by restarting the zenoh container on the server.

Log files attached. Server IP address is 192.168.0.70, robot1: 192.168.0.101, robot2: 192.168.0.105.

It appears from the server logfile that something may be going wrong with topic unsubscription. When the robot1 listener is stopped, at 2024-03-04T11:26:40Z, there are two UndeclareSubscriber messages, but when the robot2 listener is stopped, at 2024-03-04T11:26:58Z, there is only one, and the next subscribe does not succeed.

server_log.txt
robot1_log.txt
robot2_log.txt

System info

Server: Ubuntu 20.04 arm64
Robots: Ubuntu 22.04 arm64
zenoh-bridge-ros2dds: 0.10.1-rc.2

[Bug] Unable to have both allow and deny in the config

Describe the bug

If I have both allow and deny in the configuration, zenoh-bridge-ros2dds crashes.

To reproduce

  1. Uncomment the allow & deny configuration in DEFAULT_CONFIG.json5
  2. Run ./target/release/zenoh-bridge-ros2dds -c DEFAULT_CONFIG.json5
  3. Crash with the error shown in the screenshot attached to the original issue

Also, I'm wondering what I should do if I only want to allow some publisher and subscriber topics, but no services or actions.
If I have the following configuration

      allow: {
        publishers: ["/some_topic"],
        subscribers: ["/some_topic"],
        service_servers: [],
        service_clients: [],
        action_servers: [],
        action_clients: [],
      },

zenoh-bridge-ros2dds will allow all services and actions to pass through the bridge. My current workaround is to put /unused_topic in the arrays (see the sketch below). Maybe an empty array could be treated as denying all topics/services/actions.
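For reference, the workaround described above looks like this (a sketch; /unused_topic is just a placeholder that matches no real name):

      allow: {
        publishers: ["/some_topic"],
        subscribers: ["/some_topic"],
        // empty arrays are currently ignored, so match a name that never exists:
        service_servers: ["/unused_topic"],
        service_clients: ["/unused_topic"],
        action_servers: ["/unused_topic"],
        action_clients: ["/unused_topic"],
      },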

System info

  • Platform: Ubuntu 22.04
  • ROS version: Humble
  • Bridge version: main branch

Support several Publisher/Subscriber on a same topic per Node

Describe the feature

Even if unusual, ROS doesn't forbid a Node from creating several Publishers or Subscribers on the same topic.
This shall be supported by the plugin.

Currently the plugin considers only 1 Publisher or Subscriber per topic, replacing the old one with each newly discovered one.

Add allow/deny configuration for nodes

Describe the feature

Currently one can configure the bridge to allow or deny the routing of messages for Publishers, Subscribers, Service Servers, Service Clients, Action Servers and Action Clients.

It would be useful to be able to allow or deny the routing of any message for a discovered ROS Node, based on its Node name.
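A configuration for such a filter might look like this (hypothetical key name, for illustration only):

      allow: {
        // hypothetical: route messages only for Nodes whose name matches
        nodes: ["/my_robot/.*"],
      },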

Priority configuration for Publishers routes

Describe the feature

In case of high traffic with big payloads and high CPU usage on a host, it may happen that some important messages with smaller payloads suffer extra latency because the Zenoh transmission queue fills up with big payloads waiting for fragmentation.
In such a case, it would be good to have the important messages published on Zenoh with a higher priority. Zenoh will then make them overtake the big payloads in the queue, reducing the latency of the higher-priority messages.

Ideally the solution would be to map the DDS TRANSPORT_PRIORITY QoS to Zenoh Priority.
However, looking at the ROS 2 rclcpp interface, TRANSPORT_PRIORITY is not exposed and cannot be set from ROS 2.

Therefore, the simplest solution is to add a bridge configuration setting for the priority of Publisher routes, working in a similar way to the pub_max_frequencies setting.
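Following the pub_max_frequencies pattern ("<topic regex>=<max frequency>"), the proposed setting could look roughly like this (hypothetical syntax, for illustration):

{
  plugins: {
    ros2dds: {
      // existing setting: cap /camera/* publications at 5 Hz
      pub_max_frequencies: ["/camera/.*=5"],
      // proposed setting: publish /cmd_vel on Zenoh with priority 2
      // (Zenoh priorities range from 1, highest, to 7, lowest)
      pub_priorities: ["/cmd_vel=2"],
    },
  },
}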

[Bug] `queries_timeout.default` config not applying to Actions if `queries_timeout.actions` config doesn't exist

Describe the bug

With such a configuration the timeout for queries is not applied to Actions and remains the default 5.0 seconds:

{
  plugins: {
    ros2dds: {
      queries_timeout: {
        default: 3600.0,
      }
    }
  }
}

While it applies correctly with this config:

{
  plugins: {
    ros2dds: {
      queries_timeout: {
        default: 3600.0,
        actions: { }
      }
    }
  }
}

To reproduce

On the same host, run the ROS 2 Actions tutorial across 2 domains with 2 bridges using the config above:

  • ROS_DOMAIN_ID=0 zenoh-bridge-ros2dds -m peer -d 0 -c conf.json5
  • ROS_DOMAIN_ID=1 zenoh-bridge-ros2dds -m peer -d 1 -c conf.json5
  • ROS_DOMAIN_ID=0 ros2 run action_tutorials_py fibonacci_action_server
  • ROS_DOMAIN_ID=1 ros2 run action_tutorials_py fibonacci_action_client
    The client gets a timeout error on get_result since the goal lasts more than 5 seconds.

System info

  • Zenoh v0.11.0-rc.2

[Bug] Subscribers don't work when the bridge is configured with namespace

Describe the bug

If a bridge is created with a namespace, a ROS subscriber cannot receive messages coming from another bridge (using a namespace on the publisher-side bridge works).

Removing the namespace from the bridge on the subscriber side resolves the problem.

The problem comes from the namespace, which is automatically added to subscribers and changes the mapping between subscribers and publishers:

ROS talker ------------- Zenoh network --------- ROS listener (ns = /client)

pub /chatter ----------> chatter -------------------> /chatter
/client/chatter <------- client/chatter <---------- sub /chatter

On this schema, we see that the chatter publisher publishes on "chatter" while the chatter listener looks for messages on "client/chatter".

The problem is the same for services: if there is a namespace on the client-side bridge, the client and the server will never communicate.

To reproduce

Terminal 1 : zenoh-bridge-ros2dds -l tcp/0.0.0.0:7447 -d 10

Terminal 2 : zenoh-bridge-ros2dds -e tcp/127.0.0.1:7447 -n /client -d 11

Terminal 3 : RMW_IMPLEMENTATION=rmw_cyclonedds_cpp ROS_DOMAIN_ID=10 ros2 run demo_nodes_cpp talker

Terminal 4 : RMW_IMPLEMENTATION=rmw_cyclonedds_cpp ROS_DOMAIN_ID=11 ros2 run demo_nodes_cpp listener

The listener never receives any message from the talker; the topic /chatter is visible with ROS_DOMAIN_ID=11 ros2 topic list, but ROS_DOMAIN_ID=11 ros2 topic echo /chatter doesn't get any message.

System info

  • Ubuntu 22.04
  • Docker osrf/ros:iron-desktop
  • Zenoh bridge 0.10.1-rc

[Bug] ROS Service Client hanging when the bridge has not discovered a Server

Describe the bug

If a ROS 2 Service Client is started and discovered by the bridge before the corresponding Service Server is discovered, the Client hangs and the Server never receives the request.

To reproduce

Run in this order:

  1. zenoh-bridge-ros2dds
  2. zenoh-bridge-ros2dds -d 1
  3. ros2 run demo_nodes_cpp add_two_ints_client
  4. ROS_DOMAIN_ID=1 ros2 run demo_nodes_cpp add_two_ints_server

The client keeps hanging and the server doesn't receive the request.

System info

[Bug] Warning log "Failed requirement for PublicationCache... the Session is not configured with 'add_timestamp=true'"

Describe the bug

When discovering a TRANSIENT_LOCAL DDS Publisher (e.g. on the rosout topic), the bridge displays this warning log:

[2024-01-17T16:42:25Z WARN zenoh_plugin_ros2dds] Error updating route: Failed create PublicationCache for key rosout: Failed requirement for PublicationCache on rosout: the Session is not configured with 'add_timestamp=true' at /Users/julienenoch/.cargo/git/checkouts/zenoh-cc237f2570fab813/780ec60/zenoh-ext/src/publication_cache.rs:132.

To reproduce

Just run:

  1. zenoh-bridge-ros2dds
  2. ros2 run demo_nodes_cpp listener
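If the goal is just to get rid of the warning while keeping TRANSIENT_LOCAL routing, enabling timestamping in the Zenoh configuration passed to the bridge may help (a sketch, assuming the standard Zenoh timestamping section; the exact key layout can differ between Zenoh versions):

{
  timestamping: {
    // have the bridge's Zenoh session timestamp published data,
    // as required by PublicationCache
    enabled: true,
  },
  plugins: {
    ros2dds: {},
  },
}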

System info
