
azure_kinect_ros_driver's Introduction

Azure Kinect ROS Driver

This project is a node which publishes sensor data from the Azure Kinect Developer Kit to the Robot Operating System (ROS). Developers working with ROS can use this node to connect an Azure Kinect Developer Kit to an existing ROS installation.

This repository uses the Azure Kinect Sensor SDK to communicate with the Azure Kinect DK. It supports both Linux and Windows installations of ROS.


Features

This ROS node outputs a variety of sensor data, including:

  • A PointCloud2, optionally colored using the color camera
  • Raw color, depth and infrared Images, including CameraInfo messages containing calibration information
  • Rectified depth Images in the color camera resolution
  • Rectified color Images in the depth camera resolution
  • The IMU sensor stream
  • A TF2 model representing the extrinsic calibration of the camera

The camera is fully configurable using a variety of options which can be specified in ROS launch files or on the command line.

However, this node does not expose all the sensor data from the Azure Kinect Developer Kit hardware. It does not provide access to:

  • Microphone array

For more information about how to use the node, please see the usage guide.

Status

This code is provided as a starting point for using the Azure Kinect Developer Kit with ROS. Community-developed features are welcome.

For information on how to contribute, please see our contributing guide.

Building

The Azure Kinect ROS Driver uses catkin to build. For instructions on how to build the project please see the building guide.

Join Our Developer Program

Complete your developer profile here to get connected with our Mixed Reality Developer Program. You will receive the latest on our developer tools, events, and early access offers.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Reporting Security Issues

Security issues and bugs should be reported privately, via email, to the Microsoft Security Response Center (MSRC) at secure@microsoft.com. You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Further information, including the MSRC PGP key, can be found in the Security TechCenter.

License

MIT License

azure_kinect_ros_driver's People

Contributors

amelhassan, aravindbattaje, bnjff, christian-rauch, helenol, iory, jeffdelmerico, microsoft-github-policy-service[bot], microsoftopensource, msftgits, nfiedler, oblitorum, ooeygui, ricardoragel, roseflunder, seanyen, shuntaraw, skalldri, zchenpds


azure_kinect_ros_driver's Issues

Switch to the C++ wrapper for playback

Is your feature request related to a problem? Please describe.
To resolve deprecation warnings, and for less verbose code, we should switch to the new C++ wrapper for the playback API that shipped with Sensor SDK version 1.2.0.

Describe the solution you'd like
Use k4arecord/playback.hpp instead of k4arecord/playback.h
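
A minimal sketch of what the switch might look like, assuming the k4arecord/playback.hpp wrapper from SDK 1.2.0 (k4a::playback::open, get_next_capture); the file name and loop body are placeholders:

    #include <k4arecord/playback.hpp>

    // Open a recording with the C++ wrapper instead of k4a_playback_open().
    k4a::playback recording = k4a::playback::open("capture.mkv");  // placeholder path
    k4a::capture capture;
    while (recording.get_next_capture(&capture))
    {
      // Hand the capture to the existing per-frame processing here.
    }
    // RAII: the playback handle is closed when the recording object goes out of scope.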

ROS Drivers crash when using export ROS_MASTER_URI and export ROS_HOSTNAME

Describe the bug
So I have gotten the Azure Kinect working on Ubuntu 16.04 without any issue, aside from the image processing thread running behind:

[ WARN] [1568899738.323945977]: Image processing thread is running behind.
Expected max loop time: 0.200000000
Actual loop time: 0.214532173

I want to use the Azure Kinect with a mobile robot's ROS master, so I performed the steps outlined in http://wiki.ros.org/ROS/Tutorials/MultipleMachines to export the mobile robot's ROS master. However, when I perform the commands outlined in the reproduce section, the terminal immediately crashes without generating any log file that I can find. In my k4atypes.h file, the logging level is set to most severe.

To Reproduce
Steps to reproduce the behavior:

  1. Connect Azure Kinect to PC running Ubuntu 16.04
  2. Perform the following commands to gain access to the USB-connected Kinect:
    sudo chmod 666 /dev/bus/usb/001/*
    sudo chmod 666 /dev/bus/usb/002/*
  3. Ensure you are connected to remote roscore by pinging remote address
  4. Source the workspace containing the Azure Kinect ROS Drivers and then run the following commands:
    export ROS_MASTER_URI=http://192.168.1.101:11311
    # insert your own external roscore address here
    export ROS_HOSTNAME=192.168.1.198
    # insert your own PC address here
  5. Run roslaunch azure_kinect_ros_driver driver.launch

Expected behavior
Launch file does not crash

Logs
Unable to find log files


Screenshots
(screenshot: azure_kinect_process_died)

Desktop (please complete the following information):

  • OS: Ubuntu
  • Version 16.04

Additional context
I pulled the changes made in #70; however, that did not fix the issue either.

"process has died" when roslaunch driver.launch

Describe the bug & To Reproduce
After building the Azure_Kinect_ROS_Driver package with catkin_make, running

roslaunch driver.launch

produces the following errors:
(screenshot: Azure_Kinect_Bug)

Expected behavior
The driver.launch should run successfully.

Desktop (please complete the following information):

  • OS: Ubuntu 16.04 (running on Intel NUC mini PC)
  • Version: ROS Kinetic

Additional context
I already successfully ran the Azure ROS Driver on my laptop (Ubuntu 16.04 & ROS Kinetic). However, when I do the same thing on my robot's mini PC (an Intel NUC, also Ubuntu 16.04 & ROS Kinetic), this error occurs.

I can confirm that the robot's mini PC fulfills the hardware requirements to run the Azure Kinect. I followed the Azure Kinect SDK build steps and can successfully run k4aviewer on the mini PC. Here is the screenshot:
(screenshot: k4aviewer)

Another thing that might help the troubleshooting: when I run driver.launch on the robot's mini PC without the Azure Kinect camera plugged in, the exact same error occurs. So it seems that the error happens before the sensor is even found.

Could anyone give me any clues to debug this? Any suggestion would help. I know the Azure Kinect is not officially supported on Ubuntu 16.04 & ROS Kinetic, but currently many robots can only work with this Ubuntu version.

Kinect timestamps and ros::Time::now() drift apart

Describe the bug
Over time, ROS time and Kinect time appear to drift apart. This may impact the timestamping solution implemented in #21.

To Reproduce
Leave the ROS node running overnight. Observe that timestamps drift significantly over long periods of time.

Expected behavior
Timestamps should remain relatively consistent over long periods of time.


Enable a CI system

We need to have Azure DevOps CI support before we go public. It should be able to:

  • Download and install the Azure Kinect SDK
  • Download and install ROS
  • Build the node

This needs to be done on both Windows and Linux.

driver.launch fails if depth stream is disabled

Describe the bug
driver.launch fails with errors if depth_enabled is set to false.

To Reproduce

  1. Run: roslaunch azure_kinect_ros_driver driver.launch depth_enabled:=false point_cloud:=false rgb_point_cloud:=false
  2. ROS node terminates with errors.

Expected behavior
ROS node runs without error and publishes RGB images and IMU data (and others)

Logs
[2019-08-12 17:53:28.142] [error] [t=3742] /home/vsts/work/1/s/extern/Azure-Kinect-Sensor-SDK/src/transformation/transformation.c (84): transformation_possible(). Expect depth camera is running to perform transformation.
[2019-08-12 17:53:28.142] [error] [t=3742] /home/vsts/work/1/s/extern/Azure-Kinect-Sensor-SDK/src/transformation/transformation.c (101): transformation_possible(calibration, source_camera) returned failure in transformation_3d_to_3d()
[2019-08-12 17:53:28.142] [error] [t=3742] /home/vsts/work/1/s/extern/Azure-Kinect-Sensor-SDK/src/sdk/k4a.c (919): transformation_3d_to_3d(calibration, source_point3d_mm->v, source_camera, target_camera, target_point3d_mm->v) returned failure in k4a_calibration_3d_to_3d()
terminate called after throwing an instance of 'k4a::error'
what(): Calibration contained invalid transformation parameters!

Desktop (please complete the following information):

  • OS: Ubuntu
  • Version: 18.04

Playback support

The Azure Kinect Sensor SDK has a native playback / recording API. ROS also has a native playback and recording API, and these two formats are incompatible.

The Azure Kinect ROS Driver could be used to play back Azure Kinect recordings through ROS. This could be useful for people who want to collect data without having to install ROS, and then play it back in future on a ROS system. It is also useful for people who have previously collected data in the Azure Kinect recording format and now wish to use it with ROS.

We should add playback support to the Azure Kinect ROS Driver, potentially by creating a new node which is specific to playback.

Publish JPEG images instead of BGRA

The Azure Kinect does not natively produce BGRA images: the BGRA conversion is done in the SDK, with significant processing required on the host to support it.

To reduce the workload on the host device, the ROS node should support publishing the camera's native format (MJPEG is probably a good choice) as well as BGRA.
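
A minimal sketch of the idea, assuming the color camera is started with K4A_IMAGE_FORMAT_COLOR_MJPG so the k4a::image buffer already holds a JPEG; the timestamp handling is simplified:

    #include <boost/make_shared.hpp>
    #include <k4a/k4a.hpp>
    #include <ros/ros.h>
    #include <sensor_msgs/CompressedImage.h>

    sensor_msgs::CompressedImagePtr toCompressedImage(const k4a::image& color)
    {
      auto msg = boost::make_shared<sensor_msgs::CompressedImage>();
      msg->header.stamp = ros::Time::now();  // the driver would derive this from device time
      msg->format = "jpeg";
      const uint8_t* buf = color.get_buffer();
      msg->data.assign(buf, buf + color.get_size());  // copy the raw MJPEG bytes, no decode
      return msg;
    }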

Increase parallelism of sensor data processing

Is your feature request related to a problem? Please describe.
Currently, the node processes large amounts of Kinect sensor data serially: each image is produced in series, followed by the point cloud. On powerful machines, this could be done in a multi-threaded way to speed up image generation.

Describe the solution you'd like
Provide an option to multi-thread the node so that images can be produced faster (see the sketch below).
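
A minimal sketch of one possible shape, assuming hypothetical publishColor/publishDepth/publishPointCloud helpers that each consume the same capture:

    #include <future>
    #include <k4a/k4a.hpp>

    void publishColor(const k4a::capture&);       // hypothetical existing helpers
    void publishDepth(const k4a::capture&);
    void publishPointCloud(const k4a::capture&);

    void processCapture(const k4a::capture& capture)
    {
      // Convert and publish the individual products concurrently.
      auto color = std::async(std::launch::async, [&] { publishColor(capture); });
      auto depth = std::async(std::launch::async, [&] { publishDepth(capture); });
      auto cloud = std::async(std::launch::async, [&] { publishPointCloud(capture); });
      // Join before pulling the next capture so frames stay ordered.
      color.get();
      depth.get();
      cloud.get();
    }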

Describe alternatives you've considered
This could also be done by spawning many nodelets, but it could be expensive to serialize the K4A capture object so that it can be passed to the various nodelets.


Publish body index map in ROS

Is your feature request related to a problem? Please describe.
No body segmentation result is provided by this driver.

Describe the solution you'd like
Publish the body index map in ROS when body_tracking_enabled=true.

Describe alternatives you've considered
(None)


Object Boundary "Leak" in RGB-D Image and PointCloud

The problem still exists in the latest SDK; it has not been resolved.

Describe the bug
There are severe overlaps at the boundary of objects in the /rgb_to_depth/image_raw topic. You can see my hand below; it looks as if there is another hand close behind my real hand.
(screenshot: misOverlap)
Besides, the point cloud topic also seems to suffer from this boundary problem. It makes objects look "larger" due to the overlap at their boundaries. What's more, the point cloud also has approximately 1 second of lag, which makes real-time applications based on the Azure Kinect ROS Driver difficult.

To Reproduce
My driver.launch parameter configuration:
(screenshot: driverParam)
Run

roslaunch Azure_Kinect_ROS_Driver/launch/driver.launch

and then show /rgb_to_depth/image_raw in rqt_image_view

Expected behavior
A registered RGB-D image with correct and sharp boundaries, and real-time PointCloud2 data with low lag (consider the real-time point cloud performance of the Kinect v1 camera in RViz).

Verify if gyro and accel are sampled at different rates

Old comments suggest that the IMU's gyro and accelerometer are sampled at different rates, with different timestamps.

This might be a problem for ROS, since it only provides a way to send a single IMU message with a single timestamp.

Investigate whether this is true, and try to determine whether it will cause issues with common SLAM packages (for example, Cartographer).

Issue with Camera Extrinsic Calibration

Describe the bug
We are attempting to use Azure Kinects in high-precision robotic sensing and manipulation projects. Specifically, we are trying to find rigid spatial transforms from multiple Kinect cameras to an arbitrary fixed frame, i.e. the extrinsic calibration between the camera frame and the fixed frame. We use a charUco board in conjunction with OpenCV to calculate this transform. To verify our calibration, we perform an experiment to evaluate its accuracy. The steps of this experiment are outlined below:

Using the Azure_Kinect_ROS_Driver, acquire RGB and depth (depth_to_rgb) images of the charUco board, as well as the RGB intrinsics and distortion parameters from the /camera_info topic. We collect images at 4096x3072, and the depth mode is set to WFOV unbinned.

After collecting color and depth images, we use openCV’s aruco module to find markers in the images

corners, ids, rejected_img_points = cv2.aruco.detectMarkers(img, dictionary)

These markers are further refined.
corners, ids = cv2.aruco.refineDetectedMarkers(img, board, corners, ids, rejected_img_points)[:2]

CharUco corners are found
num_charuco, charuco_corners, charuco_ids = cv2.aruco.interpolateCornersCharuco(corners, ids, img, board)

The pose (transform) of the board in the camera frame is computed.
ret, rvec, tvec = cv2.aruco.estimatePoseCharucoBoard(charuco_corners, charuco_ids, board, intrinsics['camera_matrix'], intrinsics['dist_coeffs'])

Using the depth image and intrinsics, project the 2D charUco points into the 3D camera frame.

x = (u - cx) * pt_depth * 1/fx  # u is the charUco corner x coordinate, cx is the x optical center, fx is the x focal length
y = (v - cy) * pt_depth * 1/fy  # v is the charUco corner y coordinate, cy is the y optical center, fy is the y focal length
z = pt_depth * unit_scaling  # unit_scaling converts mm to meters; here it is set to 1

Using the computed transform, transform these points from 3D camera frame to charUco board frame

rotMat = cv2.Rodrigues(rvec)[0]
T = np.zeros((4, 4))
T[:3, :3] = rotMat
T[:3, 3] = tvec[:, 0]
T[3, 3] = 1
T = np.linalg.inv(T)

Calculate the Euclidean distances between the depth points and the ground truth points:

boardFramePoints = []
for p in range(len(pts3D)):
    boardFramePoints.append(np.dot(T, np.concatenate([pts3D[p], [1]]))[:3])

Given these transformed points (in the charUco board frame) and the ground truth board points, calculate the Euclidean distance between the two sets:

error = np.array(boardFramePoints) - obj_points  # obj_points are the ground truth board points from the aruco lib
euc = np.linalg.norm(error, axis=1)

In these experiments we've seen errors on the order of centimeters; we expect these errors to be a few millimeters. We performed this experiment across five different images and averaged the Euclidean distances, obtaining a mean error of 0.0172954 meters with a standard deviation of 0.00574098015.

What do you think could be causing this error? We've observed that the depth measurements are accurate; we've used objects of known dimensions to verify them. This leads us to believe that the intrinsic/distortion calibration parameters could be incorrect. Should the distortion parameters be converted in some way before their use in aruco? The distortion parameters from ROS appear to adhere to the rational polynomial model. I assume that the aruco functions expect plumb bob parameters, but I couldn't find anything in their documentation. Could you please give some feedback as to why our transform/experiment gives the results that it does?

To Reproduce
Steps to reproduce the behavior:

  1. Clone my repo which uses your intrinsics (https://github.com/t-walker-21/extrinsicCalibration/tree/master)
  2. Install the two dependencies in the README
  3. Run the python script in the README
  4. See error

Expected behavior
Given the intrinsics and distortion coefficients from the ROS node, we expect the depth projection error to be very low (~2-7 mm); however, the error is more like 17 mm.

Desktop:

  • OS: Ubuntu 18

RGB Pointcloud slows down the publishing rate a lot

Describe the bug
When enabling rgb_point_cloud, the performance of the node degrades significantly. Without the colorized point cloud, the node can maintain a steady 30 Hz publishing rate when running the Kinect at 30 FPS.
With rgb_point_cloud enabled, the publishing rate decreases to roughly 10-12 Hz.
When visualizing the point cloud in RViz, the slow publishing rate leads to a laggy view:
RViz itself maintains a high enough frame rate displaying the point cloud, but the data comes in too slowly.
When using k4aviewer, there is no slowdown in the colorized point cloud view.

To Reproduce
Steps to reproduce the behavior:

  1. Start the node via the driver.launch and use fps:=30, depth_mode:=NFOV_UNBINNED, rgb_point_cloud:=false
  2. Call "rostopic hz points2" in another terminal
  3. Restart the node with rgb_point_cloud:=true
  4. Call "rostopic hz points2" again and compare the rate with step 2

Expected behavior
Maintain the 30 Hz publishing rate for the RGB point cloud on machines that can easily view the colorized point cloud at 30 FPS in k4aviewer.

Desktop (please complete the following information):

  • OS: Ubuntu 18.04

Don't register topics that won't be published

Is your feature request related to a problem? Please describe.
Generally, topics that aren't going to be published given a device configuration shouldn't be registered.

For example, if the point_cloud parameter is false, the points2 topic shouldn't be advertised.

Describe the solution you'd like
Don't publish topics that cannot be provided given a device configuration.

Describe alternatives you've considered
None.


Don't perform processing on unsubscribed topics

Is your feature request related to a problem? Please describe.
The ROS node performs a lot of processing for the point_cloud, depth_to_rgb and rgb_to_depth transforms. These are not always needed, and it should be possible to disable them.

Describe the solution you'd like
Several improvements (see the sketch after this list):

  • Parameters to disable unneeded transforms
  • Automatically stop processing for these topics when there are no subscribers
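
A minimal sketch of the subscriber check, assuming a hypothetical fillPointCloud helper and the driver's point cloud publisher:

    // Skip the expensive transforms entirely when nothing is listening.
    if (pointcloud_publisher_.getNumSubscribers() > 0)
    {
      sensor_msgs::PointCloud2 cloud_msg;
      fillPointCloud(capture, cloud_msg);  // hypothetical conversion helper
      pointcloud_publisher_.publish(cloud_msg);
    }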

Describe alternatives you've considered
None.


Playback files color support (MJPEG to BGRA conversion)

Is your feature request related to a problem? Please describe.
Add a method to use the color image of playback files, which is in MJPEG format.
Currently the node can only process BGRA color images.

Describe the solution you'd like
It looks like the color image must be converted from MJPEG to BGRA.
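
A minimal sketch of one way to do the conversion, assuming OpenCV is available to the node; the SDK's own turbojpeg path would be an alternative:

    #include <k4a/k4a.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>

    cv::Mat mjpegToBgra(const k4a::image& mjpeg)
    {
      // Wrap the compressed buffer without copying, then decode to BGR8.
      cv::Mat raw(1, static_cast<int>(mjpeg.get_size()), CV_8UC1,
                  const_cast<uint8_t*>(mjpeg.get_buffer()));
      cv::Mat bgr = cv::imdecode(raw, cv::IMREAD_COLOR);
      cv::Mat bgra;
      cv::cvtColor(bgr, bgra, cv::COLOR_BGR2BGRA);  // match the node's BGRA path
      return bgra;
    }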

Describe alternatives you've considered
None. I suspect that working directly with a compressed JPEG image to create a point cloud is not possible.

RGB Point Cloud attempts to render pixels not seen by the color camera

Describe the bug
When rendering an RGB point cloud, the native point cloud conversion function attempts to render invalid color pixels into the point cloud. This causes the image to contain "bleed over" from other valid color pixels.

To Reproduce
Steps to reproduce the behavior:

  1. Enable the rgb_point_cloud parameter
  2. Point the camera at a far-away object
  3. Obscure part of the scene such that an object is hidden from the RGB camera, but visible to the depth camera
  4. Observe that incorrect colors are pasted onto this hidden object from surrounding objects that the RGB camera can see.

Expected behavior
When RGB point cloud is enabled, invalid depth and color pixels should not be rendered.


Update Sensor SDK version to 1.2.0

Is your feature request related to a problem? Please describe.
With the release of Azure Kinect Sensor SDK 1.2.0, we should switch to this new version.
microsoft/Azure-Kinect-Sensor-SDK#608

The MSI installer for Windows as well as the NuGet packages are already released.
The Debian package for Ubuntu follows early next week.
With the new version we no longer need to include the custom k4a.hpp, because the 1.2.0 version adds the "get_handle" method to the capture class.

Describe the solution you'd like

  • Update CMake files with the new version (depth_engine also got a bump to 2.0)
  • Remove custom k4a.hpp from this repository and use k4a.hpp from the sdk again

Describe alternatives you've considered
none

Additional context
none

P matrix in CameraInfo is incorrect

Describe the bug
By convention, P (the projection matrix) in the sensor_msgs/CameraInfo message should be defined for rectified images, not raw images (see the link below). Currently the left 3x3 block of P is identical to K (the intrinsic matrix) even when D (distortion) is nonzero.
http://docs.ros.org/melodic/api/sensor_msgs/html/msg/CameraInfo.html

To Reproduce
Steps to reproduce the behavior:

  1. Run ROS node:
    $ roslaunch azure_kinect_ros_driver driver.launch
  2. Subscribe to rgb/camera_info:
    $ rostopic echo -n1 rgb/camera_info

Expected behavior
The left 3x3 block of P should differ from K unless D is zero.
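
A minimal sketch of how P could be derived instead of copying K, assuming OpenCV is used to pick the new camera matrix for the rectified image; the alpha choice and the monocular [K'|0] layout are assumptions here:

    #include <opencv2/calib3d.hpp>

    // K, D: intrinsics and distortion from the K4A calibration blob.
    cv::Mat newK = cv::getOptimalNewCameraMatrix(K, D, imageSize, /*alpha=*/0.0);
    // For a monocular camera: CameraInfo.R stays identity and
    // CameraInfo.P = [newK | 0], which now differs from K whenever D != 0.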

Logs

$ rostopic echo -n1 /kinect0/rgb/camera_info
header: 
  seq: 233
  stamp: 
    secs: 1565665931
    nsecs: 268071349
  frame_id: "kinect0_rgb_camera_link"
height: 720
width: 1280
distortion_model: "rational_polynomial"
D: [0.7799050211906433, -2.9802768230438232, 0.0007482637884095311, 5.6622509873704985e-05, 1.6997523307800293, 0.6572195887565613, -2.811908721923828, 1.6302372217178345]
K: [604.8837280273438, 0.0, 636.9334106445312, 0.0, 604.9359130859375, 367.4653625488281, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [604.8837280273438, 0.0, 636.9334106445312, 0.0, 0.0, 604.9359130859375, 367.4653625488281, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi: 
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
---

Desktop (please complete the following information):

  • OS: Ubuntu
  • Version 18.04

Integrate the body tracking data to ROS

Is your feature request related to a problem? Please describe.
No, just a feature request

Describe the solution you'd like
Could we integrate the body tracking data into the ROS system, so that the joint data can be displayed in RViz and used by other ROS modules?

Also, I want to use the ROS system to record the body movement data.

Describe alternatives you've considered
No

Additional context
No

ROS2 Support

As ROS1 is slowly being phased out in favor of ROS2, we should begin investigating ROS2 support for the Azure Kinect. This will involve creating a new branch for a ROS2 release, and then porting the node to run on the ROS2 messaging stack.

Unclear error message when starting cameras with unsupported depth_mode/fps combination

Currently, the default launch parameters for the depth camera are not supported (30 fps with WFOV_UNBINNED), which causes the following error on startup. It is not clear from the error that the cause is an unsupported combination of parameters rather than a problem with the device. I'm not sure about the best way to resolve this, but since the call to k4a_device_start_cameras in k4a.hpp only indicates success or failure, perhaps some logic to check for valid parameters before the call would be appropriate.
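
A minimal sketch of such a pre-check, assuming (per the Sensor SDK documentation) that WFOV_UNBINNED supports at most 15 FPS; the surrounding config/ROS_ERROR plumbing is illustrative:

    // config is the assembled k4a_device_configuration_t.
    // Validate the depth_mode/fps combination before k4a_device_start_cameras().
    if (config.depth_mode == K4A_DEPTH_MODE_WFOV_UNBINNED &&
        config.camera_fps == K4A_FPS_30)
    {
      ROS_ERROR("depth_mode WFOV_UNBINNED supports at most 15 FPS; "
                "lower fps or select a different depth mode.");
      return false;
    }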

turtlebot@CPR-TBT-1360:~/k4a_ws/src/Azure-Kinect-ROS-Driver$ roslaunch azure-kinect-ros-driver driver.launch 
... logging to /home/turtlebot/.ros/log/5490788a-8214-11e9-84b1-302432084cf1/roslaunch-CPR-TBT-1360-7297.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://CPR-TBT-1360:41101/

SUMMARY
========

PARAMETERS
 * /node/color_enabled: True
 * /node/color_resolution: 1536P
 * /node/depth_enabled: True
 * /node/depth_mode: WFOV_UNBINNED
 * /node/fps: 30
 * /node/point_cloud: True
 * /node/rgb_point_cloud: True
 * /rosdistro: kinetic
 * /rosversion: 1.12.14

NODES
  /
    node (azure-kinect-ros-driver/node)

auto-starting new master
process[master]: started with pid [7307]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to 5490788a-8214-11e9-84b1-302432084cf1
process[rosout-1]: started with pid [7320]
started core service [/rosout]
process[node-2]: started with pid [7334]
[ INFO] [1559135946.531854764]: K4A Parameters:
[ INFO] [1559135946.532216718]: sensor_sn - std::string : 
[ INFO] [1559135946.532273119]: depth_enabled - bool : 1
[ INFO] [1559135946.532403370]: depth_mode - std::string : WFOV_UNBINNED
[ INFO] [1559135946.532534871]: color_enabled - bool : 1
[ INFO] [1559135946.532656610]: color_resolution - std::string : 1536P
[ INFO] [1559135946.532709286]: fps - int : 30
[ INFO] [1559135946.532748149]: point_cloud - bool : 1
[ INFO] [1559135946.532879063]: rgb_point_cloud - bool : 1
[ INFO] [1559135946.542739731]: Found 1 sensors
[ INFO] [1559135946.542824807]: No serial number provided: picking first sensor
[ INFO] [1559135946.753357450]: K4A[0] : 000505491912
[ INFO] [1559135946.803890966]: K4A Serial Number: 000505491912
[ INFO] [1559135946.808999033]: RGB Version: 1.6.98
[ INFO] [1559135946.809102560]: Depth Version: 1.6.70
[ INFO] [1559135946.809158423]: Audio Version: 1.6.14
[ INFO] [1559135946.809214286]: Depth Sensor Version: 6109.7.0
[ INFO] [1559135947.027033745]: Setting RGB Camera Resolution: 1536P
[ INFO] [1559135947.027139996]: Setting Depth Camera Mode: WFOV_UNBINNED
[ INFO] [1559135947.027189934]: Setting Camera FPS: 30
[ INFO] [1559135949.371008745]: K4A Calibration Blob:
[ INFO] [1559135949.371100471]: 	 Depth:
[ INFO] [1559135949.371141609]: 		 Extrinsics:
[ INFO] [1559135949.371232747]: 			 Translation: 0, 0, 0
[ INFO] [1559135949.371283848]: 			 Rotation[0]: 1, 0, 0
[ INFO] [1559135949.371399499]: 			 Rotation[1]: 0, 1, 0
[ INFO] [1559135949.371461862]: 			 Rotation[2]: 0, 0, 1
[ INFO] [1559135949.371505425]: 		 Resolution:
[ INFO] [1559135949.371550563]: 			 Width: 1024
[ INFO] [1559135949.371601689]: 			 Height: 1024
[ INFO] [1559135949.371645727]: 		 Intrinsics:
[ INFO] [1559135949.371694890]: 			 Model Type: 4
[ INFO] [1559135949.371752141]: 			 Parameter Count: 14
[ INFO] [1559135949.371806041]: 			 cx: 514.567
[ INFO] [1559135949.371862317]: 			 cy: 515.812
[ INFO] [1559135949.371941055]: 			 fx: 505.549
[ INFO] [1559135949.372021268]: 			 fy: 505.684
[ INFO] [1559135949.372067219]: 			 k1: 1.84525
[ INFO] [1559135949.372124845]: 			 k2: 1.06214
[ INFO] [1559135949.372176333]: 			 k3: 0.05402
[ INFO] [1559135949.372233121]: 			 k4: 2.17978
[ INFO] [1559135949.372292059]: 			 k5: 1.63216
[ INFO] [1559135949.372344347]: 			 k6: 0.287351
[ INFO] [1559135949.372401973]: 			 codx: 0
[ INFO] [1559135949.372459473]: 			 cody: 0
[ INFO] [1559135949.372525824]: 			 p2: -0.0001269
[ INFO] [1559135949.372585675]: 			 p1: 8.09905e-06
[ INFO] [1559135949.372638525]: 			 metric_radius: 0
[ INFO] [1559135949.372692776]: 	 Color:
[ INFO] [1559135949.372736851]: 		 Extrinsics:
[ INFO] [1559135949.372831802]: 			 Translation: -32.0406, -2.13734, 3.99055
[ INFO] [1559135949.372943053]: 			 Rotation[0]: 0.999997, -0.00251766, 0.000236749
[ INFO] [1559135949.373000442]: 			 Rotation[1]: 0.00248799, 0.996297, 0.0859424
[ INFO] [1559135949.373098955]: 			 Rotation[2]: -0.000452245, -0.0859415, 0.9963
[ INFO] [1559135949.373142781]: 		 Resolution:
[ INFO] [1559135949.373223844]: 			 Width: 2048
[ INFO] [1559135949.373316507]: 			 Height: 1536
[ INFO] [1559135949.373360683]: 		 Intrinsics:
[ INFO] [1559135949.373418334]: 			 Model Type: 4
[ INFO] [1559135949.373490897]: 			 Parameter Count: 14
[ INFO] [1559135949.373541410]: 			 cx: 1021.42
[ INFO] [1559135949.373597660]: 			 cy: 783.21
[ INFO] [1559135949.373646648]: 			 fx: 968.663
[ INFO] [1559135949.373700987]: 			 fy: 968.85
[ INFO] [1559135949.373758375]: 			 k1: 0.601996
[ INFO] [1559135949.373822150]: 			 k2: -2.92989
[ INFO] [1559135949.373880801]: 			 k3: 1.7556
[ INFO] [1559135949.373932764]: 			 k4: 0.480061
[ INFO] [1559135949.373992140]: 			 k5: -2.74819
[ INFO] [1559135949.374053515]: 			 k6: 1.67575
[ INFO] [1559135949.374101778]: 			 codx: 0
[ INFO] [1559135949.374158979]: 			 cody: 0
[ INFO] [1559135949.374216267]: 			 p2: 0.000396648
[ INFO] [1559135949.374267668]: 			 p1: 0.000109577
[ INFO] [1559135949.374315918]: 			 metric_radius: 0
[ INFO] [1559135949.374362769]: 	 IMU (Depth to Color):
[ INFO] [1559135949.374403019]: 		 Extrinsics:
[ INFO] [1559135949.374455982]: 			 Translation: -32.0406, -2.13734, 3.99055
[ INFO] [1559135949.374534621]: 			 Rotation[0]: 0.999997, -0.00251766, 0.000236749
[ INFO] [1559135949.374587196]: 			 Rotation[1]: 0.00248799, 0.996297, 0.0859424
[ INFO] [1559135949.374641609]: 			 Rotation[2]: -0.000452245, -0.0859415, 0.9963
[ INFO] [1559135949.374685860]: 	 IMU (Depth to IMU):
[ INFO] [1559135949.374737273]: 		 Extrinsics:
[ INFO] [1559135949.374810961]: 			 Translation: -51.0218, 3.5791, 0.993862
[ INFO] [1559135949.374872687]: 			 Rotation[0]: 0.00204108, 0.0972247, -0.99526
[ INFO] [1559135949.374944187]: 			 Rotation[1]: -0.999998, -6.64172e-05, -0.00205728
[ INFO] [1559135949.374999151]: 			 Rotation[2]: -0.000266121, 0.995262, 0.0972243
[ INFO] [1559135949.375044051]: 	 IMU (IMU to Depth):
[ INFO] [1559135949.375084526]: 		 Extrinsics:
[ INFO] [1559135949.375136252]: 			 Translation: 3.6835, 3.97166, -50.8692
[ INFO] [1559135949.375190665]: 			 Rotation[0]: 0.00204108, -0.999998, -0.000266121
[ INFO] [1559135949.375261828]: 			 Rotation[1]: 0.0972247, -6.64172e-05, 0.995262
[ INFO] [1559135949.375315204]: 			 Rotation[2]: -0.99526, -0.00205728, 0.0972243
[ INFO] [1559135949.375361154]: 	 IMU (Color to IMU):
[ INFO] [1559135949.375419205]: 		 Extrinsics:
[ INFO] [1559135949.375477256]: 			 Translation: -46.9573, -28.4609, 2.99776
[ INFO] [1559135949.375540019]: 			 Rotation[0]: 0.00156067, 0.0113347, -0.999935
[ INFO] [1559135949.375596819]: 			 Rotation[1]: -0.999995, -0.00273097, -0.00159172
[ INFO] [1559135949.375659483]: 			 Rotation[2]: -0.00274883, 0.999932, 0.0113304
[ INFO] [1559135949.375704308]: 	 IMU (IMU to Color):
[ INFO] [1559135949.375745359]: 		 Extrinsics:
[ INFO] [1559135949.375815022]: 			 Translation: -28.3792, -2.54304, -47.0334
[ INFO] [1559135949.375876998]: 			 Rotation[0]: 0.00156067, -0.999995, -0.00274883
[ INFO] [1559135949.375967786]: 			 Rotation[1]: 0.0113347, -0.00273097, 0.999932
[ INFO] [1559135949.376029124]: 			 Rotation[2]: -0.999935, -0.00159172, 0.0113304
[ INFO] [1559135949.376228326]: Depth -> IMU offset: ( -51.0218, 3.5791, 0.993862 )
[ INFO] [1559135949.376319390]: IMU Roll Extrinsics (YPR): -89.8831, 0.0152476, 84.4206
[ INFO] [1559135949.376382990]: IMU Roll Extrinsics Corrected (YPR): 0.116945, 0.0152476, 0.420642
[ INFO] [1559135949.376504729]: Depth -> RGB offset: ( -32.0406, -2.13734, 3.99055 )
[ INFO] [1559135949.376569605]: Color Roll Extrinsics (YPR): 0.142552, 0.0259118, -4.93017
[ INFO] [1559135949.376670831]: STARTING CAMERAS
[2019-05-29 15:19:09.376] [error] [t=7334] ../src/sdk/k4a.c (615): depth_fps_and_mode_supported == true returned failure in validate_configuration()
[2019-05-29 15:19:09.376] [error] [t=7334] ../src/sdk/k4a.c (694): validate_configuration(device, config) returned failure in k4a_device_start_cameras()
terminate called after throwing an instance of 'k4a::error'
  what():  Failed to start cameras!
[node-2] process has died [pid 7334, exit code -6, cmd /home/turtlebot/k4a_ws/devel/lib/azure-kinect-ros-driver/node __name:=node __log:=/home/turtlebot/.ros/log/5490788a-8214-11e9-84b1-302432084cf1/node-2.log].
log file: /home/turtlebot/.ros/log/5490788a-8214-11e9-84b1-302432084cf1/node-2*.log

Add an Azure Kinect URDF model

Is your feature request related to a problem? Please describe.
When visualizing sensor data in RViz, there is no Azure Kinect URDF model visible.

Describe the solution you'd like
We should provide a URDF model that approximates the dimensions of the sensor to integrate with existing robots.

Describe alternatives you've considered
None.


Add nodelet version

For efficiency, we should also build a nodelet version of the node so that it can be bundled into a nodelet manager with related processes (like image_proc/rectify).

Formatting the code with clang-format to apply ROS CppStyleGuide

Is your feature request related to a problem? Please describe.
In one of my pull requests I forgot some indentation in the code.
There is a ROS CppStyleGuide which we could apply to this codebase.
For autoformatting they provide a .clang-format file:
https://github.com/davetcoleman/roscpp_code_format

I saw that the Azure-Kinect-Sensor-SDK also has a .clang-format file in its repository, and it is consumed in the CMakeLists.txt:
https://github.com/microsoft/Azure-Kinect-Sensor-SDK/blob/develop/CMakeLists.txt#L147

I don't really know whether this is just some temporary formatting for the build process or whether the source files are actually reformatted by it. I haven't tried it together with CMake yet; I have only used the format file manually in my VS Code editor.
If all files are formatted during the build, it would be a good improvement: every pull request should involve building the package, so all files would be formatted automatically.

Describe the solution you'd like
Integrate the .clang-format file and use it during the CMake build process.
Ensure correct formatting.

Describe alternatives you've considered
Semi manual formatting the code calling clang format after changing source file.

Release a Melodic build

Is your feature request related to a problem? Please describe.
The "melodic" branch does not contain any code. It should contain the latest release for ROS Melodic.

Describe the solution you'd like
Merge code from master into "melodic" to create the melodic release.

Describe alternatives you've considered
None.

Additional context
None.

Use driver nodelet in the rtabmap_slam launch file

Is your feature request related to a problem? Please describe.
The rtabmap_slam launch file starts a new node of the Azure Kinect ROS Driver, but uses nodelets for the image_proc post-processing steps. It would be significantly more memory efficient to use the pointer-passing mechanics enabled by nodelets.

Describe the solution you'd like
Use the nodelet version of the ROS driver in the rtabmap_slam launch file.

Describe alternatives you've considered
None.

Additional context
None.

CMake should use Module Mode instead of Config Mode

Is your feature request related to a problem? Please describe.
When finding modules on the system, it's more appropriate to use a CMake find module than config-file mode. This would allow CMake to discover multiple SDKs installed on Windows and pick an appropriate one.

Describe the solution you'd like
Switch to using CMake Find Modules.

Describe alternatives you've considered
None.


Windows does not build with Program Files installed SDK

Describe the bug
Windows does not build with the latest commit in master. It fails to find the Azure Kinect SDK.

To Reproduce
Steps to reproduce the behavior:

  1. Run "catkin_make" to try and build the project

Observe that a CMakeLists.txt error is produced saying that the SDK cannot be found.

Expected behavior
Project builds.

Logs

Finding K4A SDK binaries
-- Azure Kinect SDK Version File Contents: 1.1.0

-- PACKAGE_VERSION: 1.1.0
-- PACKAGE_FIND_VERSION: 1.1.0
CMake Error at Azure_Kinect_ROS_Driver/CMakeLists.txt:199 (message):
  K4A SDK not found in Program Files or ./ext/sdk.  Please install the Azure
  Kinect SDK.


-- Configuring incomplete, errors occurred!


Desktop (please complete the following information):

  • OS: Windows 10
  • Version: 1903


IMU is not compliant with REP-145

Describe the bug
The IMU exposed by the ROS driver is not compliant with REP-145.

Expected behavior
To my understanding, the correct way to do this would be (sketched after the list):

  • Emit the IMU data (gyro and accelerometer data) exactly as it comes off the sensor: the same axis, the same polarity
    • Emit this data in a co-ordinate frame “imu_frame”. This is the sensor’s “internal reference frame”
  • Publish a TF between “camera_base” -> “imu_frame”. This TF uses the extrinsics, but also includes the translation / rotation needed to correct the “imu_frame” into the same co-ordinate frame as the camera
    • This TF ensures that, from the “camera_base” co-ordinate frame, the following two properties are true when the camera is sitting on a flat table:
      • +Y of the camera co-ordinate frame is pointing towards the floor
      • +Z of the IMU co-ordinate frame is pointing away from the floor
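
A minimal sketch of the TF half of this, assuming the extrinsics have already been folded into a geometry_msgs::Transform; the frame names follow the proposal above:

    #include <geometry_msgs/TransformStamped.h>
    #include <ros/ros.h>
    #include <tf2_ros/static_transform_broadcaster.h>

    void publishImuExtrinsics(const geometry_msgs::Transform& imu_extrinsics)
    {
      static tf2_ros::StaticTransformBroadcaster broadcaster;
      geometry_msgs::TransformStamped tf;
      tf.header.stamp = ros::Time::now();
      tf.header.frame_id = "camera_base";  // parent: camera co-ordinate frame
      tf.child_frame_id = "imu_frame";     // child: IMU internal reference frame
      tf.transform = imu_extrinsics;       // factory extrinsics + fixed axis correction
      broadcaster.sendTransform(tf);
    }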

Documentation cleanup

Is your feature request related to a problem? Please describe.
Documentation in the repository is not always consistent. It should be cleaned up to ensure that developers have a consistent experience.

Describe the solution you'd like
Clean up the documentation.

Describe alternatives you've considered
None.

Additional context
None.

Use ImageTransport::advertiseCamera()

Is your feature request related to a problem? Please describe.
This is a proposal for a minor refactoring to clean up the code.

Describe the solution you'd like
Replace the pairs of ImageTransport::advertise<Image>() and NodeHandle::advertise<CameraInfo>() in k4a_ros_device.cc with single calls to ImageTransport::advertiseCamera(). If there is a technical reason not to do this, please clarify it in a code comment.
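
A minimal sketch of the refactoring, with the topic name and queue size chosen for illustration:

    #include <image_transport/image_transport.h>

    image_transport::ImageTransport it(node_handle);
    // One CameraPublisher replaces the advertise<Image>/advertise<CameraInfo> pair.
    image_transport::CameraPublisher rgb_pub = it.advertiseCamera("rgb/image_raw", 1);
    // Image and CameraInfo are then published together, with matching headers:
    rgb_pub.publish(image_msg, camera_info_msg);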

ROS launch configuration for cartographer

Hi. I am new to Kinect/ROS and am currently trying to get a (SLAM) 3D pose from the device. I found the launch configuration for rtabmap_ros but didn't manage to install it on Windows, so I was recommended cartographer_ros, which I got running.
Now I am struggling with the configuration. Cartographer needs a .lua as well as a .launch configuration. For beginners like me it would be tremendously helpful to have an example configuration for Cartographer similar to the one in slam_rtabmap.launch.

materials are too generically named, can conflict when integrating

Describe the bug
When the URDF for the Kinect is included in another XACRO, the generic material name can conflict. Suggest namespacing it.

To Reproduce
Create a xacro which defines a material like black.
Include the Kinect URDF.

Notice that the robot state publisher crashes because the URDF parser fails.

Request
Prefix the materials, e.g. azure-kinect-black.

Lower the latency of IMU samples.

Is your feature request related to a problem? Please describe.
The current implementation of void K4AROSDevice::imuPublisherThread() causes a lot of IMU samples to be buffered and dropped, because k4a_device_.get_imu_sample() is not called fast enough (the IMU samples are generated at 1600 Hz but are consumed at 300 Hz). As a consequence, the latency of the published IMU samples is low at the beginning but can reach up to 500 ms very quickly. This is even much larger than the latency of the image captures.

[DEBUG] [1564506266.837885822]: Depth image timestamp lags ros::Time::now() by
93.1763 ms. The lag ranges from 85.5594ms to 149.853ms.
[DEBUG] [1564506266.838092553]: Color image timestamp lags ros::Time::now() by
93.5138 ms. The lag ranges from 85.6315ms to 156.59ms.
[DEBUG] [1564506266.838987116]: IMU timestamp lags ros::Time::now() by
507.203 ms. The lag ranges from 4.47381ms to 509.671ms.
[DEBUG] [1564506266.842241953]: IMU timestamp lags ros::Time::now() by
505.639 ms. The lag ranges from 4.47381ms to 509.671ms.

Describe the solution you'd like
We might want to consider deleting loop_rate.sleep();. With that change, most of the time spent in this thread will be spent waiting for get_imu_sample() to return, as this function blocks until the next sample is available. This would minimize the latency of published IMU messages. If a lower sampling frequency is needed, we could downsample for publication.
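
A minimal sketch of the blocking loop, assuming the C++ wrapper's k4a::device::get_imu_sample (which waits indefinitely by default) and a hypothetical toImuMsg conversion helper:

    void K4AROSDevice::imuPublisherThread()
    {
      k4a_imu_sample_t sample;
      while (running_)
      {
        // Blocks until the next sample arrives; no loop_rate.sleep() needed.
        if (k4a_device_.get_imu_sample(&sample))
        {
          imu_publisher_.publish(toImuMsg(sample));  // hypothetical conversion
        }
      }
    }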

Once the change is applied, the latency would look like this.

[DEBUG] [1564509165.887144542]: IMU timestamp lags ros::Time::now() by
0.800075 ms. The lag ranges from 0.406712ms to 5.679ms.
[DEBUG] [1564509165.891448754]: Depth image timestamp lags ros::Time::now() by
87.1476 ms. The lag ranges from 87.1476ms to 107.465ms.
[DEBUG] [1564509165.891636434]: Color image timestamp lags ros::Time::now() by
87.4763 ms. The lag ranges from 87.4763ms to 107.699ms.
[DEBUG] [1564509165.891814039]: IMU timestamp lags ros::Time::now() by
4.82506 ms. The lag ranges from 0.406712ms to 5.679ms.

Describe alternatives you've considered
(None)

Additional context
(None)

[Crash] on exit when camera fails to start.

Describe the bug
If the camera fails to start for some reason, the shutdown path crashes.

To Reproduce
Steps to reproduce the behavior:
If you do not have a failing camera, simply step over the device->startCamera() call and allow the process to exit.

Crash occurs here:

    // Join the publisher thread
    ROS_INFO("Joining IMU publisher thread");
    imu_publisher_thread_.join();
    ROS_INFO("IMU publisher thread joined");
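
A minimal sketch of a guard for this path; checking joinable() covers the case where the thread was never started:

    // Join the publisher thread only if it was actually started.
    ROS_INFO("Joining IMU publisher thread");
    if (imu_publisher_thread_.joinable())
    {
      imu_publisher_thread_.join();
    }
    ROS_INFO("IMU publisher thread joined");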

Expected behavior
Graceful exit.

Dynamic reconfiguration of RGB camera parameters

Is your feature request related to a problem? Please describe.
Currently, camera parameters (brightness, exposure, contrast, etc.) cannot be adjusted. It would be nice if these parameters could be changed at run time.

Describe the solution you'd like
Implement dynamic_reconfigure interface to call the color sensor control commands.

As a reference, realsense2_camera provides such a reconfiguration interface through rqt_reconfigure.
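
A minimal sketch of a dynamic_reconfigure callback body, assuming the C++ wrapper's k4a::device::set_color_control; the config field name is illustrative:

    // Apply a reconfigured absolute exposure to the color camera.
    k4a_device_.set_color_control(K4A_COLOR_CONTROL_EXPOSURE_TIME_ABSOLUTE,
                                  K4A_COLOR_CONTROL_MODE_MANUAL,
                                  config.exposure_time_usec);  // from dynamic_reconfigure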

Describe alternatives you've considered
These parameters may be set on startup via ROS parameters. That is not an alternative solution, though; it should be implemented as the way to set the default values.

Additional context
When RGB images from multiple Kinects are fused, the colors can be inconsistent due to exposure mismatch. I would like to disable auto exposure and set an absolute value.

Passive IR mode causes a ROS node crash

Describe the bug
The ROS node crashes when running depth in PASSIVE_IR mode.

To Reproduce
Steps to reproduce the behavior:

  1. Run the ROS node with _depth_mode:=PASSIVE_IR

Expected behavior
IR images should be provided in PASSIVE_IR mode.

Logs

[ INFO] [1562001283.655749500]: Depth -> IMU offset: ( -50.9478, 3.4486, 1.18985 )
[ INFO] [1562001283.656014300]: IMU Roll Extrinsics (YPR): -90.0498, 0.310712, 84.1384
[ INFO] [1562001283.656356300]: IMU Roll Extrinsics Corrected (YPR): -0.0498135, 0.310712, 0.138434
[ERROR] [1562001283.656818500]: Could not determine depth camera mode for rotation correction
[ INFO] [1562001283.658792600]: Depth -> RGB offset: ( -31.9573, -2.12932, 3.96378 )
[ INFO] [1562001283.659195900]: Color Roll Extrinsics (YPR): -0.10809, 0.0298361, -5.46022
[ INFO] [1562001283.659621000]: STARTING CAMERAS
[ INFO] [1562001283.836646200]: STARTING IMU
[ INFO] [1562001283.838358700]: K4A Started
[ERROR] [1562001285.223603600]: Cannot render depth frame: no frame
[ERROR] [1562001285.223849400]: Cannot render depth frame: no frame
[ERROR] [1562001285.225339600]: Failed to get rectifed depth frame


Desktop (please complete the following information):

  • OS: Windows 10
  • Version: 1903


Use K4A timestamps as input to the ROS timestamps

Currently, ROS timestamps are computed by capturing ros::Time::now() as soon as a capture arrives and using that time for all data emitted from that capture. To better synchronize the IMU and RGB/depth captures, and to eliminate variability introduced by ROS, we should use the K4A device timestamps as input when producing the ROS timestamps.
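
A minimal sketch of the translation, assuming a hypothetical device_to_ros_offset_ duration that the driver re-estimates (e.g. with a low-pass filter) by comparing device timestamps against ros::Time::now() as captures arrive:

    #include <chrono>
    #include <ros/ros.h>

    ros::Time deviceToRosTime(std::chrono::microseconds device_ts)
    {
      ros::Time stamp;
      stamp.fromNSec(static_cast<uint64_t>(device_ts.count()) * 1000ull);
      // device_to_ros_offset_: hypothetical smoothed offset between the
      // device clock and the ROS clock.
      return stamp + device_to_ros_offset_;
    }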

MJPEG decoder initialization failed

I sometimes encounter this problem, though not every time.
Command: roslaunch azure_kinect_ros_driver rectify_test.launch

Logs
[2019-08-01 16:49:34.768] [error] [t=6690] /home/username/Software/Azure-Kinect-Sensor-SDK/src/color/uvc_camerareader.cpp (1274): DecodeMJPEGtoBGRA32(). MJPEG decoder initialization failed

[2019-08-01 16:49:34.769] [error] [t=6691] /home/username/Software/Azure-Kinect-Sensor-SDK/src/sdk/k4a.c (271): capturesync_get_capture(device->capturesync, capture_handle, timeout_in_ms) returned failure in k4a_device_get_capture()
terminate called after throwing an instance of 'k4a::error'
what(): Failed to get capture from device!

Desktop (please complete the following information):

  • OS: Ubuntu 16.04
  • ROS Version: Kinetic

Warn if the main sensor loop falls behind realtime

Is your feature request related to a problem? Please describe.
On some resource-constrained systems, or in the case of #56, the sensor processing loop can fall behind realtime. We should warn developers when this happens.

Describe the solution you'd like
Add a ROS_WARN to indicate that we have fallen behind realtime (see the sketch below).
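
A minimal sketch matching the wording of the warning quoted earlier on this page; loop_start_time and expected_period are assumed bookkeeping:

    // Compare the measured loop time against the configured frame period.
    ros::Duration actual = ros::Time::now() - loop_start_time;
    if (actual > expected_period)
    {
      ROS_WARN("Image processing thread is running behind.\n"
               "Expected max loop time: %.9f\n"
               "Actual loop time: %.9f",
               expected_period.toSec(), actual.toSec());
    }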

Describe alternatives you've considered
None.


Publish RGB pointclouds beyond the texture

Is your feature request related to a problem? Please describe.
Currently, RGB points are reconstructed only from depth pixels that have non-zero texture colors. Given that the depth camera has a significantly wider FoV, it would be nice to have an option to publish all points, including those not covered by the texture.

Describe the solution you'd like
A potential implementation would be to publish all points regardless of the texture, coloring the points without texture black (see the sketch below). This is the approach adopted in realsense2_camera: https://github.com/IntelRealSense/realsense-ros
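
A minimal sketch of the coloring rule inside the point-cloud fill loop, under the assumption that pixels outside the color camera's view come back from the SDK's transformation with all BGRA channels set to zero:

    // Keep every valid depth point; untextured points simply come out black.
    const uint8_t* bgra = &color_buffer[4 * pixel_index];
    point.b = bgra[0];  // assumed 0 for pixels the color camera cannot see
    point.g = bgra[1];
    point.r = bgra[2];
    // No validity branch: instead of skipping zero pixels, we publish them.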

Describe alternatives you've considered
The points without texture could instead be colored with a different fixed color, or a user-specified value.

Conflict with compressed_image_transport

Describe the bug
The node crashes when a compressed color/depth stream is subscribed to.

To Reproduce
Steps to reproduce the behavior:

  1. Install compressed_image_transport:
    sudo apt-get install ros-melodic-compressed-image-transport
  2. Run a ROS node:
    roslaunch azure_kinect_ros_driver driver.launch
  3. Subscribe to one of the compressed streams:
    rostopic hz /rgb/image_raw/compressed
  4. See the error:
    [node-1] process has died [pid 25048, exit code -11, cmd ...

Expected behavior
Subscription works.

Logs
See above.

Desktop (please complete the following information):

  • OS: Ubuntu
  • Version 18.04

Additional context
The node crashes at cv::imencode() in CompressedPublisher::advertiseImpl:
https://github.com/ros-perception/image_transport_plugins/blob/indigo-devel/compressed_image_transport/src/compressed_publisher.cpp#L137

The reason for the crash is that the version of libjpeg-turbo statically linked into libk4a is incompatible with the version of the same library dynamically loaded by libopencv_imgcodecs. Something similar seems to happen in #37.

Capture timestamps are not synchronized with each other or the IMU

Each individual ROS message contains a timestamp in host time. This timestamp is currently populated at the time each message is generated. Since there is processing associated with each message, each published frame ends up with a slightly different timestamp.

This doesn't represent how the sensor works very well: RGB and depth frames are captured at the same time, so the ROS timestamps should reflect the time they were captured, not the time they were rendered.

Fixing this may involve using the on-device timestamps to allow synchronization between the IMU and capture timestamps, since the IMU is read on a different thread.

slam_rtabmap.launch | state publishers in wrong namespace?

While working to get this k4a ROS package running, I noticed that the two state_publisher nodes crash because they cannot find the robot_description parameter loaded at the beginning of the launch file (the URDF).

Moving these two nodes outside the k4a group/namespace allows them to run without crashing.

I propose that either the URDF be moved into the k4a namespace, or that the state publishers move out. I can make the desired change, open a PR, and build out a few more examples for rtabmap.

Add a new parameter to control the body tracking smoothing between frames

Is your feature request related to a problem? Please describe.
Since Body Tracking SDK version 0.9.2 there has been a method to set the smoothing factor, between 0 (no smoothing) and 1 (full smoothing). The default has been 0.0 since 0.9.3, and it should be possible to change this value for the node.

Describe the solution you'd like
Add a parameter to the node to override the default value of 0.0, as sketched below.
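
A minimal sketch, assuming the Body Tracking SDK's k4abt_tracker_set_temporal_smoothing; the ROS parameter name is illustrative:

    // Read the smoothing factor (0 = no smoothing .. 1 = full smoothing) and apply it.
    float smoothing_factor = 0.0f;
    private_nh.param<float>("body_tracking_smoothing_factor", smoothing_factor, 0.0f);
    k4abt_tracker_set_temporal_smoothing(tracker_, smoothing_factor);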
