
redtail's Introduction

NVIDIA Redtail project

Autonomous visual navigation components for drones and ground vehicles using deep learning. Refer to the wiki for more information on how to get started.

This project contains deep neural networks, computer vision and control code, hardware instructions, and other artifacts that allow users to build a drone or a ground vehicle which can autonomously navigate through highly unstructured environments like forest trails, sidewalks, etc. Our TrailNet DNN for visual navigation runs on NVIDIA's Jetson embedded platform. Our arXiv paper describes TrailNet and the other runtime modules in detail.

The project's deep neural networks (DNNs) can be trained from scratch using publicly available data. A few pre-trained DNNs are also available as part of this project. If you want to train the TrailNet DNN from scratch, follow the steps on this page.

The project also contains Stereo DNN models and a runtime which allow depth to be estimated from a stereo camera on NVIDIA platforms.

IROS 2018: we presented our work at the IROS 2018 conference as part of the Vision-based Drones: What's Next? workshop.

CVPR 2018: we presented our work at the CVPR 2018 conference as part of the Workshop on Autonomous Driving.

References and Demos

News

  • 2020-02-03: Alternative implementations. redtail is no longer being developed, but fortunately our community has stepped in and continued developing the project. We thank our users for their interest in redtail and for their questions and feedback!

    Some alternative implementations are listed below.

  • 2018-10-10: Stereo DNN ROS node and fixes.

    • Added Stereo DNN ROS node and visualizer node.
    • Fixed issue with nvidia-docker v2.
  • 2018-09-19: Updates to Stereo DNN.

    • Moved to TensorRT 4.0
    • Enabled FP16 support in the ResNet18 2D model, resulting in a 2x performance increase (20 fps on Jetson TX2).
    • Enabled TensorRT serialization in ResNet18 2D model to reduce model loading time from minutes to less than a second.
    • Better logging and profiler support.
  • 2018-06-04: CVPR 2018 workshop. Fast version of Stereo DNN.

  • GTC 2018: Here are our Stereo DNN session page from GTC18 and the recorded video presentation.

  • 2018-03-22: redtail 2.0.

    • Added Stereo DNN models and inference library (TensorFlow/TensorRT). For more details, see the README.
    • Migrated to JetPack 3.2. This change brings the latest components, such as CUDA 9.0, cuDNN 7.0, TensorRT 3.0, OpenCV 3.3 and others, to the Jetson platform. Note that this is a breaking change.
    • Added support for INT8 inference. This enables fast inference on devices that have hardware implementation of INT8 instructions. More details are on our wiki.
  • 2018-02-15: added support for the TBS Discovery platform.

    • Step-by-step instructions on how to assemble the TBS Discovery drone.
    • Instructions on how to attach and use a ZED stereo camera.
    • Detailed instructions on how to calibrate, test and fly the drone.
  • 2017-10-12: added full simulation Docker image, experimental support for APM Rover and support for MAVROS v0.21+.

    • Redtail simulation Docker image contains all the components required to run full Redtail simulation in Docker. Refer to wiki for more information.
    • Experimental support for APM Rover. Refer to wiki for more information.
    • Several other changes, including support for MAVROS v0.21+, an updated Jetson install script, and a few bug fixes.
  • 2017-09-07: NVIDIA Redtail project is released as an open source project.

    Redtail's AI modules allow building autonomous drones and mobile robots based on deep learning and the NVIDIA Jetson TX1 and TX2 embedded systems. Source code, pre-trained models, and detailed build and test instructions are released on GitHub.

  • 2017-07-26: migrated code and scripts to JetPack 3.1 with TensorRT 2.1.

    TensorRT 2.1 provides significant improvements in DNN inference performance as well as new features and bug fixes. This is a breaking change which requires re-flashing the Jetson with JetPack 3.1.

redtail's People

Contributors

alexey-kamenev, hildebrandt-carl, nsmoly, vijay609


redtail's Issues

Where to Start

I think this is the most general question here, but I need to ask, with all due respect. This is an amazing open source project. I have dedicated myself to forking this project with your guidance. For a while I have been inspecting the project and surveying the literature. I also bought a Jetson TX2 and a new computer with a GTX 1070, and got the system ready to start. Unfortunately, I am all alone on this project and struggling with where to start. I just need the right starting point and a spark. Would you please give me that spark?

Thank you in advance, Ender.

Failing to start Gazebo inside the container

I followed the steps in 'Building Docker image' and succeeded in building the image. Then I created the container and built the components, all successfully. But when I run the Gazebo simulator with 'make posix_sitl_default gazebo', errors appear. I realized these errors are about Gazebo, so I just ran 'gazebo' in a terminal inside the container. The terminal outputs:

root@yang-X556UQK:~# gazebo
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Value in failed request: 0x0
Serial number of failed request: 30
Current serial number in output stream: 31
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Value in failed request: 0x0
Serial number of failed request: 30
Current serial number in output stream: 31

I guess it's because Gazebo needs GLX, but according to https://github.com/NVIDIA/nvidia-docker/wiki/Frequently-Asked-Questions#is-opengl-supported OpenGL+GLX is not supported by nvidia-docker, so I'm confused about how to run the Gazebo simulator. My configuration is below:
host computer: Ubuntu 16.04 x64 with an NVIDIA GeForce 940MX
docker image: nvidia-redtail-sim:kinetic, built according to https://github.com/NVIDIA-Jetson/redtail/wiki/Testing-in-Simulator
gazebo simulator: running inside the container with 'make posix_sitl_default gazebo'. Should I run this inside the container or just on the host computer?
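One possible direction, offered as an assumption rather than anything confirmed in this thread: with nvidia-docker v2 (the 2018-10-10 news entry above mentions a fix for it), GLX applications such as Gazebo can sometimes run inside the container if the host X socket is mounted and the graphics driver capabilities are exposed. A hedged sketch, assuming an X server on the host and the nvidia-redtail-sim:kinetic image; whether GLX actually works still depends on the driver and the base image:

# Allow local containers to talk to the host X server (loosens X security).
xhost +local:root
# NVIDIA_DRIVER_CAPABILITIES=all asks the NVIDIA runtime to expose graphics
# (GLX/EGL) libraries in addition to the compute libraries.
docker run -it --runtime=nvidia \
    -e DISPLAY=$DISPLAY \
    -e NVIDIA_DRIVER_CAPABILITIES=all \
    -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
    nvidia-redtail-sim:kinetic \
    bash
# Then, inside the container:
gazebo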

Webcam model

What webcam did you use, and what fps were you able to achieve with it?
Thanks for opening up this work!

Where did the file "labels_map.txt" come from?

Hello, thank you for sharing the code. I am very interested in the work you have done. When I followed the steps to create a training dataset, I couldn't find a way to create the file "labels_map.txt". Could you tell me how to create the file?

QGround connect to pixhawk failure with MB12xx sonar enabled

Hi Alexey,
Have you ever encountered this issue:
With "SENS_EN_MB12xx" enabled, I close QGroundControl, reboot the Pixhawk, then wait a few seconds and open QGroundControl again; the Pixhawk never connects to QGroundControl successfully.
To double-check this issue, I erased the Pixhawk flash, programmed the binary again, and set this parameter several times.
The issue is not related to whether the PX4Flow I2C is connected to the Pixhawk or not.
Programming the standard online (stable) version does not show this issue.
Can you help check this? Thanks. Otherwise I need to change the firmware version if I want to use the MB12xx sonar rather than a Lidar.

firmware version: PX4 firmware v1.4.4.
QGroundControl version: 3.2.4/2.9.5......

Invoking "make px4_controller_node -j4 -l4" failed

Running command: "make cmake_check_build_system" in "/home/ubuntu/ws/build"

Running command: "make px4_controller_node -j4 -l4" in "/home/ubuntu/ws/build"

[ 33%] Building CXX object px4_controller/CMakeFiles/px4_controller_node.dir/src/px4_controller.cpp.o
/home/ubuntu/ws/src/px4_controller/src/px4_controller.cpp: In member function ‘bool px4_control::PX4Controller::arm()’:
/home/ubuntu/ws/src/px4_controller/src/px4_controller.cpp:531:77: error: ‘mavros_msgs::SetMode::Response {aka struct mavros_msgs::SetModeResponse_<std::allocator >}’ has no member named ‘success’
if (setmode_client_.call(offb_setmode) && offb_setmode.response.success)
^
px4_controller/CMakeFiles/px4_controller_node.dir/build.make:62: recipe for target 'px4_controller/CMakeFiles/px4_controller_node.dir/src/px4_controller.cpp.o' failed
make[3]: *** [px4_controller/CMakeFiles/px4_controller_node.dir/src/px4_controller.cpp.o] Error 1
CMakeFiles/Makefile2:4014: recipe for target 'px4_controller/CMakeFiles/px4_controller_node.dir/all' failed
make[2]: *** [px4_controller/CMakeFiles/px4_controller_node.dir/all] Error 2
CMakeFiles/Makefile2:4026: recipe for target 'px4_controller/CMakeFiles/px4_controller_node.dir/rule' failed
make[1]: *** [px4_controller/CMakeFiles/px4_controller_node.dir/rule] Error 2
Makefile:1603: recipe for target 'px4_controller_node' failed
make: *** [px4_controller_node] Error 2
Invoking "make px4_controller_node -j4 -l4" failed
All done.
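For reference, newer MAVROS releases renamed the SetMode service response field from success to mode_sent, which is consistent with this compile error and with the MAVROS v0.21+ support mentioned in the news section above. A hedged sketch of a stop-gap patch, assuming the paths from the log above, in case updating redtail itself is not an option:

# Hypothetical workaround: switch to the renamed MAVROS field, then rebuild.
# sed keeps a .bak copy of the original file.
sed -i.bak 's/offb_setmode\.response\.success/offb_setmode.response.mode_sent/' \
    /home/ubuntu/ws/src/px4_controller/src/px4_controller.cpp
cd /home/ubuntu/ws/build && make px4_controller_node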

caffe digits ERROR: error code -11

When training the orientation head, I ran into the following problem.

Setting up train-data
Top shape: 64 3 180 320 (11059200)
Top shape: 64 (64)
Memory required for data: 44237056
Creating layer data_aug

System configuration
Operating system: ubuntu 16.04
Compiler:
CUDA version (if applicable): 9.0
CUDNN version (if applicable): 9.0.176
Python version (if using pycaffe): tried both 2.7 and 3.5.2

Memory Required For Data Error

Hello there,

Currently using this development stack:

  • Ubuntu 16.04 LTS (64 Bit)
  • GeForce GTX 960M (Driver: 384.90), Cuda 8.0, CuDNN 8.0
  • Digits 6.1
  • Caffe 0.15

I tried to follow this tutorial. (Already created the Forest trails dataset.) When I create the exact same classification model in the tutorial, I'm getting this error:

(error screenshot)

I thought it could be some error within my system, so I tried to train a simple MNIST-LeNet model. It works without a problem and also utilizes the GPU.

Any ideas?

Dataset Preparation

Hi,

I am interested in your work and want to thank you for sharing it. Also, I am new to deep learning.
I have started to implement your project myself. Here are my questions:

  1. I would like to create my own dataset. If I convert RGB images to grayscale, does that affect training in a bad way? I ask because my GPU is a GTX 1070. What do you think?
  2. As is well known, I need to resize the images before training. What size should the images be? How do I decide on the image size: based on the network I will use, or something else?

Kind regards, Ender.
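For context: the Caffe log elsewhere on this page shows a TrailNet data blob of shape 64 3 180 320, i.e. 3-channel 320x180 inputs, so the target resolution ultimately comes from the network definition. A hedged sketch of batch-resizing a copy of a dataset with ImageMagick, assuming JPEG images in a dataset/ directory (a hypothetical layout):

# Resize every JPEG to exactly 320x180 (the "!" ignores aspect ratio).
# mogrify edits files in place, so run this on a copy of the data.
mogrify -resize 320x180! dataset/*.jpg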

Linking stage error

The following error occurred while running the jetson_ros_install.sh script on a TX2:
[ 73%] Built target camera_info_manager
[ 76%] Linking CXX executable /home/nvidia/ws/devel/lib/camera_info_manager/unit_test
CMakeFiles/convert.dir/src/convert.cpp.o: In function `main':
convert.cpp:(.text+0x134): undefined reference to `ros::console::initializeLogLocation(ros::console::LogLocation*, std::string const&, ros::console::levels::Level)'
convert.cpp:(.text+0x2cc): undefined reference to `ros::console::initializeLogLocation(ros::console::LogLocation*, std::string const&, ros::console::levels::Level)'
convert.cpp:(.text+0x408): undefined reference to `ros::console::initializeLogLocation(ros::console::LogLocation*, std::string const&, ros::console::levels::Level)'
/home/nvidia/ws/devel/lib/libcamera_calibration_parsers.so: undefined reference to `YAML::Emitter::Write(std::string const&)'
/home/nvidia/ws/devel/lib/libcamera_calibration_parsers.so: undefined reference to `YAML::ostream_wrapper::write(std::string const&)'
/home/nvidia/ws/devel/lib/libcamera_calibration_parsers.so: undefined reference to `YAML::detail::node_data::empty_scalar'
/home/nvidia/ws/devel/lib/libcamera_calibration_parsers.so: undefined reference to `YAML::detail::node_data::set_scalar(std::string const&)'
/home/nvidia/ws/devel/lib/libcamera_calibration_parsers.so: undefined reference to `YAML::Emitter::PrepareIntegralStream(std::basic_stringstream<char, std::char_traits, std::allocator >&) const'
collect2: error: ld returned 1 exit status

Any idea what the problem could be?
Thanks

Library confusion when compiling OpenCV

Hi,

On the wiki, under Test Simulation, without pointing to the correct libcuda.so it gives this error when building:

/usr/local/nvidia/lib/libcuda.so: error adding symbols: File in wrong format
collect2: error: ld returned 1 exit status

Please review my changes and merge them in, from:
cosmicog/redtail-wiki@1b38eed

Fake camera stream and use local video file instead

Hi Alexey and guys:
Can you provide the first-person-view video recorded by your drone when you were testing in the forest?
It's hard to simulate a forest environment in Shanghai.
So my idea is to let my drone read your video file (not my camera), so the drone thinks it is in a forest.
I would put my drone in an outdoor playground, with all other steps the same as yours, except that the input video stream is faked.
If the drone's flight route follows the road direction in the video, then I think I have reproduced your demo.
Is this idea feasible?
Thanks in advance!

How can I visualize what Jetson is thinking?

Around 36 seconds into this YouTube Video, you can see green, yellow and red shading on top of the video feed. How would I go about replicating that? I'm using gstreamer on my laptop to watch what the drone sees, and have the TrailNet DNN node running. Were you able to see that in real time, or did you apply that effect after the flight was over?

Some mistakes in everything.launch?

In the file everything.launch, there is one line:
<arg name="object_model_path" default="/home/nvidia/redtail/models/pretrained/yolo-relu.caffemodel" />

but this file is not found in the models/pretrained/ directory, which contains:
README.md
TrailNet_SResNet-18.caffemodel
TrailNet_SResNet-18.prototxt
yolo-relu.caffemodel.00
yolo-relu.caffemodel.01
yolo-relu.caffemodel.02
yolo-relu.prototxt

Now, if I change "yolo-relu.caffemodel" to any one of "yolo-relu.caffemodel.00", "yolo-relu.caffemodel.01", or "yolo-relu.caffemodel.02", I encounter this issue:

[FATAL] [1510726990.784716595]: Failed to parse network: /home/nvidia/redtail/models/pretrained/yolo-relu.prototxt, /home/nvidia/redtail/models/pretrained/yolo-relu.caffemodel.00

Now, running the YOLO DNN node independently:
rosrun caffe_ros caffe_ros_node __name:=object_dnn _prototxt_path:=/home/nvidia/redtail/models/pretrained/yolo-relu.prototxt _model_path:=/home/nvidia/redtail/models/pretrained/yolo-relu.caffemodel.02 utput_layer:=fc25 _inp_scale:=0.00390625 _inp_fmt:="RGB" _post_proc:="YOLO" _obj_det_threshold:=0.2 _use_fp16:=true

the same issue happens:
[FATAL] [1510727562.919367634]: Failed to parse network: /home/nvidia/redtail/models/pretrained/yolo-relu.prototxt, /home/nvidia/redtail/models/pretrained/yolo-relu.caffemodel.02
Segmentation fault

Can you help identify this issue? Thanks!
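One plausible explanation, offered as an assumption rather than a confirmed fix: the .00/.01/.02 files look like a single caffemodel split into chunks (for example, to stay under GitHub's file-size limit), so the parts may need to be concatenated back into one file before launching:

# Hypothetical fix: reassemble the split caffemodel parts into a single file.
cd /home/nvidia/redtail/models/pretrained
cat yolo-relu.caffemodel.00 yolo-relu.caffemodel.01 yolo-relu.caffemodel.02 \
    > yolo-relu.caffemodel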

Validation accuracy remains below 60%

Hello, I'm trying to verify I get correct results training with the forest trails dataset, using the instructions in the wiki, before I try to fine tune with my own data. However, I haven't been able to get validation accuracy above 60%.

I tried running 40 epochs, without much difference. Here are a few screenshots and a Caffe log. What other info can I provide to help diagnose this issue?

screenshot from 2018-03-19 12-00-42
screenshot from 2018-03-19 12-00-04
caffe_output(5).log

Calibrating IRIS sensors

After updating the firmware to the version listed in the wiki, it said "Follow the standard steps to calibrate the sensors, RC, and flight modes for the drone." I have tried to do this with both QGroundControl and Mission Planner. They both get stuck after calibrating the accelerometer. Is there any workaround for this? Which one should be used for calibration, QGroundControl or Mission Planner?
Thanks!

Simulation on TX1

According to the simulation instructions, I found that most packages (TensorRT-2.1.2.x86_64.cuda-8.0-16-04-tar.bz2, nvidia-docker, ...) work on the x64 platform, which means the simulation only works on a PC, not on the Jetson.
How can I make the simulation work on a TX1?

PX4 and ROS Docker Setup Issue

Hi,

Unfortunately, I could not use the Redtail Docker image although I tried a few times, so I have started to build the image manually following the guide, starting from the PX4 and ROS Docker setup (step 2). I followed the steps on the PX4 developer page, starting with this:

mkdir src
cd src
git clone https://github.com/PX4/Firmware.git
cd Firmware

To pull px4-dev-ros:v1, I followed the "Calling Docker Manually" section as follows:

# enable access to xhost from the container
xhost +
  • The first problem showed up with the xhost + command, whose output was: access control disabled, clients can connect from any host. I googled that but could not find a solution; I did find one recommendation, but it did not help. Nevertheless, I kept going and ran Docker as follows:
#Run docker
docker run -it --privileged \
    --env=LOCAL_USER_ID="$(id -u)" \
    -v ~/src/Firmware:firmware:rw \
    -v /tmp/.X11-unix:/tmp/.X11-unix:ro \
    -e DISPLAY=:0 \
    -p 14556:14556/udp \
    --name=px4-ros px4io/px4-dev-ros:v1.0 bash
  • However, the output turned out to be: docker: Error response from daemon: invalid volume specification: '/home/deep/src/Firmware:firmware:rw': invalid mount config for type "bind": invalid mount path: 'firmware' mount path must be absolute.
    I also tried the helper script (docker_run.sh), but nothing worked. Could you please help me?

Kind regards, Ender.
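The daemon message itself points at the cause: the container-side path of a -v mount must be absolute. A hedged sketch of the corrected mount; the /src/firmware target path is an assumption, and the PX4 documentation may use a different container path:

docker run -it --privileged \
    --env=LOCAL_USER_ID="$(id -u)" \
    -v ~/src/Firmware:/src/firmware:rw \
    -v /tmp/.X11-unix:/tmp/.X11-unix:ro \
    -e DISPLAY=:0 \
    -p 14556:14556/udp \
    --name=px4-ros px4io/px4-dev-ros:v1.0 bash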

Auvidea J-120 + Jetson TX2 issue: Fixed in JetPack 3.2?

Hi,

This page claims that the USB issue with the J120 board should be fixed in the JetPack release after 3.1. Recently I found that the version 3.2 developer preview is out, but I cannot see anything related to that in the release notes.

Can you tell me whether the issue was fixed in version 3.2, or whether I'd better use 3.1 + patch?

Thanks in advance.

Rover overcorrects steering. How to tune steering parameters?

This video shows driving behavior observed during initial testing of redtail running on a rock crawler platform. The steering appears to be overcorrecting.

https://www.youtube.com/watch?v=Lexi6v5XSSQ

Before testing I attempted to adjust the linear_speed_scale and turn_angle_scale to normalize controller output to expected servo output values. Is this the correct approach? Could you please provide more info on setting up controller parameters?

These are the parameters that were used for running the controller node.

rosrun px4_controller px4_controller_node _altitude_gain:=0 _linear_speed:=2 _joy_type:="xbox_wired" _obj_det_limit:=0.3 _vehicle_type:=apmrover _linear_speed_scale:=200 _turn_angle_scale:=-400 _dnn_turn_angle:=45.0 _dnn_lateralcorr_angle:=45.0

Auvidea J120

I have a quick question about the carrier board you used in this project... I have an Auvidea J120 too and cannot get the CAN bus working with the TX2 module. Looking at the modifications to the device tree and the mcp251x driver that Auvidea made, it looks like it's all TX1-specific, and I don't see that they have ever published any changes specific to the TX2 module. Can you share what you had to do to get the CAN bus working in your project with the Auvidea J120 carrier board and TX module?

TBS Discovery J120 Power

The TBS Discovery build shows the J120 hooked up directly to the power coming from the battery. I thought the power fluctuations coming from the battery had the potential to damage the equipment. Have you guys had any issues with this?

What is the test dataset?

Hello,
This paper has helped me a lot in my research. Thanks for it!
I have some questions about the process of training and testing TrailNet. I followed the wiki and trained the orientation head. The accuracy on the validation dataset is about 86%. I used folder 011 in the Forest Trails dataset as the test dataset and the accuracy is only 77%. That seems too low according to the paper, and I don't know why.
Can you tell me what dataset you chose as your test dataset? Could you give me any suggestions to improve the accuracy? That would help me a lot!
Thank you so much!

USB Patch

Hi,
Foremost, thanks for releasing this project. We have questions concerning the USB patch for JetPack 3.1. In particular, we are interested in determining where the .dtb files should be found. We have searched the 64_TX2 directory and found .dtb files already located in the following directories: Linux_for_Tegra_tx2/bootloader, Linux_for_Tegra_tx2/kernel/dtb, and Linux_for_Tegra_tx2/rootfs/boot. Also, does this patch address the same issue as the JetsonHacks/ACMModule fix? We hope either fix will prevent us from having to build a kernel to access our Teensy microcontroller. Thank you for your time.

Camera calibration file

To run the gscam node I am using a camera calibration file created from redtail/tools/camera_rig/widecam_mono_calibration/, yet I am getting a "Failed to parse camera calibration from file" error. It comes from "yaml-cpp: error at line 0, column 0: bad conversion".
Any suggestions? Was the calibration file supposed to be created by other means?

Build issues on x86_64

I would like to run redtail on my GTX1070 laptop for controlling a small FPV vehicle remotely.

While trying to build redtail on x86_64, I'm getting cmake errors related to Qt5 such as the example below. I already have the Qt5 packages on my system. What am I missing here? What other info can I provide to help solve this issue? The install log and dpkg -l output are attached.

CMake Error at /usr/share/cmake-3.5/Modules/FindCUDA.cmake:1693 (add_executable):
Target "caffe_ros_node" links to target "Qt5::Core" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?
Call Stack (most recent call first):
caffe_ros/CMakeLists.txt:124 (cuda_add_executable)

log_121217.txt

dpkgList.txt

Problems building sim code

Trying to build the ROS packages, I get this error on the px4_controller build.
/root/redtail/ros/packages/px4_controller/src/px4_controller.cpp: In member function 'bool px4_control::PX4Controller::parseArguments(const ros::NodeHandle&)':
/root/redtail/ros/packages/px4_controller/src/px4_controller.cpp:287:20: error: 'make_unique' is not a member of 'std'
vehicle_ = std::make_unique();

Any ideas?
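std::make_unique is a C++14 feature, so this error usually means the code is being compiled as C++11 or older. A hedged sketch of forcing C++14 for the whole catkin workspace; the flag is standard GCC, but where the redtail build normally sets it is not something I have verified:

# Hypothetical workaround: rebuild the workspace with C++14 enabled.
# Run from the catkin workspace root.
catkin_make -DCMAKE_CXX_FLAGS="-std=c++14"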

Is a joystick a must-have?

Hi Guys,
On the wiki page, in the "Flying a drone" --- "Flying" part,
the article mentions controlling the drone using a joystick:
left stick:
up/down: change altitude higher/lower
left/right: yaw
right stick:
up/down: fly forward/backward
left/right: strafe left/right

Can we use an 8-channel transmitter to replace the joystick?
http://quadcoptergarage.com/wp-content/uploads/2014/03/Spektrum-DX7s.png

We can also control roll/pitch/yaw/throttle through the transmitter, and use the transmitter's switches to select different modes (POS HOLD, STABILIZED, ...), even using channel 7 as a kill switch and channel 8 as an offboard switch.

The current control data direction is:
joystick ---> TX1 ---> Pixhawk
Does ROS on the TX1 process the joystick's data (analyze it, take action, ...) or just transparently forward it to the Pixhawk through MAVLink?

Thanks in advance!

No DSO part

I've tried this demo. It's fantastic. Thanks for sharing it.

But there's no DSO reference information in this repo. Could you please give us some suggestions about the DSO implementation?

Thanks a lot!

ArduPilot Support

This looks impressive, but lacks support for the largest potential user and developer base.

No movement

I set up Redtail. Everything looks like it's working, but I get no movement on the wheel motors. Channels 1 and 3 in RC out remain at 1500. The numbers are coming from the px4_controller output. I see commands being sent and responded to by MAVROS. Any ideas where to look? How is it supposed to behave when you first start it up?
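A hedged way to narrow this down, suggested as an assumption about the setup rather than a known answer: since the controller steers through RC override (as mentioned in another issue below), watching the override messages that reach MAVROS shows whether channels 1 and 3 ever leave 1500:

# Print RC override commands as px4_controller publishes them to MAVROS.
rostopic echo /mavros/rc/override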

Dataset

Hi, the link to the dataset doesn't work for me. Has it been removed?

How to use Stereo DNN with trailnet and px4_controller?

The Stereo DNN work looks very interesting.

How can I use it with the other components of redtail?

Is the intended use case to train TrailNet with lidar+photo depth images from the Stereo DNN? If so, my camera (ZED) can output depth images; would I still use the Stereo DNN, or can I simply train TrailNet in the usual way using depth images?

Will there be further work on the px4_controller released soon? Are there plans to implement waypoints, or velocity setpoints to supplement the currently used rc_override steering?

Some more details on modified yolo

Hi team,

First of all thanks a lot for releasing the code. Great work!

Could you point me to some references on your workflow for training YOLO with Darknet, then converting the cfg and weights to Caffe and making the modifications to YOLO? Which script(s) did you use to convert, and what is the performance difference between your YOLO with TensorRT vs. the Darknet YOLO/YOLOv2 implementations on a TX1/TX2? I was only able to find some timing measurements in your GTC slides, but no comparison in terms of mAP or any other detection quality measure.

Thanks!
Marc

Gazebo Sim for TX2

Hey guys! JetPack 3.1 does not support installing Docker. The Docker image is required to run the simulation. We should clarify somewhere in the simulation text that it is currently supported only on a host Ubuntu machine.

J120 BSP with JetPack 3.2

What is the solution for USB connectivity, since all of the Auvidea and ConnectTech BSPs support only JetPack 3.1? I worked on this for many hours and got ROS and zed_wrapper functioning on the development board, only to find no USB when I changed to the J120.

RB

UART of Auvidea J120 with TX2

Maybe it's not appropriate to raise an issue here, but I really cannot find another place to discuss the J120 + TX2. I want to use the UART of the Auvidea J120 with the TX2, but I don't know where the UART port is physically located on the J120. Besides, I see /dev/ttyTHS1, /dev/ttyTHS2, and /dev/ttyTHS3 and don't know which port I should use. Thanks so much if anyone can give a hint.

Rover mode for Pixhawk Cube?

I'm trying to get a Rover working with Pixhawk cube and TX2.

When I use your custom firmware, there's no option for Rover, only air frames.

When I use APM firmware, there is an option for Rover, but there's no SYS_COMPANION parameter, and I'm not getting any status updates from MAVROS.

Any ideas?

Thanks!!

Missing px4_controller from rospack

I am trying to set up my Xbox wireless controller to interface with Gazebo in Docker, but I get the error:
"[rospack] Error: package 'px4_controller' not found".

Everything else in the installation seems to run fine, but the "rospack list" command does not return px4_controller as part of ROS. Using "rostopic echo /joy" does show that the controller is working in the Docker container just fine. According to the installation instructions, the quad should take off after setting up the joystick, but it doesn't do that either. Any suggestions?
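A hedged first check, based on standard ROS workflow rather than anything specific to this thread: rospack only sees packages from workspaces that have been built and sourced in the current shell, so it may be worth rebuilding and re-sourcing before anything else. The ~/ws path is an assumption; use wherever the redtail packages were cloned:

# Rebuild the workspace and make it visible to rospack in this shell.
cd ~/ws
catkin_make
source devel/setup.bash
rospack find px4_controller   # should print the package path if the build succeeded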

Build Issues - Cmake Failed

We are using JetPack 3.1, as well as OpenCV 2.4.13, and are seeing the following build issues. Any assistance would be greatly appreciated. Also, the necessary environment for the Erle Brain 3 was only lightly discussed. Are the default environments from the Erle Brain docs enough, specifically for the Rover? Thanks again.

-- Using CATKIN_DEVEL_PREFIX: /home/nvidia/ws/devel
-- Using CMAKE_PREFIX_PATH: /opt/ros/kinetic
-- This workspace overlays: /opt/ros/kinetic
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/nvidia/ws/build/test_results
-- Found gtest sources under '/usr/src/gtest': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.7.8
-- BUILD_SHARED_LIBS is on
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~ traversing 7 packages in topological order:
-- ~~ - angles
-- ~~ - caffe_ros
-- ~~ - camera_calibration_parsers
-- ~~ - image_transport
-- ~~ - camera_info_manager
-- ~~ - gscam
-- ~~ - px4_controller
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'angles'
-- ==> add_subdirectory(angles)
-- +++ processing catkin package: 'caffe_ros'
-- ==> add_subdirectory(caffe_ros)
CUDA_TOOLKIT_ROOT_DIR not found or specified
-- Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_NVCC_EXECUTABLE CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY)
CMake Error at caffe_ros/CMakeLists.txt:15 (CUDA_CUDART_LIBRARY):
Unknown CMake command "CUDA_CUDART_LIBRARY".

-- Configuring incomplete, errors occurred!
See also "/home/nvidia/ws/build/CMakeFiles/CMakeOutput.log".
See also "/home/nvidia/ws/build/CMakeFiles/CMakeError.log".
Invoking "cmake" failed
Building px4_controller package...
Base path: /home/nvidia/ws
Source space: /home/nvidia/ws/src
Build space: /home/nvidia/ws/build
Devel space: /home/nvidia/ws/devel
Install space: /home/nvidia/ws/install

Running command: "cmake /home/nvidia/ws/src -DCATKIN_DEVEL_PREFIX=/home/nvidia/ws/devel -DCMAKE_INSTALL_PREFIX=/home/nvidia/ws/install -G Unix Makefiles" in "/home/nvidia/ws/build"

-- Using CATKIN_DEVEL_PREFIX: /home/nvidia/ws/devel
-- Using CMAKE_PREFIX_PATH: /opt/ros/kinetic
-- This workspace overlays: /opt/ros/kinetic
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/nvidia/ws/build/test_results
-- Found gtest sources under '/usr/src/gtest': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.7.8
-- BUILD_SHARED_LIBS is on
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~ traversing 7 packages in topological order:
-- ~~ - angles
-- ~~ - caffe_ros
-- ~~ - camera_calibration_parsers
-- ~~ - image_transport
-- ~~ - camera_info_manager
-- ~~ - gscam
-- ~~ - px4_controller
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'angles'
-- ==> add_subdirectory(angles)
-- +++ processing catkin package: 'caffe_ros'
-- ==> add_subdirectory(caffe_ros)
CUDA_TOOLKIT_ROOT_DIR not found or specified
-- Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_NVCC_EXECUTABLE CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY)
CMake Error at caffe_ros/CMakeLists.txt:15 (CUDA_CUDART_LIBRARY):
Unknown CMake command "CUDA_CUDART_LIBRARY".

-- Configuring incomplete, errors occurred!
See also "/home/nvidia/ws/build/CMakeFiles/CMakeOutput.log".
See also "/home/nvidia/ws/build/CMakeFiles/CMakeError.log".
Invoking "cmake" failed
