
Obstacle Avoidance and Pedestrian Detection with ROS2


Logo

ECE-MAE 148 Final Project

Team 5 Spring 2023



Table of Contents

  1. Team Members
  2. Abstract
  3. What We Promised
  4. Accomplishments
  5. Challenges
  6. Final Project Videos
  7. Software
  8. Hardware
  9. Gantt Chart
  10. Course Deliverables
  11. Project Reproduction
  12. Acknowledgements
  13. Contacts

Team Members

Shasta Subramanian (ECE) - LinkedIn

Armond Greenberg (MAE)

Jacob Cortez (MAE)

Zixu Hao (ECE - UPS Student)


Abstract

The baseline goals of our team's final project were to implement obstacle avoidance and pedestrian detection on top of the lane-following program. Using the LD06 Lidar and OAK-D Lite camera, the robot closely follows the yellow dotted line on the track, makes optimal turning decisions to avoid any objects in its path, and stops if a pedestrian is detected.


What We Promised

  • Obstacle avoidance using the Lidar: if the detected obstacle is not a person, the robot swerves around it and continues on its path
  • Pedestrian detection using the OAK-D Lite camera: if a pedestrian is detected, the robot stops until the person is no longer in view
  • Both objectives achieved while line following, implemented entirely within ROS2

Accomplishments

  • Achieved all of our promised goals successfully
  • Completed project within ROS2
  • Refined obstacle avoidance algorithm
  • Extremely accurate person detection (almost too good)
    • Works on both real humans and printed images of people

Challenges

  • Combining our obstacle avoidance program on the track with the pedestrian detection proved more complicated than initially expected
  • Adapting the various nodes, creating unique publishers/subscribers, and implementing all of our code within ROS2

Final Project Videos

Click any of the clips below to go to the corresponding video.

Final Demo

Final Clips

Everything Together (3rd Person)

Everything Together (POV)

Obstacle Avoidance

Pedestrian Detection

Early Progress Clips

Early Obstacle Avoidance

Early Pedestrian Detection


Software

Overall Architecture

Our project was completed entirely with ROS2 navigation in Python. The rclpy package is used to control the robot, and our primary control logic consists of the Calibration, Person Detection, Lane Detection, and Lane Guidance nodes.

  • The Calibration Node was adapted from Spring 2022 Team 1 and updated for our use case. We needed only the gold mask to follow the yellow lines, and we implemented our own lane-following code.

  • The Person Detection Node was fully created for our team's project implementation. We created a new OAK-D node that sends the same image used by the depthai package for person detection to our guidance node, so that the pedestrian detection can run concurrently with the line following.

  • The Lane Detection Node is used to control the robot within the track. We adapted the PID function to calculate the error and set new values to determine the optimal motion of the car to continue following the yellow lines in the lane. This is done by taking the raw camera image, using calibrated color values to detect yellow, and ultimately using the processed image to publish the control values that are subscribed to by the lane guidance node.

  • Ultimately, the "magic" happens within the Lane Guidance Node, which is responsible for directly controlling the car's movement. We adapted the Lidar subscription from Spring 2022 Team 1 to detect obstacles within a particular viewing range in front of the car. The lane guidance node subscribes to the Lane Detection and Person Detection nodes to correctly traverse the path. If no obstacles are detected, the car simply continues its line-following program, sticking to the yellow lines in the middle of the lane. If an obstacle is detected by the lidar, the car makes a turn based on the object's angle and distance. As it routes around the object, the car keeps checking for obstacles so it can avoid any collision and return to the path. Additionally, if the subscription to the person_detected node reports a detection, the car knows there is a pedestrian in view and stops.
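The error/PID step in the Lane Detection Node and the decision logic in the Lane Guidance Node can be sketched in plain Python. All names below are illustrative simplifications, not the actual identifiers in ucsd_robocar_lane_detection2_pkg; the real nodes exchange these values over ROS2 topics via rclpy publishers and subscribers.

```python
from typing import Optional, Tuple

def compute_centroid_error(centroid_x: float, image_width: int) -> float:
    """Normalized offset of the detected yellow-line centroid from image
    center, in [-1, 1]; negative means the line is to the robot's left."""
    half_width = image_width / 2.0
    return (centroid_x - half_width) / half_width

class SimplePID:
    """Minimal PID controller of the kind used to turn lane error into steering."""
    def __init__(self, kp: float, ki: float, kd: float) -> None:
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def decide_command(person_detected: bool,
                   obstacle_angle_deg: Optional[float],
                   lane_steering: float) -> Tuple[float, float]:
    """Return (throttle, steering). Pedestrians stop the car; obstacles
    trigger a swerve away from their side; otherwise follow the lane."""
    if person_detected:
        return 0.0, 0.0                                     # stop until clear
    if obstacle_angle_deg is not None:
        steering = -0.8 if obstacle_angle_deg > 0 else 0.8  # left obstacle -> turn right
        return 0.3, steering
    return 0.5, lane_steering                               # normal lane following
```

The corrective turn back toward the centerline after a swerve involves extra state handling that is omitted here for brevity.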

Obstacle Avoidance

We used the LD06 Lidar to implement obstacle avoidance within ROS2. The program logic is simple: we constantly scan the 60 degrees in front of the robot, and if an object is detected within our distance threshold, the robot makes a turn to avoid it. The turn direction is chosen just as simply: if the object is on the left side, we first turn right; otherwise, we turn left. Both turning maneuvers include a corrective turn to bring the robot back to the centerline of the track and continue lane following.
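A minimal version of that 60-degree scan-and-threshold check might look like the following. The window width and 0.6 m threshold are assumed values for illustration; the real node consumes a sensor_msgs/LaserScan message from the LD06.

```python
import math

def nearest_frontal_obstacle(ranges, angle_min, angle_increment,
                             window_deg=60.0, threshold=0.6):
    """Return (angle_deg, distance) of the closest return within
    +/- window_deg/2 of straight ahead and under the distance
    threshold, or None if the frontal window is clear.
    Arguments mirror the LaserScan fields: a list of range readings,
    the angle of the first reading (radians), and the angular step."""
    best = None
    half = window_deg / 2.0
    for i, r in enumerate(ranges):
        angle = math.degrees(angle_min + i * angle_increment)
        if abs(angle) <= half and 0.0 < r < threshold:
            if best is None or r < best[1]:
                best = (angle, r)
    return best
```

The sign of the returned angle gives the turn direction described above: a positive (left-side) angle means turn right first, and vice versa.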

Pedestrian Detection

We used the DepthAI package to implement the pedestrian detection within ROS2, taking advantage of the Tiny YOLO neural network setup found in the package examples. We filter the detections to check strictly for a "person" with an adjustable confidence level; we found that a 60% confidence threshold worked well for our project's use cases. Surprisingly, we got better results with real humans walking in front of the robot (it would detect their feet and still classify them as "person" objects). We were also able to scan various printed images of people with high accuracy. The programming logic is straightforward: if a "person" is detected in the image passed through by the camera, the VESC throttle is set to 0, stopping the car until the person has moved out of the field of view.
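The stop condition reduces to a simple filter over the network's detections. The dict fields "label" and "confidence" below are simplified stand-ins for the detection objects a DepthAI Tiny YOLO pipeline yields, not the exact attribute names.

```python
# Hedged sketch of the pedestrian stop condition described above.
PERSON_LABEL = "person"
CONFIDENCE_THRESHOLD = 0.60  # the 60% level that worked well in our testing

def should_stop(detections) -> bool:
    """True if any detection is a 'person' at or above the confidence
    threshold; the guidance node then sets the VESC throttle to 0."""
    return any(
        d["label"] == PERSON_LABEL and d["confidence"] >= CONFIDENCE_THRESHOLD
        for d in detections
    )
```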


Hardware

  • 3D Printing: Camera Mount, Jetson Nano Case
  • Laser Cutting: Base plate to mount electronics and other components.

Parts List

  • Traxxas Chassis with steering servo and sensored brushless DC motor
  • Jetson Nano
  • WiFi adapter
  • 64 GB Micro SD Card
  • Adapter/reader for Micro SD Card
  • Logitech F710 controller
  • OAK-D Lite Camera
  • LD06 Lidar
  • VESC
  • Anti-spark switch with power switch
  • DC-DC Converter
  • 4-cell LiPo battery
  • Battery voltage checker/alarm
  • DC Barrel Connector
  • XT60, XT30, MR60 connectors

Additional Parts used for testing/debugging

  • Car stand
  • USB-C to USB-A cable
  • Micro USB to USB cable
  • 5V, 4A power supply for Jetson Nano

Baseplate

Jetson Nano Case

Credit: https://www.thingiverse.com/thing:3778338

Camera Mount

Credit: https://www.thingiverse.com/thing:5336496

Circuit Diagram


Gantt Chart


Course Deliverables

Here are our autonomous laps as part of our class deliverables and preparation for the final project:

Here are our presentation slides for the weekly project updates and final presentation: Team 5 Presentation


Project Reproduction

If you are interested in reproducing our project, here are a few steps to get you started with our repo:

  1. Clone this repository
  2. Replace the ucsd_robocar_sensor2_pkg and ucsd_robocar_lane_detection2_pkg in the default ucsd_robocar_hub2 directory
  3. Calibrate Your Robot
    1. Toggle camera_nav_calibration to 1 and camera_nav to 0 within node_config.yaml
    2. Run source_ros2, build_ros2, and then ros2 launch ucsd_robocar_nav2_pkg all_nodes.launch.py
    3. Adjust sliders within GUI to ensure gold mask is clear with NO noise
    4. Toggle camera_nav_calibration to 0 and camera_nav to 1 within node_config.yaml
    5. Update your PID and throttle values in ros_racer_calibration.yaml
  4. Run on Track
    1. Run source_ros2, build_ros2, and then ros2 launch ucsd_robocar_nav2_pkg all_nodes.launch.py
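For reference, the two flags toggled in step 3 live in node_config.yaml. A hypothetical excerpt (exact key layout may differ in your copy of the package), set here for the calibration pass:

```yaml
# node_config.yaml (excerpt) -- calibration pass
camera_nav_calibration: 1   # enables the GUI sliders for tuning the gold mask
camera_nav: 0               # normal lane following off while calibrating
# For driving on the track (step 4), flip both values.
```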

Alternatively, you can refer to the lane_guidance_node.py and lane_detection_node.py programs in ucsd_robocar_lane_detection2_pkg/ucsd_robocar_lane_detection2_pkg to adapt our code as needed for your project. We have extensive comments throughout the code explaining what is happening. Additionally, if you search for (Edit as Wanted) in our code, we have marked the primary places to adjust parameters for the lidar usage, pedestrian detection logic, and more. The most common (but simple) issues we encountered were incorrect file paths and missing dependencies, so double-check both.

Best of luck!


Acknowledgements

Special thanks to Professor Jack Silberman and TAs (Kishore Nukala & Moises Lopez) for all the support!

Programs Referenced:


Contacts

Contributors: shastasubramanian, j1cortez, zixuhao
