
An autonomous vehicle simulation in ROS and Gazebo. Built around https://github.com/osrf/car_demo

License: MIT License



autonomous_vehicle

(selfdriving demo GIF)

The goal of the project is to build a platform that simulates an autonomous vehicle. The platform provides three essential functions:

  • controlling the vehicle
  • collecting training data
  • autonomous driving

The following image shows the concept of the simulation platform (figure: platform_concept):

  • Prius block - the vehicle, with access to steering inputs, odometry, and images from cameras
  • Joy/Keyboard block - controls the vehicle, switches the ride mode (manual/autonomous), and toggles data collection
  • Visualization block - overlays the current velocity, drive mode, and steering inputs on the front camera image
  • Dataset block - collects the images and labels used to train the CNN model
  • Convolutional Neural Network Model - the trained model that predicts the steering angle and vehicle velocity from the input image; the vehicle rides autonomously after the corresponding mode is turned on
  • PID - converts the predicted vehicle speed to throttle/brake inputs
  • A second PID block was added later, so the predicted steering angle is also passed through a PID controller
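
The PID blocks above can be sketched as follows. This is a minimal illustration of converting a predicted velocity into throttle/brake pedal values, not the project's actual controller; the gains and the throttle/brake split are assumptions:

```python
class PID:
    """Minimal PID controller: turns a velocity error into a control signal."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def to_throttle_brake(control):
    """Map the PID output to (throttle, brake) pedal values in [0, 1]."""
    if control >= 0:
        return min(control, 1.0), 0.0   # positive control -> accelerate
    return 0.0, min(-control, 1.0)      # negative control -> brake


# Example: model predicts 10 m/s, the car currently drives 7 m/s
pid = PID(kp=0.5, ki=0.0, kd=0.0)
throttle, brake = to_throttle_brake(pid.step(10.0 - 7.0, dt=0.1))
```

In the real pipeline the error would be recomputed on every control tick from the CNN's predicted velocity and the odometry feedback.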

Dependencies

  • Ubuntu 18.04 or Ubuntu 20.04
  • Docker

Network architectures

During implementation, we tested two different network architectures.

The second network worked better for us. We adapted it slightly: we changed the input image shape to 800x264, and widened the output layer because we had to predict both the steering angle and the velocity. The final network architecture looks as follows (figure: network_architecture):
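
As a sanity check on the input-shape change, the spatial size after each convolution can be computed with the standard output-size formula. The layer parameters below (a PilotNet-style stack of three 5x5/stride-2 convolutions and two 3x3/stride-1 convolutions) are an assumption for illustration, not necessarily the exact layers of the network we used:

```python
def conv_out(size, kernel, stride, padding=0):
    """Output size of a convolution along one spatial dimension."""
    return (size + 2 * padding - kernel) // stride + 1


# Trace a hypothetical PilotNet-style stack for an 800x264 input
h, w = 264, 800  # image height x width
for kernel, stride in [(5, 2), (5, 2), (5, 2), (3, 1), (3, 1)]:
    h, w = conv_out(h, kernel, stride), conv_out(w, kernel, stride)
    print(f"{h} x {w}")
```

The same formula makes it easy to verify that the flattened feature map feeding the dense layers still has a sane size after the input-shape change.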

First run

Commands to be run on the host machine are marked with H$; commands to be run inside the container are marked with C$.

  • Clone the repository
  • Go to the docker directory - H$ cd docker
  • Build the docker image - H$ ./build.sh
  • Run the container using H$ run_cpu.sh or H$ run_gpu.sh
  • Go to main workspace C$ cd /av_ws
  • Initialize and build the workspace (this might take a while) - C$ catkin init, then C$ catkin build
  • Load the environment variables - C$ source /av_ws/devel/setup.bash
  • Run demo package C$ roslaunch car_demo demo.launch
  • Save the docker container
  • Close the container
  • Create workspace on your local machine H$ mkdir -p ~/av_ws/src
  • Move the av_03 and av_msgs folders to the ~/av_ws/src directory
  • Make sure that the --volume arguments in docker/run_gpu.sh or docker/run_cpu.sh point to the correct directories containing av_03 and av_msgs
    • Sometimes you need to change the $USER value to your real username
  • Run the container H$ run_cpu.sh or H$ run_gpu.sh
  • Go to the main workspace directory, then to the av_03 package, and check that the files are there
C$ cd /av_ws/src/av_03/
C$ ls

Running the package

  • Run the docker container
  • Go to the workspace directory
  • Download the trained model and put it in the cnn_models directory of the av_03 package
  • Build catkin package
C$ catkin build
  • Source the environment
C$ source devel/setup.bash
  • Launch the demo
C$ roslaunch av_03 av.launch

Selfdriving

To start selfdriving, run:

C$ rostopic pub --once /prius/mode av_msgs/Mode "{header: {seq: 0, stamp: {secs: 0, nsecs: 0}, frame_id: ''}, selfdriving: true, collect: false}"

Collecting the data

To collect data you need to launch the controller_node. To do so, uncomment line 31 in the av.launch file. Pressing C will then start collecting data; steering is done with the arrow keys.
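
A minimal sketch of what one data-collection step might write: each camera frame is saved as an image file and its steering/velocity labels are appended to a CSV. The directory layout, file names, and column names here are assumptions for illustration, not the project's actual dataset format:

```python
import csv
import os


def log_sample(dataset_dir, frame_id, image_bytes, steering_angle, velocity):
    """Save one camera frame and append its labels to labels.csv."""
    os.makedirs(dataset_dir, exist_ok=True)
    image_name = f"frame_{frame_id:06d}.jpg"
    with open(os.path.join(dataset_dir, image_name), "wb") as f:
        f.write(image_bytes)

    labels_path = os.path.join(dataset_dir, "labels.csv")
    write_header = not os.path.exists(labels_path)
    with open(labels_path, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(["image", "steering_angle", "velocity"])
        writer.writerow([image_name, steering_angle, velocity])


# Example usage with dummy image bytes
log_sample("/tmp/av_dataset", 0, b"\xff\xd8\xff", 0.12, 8.5)
```

Pairing each image path with the steering angle and velocity in a single CSV row is what later lets the CNN be trained with (image, steering, velocity) samples.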

Saving docker container

H$ docker container ps
H$ docker commit container_name av:master

Contributors

arekmula, jakub-bielawski, skwarson96

