
calibration-exercise

Camera image rectification

To calibrate the camera, we use the camera_calibration package. I made the launch file calibrate_camera.launch, which plays the bag file and runs the camera_calibration node simultaneously. When the routine is done, click "Calibrate" and then "Save". This saves a .yaml file that contains the new calibration parameters. A sketch of the launch file follows the screenshot below.

Screenshot
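
Below is a minimal sketch of what calibrate_camera.launch might look like. The bag name, image topic, and checkerboard parameters (--size, --square) are assumptions and must match the actual recording and target:

    <launch>
      <!-- Play the recorded bag (file name is a placeholder) -->
      <node pkg="rosbag" type="play" name="play" args="--clock calibration.bag"/>
      <!-- Run the interactive calibrator; --size is the count of interior
           corners and --square is the checker edge length in meters
           (both values assumed here) -->
      <node pkg="camera_calibration" type="cameracalibrator.py" name="calibrator"
            args="--size 8x6 --square 0.108 image:=/camera/image_raw camera:=/camera"/>
    </launch>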

Now we use change_camera_info.py to write a new bag file in which the camera_info messages are overwritten with the new calibration parameters.
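
The script's job amounts to a straightforward rosbag rewrite. Here is a minimal sketch, assuming the calibration was saved in the standard ROS YAML layout; the file and topic names are placeholders:

    # Sketch: copy every message, replacing the CameraInfo fields with
    # the values from the freshly saved calibration YAML.
    import rosbag
    import yaml

    with open('calibration.yaml') as f:
        calib = yaml.safe_load(f)

    with rosbag.Bag('fixed.bag', 'w') as out:
        for topic, msg, t in rosbag.Bag('original.bag').read_messages():
            if topic == '/camera/camera_info':
                msg.distortion_model = calib['distortion_model']
                msg.D = calib['distortion_coefficients']['data']
                msg.K = calib['camera_matrix']['data']
                msg.R = calib['rectification_matrix']['data']
                msg.P = calib['projection_matrix']['data']
            out.write(topic, msg, t)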

Now we have a new bag file with the correct camera_info topic. We create another launch file that plays this new bag file and runs the image_proc and image_view nodes, so the new rectified image can be viewed next to the old unrectified one. A sketch of this launch file follows the screenshot below.

Screenshot
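
A sketch of what this playback launch file could contain, with the bag name and camera namespace assumed:

    <launch>
      <!-- Play the rewritten bag (file name is a placeholder) -->
      <node pkg="rosbag" type="play" name="play" args="--clock fixed.bag"/>
      <!-- image_proc subscribes to image_raw/camera_info in its namespace
           and publishes the rectified image -->
      <node ns="camera" pkg="image_proc" type="image_proc" name="image_proc"/>
      <!-- Side-by-side viewers: unrectified vs. rectified -->
      <node pkg="image_view" type="image_view" name="raw_view">
        <remap from="image" to="/camera/image_raw"/>
      </node>
      <node pkg="image_view" type="image_view" name="rect_view">
        <remap from="image" to="/camera/image_rect_color"/>
      </node>
    </launch>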

Lidar to Camera Calibration

To align the lidar point cloud with the camera image, we must first find corresponding features between the two datasets. Usually, calibration setups include purpose-built physical targets that make it easy to find corresponding points between the lidar scan and the camera image. Here, however, we have to find corresponding points manually: we pick a frame where the checkerboard is clearly visible and label its corners in both the lidar point cloud and the camera image. These points are fed via a JSON file to a camera-to-lidar calibration module (ros-camera-lidar-calibration) that outputs the translation and rotation between the two sets of points. These translation and rotation parameters are fed to a static transform publisher in image_to_lidar.launch, which aligns the lidar frame to the camera frame. A screenshot of the two aligned, and a sketch of the underlying computation, are below:

Screenshot
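
For reference, the extrinsics can be recovered from the hand-picked correspondences with a PnP solve. This is a hedged sketch in the spirit of what the calibration module computes; the JSON layout and file name are assumptions:

    # Sketch: solve for the rigid transform taking lidar points into the
    # camera frame from N manually labeled correspondences.
    import json
    import numpy as np
    import cv2

    with open('correspondences.json') as f:   # assumed file and layout
        data = json.load(f)

    lidar_xyz = np.array(data['lidar_points'], dtype=np.float64)  # Nx3, lidar frame
    pixel_uv = np.array(data['image_points'], dtype=np.float64)   # Nx2, pixels
    K = np.array(data['camera_matrix'], dtype=np.float64).reshape(3, 3)
    D = np.array(data['distortion'], dtype=np.float64)

    # solvePnP returns the rotation (as a Rodrigues vector) and the
    # translation from the lidar frame to the camera frame.
    ok, rvec, tvec = cv2.solvePnP(lidar_xyz, pixel_uv, K, D)
    R, _ = cv2.Rodrigues(rvec)
    print('rotation:\n', R)
    print('translation:', tvec.ravel())

Note that tf's static_transform_publisher takes its rotation as yaw/pitch/roll angles (or a quaternion), so the rotation matrix has to be converted to that form before being pasted into image_to_lidar.launch.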

RGB Point Cloud

To create an RGB point cloud, a script displays the registered Velodyne scan and adds an RGB value to each point in the cloud. To do this, each Velodyne point is projected from 3-D space onto the 2-D image plane, and the closest camera pixel's RGB value is used to color that depth point. The XYZ + RGB point cloud is then published for viewing; a sketch of the projection step is below. At the moment, RViz over the network is not working, so I was not able to visualize the point cloud.
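
A minimal sketch of the projection-and-coloring step, assuming the points have already been transformed into the camera frame and that K is the rectified 3x3 camera matrix:

    # Sketch: pinhole-project each 3-D point and sample the pixel color.
    import numpy as np

    def colorize(points_xyz, image, K):
        fx, fy = K[0, 0], K[1, 1]
        cx, cy = K[0, 2], K[1, 2]
        h, w = image.shape[:2]
        colored = []
        for x, y, z in points_xyz:
            if z <= 0:                       # behind the camera
                continue
            u = int(round(fx * x / z + cx))  # pinhole projection
            v = int(round(fy * y / z + cy))
            if 0 <= u < w and 0 <= v < h:
                b, g, r = image[v, u]        # OpenCV images are BGR
                colored.append((x, y, z, r, g, b))
        return colored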
