RAW Image Pipeline

Image processing utilities used for cameras that provide RAW data, such as the Alphasense Core unit.

Maintainers: Matias Mattamala ([email protected])

Contributors: Matias Mattamala, Timon Homberger, Marco Tranzatto, Samuel Zimmermann, Lorenz Wellhausen, Shehryar Khattak, Gabriel Waibel

[Figure: raw_image_pipeline overview]

License

This source code is released under an MIT License.

raw_image_pipeline_white_balance relies on Shane Yuan's AutoWhiteBalance package, which is released under a GNU license.

raw_image_pipeline_python relies on Pascal Thomet's cvnp, licensed under MIT as well.

Overview

Packages

  1. raw_image_pipeline: ROS-independent implementation of the pipeline.
  2. raw_image_pipeline_python: Python bindings for raw_image_pipeline.
  3. raw_image_pipeline_ros: ROS interface to run the processing pipeline.
  4. raw_image_pipeline_white_balance: Additional white balance algorithm built upon Shane Yuan's code, based on Barron's methods (1, 2).

Pipeline

The package implements different modules that are chained together to process the RAW images. Each module can be disabled, in which case the image is passed on to the subsequent modules unchanged. A minimal sketch of this chaining is shown after the list below.

  • Debayer: auto, bayer_bggr8, bayer_gbrg8, bayer_grbg8, bayer_rggb8
  • Flip: Flips the image 180 degrees
  • White balance: simple, grey_world, learned (from OpenCV), ccc (from raw_image_pipeline_white_balance package), pca (custom implementation)
  • Color correction: Simple color correction based on a 3x3 BGR mixing matrix.
  • Gamma correction: default (from OpenCV), custom (custom implementation)
  • Vignetting correction: Removes the darkening effect of the lens toward the edges of the image by applying a polynomial mask.
  • Color enhancement: Converts the image to HSV and applies a gain to the S (saturation) channel.
  • Undistortion: Corrects the image given the camera calibration file.
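
Below is a minimal sketch, in Python with OpenCV, of how such a chain could look. The calls only stand in for the library's internal modules (which are implemented in C++/CUDA); the pattern constant and white balance choice are illustrative assumptions, not the package defaults.

import cv2  # the white balance call also requires opencv-contrib (cv2.xphoto)

def process(raw_bayer):
    # Debayer: demosaic the single-channel Bayer mosaic to BGR
    bgr = cv2.cvtColor(raw_bayer, cv2.COLOR_BayerBG2BGR)
    # Flip: rotate the image by 180 degrees
    bgr = cv2.flip(bgr, -1)
    # White balance: grey_world variant from the opencv-contrib xphoto module
    bgr = cv2.xphoto.createGrayworldWB().balanceWhite(bgr)
    # Color correction, gamma correction, vignetting correction, color
    # enhancement and undistortion would follow in this order; see the
    # per-module sketches below.
    return bgr
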
Detailed module descriptions

Debayer Module

This module demosaics a Bayer-encoded image into BGR values (following OpenCV's convention). It relies on OpenCV's methods for both CPU and GPU.

Parameters

  • debayer/enabled: Enables the module. True by default
  • debayer/encoding: Encoding of the incoming image. auto uses the encoding field of the ROS message
    • Values: auto, bayer_bggr8, bayer_gbrg8, bayer_grbg8, bayer_rggb8
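
For reference, a minimal sketch of the demosaicing step using plain OpenCV in Python (not the package's own API). Note that OpenCV's COLOR_Bayer*2BGR codes and the ROS bayer_* encodings use different naming conventions, so verify which constant matches your sensor's pattern; the one below is chosen only for illustration.

import cv2

raw = cv2.imread("frame_raw.png", cv2.IMREAD_GRAYSCALE)  # hypothetical single-channel Bayer mosaic
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)           # demosaic to BGR (illustrative pattern constant)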

Flip

This flips the image 180 degrees. Just that.

Parameters

  • flip/enabled: Enables the module. False by default
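
For reference, the equivalent operation in plain OpenCV (Python) is a single call; this is only an illustration, not the package's API.

import cv2

img = cv2.imread("frame.png")   # hypothetical BGR input
flipped = cv2.flip(img, -1)     # flipCode=-1 mirrors both axes, i.e. a 180-degree rotation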

White Balance

It automatically corrects white balance using different available algorithms.

Parameters

  • white_balance/enabled: Enables the module. False by default
  • white_balance/method: To select the method used for automatic white balance
    • simple: from OpenCV. Tends to saturate colors depending on the clipping percentile.
    • grey_world: from OpenCV
    • learned: from OpenCV
    • ccc: from raw_image_pipeline_white_balance package
    • pca: custom implementation
  • white_balance/clipping_percentile: Used in simple method
    • Values: between 0 and 100
  • white_balance/saturation_bright_thr: Used in grey_world, learned and ccc methods
    • Values: Between 0.0 and 1.0. E.g. 0.8 means that all the values above 0.8*255 (for 8-bit images) are discarded for white balance estimation.
  • white_balance/saturation_dark_thr: Used in grey_world, learned and ccc methods
    • Values: Between 0.0 and 1.0. E.g. 0.2 means that all the values below 0.2*255 (for 8-bit images) are discarded.
  • white_balance/temporal_consistency: Only for ccc. False by default. It uses a Kalman filter to smooth the illuminant estimate.
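
For the three OpenCV-backed methods, a minimal sketch in Python using the opencv-contrib xphoto module is shown below (the ccc and pca methods are implemented in this repository and are not covered here); the threshold values are illustrative.

import cv2

img = cv2.imread("frame.png")                    # hypothetical BGR input

wb_simple = cv2.xphoto.createSimpleWB()          # 'simple' method
wb_simple.setP(0.2)                              # fraction of top/bottom values to clip

wb_grey = cv2.xphoto.createGrayworldWB()         # 'grey_world' method
wb_grey.setSaturationThreshold(0.8)              # cf. white_balance/saturation_bright_thr

wb_learned = cv2.xphoto.createLearningBasedWB()  # 'learned' method

balanced = wb_grey.balanceWhite(img)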

Color calibration

It applies a fixed affine transformation to each BGR pixel: a 3x3 matrix mixes the color channels and a bias term is added. It can be a replacement for the white balance module.

  • color_calibration/enabled: Enables the module. False by default
  • color_calibration/calibration_file: A YAML file with the color calibration matrix and bias (example file). This file can be obtained using the color_calibration.py script in the raw_image_pipeline_python package. Running it requires a set of images capturing a calibration board (example): a reference image ref.png (example) and a collection of images from the camera to be calibrated. The usage is:
color_calibration.py [-h] -i INPUT -r REF [-o OUTPUT_PATH] [-p PREFIX] [--loss LOSS] [--compute_bias]

Performs color calibration between 2 images, using ArUco 4X4

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        Input image (to be calibrated), or folder with reference images
  -r REF, --ref REF     Reference image to perform the calibration
  -o OUTPUT_PATH, --output_path OUTPUT_PATH
                        Output path to store the file. Default: current path
  -p PREFIX, --prefix PREFIX
                        Prefix for the calibration file. Default: none
  --loss LOSS           Loss used in the optimization. Options: 'linear', 'soft_l1', 'huber', 'cauchy', 'arctan'
  --compute_bias        If bias should be computed

⚠️ This feature is experimental and it is not recommended for 'serious' applications.
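
A minimal sketch of how such an affine color correction can be applied in Python/NumPy; the matrix and bias values below are purely illustrative, not a real calibration.

import cv2
import numpy as np

img = cv2.imread("frame.png").astype(np.float32)       # hypothetical BGR input
M = np.array([[1.05, -0.02, 0.00],
              [0.00,  1.00, 0.00],
              [0.01, -0.03, 0.98]], dtype=np.float32)  # 3x3 BGR mixing matrix (illustrative)
bias = np.array([2.0, 0.0, -1.0], dtype=np.float32)    # per-channel bias (illustrative)

pixels = img.reshape(-1, 3) @ M.T + bias               # affine transform applied per pixel
corrected = np.clip(pixels, 0, 255).reshape(img.shape).astype(np.uint8)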

Gamma correction

It applies a gamma correction to improve illumination.

Parameters

  • gamma_correction/enabled: Enables the module. False by default
  • gamma_correction/method: To select the method used for gamma correction
    • default: correction from OpenCV (CUDA only)
    • custom: a custom implementation based on a look-up table.
  • gamma_correction/k: Gamma factor: >1 is a forward gamma correction that makes the image darker; <1 is an inverse correction that increases brightness.
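
A minimal sketch of the look-up-table approach for 8-bit images in Python/OpenCV; the package's custom implementation may differ in details, and the gamma factor is illustrative.

import cv2
import numpy as np

k = 0.8                                                        # illustrative gamma factor (<1 brightens)
lut = ((np.arange(256) / 255.0) ** k * 255.0).astype(np.uint8)  # 256-entry gamma curve
img = cv2.imread("frame.png")                                  # hypothetical BGR input
corrected = cv2.LUT(img, lut)                                  # the same curve is applied to each channel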

Vignetting correction

It applies a polynomial illumination compensation to counteract the vignetting (darkening toward the edges) introduced by wide-angle lenses: s * (r^2 * a2 + r^4 * a4), with r the distance to the image center.

Parameters

  • vignetting_correction/enabled: Enables the module. False by default
  • vignetting_correction/scale: s value
  • vignetting_correction/a2: 2nd-order factor
  • vignetting_correction/a4: 4th-order factor

⚠️ This feature is experimental and it is not recommended for 'serious' applications.
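
A minimal sketch of building the polynomial mask in Python/NumPy. How the mask is applied to the image (here as a multiplicative gain of the form 1 + mask) is an assumption, so treat this as illustrative only; the parameter values are also placeholders.

import cv2
import numpy as np

img = cv2.imread("frame.png").astype(np.float32)  # hypothetical BGR input
h, w = img.shape[:2]
s, a2, a4 = 1.0, 1e-6, 1e-12                      # illustrative parameter values

yy, xx = np.mgrid[0:h, 0:w]
r2 = (xx - w / 2.0) ** 2 + (yy - h / 2.0) ** 2    # squared distance to the image center
mask = s * (a2 * r2 + a4 * r2 ** 2)               # s * (r^2 * a2 + r^4 * a4)

compensated = np.clip(img * (1.0 + mask[..., None]), 0, 255).astype(np.uint8)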

Color enhancement

It increases the saturation of the colors by transforming the image to HSV and applying a linear gain.

Parameters

  • color_enhancer/enabled: Enables the module. False by default
  • color_enhancer/saturation_gain: A gain applied to the saturation (S) channel of the HSV representation.
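
A minimal sketch of the saturation boost in Python/OpenCV; the gain value is illustrative.

import cv2
import numpy as np

img = cv2.imread("frame.png")                      # hypothetical BGR input
gain = 1.2                                         # illustrative saturation gain

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
hsv[..., 1] = np.clip(hsv[..., 1] * gain, 0, 255)  # scale the S channel
enhanced = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)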

Undistortion

It undistorts the image following a pinhole model. It requires the intrinsic calibration from Kalibr.

  • undistortion/enabled: Enables the module. False by default
  • undistortion/calibration_file: Camera calibration from Kalibr, following the format of the example file
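
A minimal sketch of pinhole undistortion in Python/OpenCV; the intrinsics and distortion coefficients below are placeholders, in practice they come from the Kalibr calibration file.

import cv2
import numpy as np

img = cv2.imread("frame.png")             # hypothetical distorted BGR input
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])     # placeholder camera matrix
dist = np.array([-0.2, 0.05, 0.0, 0.0])   # placeholder distortion coefficients

undistorted = cv2.undistort(img, K, dist)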

Requirements and compilation

Dependencies

sudo apt install libyaml-cpp-dev
cd ~/git
git clone git@github.com:catkin/catkin_simple.git
git clone git@github.com:ethz-asl/glog_catkin.git
git clone git@github.com:leggedrobotics/pybind11_catkin.git
cd ~/catkin_ws/src
ln -s ../../git/catkin_simple .
ln -s ../../git/glog_catkin .
ln -s ../../git/pybind11_catkin .

If you need CUDA support, you must build OpenCV with CUDA. Check the instructions below.

Build raw_image_pipeline_ros

To build the ROS package:

catkin build raw_image_pipeline_ros

If you also need the Python bindings, run:

catkin build raw_image_pipeline_python

CUDA support

If you are using a Jetson or another GPU-enabled computer and want to exploit the GPU, you need to compile OpenCV with CUDA support. Clone the opencv_catkin package, which sets up OpenCV 4.2 by default.

cd ~/git
git clone git@github.com:ori-drs/opencv_catkin.git
cd ~/catkin_ws/src
ln -s ../../git/opencv_catkin .
cd ~/catkin_ws

⚠️ Before compiling, you need to confirm the compute capability of your NVIDIA GPU, which you can check on this website or the CUDA Wikipedia page.

Compilation on Jetson Xavier board (compute capability 7.2)

catkin build opencv_catkin --cmake-args -DCUDA_ARCH_BIN=7.2
source devel/setup.bash

Compilation on Jetson Orin board (compute capability 8.7)

catkin build opencv_catkin --cmake-args -DCUDA_ARCH_BIN=8.7
source devel/setup.bash

Compilation on other platforms (e.g. laptops, desktops)

There are some extra considerations if you plan to compile OpenCV with CUDA on your laptop/desktop:

  1. Compute capability may be different for your GPU: Please check the aforementioned websites to set the flag correctly.
  2. The opencv_catkin default flags are minimal: Graphical support libraries (such as GTK) are disabled, so you cannot use methods such as cv::imshow. If you want to enable them, check the flags in the CMakeLists of opencv_catkin.
  3. The default OpenCV version is 4.2: The package installs OpenCV 4.2 by default, which was the version compatible with ROS Melodic. This can be changed by modifying the CMakeLists of opencv_catkin as well.

OpenCV's compilation will take a while - get a coffee in the meantime. When it's done, you can rebuild raw_image_pipeline_ros.

Troubleshooting

  • If you get errors due to glog, remove glog_catkin, compile opencv_catkin using the system's glog, and then build raw_image_pipeline_ros (which will compile glog_catkin)
  • If OpenCV fails due to CUDA errors, confirm that you compiled using the right compute capability for your GPU.
  • If you are using older versions of CUDA (10.x and before), they may require older GCC versions. For example, to use GCC 7 you can use:
catkin build opencv_catkin --cmake-args -DCUDA_ARCH_BIN=<your_compute_capability> -DCMAKE_C_COMPILER=/usr/bin/gcc-7

Run the node

To run the node, we use the following launch file:

roslaunch raw_image_pipeline_ros raw_image_pipeline_node.launch

This launch file was set up for Alphasense cameras. The parameters can be inspected in the launch file itself.

Alphasense-specific info

Setup

Please refer to Alphasense Setup for instructions on setting up the host PC to which the Alphasense will be connected. For further information you can refer to the official manual.

raw_image_pipeline's Issues

Brainstorm missing features

Hi @JonasFrey96 @YifuTao

This and next week I plan to migrate the alphasense_rsl repo, currently on Bitbucket, to this repo and make it open source.
That way we can make the repo easily accessible to collaborators (like DRS in Oxford) and manage the project with GitHub. My main concern is keeping this synchronized with the changes in anymal_rsl.

In particular, some changes I have in mind:

  • Enable options to publish original image, undistorted, and mono
  • Publish undistorted + mask of valid pixels, so we have access to the full undistorted image without cropping
  • Improve auto white balance model
  • Improve color calibration script (including changes in cpp code and python bindings to accept the bias term)

Feel free to contribute any other ideas here in the meantime, and then we can create separate issues once I have a better idea of how to organize this. @JonasFrey96 also feel free to add other people who might be interested or who are already using the Alphasense at RSL, like Turcan or Fan.

Issues when using with realsense

These issues are related to the 'raw_image_pipeline_node.launch' launch file

  1. If you set input_type to color and turn off all the image processing options (e.g., debayer/enabled, flip/enabled), the debayered and color topics are still published.
    And if you set input_type to values other than color, nothing is published even if you turn on debayer/enabled

  2. (I think this issue is related to the above one) If I set flip/enabled to true and turn off the other options, the image is still debayered. (Here input_type is set to color)

  3. Topics with the postfix slow do not publish messages at a reduced rate

  4. A feature is needed to handle 16-bit images (depth images)

Build error when using ros-noetic-pybind11-catkin ubuntu package

If you have the ros-noetic-pybind11-catkin Ubuntu package installed, you may encounter the following error when building raw_image_pipeline_python:

catkin_ws/src/raw_image_pipeline/raw_image_pipeline_python/thirdparty/cvnp/cvnp/cvnp.cpp:26:55: error: ‘class pybind11::dtype’ has no member named ‘char_’

You need to remove this package and use the git@github.com:leggedrobotics/pybind11_catkin.git source repository instead.
