
digit-depth


This repository is archived as of April 2024.

DIGIT

This codebase allows you to:

  • Collect image frames from DIGIT and annotate circles in each frame.
  • Save the annotated frame values into a csv file.
  • Train a baseline MLP model for RGB to Normal mapping.
  • Generate depth maps in real time using a fast Poisson solver (see the sketch after this list).
  • Estimate 2D object pose using PCA and OpenCV built-in algorithms.
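
For reference, the idea behind Poisson-based depth generation can be sketched with an FFT-based (Frankot-Chellappa style) integration of per-pixel gradients derived from the predicted normals. This is a minimal illustration only, not necessarily the exact solver used in this repo:

```python
# Minimal sketch: integrate surface gradients into a depth map with an
# FFT-based Poisson (Frankot-Chellappa) solver. Illustrative only; the
# repo's own fast Poisson solver may differ in details.
import numpy as np

def poisson_depth_from_gradients(gx: np.ndarray, gy: np.ndarray) -> np.ndarray:
    """Solve argmin_z ||dz/dx - gx||^2 + ||dz/dy - gy||^2 in the Fourier domain."""
    h, w = gx.shape
    u = np.fft.fftfreq(w) * 2.0 * np.pi   # horizontal frequencies
    v = np.fft.fftfreq(h) * 2.0 * np.pi   # vertical frequencies
    uu, vv = np.meshgrid(u, v)
    gx_f, gy_f = np.fft.fft2(gx), np.fft.fft2(gy)
    denom = uu ** 2 + vv ** 2
    denom[0, 0] = 1.0                     # avoid division by zero (mean depth is unconstrained)
    z_f = (-1j * uu * gx_f - 1j * vv * gy_f) / denom
    z_f[0, 0] = 0.0
    return np.real(np.fft.ifft2(z_f))

# Gradients from a unit normal map n = (nx, ny, nz): gx = -nx / nz, gy = -ny / nz
```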

Currently, circles are labeled manually for each sensor, which can take up to an hour for 30 images.
This codebase includes a script that reduces the manual labeling and model training process to about 10 minutes.

Visualization

Estimating object pose by fitting an ellipse (PCA and OpenCV):
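
A minimal sketch of this kind of 2D pose estimate, assuming a binary contact mask has already been extracted from the depth map (the mask and thresholds are placeholders):

```python
# Minimal sketch: 2D object pose (center + orientation) from a binary contact
# mask using OpenCV contour extraction and ellipse fitting.
import cv2
import numpy as np

def estimate_pose_2d(contact_mask: np.ndarray):
    """contact_mask: uint8 image, 255 where the object deforms the gel."""
    contours, _ = cv2.findContours(contact_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:                      # fitEllipse needs at least 5 points
        return None
    (cx, cy), (major, minor), angle_deg = cv2.fitEllipse(largest)
    return (cx, cy), angle_deg                # pixel center and in-plane rotation
```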


Depth image point cloud:


Marker movement tracking (useful for force direction and magnitude estimation):
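
A minimal sketch of marker displacement tracking between a reference frame and the current frame, assuming the gel markers appear as small dark blobs (blob-detector parameters are placeholders and need tuning per sensor):

```python
# Minimal sketch: detect gel markers as blobs and track their displacement
# relative to a reference frame (a proxy for shear force direction/magnitude).
import cv2
import numpy as np

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea, params.maxArea = 10, 200      # placeholder values
detector = cv2.SimpleBlobDetector_create(params)

def marker_centers(frame_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    keypoints = detector.detect(gray)
    return np.array([kp.pt for kp in keypoints], dtype=np.float32)

def marker_displacements(ref_pts: np.ndarray, cur_pts: np.ndarray) -> np.ndarray:
    """Nearest-neighbour matching of current markers to reference markers."""
    disps = []
    for p in ref_pts:
        d = np.linalg.norm(cur_pts - p, axis=1)
        disps.append(cur_pts[np.argmin(d)] - p)
    return np.array(disps)                    # one (dx, dy) vector per reference marker
```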


TODO

  • Add a Pix2Pix model to generate depth maps from RGB images.
  • Add a Monocular Depth model to generate depth maps from RGB images.

Config files

There are a number of config params to edit before you can run the scripts. This is the rough execution order:

  • python scripts/mm_to_pix.py : This script helps you calculate the mm_to_pix value for your sensor. Place a caliper on the sensor and press SPACEBAR to capture the image, then enter the distance between the two ends of the caliper in mm. This gives you the mm_to_pix value for your sensor; replace the value in the config/digit.yaml file. Other config params in digit.yaml (a loading sketch follows this list):
  • gel_height: Height of the gel in mm
  • gel_width: Width of the gel in mm
  • gel_thickness: Thickness of the gel in mm
  • gel_min_depth: Minimum depth of the gel in mm (max deformation)
  • ball_diameter: Diameter of the calibration ball in mm
  • max_depth: Maximum depth of the gel in mm (min deformation)
  • sensor/serial_num: Serial number of the sensor
  • sensor/fps: Frames per second of the sensor. Default is 30. There are some issues with 60 FPS.
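
A minimal sketch of reading these params from config/digit.yaml with PyYAML; the exact key nesting here is an assumption based on the list above, and the repo may load its config differently (e.g. via Hydra):

```python
# Minimal sketch: read the DIGIT config params listed above from config/digit.yaml.
# The exact key nesting is an assumption; adapt it to the actual file layout.
import yaml

with open("config/digit.yaml") as f:
    cfg = yaml.safe_load(f)

mm_to_pix = cfg["mm_to_pix"]            # from scripts/mm_to_pix.py
gel_height = cfg["gel_height"]          # mm
gel_width = cfg["gel_width"]            # mm
ball_diameter = cfg["ball_diameter"]    # calibration ball diameter, mm
serial_num = cfg["sensor"]["serial_num"]
fps = cfg["sensor"]["fps"]              # 30 by default; 60 FPS has known issues
```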

Usage

Be careful about the Python path. It is assumed that you run all the scripts from the package folder (/digit-depth).

After changing the config params, run the following scripts in the following order:

  • pip install -r requirements.txt
  • pip install . : Now you should have the package installed in your Python environment. To train the model, you need to collect data first. Use the following scripts to collect data:
  • python scripts/record.py : Press SPACEBAR to start recording. Collect 30-40 images.
  • python scripts/label_data.py : Press LEFTMOUSE to label center and RIGHTMOUSE to label circumference.
  • python scripts/create_image_dataset.py : Create a dataset of images and save it to csv files.
  • python scripts/train_mlp.py : Train an MLP model for RGB to Normal mapping.

The color2normal model will be saved to a separate "models" folder in /digit-depth/, named with its datetime.
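
For reference, a per-pixel RGB-to-normal MLP of the kind train_mlp.py trains can be sketched as below; layer sizes and training details are placeholders, not the repo's exact architecture:

```python
# Minimal sketch of a per-pixel RGB -> surface-normal MLP in PyTorch.
# Layer sizes are placeholders, not the exact architecture of scripts/train_mlp.py.
import torch
import torch.nn as nn

class RGB2NormalMLP(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),              # (nx, ny, nz)
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (N, 3) pixel values in [0, 1]; output normalised to unit length
        n = self.net(rgb)
        return n / (n.norm(dim=-1, keepdim=True) + 1e-8)

model = RGB2NormalMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()   # against ground-truth normals derived from the labelled circles
```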

Visualization

  • python scripts/point_cloud.py : Opens an Open3D window to visualize point clouds generated from the depth image (see the sketch after this list).
  • python scripts/depth.py : Publishes a ROS topic with the depth image. Modify the params inside for better visualization (threshold values, etc.).
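
A minimal Open3D sketch of turning a depth map into a point cloud, roughly in the spirit of point_cloud.py; the depth file path and camera intrinsics below are placeholders, not the DIGIT's calibrated values:

```python
# Minimal sketch: visualise a depth map as an Open3D point cloud.
# File path and intrinsics are placeholders.
import numpy as np
import open3d as o3d

depth = np.load("depth.npy").astype(np.float32)   # H x W depth map in metres (placeholder path)
h, w = depth.shape

# PinholeCameraIntrinsic(width, height, fx, fy, cx, cy) -- placeholder focal lengths
intrinsic = o3d.camera.PinholeCameraIntrinsic(w, h, 200.0, 200.0, w / 2.0, h / 2.0)

depth_img = o3d.geometry.Image(depth)
pcd = o3d.geometry.PointCloud.create_from_depth_image(depth_img, intrinsic, depth_scale=1.0)
o3d.visualization.draw_geometries([pcd])
```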

You can also try these ROS nodes, inside the /scripts/ros folder, to publish the RGB image and the maximum deformation value from depth images:

  • python scripts/ros/depth_value_pub.py: Publishes the maximum depth (deformation) value for the entire image when an object is pressed. Accuracy depends on your MLP-depth model.
  • python scripts/ros/digit_image_pub.py: Publishes the RGB image from the sensor (a minimal publisher sketch is shown below).
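
A minimal sketch of a ROS image publisher of this kind, assuming the Digit class from the digit-interface package and a placeholder serial number and topic name:

```python
# Minimal sketch of a ROS node that publishes DIGIT RGB frames, in the spirit of
# scripts/ros/digit_image_pub.py. Serial number and topic name are placeholders.
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from digit_interface import Digit

def main():
    rospy.init_node("digit_image_pub")
    pub = rospy.Publisher("digit/rgb", Image, queue_size=1)
    bridge = CvBridge()

    digit = Digit("D00001")       # placeholder serial number; see sensor/serial_num in digit.yaml
    digit.connect()

    rate = rospy.Rate(30)         # matches sensor/fps in config/digit.yaml
    while not rospy.is_shutdown():
        frame = digit.get_frame()                              # BGR numpy array
        pub.publish(bridge.cv2_to_imgmsg(frame, encoding="bgr8"))
        rate.sleep()

if __name__ == "__main__":
    main()
```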

Issues

  • If you are using a 60 FPS sensor, you might need to change the fps value in the config/digit.yaml file. There are some issues with 60 FPS. Refer to this issue
  • MLP model accuracy strongly depends on the quality of the RGB lighting. If you have produced your own DIGIT, make sure the light is not directly hitting the DIGIT internal camera.

Acknowledgements

I have modified/used the code from the following repos:

Feel free to post an issue and create PRs.
