
3d-bat's Introduction


3D Bounding Box Annotation Toolbox

Overview ✨

News 📢

Video

Features 🔥

  • Full-surround annotations
  • AI-assisted labeling
  • Batch-mode editing
  • Interpolation mode
  • 3D to 2D label transfer (projections; see the sketch after this list)
  • Automatic tracking
  • Side views (top, front, side)
  • Navigation in 3D
  • Auto ground detection
  • 3D transform controls
  • Perspective view editing
  • Orthographic view editing
  • 2D and 3D annotations
  • Web-based (online accessible & platform independent)
  • Redo/undo functionality
  • Keyboard-only annotation mode
  • Auto save function
  • Review annotations
  • Sequence mode
  • Active learning support
  • HD map support
  • Copy labels to next frame
  • Switching between datasets and sequences
  • Custom dataset support
  • Custom classes support
  • Custom attributes support
  • V2X support
  • OpenLABEL support
  • Support multiple sensors
  • Object coloring
  • Focus mode
  • Support JPG/PNG files
  • Offline annotation support
  • Open source
  • Customizable and extendable
  • Zooming into images
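
As a rough illustration of the "3D to 2D label transfer" feature, the sketch below projects the eight corners of a 3D box into a camera image. It is not taken from the 3D BAT sources: the Vec3/Mat3x4 types and the projectPoint/projectBoxCorners helpers are hypothetical, and the sketch assumes a 3x4 projection matrix P = K[R|t] that maps homogeneous LiDAR-frame points to pixel coordinates.

// Illustrative TypeScript sketch; names and types are NOT from the 3D BAT code base.
type Vec3 = [number, number, number];
type Mat3x4 = number[][]; // 3 rows x 4 columns, assumed to be P = K[R|t]

// Project one LiDAR-frame point into the image plane; returns null for points behind the camera.
function projectPoint(p: Vec3, P: Mat3x4): [number, number] | null {
  const [x, y, z] = p;
  const u = P[0][0] * x + P[0][1] * y + P[0][2] * z + P[0][3];
  const v = P[1][0] * x + P[1][1] * y + P[1][2] * z + P[1][3];
  const w = P[2][0] * x + P[2][1] * y + P[2][2] * z + P[2][3];
  if (w <= 0) return null; // non-positive depth: the point is behind the camera
  return [u / w, v / w];
}

// Compute the eight corners of a yaw-rotated box (center, size, heading) and project them.
function projectBoxCorners(center: Vec3, size: Vec3, yaw: number, P: Mat3x4): Array<[number, number] | null> {
  const [cx, cy, cz] = center;
  const [length, width, height] = size;
  const cosYaw = Math.cos(yaw);
  const sinYaw = Math.sin(yaw);
  const corners: Vec3[] = [];
  for (const sx of [-0.5, 0.5]) {
    for (const sy of [-0.5, 0.5]) {
      for (const sz of [-0.5, 0.5]) {
        // local corner offset, rotated around the z (up) axis by the yaw angle
        const lx = sx * length;
        const ly = sy * width;
        corners.push([cx + lx * cosYaw - ly * sinYaw, cy + lx * sinYaw + ly * cosYaw, cz + sz * height]);
      }
    }
  }
  return corners.map((c) => projectPoint(c, P));
}

Dropping corners with non-positive depth keeps boxes behind the camera from producing spurious 2D labels; the actual tool may handle clipping and multi-camera selection differently.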

Comparison

Animation

Release Notes 📝

  • 2024/03: 3D BAT v24.3.2
    • Added support for labeling V2X data
    • Load and display HD maps
    • Added support for custom object classes
    • Added support for custom attributes
    • Added support for custom datasets
    • Added support for OpenLABEL
    • Added support for active learning
    • Added support for AI-assisted labeling
  • 2019/02: 3D BAT v19.2.1
    • First release to label full-surround vehicle data (3D to 2D label transfer, side views, automatic tracking, interpolation mode, batch-mode editing)

Quick Start 🚀

1. Install npm

2. Clone repository:

git clone https://github.com/walzimmer/3d-bat.git && cd 3d-bat

3. Install required Python packages:

conda create -n 3d-bat python==3.11.3
conda activate 3d-bat
pip install -r requirements.txt
conda install -c conda-forge nodejs==10.13.0

4. Install required node packages:

npm install

5. Start the backend server

npm run start-server

6. Start the labeling tool application

npm run start

The index.html file should now open in the specified browser (chromium-browser by default). The default browser can be changed in the package.json file (line 32):

"start": "webpack serve --inline --open chromium-browser",

Custom Data Annotation 🌟

See Custom Data Annotation for more details.

Labeling Instructions 🗒

Instructions for data annotation can be found here.

Timelapse

Commands and Shortcuts 👨🏽‍💻

See Commands and Shortcuts for more details.

Tutorial Videos 📹

  • 3D Bounding Box Annotation Toolbox - Tutorial
  • Further tutorial videos are available in the ./tutorial_videos folder.
    • 3D Box Transformation (position, rotation, scale)
    • Image and Point Cloud Annotation
    • Interpolation mode
    • Using the side views (top, front, side)
    • Reset and undo/redo functionality

📚 Documentation

A Read the Docs documentation will be available soon.

📝 Citation

If you use 3D Bounding Box Annotation Toolbox in your research, please cite the following papers:

@inproceedings{zimmermann20193d,
  title={3D BAT: A Semi-Automatic, Web-based 3D Annotation Toolbox for Full-Surround, Multi-Modal Data Streams},
  author={Zimmer, Walter and Rangesh, Akshay and Trivedi, Mohan M.},
  booktitle={2019 IEEE Intelligent Vehicles Symposium (IV)},
  pages={1--8},
  year={2019},
  organization={IEEE}
}
@inproceedings{cress2022a9,
  author={Creß, Christian and Zimmer, Walter and Strand, Leah and Fortkord, Maximilian and Dai, Siyi and Lakshminarasimhan, Venkatnarayanan and Knoll, Alois},
  booktitle={2022 IEEE Intelligent Vehicles Symposium (IV)}, 
  title={A9-Dataset: Multi-Sensor Infrastructure-Based Dataset for Mobility Research}, 
  year={2022},
  pages={965--970},
  doi={10.1109/IV51971.2022.9827401}
}
@inproceedings{zimmer2023tumtraf,
  title={TUMTraf Intersection Dataset: All You Need for Urban 3D Camera-LiDAR Roadside Perception [Best Student Paper Award]},
  author={Zimmer, Walter and Cre{\ss}, Christian and Nguyen, Huu Tung and Knoll, Alois C},
  publisher={IEEE},
  booktitle={2023 IEEE Intelligent Transportation Systems Conference (ITSC)},
  year={2023}
}
@inproceedings{zimmer2024tumtrafv2x,
  title={TUMTraf V2X Cooperative Perception Dataset},
  author={Zimmer, Walter and Wardana, Gerhard Arya and Sritharan, Suren and Zhou, Xingcheng and Song, Rui and Knoll, Alois C.},
  publisher={IEEE/CVF},
  booktitle={2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}

📜 License

Copyright © 2019 The Regents of the University of California

All Rights Reserved. Permission to copy, modify, and distribute this tool for educational, research and non-profit purposes, without fee, and without a written agreement is hereby granted, provided that the above copyright notice, this paragraph and the following three paragraphs appear in all copies. Permission to make commercial use of this software may be obtained by contacting:

Office of Innovation and Commercialization
9500 Gilman Drive, Mail Code 0910
University of California
La Jolla, CA 92093-0910
(858) 534-5815
[email protected]

This tool is copyrighted by The Regents of the University of California. The code is supplied “as is”, without any accompanying services from The Regents. The Regents does not warrant that the operation of the tool will be uninterrupted or error-free. The end-user understands that the tool was developed for research purposes and is advised not to rely exclusively on the tool for any reason.

IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS TOOL, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE TOOL PROVIDED HEREUNDER IS ON AN “AS IS” BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

3d-bat's People

Contributors

walzimmer


3d-bat's Issues

Point cloud with RGB info

Hi,
I'm working on annotating point cloud data created from a stereo camera. Because the point clouds are too dense and hard to interpret during annotation, I wanted to ask whether it is possible to include RGB information in the point clouds in 3D-BAT.

Regards

Images not showing

Hi,
I followed the instructions in the README file, but the tool only shows me the lidar point clouds.
I do not see the images at the top. I am using the NuScenes dataset as described in the README.
Has anyone else had this problem?
(screenshot attached)

keypoints annotation

Hello authors,

Great work from you. Does this tool provide keypoint annotation?

Thank you

Bug: sequence list does not refresh when switching datasets

Dear sir,
I find that when I change the dataset, the sequence list only refreshes the first time. When I change back, the sequence list still shows the second dataset's sequences instead of the first one's.
How can I fix this problem?

Enable/Disable Scaling mode doesn't work

It seems that "Enable/Disable Scaling mode" (keyboard shortcut Y) doesn't work. I'm not able to resize a 3D bounding box with this option; it behaves the same as the translation mode.

Web-browser annotation blank page

Hello, thank you for your great work! When I follow the steps to use this annotation tool, in the last step, opening index.html with chromium-browser gives me an empty page. Can you please help me?

mayavi installation error

error: Command "gcc -pthread -B /home/lwx/anaconda3/envs/3d-bat/compiler_compat -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/lwx/anaconda3/envs/3d-bat/include -fPIC -O2 -isystem /home/lwx/anaconda3/envs/3d-bat/include -fPIC -I/tmp/pip-build-env-oymd2ph1/overlay/lib/python3.11/site-packages/numpy/core/include -Ibuild/src.linux-x86_64-3.11/numpy/distutils/include -I/home/lwx/anaconda3/envs/3d-bat/include/python3.11 -c tvtk/src/array_ext.c -o build/temp.linux-x86_64-cpython-311/tvtk/src/array_ext.o -MMD -MF build/temp.linux-x86_64-cpython-311/tvtk/src/array_ext.o.d -msse -msse2 -msse3" failed with exit status 1

unable to use under docker

Hello,

I'm trying to use the application in a Docker container.
I've downloaded jetbrains/projector-webstorm:latest, installed all the requirements, and managed to open WebStorm and, with it, the bat-3d project, but I'm unable to execute index.html using WebStorm's open-in-browser feature.
Chromium returns "zygote_host_impl_linux: running as root without --no-sandbox is not supported";
Firefox reports a similar error about running as root.

  • I've looked for a way to add the "--no-sandbox" switch to WebStorm's browser configuration but was not able to.

WebStorm is also supposed to have a built-in web server; hopefully I should be able to access bat-3d through it, but it seems to not be installed or available in the container I found or in the trial account I used.

These are my very first steps with WebStorm.
Any ideas how I can proceed?

Thanks,
Omer.

too slow for annotation

When deployed on a server, I open the website to use this toolbox but find it too slow. What could the problem be?

There is a problem loading the scans

As per the instructions, I created a new directory named input/ in the root of this project and added the point clouds as follows: ./input/tegel/sc3/point_clouds/000000.pcd. However, the following error persists in the app's console:

http://localhost:63342/3d-bat/input/tegel/sc3/point_clouds_without_ground/undefined.pcd 404 (Not Found)

Is there anything I am doing incorrectly? Kindly let me know.

Bug Report

@walzimmer @arangesh @daniridel
Hi, thanks for open-sourcing the work; this is what I was looking for. I have four queries:

Q1. When I modify the Euler angle in the point cloud, the annotation box projected into the image does not change.
Q2. I am unable to adjust the size of the point cloud points while annotating.
Q3. There are insufficient transformation matrices between camera, radar, and ego.
Q4. I am unable to load all images when increasing the number of cameras.

Thank you for answering these questions.

Custom dataset annotation

@arangesh @nachiket92 @walzimmer @daniridel Hi, thanks for open-sourcing the work; this is what I was looking for. I have a few queries:
Q1. What is the procedure for loading custom PCD files, since by default it always takes the NuScenes dataset?
Q2. Do we always have to provide PCD files, or is it fine to have .bin or .csv files?
Thanks in advance

Code Documentation

Hi,

Thanks for such an amazing repository, but due to the lack of documentation and of comments describing what the functions do, it is extremely hard and time-consuming to tweak, use, or even just understand the code.

Please create documentation with function descriptions.

It would be a big help.

Thanks

How do I load my own point cloud file?

I would like to load my own point cloud file (.pcd) and annotate it by placing some bounding boxes. Can I do that with this tool?
I didn't see any instructions regarding how to load my own point cloud files.

I need help

Sir! I want to translate this project into Chinese, but I cannot find the code for your HTML elements.

No sound on tutorial videos included in the github repository

Hi,
I cloned the GitHub repo on my local machine and I don't get any sound for the tutorial videos included inside 3d-bat/tutorial_videos.
I tried lt_17_transformation_direct_short.mp4 and lt_38_interpolation_new.mp4.
Is it possible that the sound was not encoded?

creating new bounding box

Hi,

I wanted to ask whether it is possible to create new boxes, or is it only possible to load them when starting the annotation tool? I'm unable to create bounding boxes after clicking on an object. Also, when trying to manipulate the already existing boxes, I couldn't translate or rotate them; I could only resize them.
