
PanopticStudio Toolbox

This repository shows how to work with the Panoptic Studio (Panoptic) data.

Quick start guide

Follow these steps to set up a simple example:

1. Check out the codebase

git clone https://github.com/CMU-Perceptual-Computing-Lab/panoptic-toolbox
cd panoptic-toolbox

2. Download sample data

To download a dataset (named "171204_pose1_sample" in this example), run the following script.

./scripts/getData.sh 171204_pose1_sample

This bash script requires curl or wget.

This script will create a folder "./171204_pose1_sample" and download the following files.

  • 171204_pose1_sample/hdVideos/hd_00_XX.mp4 #synchronized HD video files (31 views)
  • 171204_pose1_sample/vgaVideos/KINECTNODE%d/vga_XX_XX.mp4 #synchronized VGA video files (480 views)
  • 171204_pose1_sample/calibration_171204_pose1_sample.json #calibration files
  • 171204_pose1_sample/hdPose3d_stage1_coco19.tar #3D Body Keypoint Data (coco19 keypoint definition)
  • 171204_pose1_sample/hdFace3d.tar #3D Face Keypoint Data
  • 171204_pose1_sample/hdHand3d.tar #3D Hand Keypoint Data

Note that this sample sequence currently does not include VGA videos.

You can also download any other sequence with this script; just use the name of the target sequence instead of "171204_pose1_sample".

For example,

./scripts/getData.sh 171204_pose1

downloads the full version of the 171204_pose1 sequence.

You can also specify the number of videos you want to download.

./scripts/getData.sh (sequenceName) (VGA_Video_Number) (HD_Video_Number)

For example, the following command downloads 240 VGA videos and 10 HD videos.

./scripts/getData.sh 171204_pose1_sample 240 10

Note that we have sorted the VGA camera order so that the views you download are uniformly distributed.

3. List of Available Sequences

You can find the list of currently available sequences in the following link:

List of released sequences

You can see example videos and other information for each sequence on our website:

Browsing dataset.

Check the 3D viewer in each sequence page where you can visualize 3D skeletons in your web browser. For example:

http://domedb.perception.cs.cmu.edu/171204_pose1.html

4. Extract the images & 3D keypoint data

This step requires ffmpeg.

./scripts/extractAll.sh 171204_pose1_sample

This will extract images, for example 171204_pose1_sample/hdImgs/00_00/00_00_00000000.jpg, and the corresponding 3D skeleton data, for example 171204_pose1_sample/hdPose3d_stage1_coco19/body3DScene_00000000.json.
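
As a quick sanity check after extraction, the following minimal Python sketch pairs each extracted HD image with its corresponding skeleton file. It assumes the directory layout shown above (view 00_00 of the sample sequence):

import glob
import os

seq = '171204_pose1_sample'

# Pair each extracted HD image (view 00_00) with its skeleton JSON.
for img_path in sorted(glob.glob(os.path.join(seq, 'hdImgs', '00_00', '*.jpg'))):
    # Image names look like 00_00_00000000.jpg; the last 8 digits are the frame index.
    frame_idx = os.path.basename(img_path)[-12:-4]
    skel_path = os.path.join(seq, 'hdPose3d_stage1_coco19',
                             'body3DScene_{}.json'.format(frame_idx))
    if os.path.exists(skel_path):
        print(img_path, '<->', skel_path)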

extractAll.sh is a simple script that combines the following commands (you shouldn't need to run these separately):

cd 171204_pose1_sample
../scripts/vgaImgsExtractor.sh # PNG files from VGA video (25 fps)
../scripts/hdImgsExtractor.sh # PNG files from HD video (29.97 fps)
tar -xf vgaPose3d_stage1.tar # Extract skeletons at VGA framerate
tar -xf hdPose3d_stage1.tar # Extract skeletons for HD
cd ..

5. Run demo programs (Python)

These demos require numpy and matplotlib.

Visualizing 3D keypoints (body, face, hand):

cd python
jupyter notebook demo_3Dkeypoints_3dview.ipynb

The result should look like this.
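
If you prefer a standalone script to the notebook, here is a minimal sketch of the same idea (the file path is an assumption; any extracted body3DScene_*.json works, and the 0.1 confidence threshold is arbitrary):

import json
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: registers the 3d projection on older matplotlib

# Load one frame of 3D body keypoints (extracted in step 4).
with open('171204_pose1_sample/hdPose3d_stage1_coco19/body3DScene_00000000.json') as f:
    frame = json.load(f)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for body in frame['bodies']:
    # joints19 is a flat [x, y, z, confidence, ...] list; reshape to (19, 4).
    joints = np.array(body['joints19']).reshape(-1, 4)
    valid = joints[:, 3] > 0.1  # drop low-confidence joints
    ax.scatter(joints[valid, 0], joints[valid, 1], joints[valid, 2])
plt.show()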

Reprojecting 3D keypoints (body, face, hand) on a selected HD view:

cd python
jupyter notebook demo_3Dkeypoints_reprojection_hd.ipynb

The result should look like this.
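
The core of the reprojection is a standard pinhole projection using the per-camera calibration. Below is a simplified sketch, assuming the calibration JSON downloaded in step 2 stores per-camera 'K', 'R', and 't' entries (verify the field names against your download); lens distortion ('distCoef') is ignored for brevity:

import json
import numpy as np

seq = '171204_pose1_sample'

# Pick one HD camera from the calibration file downloaded in step 2.
with open('{0}/calibration_{0}.json'.format(seq)) as f:
    calib = json.load(f)
cam = next(c for c in calib['cameras'] if c['type'] == 'hd' and c['name'] == '00_00')
K = np.array(cam['K'])
R = np.array(cam['R'])
t = np.array(cam['t'])

with open(seq + '/hdPose3d_stage1_coco19/body3DScene_00000000.json') as f:
    frame = json.load(f)

for body in frame['bodies']:
    joints = np.array(body['joints19']).reshape(-1, 4)
    # Pinhole projection x = K (R X + t); distortion is not applied here.
    pts = K.dot(R.dot(joints[:, :3].T) + t)
    pixels = (pts[:2] / pts[2]).T  # divide by depth to get image coordinates
    print(pixels)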

6. Run demo programs (Matlab)

Note: the Matlab code is outdated and does not handle the current 3D keypoint outputs (coco19 body, face, hand). Please treat it as reference only. We will update it later.

Matlab example (outdated):

>> cd matlab
>> demo

Skeleton Output Format

We reconstruct the 3D skeletons of people using the method of Joo et al. 2018.

The output for each frame is written to a JSON file. For example:

{ "version": 0.7, 
"univTime" :53541.542,
"fpsType" :"hd_29_97",
"bodies" :
[
{ "id": 0,
"joints19": [-19.4528, -146.612, 1.46159, 0.724274, -40.4564, -163.091, -0.521563, 0.575897, -14.9749, -91.0176, 4.24329, 0.361725, -19.2473, -146.679, -16.1136, 0.643555, -14.7958, -118.804, -20.6738, 0.619599, -22.611, -93.8793, -17.7834, 0.557953, -12.3267, -91.5465, -6.55368, 0.353241, -12.6556, -47.0963, -4.83599, 0.455566, -10.8069, -8.31645, -4.20936, 0.501312, -20.2358, -147.348, 19.1843, 0.628022, -13.1145, -120.269, 28.0371, 0.63559, -20.1037, -94.3607, 30.0809, 0.625916, -17.623, -90.4888, 15.0403, 0.327759, -17.3973, -46.9311, 15.9659, 0.419586, -13.1719, -7.60601, 13.4749, 0.519653, -38.7164, -166.851, -3.25917, 0.46228, -28.7043, -167.333, -7.15903, 0.523224, -39.0433, -166.677, 2.55916, 0.395965, -30.0718, -167.264, 8.18371, 0.510041]
}
] }

Here, each subject has the following values.

id: a unique subject index within a sequence. Skeletons with the same id across time represent a temporally associated, moving skeleton (an individual). However, the same person may be assigned multiple ids if they appear at multiple distinct times (e.g., leaving the studio and reappearing later).

joints19: 19 3D joint locations, formatted as [x1,y1,z1,c1,x2,y2,z2,c2,...] where each c is a per-joint confidence score.

The 3D skeletons have the following keypoint order:

0: Neck
1: Nose
2: BodyCenter (center of hips)
3: lShoulder
4: lElbow
5: lWrist
6: lHip
7: lKnee
8: lAnkle
9: rShoulder
10: rElbow
11: rWrist
12: rHip
13: rKnee
14: rAnkle
15: rEye
16: lEye
17: rEar
18: lEar

Note that this is different from OpenPose output order, although our method is based on it.
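
For convenience, the order above can be turned into a small lookup table. A minimal sketch (the dict simply mirrors the list above; joint_xyz is a hypothetical helper):

# Index of each joint in the coco19 order listed above.
JOINT_INDEX = {
    'Neck': 0, 'Nose': 1, 'BodyCenter': 2,
    'lShoulder': 3, 'lElbow': 4, 'lWrist': 5,
    'lHip': 6, 'lKnee': 7, 'lAnkle': 8,
    'rShoulder': 9, 'rElbow': 10, 'rWrist': 11,
    'rHip': 12, 'rKnee': 13, 'rAnkle': 14,
    'rEye': 15, 'lEye': 16, 'rEar': 17, 'lEar': 18,
}

def joint_xyz(joints19, name):
    # joints19 is the flat [x, y, z, c, ...] list from the JSON above.
    i = 4 * JOINT_INDEX[name]
    return joints19[i:i + 3]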

Note that we used to use an older format (named mpi15, as described in our outdated document), but we no longer use this format.

KinopticStudio Toolbox

Kinoptic Studio is a subsystem of the Panoptic Studio composed of 10 Kinect2 sensors.

It can be used independently of the Panoptic Studio.

See our PtCloudDB document for more details.

Quick start guide

Follow these steps to set up a simple example:

1. Download data

Assuming you want to download a sequence named "160422_haggling1":

./scripts/getData_kinoptic.sh 160422_haggling1

This script will download the following files.

  • 160422_haggling1/kinect_shared_depth/ksynctables.json #sync table
  • 160422_haggling1/kinect_shared_depth/KINECTNODE%d/depthdata.dat #depth files
  • 160422_haggling1/kcalibration_160422_haggling1.json #multiple kinects calibration files
  • 160422_haggling1/kinectVideos/kinect_50_%d.mp4 #rgb video files
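
The sync table is plain JSON; before writing any processing code, it is worth inspecting its structure. A small sketch that prints the nesting of keys without assuming a particular schema:

import json

with open('160422_haggling1/kinect_shared_depth/ksynctables.json') as f:
    sync = json.load(f)

def show_keys(node, indent='', depth=0):
    # Print nested dict keys down to three levels.
    if depth < 3 and isinstance(node, dict):
        for key, value in node.items():
            print(indent + key)
            show_keys(value, indent + '  ', depth + 1)

show_keys(sync)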

2. Extract RGB frames

cd 160422_haggling1
../scripts/kinectImgsExtractor.sh

3. Run demo to generate point clouds from 10 kinects

matlab ./matlab/demo_kinoptic_gen_ptcloud.m

Note that you should set your "root_path" and "seqName" in this demo file.

4. Run demo to project point clouds on an HD view

matlab ./matlab/demo_kinoptic_projection.m

Similarly, note that you should set your "root_path" and "seqName" in this demo file.

Panoptic 3D PointCloud DB ver.1

You can download all sequences included in our 3D PointCloud DB ver.1 using the following script:

./scripts/getDB_ptCloud_ver1.sh

License

The Panoptic Studio Dataset is freely available for non-commercial use.

References

By using the dataset, you agree to cite at least one of the following papers.

@InProceedings{Joo_2015_ICCV,
  author    = {Joo, Hanbyul and Liu, Hao and Tan, Lei and Gui, Lin and Nabbe, Bart and Matthews, Iain and Kanade, Takeo and Nobuhara, Shohei and Sheikh, Yaser},
  title     = {Panoptic Studio: A Massively Multiview System for Social Motion Capture},
  booktitle = {ICCV},
  year      = {2015}
}

@article{Joo_2017_TPAMI,
  author  = {Joo, Hanbyul and Simon, Tomas and Li, Xulong and Liu, Hao and Tan, Lei and Gui, Lin and Banerjee, Sean and Godisart, Timothy Scott and Nabbe, Bart and Matthews, Iain and Kanade, Takeo and Nobuhara, Shohei and Sheikh, Yaser},
  title   = {Panoptic Studio: A Massively Multiview System for Social Interaction Capture},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year    = {2017}
}

@InProceedings{Simon_2017_CVPR,
  author    = {Simon, Tomas and Joo, Hanbyul and Sheikh, Yaser},
  title     = {Hand Keypoint Detection in Single Images using Multiview Bootstrapping},
  booktitle = {CVPR},
  year      = {2017}
}
