
pose2sim's People

Contributors

anaaim, danielskatz, davidpagnon, hunminkim98


pose2sim's Issues

Require Help in understanding how to get .osim files for our model

Hi Team,

Firstly, thank you for the wonderful repo; it's been of great use and help to me.

I am trying to perform inverse kinematics and I am interested in the COCO dataset. I see that Empty_project/OpenSim doesn't have .osim and scaling setup files for COCO.
I am curious how these files are generated: are they machine-generated, or are there steps or a tutorial for creating my own?

To give you a little background: I set pose_model to COCO in Config.toml and generated the TRC file (it has 14 keypoints).
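The change amounts to one key in the configuration; a minimal fragment (the section name here is an assumption and may differ between Pose2Sim versions):

```toml
[pose]
pose_model = 'COCO'
```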

I am trying to perform scaling and inverse kinematics, but I do not see the required files.

Any help from the community will be greatly appreciated.

Thanks in advance,
Anudeep

About the name and order of 2d json files

Because I didn't know the expected names of the 2D/2D-tracked JSON files, I used my own naming format, N.json. However, the code in triangulate_3d.py may not sort the JSON files correctly, which caused disordered 3D joints in OpenSim until I modified the code:

json_files_names = [fnmatch.filter(os.listdir(os.path.join(pose_dir, js_dir)), '*.json') for js_dir in json_dirs_names]
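A common fix, sketched here under the assumption that the file names contain a frame number, is to sort with a natural (numeric-aware) key instead of plain lexicographic order:

```python
import re

def natural_sort_key(name):
    # Split runs of digits out of the name so that '2.json' sorts
    # before '10.json' (a plain string sort would put '10' first)
    return [int(tok) if tok.isdigit() else tok.lower()
            for tok in re.split(r'(\d+)', name)]

files = ['10.json', '2.json', '1.json', '20.json']
print(sorted(files, key=natural_sort_key))
# ['1.json', '2.json', '10.json', '20.json']
```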

Corners not found. Fail when I try to calculate intrinsic parameters

I ran the demo without any problems, but when I try to calibrate with my own photos, the corners can't be found.
I put the demo pictures into my folder; they were still recognized when I ran them, but the corners of my own pictures could not be found.
I have checked Config.toml, and I'm sure 'show_detection_intrinsics = true'.
I need help! Thanks for answering my question. I don't know what the problem is.
This is one of my pictures:
1_00016

Wrong Calibration file

Hello,

First of all, thanks for sharing your work with us!

I am trying to follow the steps mentioned in the README file, and I have reached the camera calibration. However, my calibration file is a .cal file, because I am using Optitrack PrimeColor cameras, which can be calibrated and synchronized through Optitrack Motive.
How can I make it work?

Thanks,
Clara

Error when running Pose2Sim.triangulate3D()

Hi David!
I'm trying to use the Pose2Sim workflow with my own pictures. I think the camera calibration succeeded but when I'm now trying to triangulate, I get the error below.
pose2sim error

Can you give more info about the error? Many thanks for considering my request.

Own videos not working

Hello

I am an engineering student from Belgium doing research on video fusion. For the last few months I have been trying to use Pose2Sim with my own recorded videos, but I can't seem to figure out what goes wrong.
First I changed the config file to match my settings. I have a project with a short, one-second video of me standing in a static pose, recorded with two cameras. Both videos have the same resolution. In OpenPose everything seems fine. The calibration also looks fine when I check the corner detection (I use the Pose2Sim calibration tool). But when I do the triangulation, the trc file stays empty, or only a few columns are filled.

I played around with the settings and tried multiple videos from different perspectives. I also tried OpenCap and used the same videos generated there in Pose2Sim. I noticed the trc files are then (almost) completely filled, but when I try them in OpenSim, the scaling is all wrong: the arms are 100 times bigger compared to the body. I believe this is caused by the calibration, or am I wrong here?

Do you have any idea what I could be doing wrong?

Thank you for your amazing work!

Kind regards,
Siebren

Extrinsic parameter retrieval using Optitrack

Hello,

In this post, I will explain how to retrieve extrinsic parameters using Optitrack's Motive software.

First, you need to calibrate your cameras using Optitrack's Motive software to obtain the .cal calibration file generated by the software. Next, create a project in Visual Studio using C++ and write the following code to read the calibration file and retrieve the position and orientation matrix of your cameras.

extrinsics.txt

Then, you must transform the 3x3 orientation matrix into an orientation vector. For this purpose, I created the following code, as I was unable to convert it using the cv::Rodrigues function.

transformée.txt
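For completeness, the same conversion can be written in a few lines of Python; this is a sketch of the standard axis-angle formula, not Clara's attached code (the theta == pi case would need special handling):

```python
import numpy as np

def rotmat_to_rotvec(R):
    # Rodrigues (axis-angle) vector from a 3x3 rotation matrix:
    # angle from the trace, axis from the skew-symmetric part
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1))
    if np.isclose(theta, 0):
        return np.zeros(3)          # no rotation
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2 * np.sin(theta))
    return theta * axis

# Example: 90° rotation about the z axis
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
rotvec = rotmat_to_rotvec(Rz)       # ~ [0, 0, pi/2]
```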

I hope that's clear enough

Clara

Warning during triangulate3D

Hi,

Could you help me understand the warning I get during triangulate3D? It does seem to compute anyway, but seeing how bad the results are, I wonder whether something could be wrong because of this warning.

Please see below the output from function:

---------------------------------------------------------------------
Triangulation of 2D points for DLCto3D, for all frames.
---------------------------------------------------------------------

Project directory: c:\Users\felix\Documents\ScapML\DLCto3D
  0%|          | 0/5 [00:00<?, ?it/s]c:\Users\felix\.conda\envs\DEEPLABCUTBeforeUpdate2-3\lib\site-packages\numpy\core\fromnumeric.py:3474: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
c:\Users\felix\.conda\envs\DEEPLABCUTBeforeUpdate2-3\lib\site-packages\numpy\core\_methods.py:189: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
100%|██████████| 5/5 [00:00<00:00, 11.51it/s]
Mean reprojection error for STER is 118.6 px (~ 0.167 m), reached with 6.0 excluded cameras. 
Mean reprojection error for XIPH is 127.7 px (~ 0.18 m), reached with 7.4 excluded cameras. 
Mean reprojection error for C7 is 56.1 px (~ 0.079 m), reached with 7.2 excluded cameras. 
Mean reprojection error for T8 is 128.7 px (~ 0.181 m), reached with 7.2 excluded cameras. 
Mean reprojection error for SC is 42.0 px (~ 0.059 m), reached with 6.4 excluded cameras. 
Mean reprojection error for AC is 86.8 px (~ 0.122 m), reached with 6.0 excluded cameras. 
Mean reprojection error for CP is 50.2 px (~ 0.071 m), reached with 6.4 excluded cameras. 
Mean reprojection error for ACp is 119.7 px (~ 0.169 m), reached with 7.2 excluded cameras. 
Mean reprojection error for AA is 34.3 px (~ 0.048 m), reached with 6.0 excluded cameras. 
Mean reprojection error for TS is 88.4 px (~ 0.125 m), reached with 6.4 excluded cameras. 
Mean reprojection error for AI is 45.6 px (~ 0.064 m), reached with 6.4 excluded cameras. 
...
In average, 7.02 cameras had to be excluded to reach these thresholds.

3D coordinates are stored at c:\Users\felix\Documents\ScapML\DLCto3D\pose-3d\DLCto3D_0-5.trc.
Triangulation took 0.48 s.

My question is about this part:

Project directory: c:\Users\felix\Documents\ScapML\DLCto3D
  0%|          | 0/5 [00:00<?, ?it/s]c:\Users\felix\.conda\envs\DEEPLABCUTBeforeUpdate2-3\lib\site-packages\numpy\core\fromnumeric.py:3474: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
c:\Users\felix\.conda\envs\DEEPLABCUTBeforeUpdate2-3\lib\site-packages\numpy\core\_methods.py:189: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
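For what it's worth, this NumPy warning typically just means a mean was taken over an empty array, in which case NaN is returned; it would be consistent with frames or keypoints for which no cameras were kept, though that is an assumption. A minimal reproduction:

```python
import warnings
import numpy as np

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    m = np.mean(np.array([]))        # mean over an empty slice -> nan

print(m)                              # nan
print(caught[0].category.__name__)    # RuntimeWarning
```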

Could you help me with this?

I can send data if that is needed.

Thank you very much,
Félix Lefebvre

Synchronous comparison with Vicon, and Vicon full-body model for comparison

Hello, thank you very much for developing this project! I have a problem: the frame rate of the device I use is 30 Hz, but the Vicon system I use for comparison runs at 100 Hz. I want to compare the two simultaneously, but I have not figured out how to synchronize the cameras and the Vicon system. Also, how can I bridge the frame rate gap? Besides, when I use Vicon full-body marker points, is there a marker model corresponding to a particular .osim model? Looking forward to your reply!
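On the frame-rate gap specifically, one common approach (a sketch with a dummy signal, not a Pose2Sim feature) is to resample the 100 Hz Vicon trajectories down to the 30 Hz camera timeline by interpolation:

```python
import numpy as np

fs_vicon, fs_cam = 100, 30                      # Hz
duration = 2.0                                  # seconds
t_vicon = np.arange(0, duration, 1 / fs_vicon)  # 100 Hz timeline
t_cam = np.arange(0, duration, 1 / fs_cam)      # 30 Hz timeline

marker_z = np.sin(2 * np.pi * 1.0 * t_vicon)    # dummy 1 Hz marker trajectory
# Linearly interpolate the Vicon samples at the camera timestamps
marker_z_30hz = np.interp(t_cam, t_vicon, marker_z)
```

The remaining (sub-frame) offset between the two systems would still need to be found, e.g. from a shared sync event.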

JOSS Review Checklist - Comments and requests

I'm just going to leave a list of comments here rather than making a new issue for each thing, which would blow up the number of issues and be hard to track and manage.

I'll use this issue as a rolling place to add my comments as I go. I'll make it clear when I'm "Done" with my first pass through (it might be a few days from this initial post)

Link to my review checklist comment in the official review thread - openjournals/joss-reviews#4362 (comment)

img = cv2.imread(img_path) seems to not return error when a path to a video is given

In the following code in Pose2Sim\calibration, it seems that img = cv2.imread(img_path) does not raise an error when a path to a video is given; it returns None instead, which causes an error later on. ==> I think we should also test whether img is None.

If you agree, I can do a pull request.

def findCorners(img_path, corner_nb, objp=[], show=True):
    ...
    try:
        img = cv2.imread(img_path)
    except:
        with suppress_stdout_stderr():
        # with warnings.catch_warnings():
        #     warnings.simplefilter("ignore")
            cap = cv2.VideoCapture(img_path)
            ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

Strange behavior when calculating the extrinsics calibration with the results :

When doing my extrinsic calibration using a checkerboard, the RMS error in mm seems... strange. I know that our checkerboard might be too small for our setup, but the errors found are quite huge.

image
==> If 1 px of error generated 1.5 m of error, I would be quite flabbergasted.

Do you think it might be due to an error in the calculation? If yes, I can do some investigation 🕵️

Thanks :)

Scale in OpenSim

First, I must thank you a lot for this project; it made it possible to have the most amazing mocap data possible, which I've been trying to achieve for about a year and a half.

I'm making some changes to use it with MediaPipe, so I can pack it all up for use in Blender. I made the changes needed to make it work with your tools, but I don't know if I have to create another scaling file to suit the marker placement of MediaPipe.

I know it's not up to you, since the markers are configured in OpenSim, but do you have a direction to point me in so I can do more research? I googled it, but it's very difficult to find information explaining things in OpenSim.

Thanks a lot again for making my dream possible (my dream was to make mocap data from cameras 😃)

Error with marker trajectories

Hi!

I was converting video images from 3 cameras and following the steps:

from Pose2Sim import Pose2Sim
Pose2Sim.calibrateCams()
Pose2Sim.track2D()
Pose2Sim.triangulate3D()
Pose2Sim.filter3D()

The .trc files I get after filtering are not usable in OpenSim. I added images of the failed markers:

LAnkle
LBigToe
LHeel
LSmallToe
RAnkle

Has anyone also had this issue?
I could really use some help.
Thanks in advance!
Kind regards,
RLP Janssen

How to capture videos from 8 cameras simultaneously?

Hello everyone,

I'm wondering what kind of equipment you use to capture videos from multiple cameras? I'm using a Windows 10 desktop with four USB 3.0 ports, and I'm using Python with OpenCV to capture high-definition video from cameras with an image size of 1920x1080.

However, when I try to read from four cameras at once, I get a USB controller error message. I'm considering purchasing a PCI-e USB expansion card. I've tested a PCI-e 2.0 1x expansion card with four USB ports, but it still failed. Should I use a higher-end expansion card, such as a PCI-e 3.0 4x expansion card?

I'm still a novice in this area, so I would appreciate it if anyone with experience could share their experience or equipment configuration. Thank you very much.

  • I was able to successfully perform camera calibration and triangulation using pose2sim with two cameras.

Question about COCO_133 Skeleton

Hi,

Thank you for developing the Pose2Sim project; it appears to be incredibly useful. I have a couple of queries I'm hoping you can help clarify:

  1. I noticed a potential indexing discrepancy in the skeleton.py file related to the COCO_133 skeleton model. According to the COCO dataset documentation, indexing starts from 1, as seen in the annotation illustration here. However, in skeleton.py the indices seem to follow zero-based indexing, yet the COCO_133 model has the same indices as in the documentation. Should the indices be decremented by 1, or is this adjusted elsewhere in the codebase?

  2. Can I utilize an alternative pose estimation model, such as HRNet_w48, which also returns COCO_133 keypoints, and incorporate the COCO_133 skeleton model into my configuration file?
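To illustrate the off-by-one in question (the index and array here are made-up placeholders, not taken from skeleton.py):

```python
# COCO-style documentation numbers keypoints from 1,
# while a Python keypoints array is indexed from 0,
# so documented index k corresponds to array row k - 1.
doc_index = 6                                          # hypothetical 1-based doc index
keypoints = [(float(i), float(i)) for i in range(17)]  # dummy (x, y) rows
x, y = keypoints[doc_index - 1]                        # row 5 in zero-based indexing
```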

Thank you!

Strange thing in calibration.py

Just to let you know, there is something strange at line 659 of calibration.py:

# Find corners or label by hand
if extrinsics_board_type == 'checkerboard':
    imgp = findCorners(img_vid_files[0], extrinsics_corners_nb, objp=[], show=show_reprojection_error)
    if imgp == []:
        logging.exception('No corners found. Set "show_detection_extrinsics" to true to click corners by hand, or change extrinsic_board_type to "scene"')
        raise ValueError('No corners found. Set "show_detection_extrinsics" to true to click corners by hand, or change extrinsic_board_type to "scene"')
    objp = np.zeros((extrinsics_corners_nb[0]*extrinsics_corners_nb[1],3), np.float32)
    objp[:,:2] = np.mgrid[0:extrinsics_corners_nb[0],0:extrinsics_corners_nb[1]].T.reshape(-1,2)
    objp[:,:2] = objp[:,0:2]*extrinsics_square_size

# ... which is later used like this in the function findCorners():
imgp_objp_confirmed = imgp_objp_visualizer_clicker(img, imgp=imgp, objp=objp, img_path=img_path)

# And later on, in on_click():
# Add clicked point to 3D object points if given
if len(objp) != 0:
    count = [0 if 'count' not in globals() else count+1][0]

No objp is given here. As a result, in findCorners it is impossible to run the code after pressing the "c" key: since no objp is given, we never enter the following part of the code. I am doing a PR to correct this.
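For reference, the fix would amount to building objp before the findCorners call, along these lines (the board dimensions here are made up):

```python
import numpy as np

extrinsics_corners_nb = (7, 5)     # hypothetical inner-corner counts
extrinsics_square_size = 30.0      # hypothetical square size, in mm

# Same construction as in calibration.py: a planar grid of 3D points,
# one per checkerboard corner, z = 0
objp = np.zeros((extrinsics_corners_nb[0] * extrinsics_corners_nb[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:extrinsics_corners_nb[0],
                       0:extrinsics_corners_nb[1]].T.reshape(-1, 2) * extrinsics_square_size
# objp.shape is (35, 3); it can then be passed to findCorners
```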

OpenSim import in Blender and/or Maya

Moving to new issue

To import the .osim file into Blender, I used another addon:
https://github.com/JonathanCamargo/BlendOsim

But my approach used Blender armatures; it was better, in my opinion.

I was able to import the .mot file, but the .mot file basically has angles in degrees, and making it work properly in Blender was a pain in the butt, so I gave up. I imported the data, but I felt I didn't know how to apply the order of the angles properly. I don't know how to explain it better: if I had to import just one angle per body part, it worked right, but if I had to import 2 or 3 angles, things got messed up.

So my approach was to use the locations of the markers, and to do that I had to enable one option in the inverse kinematics XML file.

The change is explained here:
https://simtk.org/plugins/phpBB/viewtopicPhpbb.php?f=91&t=13422&p=38657&start=0&view=

Basically, you add the option <report_marker_locations>true</report_marker_locations>,
and when running inverse kinematics it will create a .sto file with the locations of the markers (I guess it's the markers LOL).
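For clarity, that option sits among the tool's other settings in the IK setup file; a minimal fragment (element names as I understand OpenSim's IK setup, surrounding elements elided):

```xml
<InverseKinematicsTool>
    <!-- ... marker tasks, time range, etc. ... -->
    <report_marker_locations>true</report_marker_locations>
</InverseKinematicsTool>
```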

Originally posted by @carlosedubarreto in #3 (comment)

Why is the camera name in intrinsic calibration set to cam_01 instead of the true name of the video?

Hello,

In calibration.py, the function calibrate_intrinsics changes the name of the camera:
C.append(f'cam_{str(i+1).zfill(2)}')

Shouldn't it be like in the conversion files (for example Qualisys), where the name of the camera is taken from the camera's folder?
I need this information later on, and having the cameras in the calibration file carry the same names as the real cameras would make things easy.
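Concretely, what I have in mind would be something like this (img_vid_dir is a hypothetical variable holding the per-camera folder path, not a name from calibration.py):

```python
import os

C = []
# Hypothetical per-camera folder, as found under the calibration directory
img_vid_dir = os.path.join('calib-2d', 'my_gopro_01')
# Use the folder name instead of a generic 'cam_01' label
C.append(os.path.basename(img_vid_dir))
print(C)          # ['my_gopro_01']
```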

Best regards,

Could functions taking a .toml file path also accept the dictionary form of the .toml file?

Hello,

Thanks for the toolbox. Maybe I am not using it correctly, but in order to batch process some data (in some nasty for loop), I am modifying a toml file used by the different functions:

config_dict['project']['pose_folder_name'] = pose_folder_name
config_dict['project']['poseAssociated_folder_name'] = poseAssociated_folder_name
config_dict['project']['pose3d_folder_name'] = pose3d_folder_name

with open("temp.toml", 'w') as f:   # 'w' truncates, so no need for os.remove first
    toml.dump(config_dict, f)
Pose2Sim.personAssociation("temp.toml")
Pose2Sim.triangulation("temp.toml")
Pose2Sim.filtering("temp.toml")

Would it be possible for the different functions to accept a dictionary as an argument? It would make this kind of thing easier. But I don't know if it is something you would be interested in.

def calibration(config=os.path.join('User', 'Config.toml')):
    '''
    Cameras calibration from checkerboards or from qualisys files.
    '''

    from Pose2Sim.calibration import calibrate_cams_all

    if isinstance(config, dict):   # note: 'config is dict' would always be False
        config_dict = config
    else:
        config_dict = read_config_file(config)
    project_dir, seq_name, frames = base_params(config_dict)

Thanks

HELP NEEDED - WANT TO CONTRIBUTE?

Dear all,

I am now doing a relatively unrelated post-doc, and am working on Pose2Sim only on the side. There are tons of features that I think could be useful, and that I cannot find time to implement. If anyone wants to contribute, I'll be more than happy to oblige!

The list is at the end of the Readme file, but I'll put it here for the sake of clarity.

MAIN PROJECTS:
☑ Blender visualizer add-on
☑ Batch processing
☑ Multiple persons triangulation
☑ Synchronization
☑ Integrate pose estimation
▢ Integrate monocular 3D kinematics with RTMPoseW3D
▢ Integrate scaling and inverse kinematics
▢ Graphical User Interface
▢ Self-calibration based on keypoint detection
▢ Video tutorials, documentation on github.io
▢ Calibration of moving cameras

I have a clear idea of how to do each of these, so if you are not sure of the details, or not sure whether you have (or can develop) the skills, I can definitely tell you more about it! Here is an invite to a Discord server if you are interested in discussing it (no commitment at this stage 😛).

If you want to modify the code, please follow this guide on how to fork, modify, and push code, and submit a pull request. I would appreciate it if you provided as much useful information as possible about how you modified the code, and a rationale for making the pull request. Please also specify which operating system and which Python version you tested the code on.

How can I run pose2sim on my videos ?

Hello,

I want to run Pose2Sim on some of my videos. I performed the 2D pose estimation step. The next step is camera calibration, and I'm not sure how to do it. Do we need cameras, or just photos of a checkerboard in a folder? Can we use the same .qca file present in the demo folder?

I added the checkerboard images to the calib-2d folder and ran Pose2Sim.calibrateCams().
My output: Residual (RMS) calibration errors for each camera are respectively [] px, which corresponds to [] mm.

After that, Pose2Sim.track2D() gave me this error:
return error_min, persons_and_cameras_combination
UnboundLocalError: local variable 'persons_and_cameras_combination' referenced before assignment

Can you help me here @davidpagnon?

Thanks a lot!

.qca.txt

Hello, I want to know how to export the .qca.txt file from Qualisys; I think I can only get a .qca file.

Problem with Triangulation - ValueError: cannot reshape array of size 0 into shape (0,newaxis)

Dear David,
Thank you for your pipeline. I tried to use it with synchronized video from Vicon.
I have no problem with OpenPose, calibration, or tracking the person. When I try triangulation, I get this error: ValueError: cannot reshape array of size 0 into shape (0,newaxis).
When I change the interpolation parameter to 'none', it works, but the trajectories are set to 0.
Do you have any idea? I can send my project directory if you need it.
Best regards,
Mathieu

Is this method possible without calibration?

Hello

I am doing research on 3D pose estimation. I was wondering if the same method could be applied without calibration. I am not the creator of the videos I want to use, so calibration isn't possible. I only have the camera setup available, which shows the position of each camera relative to the others.

Thanks in advance!

Error with IK - Number of Work Units cannot be negative

Hi Team,

First of all I would like to thank all the contributors of this wonderful repo.

I am trying to follow the steps mentioned in the README file. After generating the filtered TRC files, I tried to use a filtered TRC file for inverse kinematics by replacing the Balancing_for_IK.trc file, and OpenSim threw the error: number of work units cannot be negative.

Can you please help me understand my error? Is my approach right?

Thanks,
Rishi.

Issue with Qualisys data

Dear David,

We tried Pose2Sim with 2 different Qualisys systems (Lyon and Montréal).
The triangulation does not work, probably due to the calibration conversion.
Removing the /64 partly fixes the problem. Could you please explain where the 64 comes from, or point us to documents explaining the conversion?
Regards!
mickael

Rotate the video calibration files

Hello,

In order to be able to use OpenPose more precisely, I would like to rotate the videos we are using (sorry for the illustration made in Paint):

Sans titre

Would code like this correctly modify the calibration file? I am not very aware of the assumptions about the camera frame definition in the final .toml file. I currently assume that the camera's z axis points forward, so for the above example we would need to rotate the camera by pi/2 to correct the image.

# toml_path is the path to the original calibration
C, S, D, K, R, T = read_toml(toml_path)
all_rotation = []
for c in C:
    all_rotation.append(rotation_to_do[c])
R = [np.array(cv2.Rodrigues(r)[0]) for r in R]
T = np.array(T) * 1000
# In the illustration above, rot would need to be np.pi/2 (with the adapted
# rotations for the other cameras in the all_rotation list)
RT = [rotate_cam(r, t, ang_x=0, ang_y=0, ang_z=rot) for r, t, rot in zip(R, T, all_rotation)]
R = [rt[0] for rt in RT]
T = [rt[1] for rt in RT]

R = [np.array(cv2.Rodrigues(r)[0]).flatten() for r in R]
T = np.array(T) / 1000

toml_write(toml_path_export, C, S, D, K, R, T)

Thanks in advance.

TRC file question on OpenSim

Hello David,

I'm Jack.

I am trying to test your pose2sim program.

I'm using 5-camera calibration, but the results are not great: both intrinsic and extrinsic errors are over 0.5 px. I chose to ignore that :)

Then, triangulating the OpenPose Body_25 JSON files, I get an error of less than 15 px, so I filter the trc file.

When using OpenSim, I have no problem with scaling, but with IK I get upside-down and jittery or distorted motion. I think the inversion can be solved by a conversion, but I have no other solution.

I have sent the Empty_project file to you ([email protected]), which contains my data, basic information, and trc file. If you look at it, I will be very happy, because it will be a big help for my research.

My email address: [email protected]

handstand

Struggling with Camera Calibration

Hello David,

I am trying to test your pose2sim program on the fit3d dataset.
Triangulation starts like this:
Triangulation1
and ends like this:
Triangulation2

You have previously pointed out that this may point towards mistakes in the camera calibration. If I don't use interpolation, the triangulation does finish, but the results are terrible.

The fit3d dataset provides camera parameters in JSON format, and I have done my best to manually translate them into your camera calibration format, but of course I may have made a mistake. For example, I used the cv2.Rodrigues method to generate a Rodrigues vector from the rotation matrix.
I have attached my calibration files, I would super appreciate it if you could take a look as I'm a bit stuck :)

ProblemImages.zip

Body25B caffemodel download

Hello,

Thank you for your excellent work. Currently, I'm using the Body25 model for human pose detection, and it's performing really well. However, I noticed in the pose2sim documentation that using the Body25B model might yield even better results. Therefore, I would like to try using the Body25B model. Unfortunately, I couldn't find the Body25B pose_iter_XXXXXX.caffemodel file in the openpose git repository. I'm wondering if it is necessary to train this model myself with Caffe or if there are pre-trained model files available similar to the ones provided for the Body25 model. If there are pre-trained Body25B models available, could you kindly guide me on how to download them?

Thank you for your assistance.

Suggestion: Add smoothnet

Hello.
I just saw your commit with the Kalman filter, and I was thinking that maybe Pose2Sim could use SmoothNet.

I tested it on a project I'm doing with 4D-Humans, and it did some really good work. Not all the time, but most of the time the results improved a lot.

If you are interested, here is the link for the smoothnet code
https://github.com/cure-lab/SmoothNet

Problem when running Pose2Sim.triangulate3d()

Config and Calib:

Calib.txt
Config.txt

Project directory structure:

├───calib-2d
├───opensim
│   └───Geometry
├───pose-2d
│   ├───pose_cam1_json
│   ├───pose_cam2_json
│   ├───pose_cam3_json
│   ├───pose_cam4_json
│   ├───pose_cam5_json
│   ├───pose_cam6_json
│   ├───pose_cam7_json
│   └───pose_cam8_json
├───pose-2d-tracked
│   ├───pose_cam1_json
│   ├───pose_cam2_json
│   ├───pose_cam3_json
│   ├───pose_cam4_json
│   ├───pose_cam5_json
│   ├───pose_cam6_json
│   ├───pose_cam7_json
│   └───pose_cam8_json
├───raw-2d
│   └───00_img
└───User

I think this might be an off-by-one indexing issue: if I change the config to go from frame 1 to 400, it seems to run fine.

Can't run the scale tool when I try the Body25B model

Hello,
I tried Body25 and it works well, but I think it's not accurate enough, because the motion mismatches sometimes.
So I tried Body25B today, and the previous steps work fine, but when I try to scale the model, the Run button does not work, even though I'm sure I followed the correct procedure. If I load the Body25 scale file, the scale tool runs, but when I load the Body25B scale file, it doesn't.
Thank you for your answer very much.

capture

Question about body model file

I have a question about OpenSim kinematics in pose2sim.

  • Is the model Model_Pose2Sim_Body25b.osim slightly modified from the [lifting full-body model]? It seems that only the markers part has been changed to the Openpose_Body25b format; is that correct?

  • How is Scaling_Setup_Pose2Sim_Body25b.xml made? As shown in the figure below, values related to scaling were entered manually. For what reason were they chosen as they are?

image

How to reduce the reprojection error of my camera calibration with a scene?

Hello, sir.
I have been facing an issue for the last 2 months: I am unable to reduce the camera calibration reprojection error below a certain value.
Here are the results of the most recent calibration attempt (with 4 cameras):
--> Residual (RMS) calibration errors for each camera are respectively [11.431, 8.938, 12.383, 11.874] px,
which corresponds to [30.368, 25.741, 32.276, 20.572] mm.
I'm calibrating with a total of 12 or 16 points in the scene. This is a homemade calibration cage, so it's not made to very precise dimensions, but I tried to get the numbers as accurate as possible (to 4 decimal places).
The intrinsic parameters were measured using MATLAB (below 1 px).
Below is an example image from my calibration:
first_frame_cam1
image

I don't know how to do a more advanced calibration... What could I be missing?

  • Additionally, I use an image/video size of 2704 x 2028, with a GoPro 7 Black!
    Thank you so much for reading.

Extrinsic calibration fail

Greetings.

Extrinsic calibration fails at the first camera with this:
image

Here is the extrinsic scene (coords 25 & 26 are hidden behind a cabinet):
image

Here are coords:
object_coords_3d = [[0.0000, 0.0000, 0.0000], [0.2900, 0.0000, 0.0000], [0.5800, 0.0000, 0.0000], [0.8800, 0.0000, 0.0000], [0.0000, 0.4000, 0.0000], [0.0000, 0.8000, 0.0000], [0.0000, 1.2000, 0.0000], [0.4000, 0.4000, 0.0000], [0.4000, 0.8000, 0.0000], [0.4000, 1.2000, 0.0000], [0.8000, 0.4000, 0.0000], [0.8000, 0.8000, 0.0000], [0.8000, 1.2000, 0.0000], [1.2000, 0.4000, 0.0000], [1.2000, 0.8000, 0.0000], [1.2000, 1.2000, 0.0000], [0.0000, 1.2950, 0.9500], [0.3000, 1.2950, 0.9500], [0.6000, 1.2950, 0.9500], [0.9000, 1.2950, 0.9500], [1.2000, 1.2950, 0.9500], [-0.4350, 0.0000, 0.0000], [-0.4350, 0.0000, 1.4000], [-0.4350, -0.5000, 1.4000], [-0.4350, 0.5000, 1.4000], [-0.4350, 1.0000, 1.4000]]

Might be interesting to be able to add a skeleton model without having to modify the skeleton.py function

Hello,

Just some thoughts on the fact that, when using a DeepLabCut model, we have to dive quite far into the code to modify the skeleton function and add a skeleton model in order to perform triangulation. This is even more of a problem when using virtual environments, because you have to modify the function in each environment (quite a pain to have to tell each user where they are supposed to modify a built-in function when they are not used to programming).

I am perfectly aware that such use is quite rare, but being able to change the skeleton easily would ease the use of this with DeepLabCut, I think.

Do you think it would be possible to provide a dictionary, or even a configuration file, with all the needed information? Is there a reason for having this in a function rather than in a configuration file?

Best regards,

Bug when assigning a nonzero start frame

If the user assigns a nonzero start frame, an exception is raised.

How to fix it:

triangulate_3d.py

line 147
Q.index = np.array(range(f_range[0]-f_range[0], f_range[1]-f_range[0])) + 1
line 397
f_range = [[0,min([len(j) for j in json_files_names])] if frame_range==[] else [i - min(frame_range) for i in frame_range]][0]
line 436
trc_path = make_trc(config, Q_tot, keypoints_names, frame_range)
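The effect of the line-397 fix can be seen with a small example (the frame values are made up):

```python
# With a nonzero start frame, indices must be shifted so that
# the first processed frame maps to index 0
frame_range = [100, 250]
f_range = [i - min(frame_range) for i in frame_range]
print(f_range)      # [0, 150]
```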
