
Comments (14)

schornakj commented on May 26, 2024

BTW - Is it OK to post any questions I might encounter here, or is there a better way of doing this than clogging up the issues?

IMO that's part of what the issues page is for, so I'd say go for it!


marip8 commented on May 26, 2024

1: Take a set of images from multiple angles with a calibration grid attached to the robot end effector tool
2: Record the base pose for each of the images taken (what format should these be in; is (x, y, z, rx, ry, rz) acceptable?)

I would suggest doing both of these steps using the CLI calibration tool in rct_ros_tools. Basically it exposes a service for capturing relevant calibration data from the ROS system (i.e. the TF transform from a specified base frame to a specified tool frame, and 2D images) and a service for saving that data to a file structure. In your case, you would set the tool frame to be the frame to which the target is mounted.
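For reference, the pose files that tool writes out are plain YAML; I don't remember the exact field names offhand, so treat this as illustrative (translation in meters, orientation as a quaternion):

    x: 0.5
    y: 0.1
    z: 0.35
    qx: 0.0
    qy: 0.707
    qz: 0.0
    qw: 0.707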

3: Define a problem

You should be able to define the problem in almost the same way as the camera on wrist example. Basically you could load the extrinsic data set saved by the CLI tool described above, and assign those objects to the correct parts of the calibration problem struct.
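Roughly like this (a sketch; the struct and member names are from my reading of rct_optimizations and may have drifted, so check the extrinsic hand-eye header):

    #include <rct_optimizations/extrinsic_hand_eye.h>
    #include <Eigen/Geometry>
    #include <vector>

    rct_optimizations::ExtrinsicHandEyeProblem2D3D
    makeProblem(const rct_optimizations::CameraIntrinsics& intr,
                const Eigen::Isometry3d& camera_guess,  // base -> camera guess
                const Eigen::Isometry3d& target_guess,  // wrist -> target guess
                const std::vector<rct_optimizations::Observation2D3D>& observations)
    {
      rct_optimizations::ExtrinsicHandEyeProblem2D3D problem;
      problem.intr = intr;
      problem.camera_mount_to_camera_guess = camera_guess;
      problem.target_mount_to_target_guess = target_guess;
      // Each observation carries to_camera_mount, to_target_mount, and the
      // 2D/3D correspondence set found in that image
      problem.observations = observations;
      return problem;
    }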

4: Add guesses to the problems - not sure on this part - can they be very broad?

As a rule of thumb, the closer you can get your guesses to the actual values, the better the calibration will be. If your guess isn't close enough to the true solution, the optimization might end in a local minimum, which may not be a very good solution. Generally the final_cost_per_obs member of the calibration result (i.e. the average squared error of measurement vs. predicted) will give you an indication of how good the calibration was. Typically 2D cameras can detect circle centers with sub-pixel accuracy, so a good final cost per observation might be less than 0.25.
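In code, that check looks something like this (a sketch; the result member names are assumed from the hand-eye result struct, so verify against the header):

    #include <rct_optimizations/extrinsic_hand_eye.h>
    #include <iostream>

    rct_optimizations::ExtrinsicHandEyeResult result = rct_optimizations::optimize(problem);

    // 0.25 is a rule of thumb for average squared reprojection error (px^2),
    // not a hard requirement
    if (!result.converged || result.final_cost_per_obs > 0.25)
      std::cerr << "Calibration may be poor; revisit the initial guesses" << std::endl;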

6: Do I need to change anything else in this example file - how does it know it's a static camera in the cell vs. on the robot (or does it matter with regards to the maths)?

We recently updated the extrinsic hand-eye calibration such that it can represent both the static-camera/moving-target and static-target/moving-camera problems. The only thing you need to change is which observation transform the wrist pose is associated with: since in your case the target (rather than the camera) moves with the wrist, the wrist pose becomes the target-mount transform and the camera-mount transform becomes identity. From here, you should change:

-    // Let's add the wrist pose for this image as the "to_camera_mount" transform
-    // Since the target is not moving relative to the camera, set the "to_target_mount" transform to identity
-    obs.to_camera_mount = wrist_poses[i];
-    obs.to_target_mount = Eigen::Isometry3d::Identity();
+    // Since the camera is now static, set the "to_camera_mount" transform to identity
+    // The target moves with the wrist, so add the wrist pose for this image as the "to_target_mount" transform
+    obs.to_camera_mount = Eigen::Isometry3d::Identity();
+    obs.to_target_mount = wrist_poses[i];


johntraynor commented on May 26, 2024

Many thanks for the detailed answer. Really helpful. I'll give this a go.

BTW - Is it OK to post any questions I might encounter here, or is there a better way of doing this than clogging up the issues?

Thanks again


johntraynor commented on May 26, 2024

Hi guys,

I am trying to use the 5 x 5 circular grid for the calibration, but I can't seem to get it to detect any circles in the image. I've used chessboard patterns in the past with no problems, but I was hoping to keep your code as-is and get it working before making changes. Any idea what it doesn't like about them? Too big, too much clutter? I have attached an example.

[image: Image_01_original]

Thanks in advance


marip8 commented on May 26, 2024

There are a lot of parameters that you can play with in the circle detector class. The most common issues I seem to run into are:

  • Circle detector is looking for circles of the opposite color
    • filterByColor and circleColor parameters (in your case, true and 0)
  • The areas of the OpenCV-detected blobs do not lie within the range [minArea, maxArea]
    • You have a clean background, so there probably would be no harm in changing that range to [0, std::numeric_limits<float>::max()]

If these don't solve your issue, I would suggest looking at the values you are using for the rest of the parameters to see if they make sense for this particular image.

In the camera on wrist example, circles are detected at this line, where the findObservations function uses some default values for the circle detector. You could pass in your own circle detector parameters by using this overload of findObservations instead.
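A sketch of what that might look like (I'm going from memory on the exact parameter struct, so check the rct_image_tools headers for the real field names):

    #include <rct_image_tools/circle_detector.h>
    #include <limits>

    rct_image_tools::CircleDetectorParams params;
    params.filterByColor = true;
    params.circleColor = 0;  // 0 = detect dark circles on a light background
    params.minArea = 0.0f;   // effectively disable the area filter,
    params.maxArea = std::numeric_limits<float>::max();  // since the background is clean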


johntraynor commented on May 26, 2024

Many thanks again for your inputs. I had tried min and max area but hadn't tried the other ones you mentioned. I'll let you know how I get on.


johntraynor commented on May 26, 2024

Hi guys,
A quick update. I actually had to blur my images in the code before I could get the circle detector to work. I also had to adjust the max area for the blob detector, as suggested above, but at least it's finding the circles now. I ran a calibration but the results are not good, so I'm doing something stupid!
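For anyone following along, the blur was just OpenCV's Gaussian blur run on the image before handing it to the detector; the kernel size was trial and error (a sketch, not exactly my code):

    #include <opencv2/imgproc.hpp>

    // 'image' is the captured calibration image; blur it slightly so the
    // circle edges are smooth enough for the blob detector to latch onto
    cv::Mat blurred;
    cv::GaussianBlur(image, blurred, cv::Size(5, 5), 0);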

I decided to test the code using the .launch files, as they provide visual feedback on what is going on. This is what I called:

    roslaunch rct_examples camera_on_wrist_example.launch

I believe I have the following in place:

  • a .yaml file for each of the joint moves
  • a .bmp image taken at each of the target positions
  • static_camera_guesses.yaml updated with what I believe the values should be
  • an intrinsics file created for my camera
  • a target.yaml file for the 5x5 grid with 0.0015 m spacing (roughly as below)
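For reference, my target.yaml looks roughly like this (keyed the way I understood the rct_examples configs to be; the field names may not be exact):

    target_definition:
      rows: 5
      cols: 5
      spacing: 0.0015 # distance between circle centers, in meters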

Have I missed some other step, or is it more likely to do with the data I have fed the software?

I could upload the data and images if there were an easy way to do that. Otherwise, any suggestions on how I can best debug?

Thanks in advance


johntraynor commented on May 26, 2024

A few screen shots that might help

[screenshot: physical setup - UR3 with a camera]

[screenshot: frames]


johntraynor commented on May 26, 2024

Hi guys,

Just an update on this. I can't seem to find where the problem is. I've checked the input data and it all looks correct and is in the right units. I also played around with the guesses, but that doesn't seem to make any difference to the results I am getting.
When I look at the re-projection circles, they look really small compared to what they should be. Some circle patterns in the images seem to be in the right orientation, but they are definitely the wrong scale; others are the wrong scale and orientation. Is anyone able to point me in the direction of how best to debug the problem, as I'm not sure where to go next?

Thanks in advance


drchrislewis commented on May 26, 2024

@johntraynor Sorry you're having so much trouble. I have lots of experience with the calibration you are performing, and from your description it sounds very much like your initial conditions are incorrect.
Eye-hand calibration is identical whether the camera or the target is on the EOAT. However, if you use TF to get the transform information, you must always listen to the transform between the camera-mount frame and the target-mount frame in the right direction. If you give it camera-to-target when you want target-to-camera, you get the inverse, and that can screw everything up. The semantics here are confusing: you want the matrix that multiplies points expressed in the frame on which the target is mounted and expresses them in the frame on which the camera is mounted. I suspect this is what you have wrong. If not, it would be best if you made a zip of your images and pose info and let someone take a look. This can be frustrating. We really need to make an easy-to-use GUI.
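In Eigen terms, assuming you have both mount poses expressed in a common base frame (e.g. from TF lookups), the transform you want is the one below; the names are illustrative, not from the library:

    #include <Eigen/Geometry>

    // Maps points expressed in the target-mount frame into the camera-mount frame.
    // Convention: base_to_x satisfies p_base = base_to_x * p_x
    Eigen::Isometry3d cameraMountToTargetMount(const Eigen::Isometry3d& base_to_camera_mount,
                                               const Eigen::Isometry3d& base_to_target_mount)
    {
      return base_to_camera_mount.inverse() * base_to_target_mount;
    }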


johntraynor commented on May 26, 2024

Many thanks for the reply, really appreciate it. Can I zip up the files and post them here? Images are about 5 MB each. Thanks in advance.


drchrislewis commented on May 26, 2024

@johntraynor Don't post here. Rather, create a Dropbox link or something equivalent and send it to clewis at-symbol swri dottt org. I'll take a look. I really do suspect your initial pose estimates.


johntraynor commented on May 26, 2024

Many thanks. I’ll let you know when I send it to you.


johntraynor commented on May 26, 2024

I just sent you a link with the data, Chris. Let me know if you need anything else.
Thanks again

