
ggcnn_kinova_grasping's People

Contributors

dougsm


ggcnn_kinova_grasping's Issues

The problem of grasping stacked objects

Thanks for all the questions you answered previously.
I have another one, about grasping the stacked objects shown in the video: how did you segment the object you want to grasp from the stack? As far as I can tell, the code in this package can only grasp a single isolated object, not one in a stack. What is your solution to this problem?

Grasp width in mm

Hello there,

This is actually a question, not an issue.

Could you please give me a reference for the equation you used to transform pixels into mm?

I am trying to get something working on a UR5 arm with a RealSense D435, but I am a little bit lost.

I really appreciate your help ;)
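In case it helps others landing here: under a pinhole camera model, a span of w pixels observed at depth z metres back-projects to w * z / fx metres, where fx is the focal length in pixels (the first entry of the CameraInfo K matrix). A minimal sketch; the intrinsic and depth values below are illustrative placeholders, not D435 calibration data:

```python
# Sketch: convert a grasp width from pixels to metres with a pinhole
# camera model. fx comes from the camera's CameraInfo message; the
# numbers used here are placeholders for illustration only.
def width_px_to_m(width_px, depth_m, fx_px):
    """Back-project a pixel span observed at depth_m metres."""
    return width_px * depth_m / fx_px

fx = 615.0    # illustrative focal length in pixels (CameraInfo K[0])
depth = 0.40  # illustrative camera-to-object distance in metres
grasp_width_m = width_px_to_m(30.0, depth, fx)  # 30 px at 0.4 m
```

Read the real fx and depth from your camera driver's topics; the formula itself is just similar triangles.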

ValueError: Attempted relative import in non-package

When I run "rosrun ggcnn_kinova_grasping kinova_closed_loop_grasp.py", the terminal reports "ValueError: Attempted relative import in non-package" on the line "from .helpers.transforms import current_robot_pose, publish_tf_quaterion_as_transform, convert_pose, publish_pose_as_transform". From what I found online, this looks like a relative vs. absolute import problem, and changing the line to "from helpers.transforms import current_robot_pose, publish_tf_quaterion_as_transform, convert_pose, publish_pose_as_transform" fixes it. I want to make sure this is correct.

A question about connecting the Kinova

After I connect the kinova_j2s7s300, the terminal reports the following problem. Do you know what is wrong?

    raise rospy.exceptions.ROSInitException("time is not initialized. Have you called init_node()?")
    rospy.exceptions.ROSInitException: time is not initialized. Have you called init_node()?
    Exception AttributeError: "TransformListener instance has no attribute 'tf_sub'" in <bound method TransformListener.__del__ of <tf2_ros.transform_listener.TransformListener instance at 0x7f11ac83a0e0>> ignored

ROSInitException: time is not initialized / AttributeError: TransformListener instance has no attribute 'tf_sub'

Hello! I am using the kinova_j2s7s300 to reproduce the experiment, and I encounter the following error when running kinova_open_loop_grasp.py:

    rospy.exceptions.ROSInitException: time is not initialized. Have you called init_node()?
    Exception AttributeError: "TransformListener instance has no attribute 'tf_sub'" in <bound method TransformListener.__del__ of <tf2_ros.transform_listener.TransformListener instance at 0x7fc27fa29ef0>> ignored

The problem persists after reconnecting the robot. I run kinova_robot.launch, wrist_camera.launch and kinova_open_loop_grasp.py in the system environment (Ubuntu 16.04, Python 2.7), and run_ggcnn.py in an Anaconda virtual environment created for TensorFlow. Do you have any other suggestions for solving the problem?

The transform of the grasp pose

Hi, I am reading the GG-CNN paper and code, but I found something inconsistent with my understanding in kinova_closed_loop_grasp.py:
https://github.com/dougsm/ggcnn_kinova_grasping/blob/master/ggcnn_kinova_grasping/scripts/kinova_closed_loop_grasp.py#L100

    # Construct the Pose in the frame of the camera.
    gp = geometry_msgs.msg.Pose()
    gp.position.x = d[0]
    gp.position.y = d[1]
    gp.position.z = d[2]
    q = tft.quaternion_from_euler(0, 0, -1 * d[3])
    gp.orientation.x = q[0]
    gp.orientation.y = q[1]
    gp.orientation.z = q[2]
    gp.orientation.w = q[3]

Why is the yaw angle -1 * d[3]? I think φ should be a counterclockwise rotation around the z-axis, so it should be d[3]. Also, kinova_open_loop_grasp.py (https://github.com/dougsm/ggcnn_kinova_grasping/blob/master/ggcnn_kinova_grasping/scripts/kinova_open_loop_grasp.py#L77) does not multiply by -1:

    # Average pose in base frame.
    gp_base.position.x = av[0]
    gp_base.position.y = av[1]
    gp_base.position.z = av[2]
    GOAL_Z = av[2]
    ang = av[3] - np.pi/2  # We don't want to align, we want to grip.
    q = tft.quaternion_from_euler(np.pi, 0, ang)
    gp_base.orientation.x = q[0]
    gp_base.orientation.y = q[1]
    gp_base.orientation.z = q[2]
    gp_base.orientation.w = q[3]

I am quite confused by ang = av[3] - np.pi/2 and the reason for subtracting pi/2. Here is the relevant figure from the paper:
[figure from the paper]
I think I have made some conceptual mistake, so I would be grateful for an explanation.
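Not an authoritative answer to the question, but one numerical sanity check on the sign: the open-loop script composes the yaw with a roll of π (quaternion_from_euler(np.pi, 0, ang)), and rotating a frame by π about x reverses the sense of rotations about the original z-axis, i.e. Rz(φ)·Rx(π) = Rx(π)·Rz(−φ). A self-contained check in plain Python (no tf required):

```python
import math

def Rx(a):
    # Rotation matrix about the x-axis by angle a (radians).
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def Rz(a):
    # Rotation matrix about the z-axis by angle a (radians).
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    # 3x3 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

phi = 0.7  # arbitrary test angle
lhs = matmul(Rz(phi), Rx(math.pi))    # yaw composed with the 180-degree flip
rhs = matmul(Rx(math.pi), Rz(-phi))   # flip first, then the *negated* yaw
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```

So a +φ yaw of the flipped (downward-pointing) gripper equals a −φ rotation about the camera's z-axis, which may account for the -1 factor; whether that is the author's intended reasoning would need his confirmation.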

@dougsm

Something about simulation and the real world

Hi, I am very interested in your GG-CNN. However, I do not have a real robot or camera. Could you please tell me how to run some simulations in Gazebo, or how to input a depth image and get the three output images?

Sigmoid instead of ** 2 for run_ggcnn.points_out?

Hi @dougsm,

First off, this is a great codebase, thanks for sharing!

I had a question regarding this line in the run_ggcnn.py script:

I've noticed that the GG-CNN network can output negative affordance values (which makes sense, since the activation on the pos_output layer is linear). By squaring the output of the model here, don't areas with very low (negative) affordance end up looking positive?

Would something like a sigmoid function be better here, or am I misunderstanding something?
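To illustrate the asker's point (this is the questioner's suggestion, not the repository's implementation): squaring maps a strongly negative raw score to a large positive one, while a sigmoid keeps it near zero and preserves the ranking:

```python
import math

def sigmoid(x):
    # Standard logistic function, squashing any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

raw = -3.0               # a strongly *negative* raw network output
squared = raw ** 2       # 9.0: squaring would rank this as a good grasp
squashed = sigmoid(raw)  # ~0.047: a sigmoid keeps it ranked low
```

Squaring is monotonic only for non-negative inputs, which is exactly the concern raised above; the sigmoid is monotonic everywhere.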

About the scale of width

    width = width_out[max_pixel[0], max_pixel[1]]

Hi, thank you for sharing your code. I have been tinkering with your GG-CNN for a few days and have trained a few variants. I tested them on a Baxter simulator in Gazebo with a custom pick-and-place pipeline.

I have a question about how the scale of the width is handled here, though. The input of the network is a 300×300 image resized from a 400×400 image. I can see that you transform the grasp point to account for this cropping and resizing, but it seems the width is not rescaled accordingly, for example by multiplying by 400.0/300.0, since the output width corresponds to the 300×300 image.
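A sketch of the rescaling the asker describes, assuming the network's width map is in 300×300-pixel coordinates and the original crop is 400×400 (the constants and function name below are illustrative, not the repository's variables):

```python
CROP_SIZE = 400.0       # side length of the original depth-image crop, px
NET_INPUT_SIZE = 300.0  # side length of the network input image, px

def width_net_to_crop(width_net_px):
    """Map a width predicted in network-input pixels back to crop pixels."""
    return width_net_px * (CROP_SIZE / NET_INPUT_SIZE)

w = width_net_to_crop(60.0)  # 60 px in the network frame -> 80 px in the crop
```

Whether the repository's downstream pixel-to-metre conversion already absorbs this factor is exactly the open question in this issue.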

MODEL_FILE = 'PATH/TO/model.hdf5'

Hello Doug Morrison,
I want to know how to obtain the file referenced by MODEL_FILE = 'PATH/TO/model.hdf5'.

I can find an hdf5 file via the link "GG-CNN Model https://github.com/dougsm/ggcnn", but the two models seem to use different deep learning frameworks: one is PyTorch and the other is TensorFlow.

Looking forward to your answer.
jiamin guo

A question about kinova_closed_loop_grasp.py

Thanks for your contribution, but I still have some questions about the code in kinova_closed_loop_grasp.py. On line 260, I cannot find the definition of the service request kinova_msgs.srv.StartRequest() in kinova_ros. Could you explain that?
