
gym_envs_urdf's People

Contributors

alxschwrz, behradkhadem, casparvv, gijsgroote, luziakn, maxspahn, saraybakker1, siyuanwu99, skyloveqiu, web-flow


gym_envs_urdf's Issues

Tiago Example Error

During development, the following error message is reported:

File "/home/skylove/gym_envs_urdf/examples/tiago.py", line 29, in main
    print("base: ", ob["x"][0:3])
KeyError: 'x'

There is no key 'x' in the observation ob.

A temporary solution could be removing lines 28-32.

Resetting limits if needed

Currently, the joint position/velocity/acceleration limits are hard-coded. We should add a setter method to modify them according to the needs of an application.

Automated extraction of joint ids

The joint indices for reading the limits, controlling the joints, and disabling the castor wheels are hard-coded at the moment.
It would be much better if this were done automatically, or at least by name.

A starting point could be:

# TODO: This could be used as a starting point to get joint indices from urdf <01-12-21, mspahn> #
"""
wheel_joint_names = ["wheel_right_joint", "wheel_left_joint"]
torso_joint_name = ["torso_lift_joint"]
head_joint_names = ["head_" + str(i) + "_joint" for i in range(3)]
arm_right_joint_names = ["arm_right_" + str(i) + "_joint" for i in range(8)]
arm_left_joint_names = ["arm_left_" + str(i) + "_joint" for i in range(8)]
self._joint_names = (
    wheel_joint_names
    + torso_joint_name
    + head_joint_names
    + arm_right_joint_names
    + arm_left_joint_names
)
# Map the selected joint names to indices in the parsed urdf (urdfpy).
robot = URDF.load(self.fileName)
self.urdf_joints = []
for i, joint in enumerate(robot.joints):
    if joint.name in self._joint_names:
        self.urdf_joints.append(i)
# Map the same names to pybullet joint ids and collect the caster joints.
self.robot_joints = []
self.caster_joints = []
for _id in range(p.getNumJoints(self.robot)):
    joint_name = p.getJointInfo(self.robot, _id)[1].decode("UTF-8")
    if joint_name in self._joint_names:
        self.robot_joints.append(_id)
    if "caster" in joint_name:
        self.caster_joints.append(_id)
# __import__('pdb').set_trace()  # leftover breakpoint
self.robot_joints_gripper = []
"""

Upload pip package

Eventually, we should upload this package to the official pip repositories.
Then the installation would be even simpler and we could attract some more users.

Limit Verification

Currently, state limits are not enforced.
Hence, it is possible for the robot to exceed state limits. We should enforce limits by either clipping the states or clipping the actions.

It might also be beneficial to stop the episode in that case, just as a real robot would shut down when joint limits are exceeded.
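A minimal sketch of action clipping, assuming the limits are stored as a (2, n) array with lower bounds in row 0 and upper bounds in row 1 (the function and argument names are hypothetical):

import numpy as np

def apply_action_limits(action, limits):
    # Clip the commanded action to the joint limits before it is forwarded
    # to the simulation; limits[0, :] are lower, limits[1, :] upper bounds.
    return np.clip(action, limits[0, :], limits[1, :])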

Join common robot characteristics

Currently, all robots in the resources folder share large parts of their code.
Similar to the environments, this should be unified and moved to the urdfCommon folder.
The individual robots would then inherit from the common robot class.

Bicycle model not updated according to observation structure update

When the observation space and observations were updated to comply with the format ['robot_{i}']['joint_state'][..], the bicycle model was not updated accordingly; see the code below.

This is why the corresponding tests have been skipped since #125. This must be fixed before the next release.

return gym.spaces.Dict(
    {
        "x": gym.spaces.Box(
            low=self._limit_pos_j[0, :],
            high=self._limit_pos_j[1, :],
            dtype=np.float64,
        ),
        "steering": gym.spaces.Box(
            low=self._limit_pos_steering[0],
            high=self._limit_pos_steering[1],
            shape=(1,),
            dtype=np.float64,
        ),
        "xdot": gym.spaces.Box(
            low=self._limit_vel_j[0, :],
            high=self._limit_vel_j[1, :],
            dtype=np.float64,
        ),
        "vel": gym.spaces.Box(
            low=self._limit_vel_forward_j[0, :],
            high=self._limit_vel_forward_j[1, :],
            dtype=np.float64,
        ),
    }
)
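For illustration, a minimal sketch of how these limits could be nested into the updated ['robot_{i}']['joint_state'][..] format; the key names inside joint_state are assumptions and should be aligned with the other robots:

return gym.spaces.Dict({
    "robot_0": gym.spaces.Dict({
        "joint_state": gym.spaces.Dict({
            # Key names below are assumptions, not the package's final naming.
            "position": gym.spaces.Box(
                low=self._limit_pos_j[0, :], high=self._limit_pos_j[1, :], dtype=np.float64
            ),
            "steering": gym.spaces.Box(
                low=self._limit_pos_steering[0], high=self._limit_pos_steering[1],
                shape=(1,), dtype=np.float64
            ),
            "velocity": gym.spaces.Box(
                low=self._limit_vel_j[0, :], high=self._limit_vel_j[1, :], dtype=np.float64
            ),
            "forward_velocity": gym.spaces.Box(
                low=self._limit_vel_forward_j[0, :], high=self._limit_vel_forward_j[1, :],
                dtype=np.float64
            ),
        })
    })
})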

Resetting robots to initial configuration passed to reset function.

The reset function for most robots does not support passing an initial configuration; see albert, nLinkReacher.
This can be achieved with the function pybullet.resetJointState() provided by pybullet. See the implementation for the tiago robot.

This functionality should be added to the other robots to allow better integration into motion planning libraries and to randomize initial configurations.

A different approach to the reset function can be found in pandaReacher. This is not ideal, as it requires running several time steps before actually starting the simulation. When addressing this issue, the panda implementation should also be changed.
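A minimal sketch of such a reset, assuming a pybullet client p, the body id, and the list of controlled joint indices are available (the function and argument names are assumptions):

import pybullet as p

def reset_to_configuration(robot_id, robot_joints, initial_positions, initial_velocities=None):
    # Overwrite the joint states directly instead of simulating towards them.
    if initial_velocities is None:
        initial_velocities = [0.0] * len(initial_positions)
    for joint_id, pos, vel in zip(robot_joints, initial_positions, initial_velocities):
        p.resetJointState(robot_id, joint_id, targetValue=pos, targetVelocity=vel)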

Setup.py missing requirement Matplotlib

When installing the gym_envs_urdf dependencies through the setup.py file using

pip3 install -e .

matplotlib is missing when running the examples. Adding matplotlib to setup.py should resolve this.
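For example, matplotlib could be added to the dependency list in setup.py (existing dependencies abbreviated):

from setuptools import setup, find_packages

setup(
    name="gym_envs_urdf",
    packages=find_packages(),
    install_requires=[
        # ... existing dependencies ...
        "matplotlib",
    ],
)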

path to imports if repository is used as git submodule

Cloning the gym_envs_urdf repository as git submodule:

git submodule add https://github.com/maxspahn/gym_envs_urdf

does not edit the import paths in the module. This results in a ModuleNotFoundError.

A possible temporary solution is adding the path from the working directory to the gym_envs_urdf/ directory to sys.path:

import sys
sys.path.insert(0, "/home/gijs/Documents/semantic-thinking-robot/gym_envs_urdf/")

or editing the import paths in every file of the module; for example, changing:

from tiagoReacher.envs.tiagoReacherEnv import TiagoReacherEnv

to:

from gym_envs_urdf.tiagoReacher.envs.tiagoReacherEnv import TiagoReacherEnv

Running on Windows

Here are the steps to run the code on Windows (with Spyder):

  • Download Git for Windows: https://git-scm.com/download/win
  • Open Git Bash
  • Navigate to a folder where you want the code to be stored
  • Copy the HTTPS GitHub link under the green 'Code' box
  • Run the command git clone <HTTPS link>
    Now the files are in the folder and linked to the GitHub repository. To run the files:
  • Open Spyder
  • In the console, navigate to the folder you made earlier, and one step further into gym_envs_urdf
  • Run pip install .
    Now you should be able to run the examples found in the folders.

Joint limits [velocity] should be realistic

The joint velocity limits of the panda robot are set quite arbitrarily. It would be amazing to align them with the actual joint limits of the real panda robot. Joint position limits are all working and extracted from the URDF, I believe, except for one joint, right @maxspahn?

Previously mentioned in #97 (review)

(Unit)Testing

Currently, pull-requests are checked manually using the examples.

This should be automated using either unit tests or simple bash scripts to test the individual agents.
Ideally, a GitHub hook can be used to run the tests automatically on new PRs.

Let me know if you need help with this issue.

LiDAR sensor missing z-axis rotation of robot

The LiDAR rays are calculated from the x, y position of the LiDAR sensor towards a point at distance ray_length and at the angles thetas. The resulting distance per ray is either ray_length or a lower value if there is an object between the two points.

However, if the robot itself is rotated around the z-axis, this information is not considered when calculating the ray_end positions. This means the LiDAR rays return the same values whether or not the robot rotates around the z-axis: the sensor effectively only translates along the x- and y-axes and does not rotate with the robot.

Tested using the point robot LiDAR example.
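A hedged sketch of how the yaw could be included when computing the ray end points; the variable names follow the issue description, and the surrounding sensor code is assumed:

import numpy as np
import pybullet as p

def ray_end_points(robot_id, lidar_position, thetas, ray_length):
    # Extract the robot's yaw around the z-axis from its base orientation.
    _, orientation = p.getBasePositionAndOrientation(robot_id)
    yaw = p.getEulerFromQuaternion(orientation)[2]
    # Offset every ray angle by the yaw so the rays rotate with the robot.
    return [
        [
            lidar_position[0] + ray_length * np.cos(theta + yaw),
            lidar_position[1] + ray_length * np.sin(theta + yaw),
            lidar_position[2],
        ]
        for theta in thetas
    ]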

Naming and structure of returned Observations

Currently, the naming of the observation is not self-explanatory. Additionally, observations with equal names have inconsistent structure.

x and obstacleSensor.obstacle_1.x have different structures: [x_pos, y_pos, theta_orientation] and [x_pos, y_pos, z_pos].

These should have self-explanatory names, and entries with equal names should have equal structure.

EDIT
I propose the following structure:

{"pose": {position, orientation}}
Position in Cartesian coordinates with shape (3, ) and orientation in quaternions with shape (4, )

{"twist:{linear, angular}}
Linear in Cartesian coordinates with shape (3, ) and angular in Cartesian coordinates with shape (3, )

{"base_state":{"pose_min": {position_min, orientation_min}, "twist_min": {linear_min, angular_min}, "base_output": {forward_velocity, angular_velocity}}
position_min contains x and y positions, Cartesian coordinates, shape is (2, )
orientation_min contains the orientation around the vertical z-axis, shape is (1, ), the value will be between -pi and pi.
linear_min contains x and y velocities (that's Cartesian), shape is (2, )
angular_min contains the angular velocity around the vertical z-axis, shape is (1, )
output is the output of the base. For robot pointRobotUrdf-vel-v0 this would be array [forward_velocity, angular_velocity], for pointRobotUrdf-ang-v0 this would be array [forward_acceleration, angular_acceleration], shape is (2, )

{"joint_state": {position, velocity}}
position contains the joint positions with the exception of the base
velocity contains the joint velocities with the exception of the base
The following piece will handle the joint_state:

for i in range(2, self._n):
    pos, vel, _, _ = p.getJointState(self._robot, self._robot_joints[i])
    joint_pos_list.append(pos)
    joint_vel_list.append(vel)
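For illustration, an observation following this proposal might look like the sketch below (shapes as stated above; all values are placeholders and the joint count is arbitrary):

import numpy as np

observation = {
    "pose": {"position": np.zeros(3), "orientation": np.array([0.0, 0.0, 0.0, 1.0])},
    "twist": {"linear": np.zeros(3), "angular": np.zeros(3)},
    "base_state": {
        "pose_min": {"position_min": np.zeros(2), "orientation_min": np.zeros(1)},
        "twist_min": {"linear_min": np.zeros(2), "angular_min": np.zeros(1)},
        "base_output": np.zeros(2),  # e.g. [forward_velocity, angular_velocity]
    },
    "joint_state": {"position": np.zeros(5), "velocity": np.zeros(5)},
}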

Questions:

  • Are twist_min and output not redundant? Is the difference only the frame (global frame vs. robot frame)?
  • The variable vf is not self-explanatory (this is why I'd like to call it output). My guess for vf is "forward velocity". Could "vf" be renamed to "output"?
  • If orientation is in quaternions, should angular velocity not also be in quaternions?

Structuring Documentation, setting a standard for documenting on functions, documentation hosted online

The documentation should receive a review of its structure.
At the moment it contains "introduction" and "getting_started"; if done well this would not be an issue, but they contain redundant information.
All the functional documentation is in "developers.rst" and "introduction.rst", which are not self-explanatory names.

Which standard is best for explaining a function in the docs? And how can extra information be added to functions when the documentation is generated automatically?

Generating the site's .html pages offline works, but hosting them online lowers the threshold for using the docs.

Additionally, the README.md at the root of the project should link to the online documentation.

Tiago inconsistency joint states and actuation

The tiago robot consists of several joints that can be roughly split into the following groups:

  • base, actuation: forward velocity, angular velocity, states: x, y, theta
  • torso, actuation: prismatic torso joint, states: torso_position
  • right arm, actuation: 7 revolute joints, states: 7 joint positions
  • left arm, actuation: 7 revolute joints, states: 7 joint positions
  • head, actuation: 2 revolute joints, states: 2 joint positions

The ordering in the actuations and in the observations is different.
It must be consistent for many motion planners.
I suggest the following order: base, torso, arm_1, arm_2, head.
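A small sketch of what a consistent assembly could look like; the helper and argument names are assumptions:

import numpy as np

def assemble_in_order(base, torso, arm_right, arm_left, head):
    # Use the same fixed order for both actions and observations:
    # base, torso, arm_1 (right), arm_2 (left), head.
    return np.concatenate([base, torso, arm_right, arm_left, head])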

Missing package 'wheel' for dependency urdfpy

Tested the pip install in a clean virtual environment, but the wheel package dependency is missing for urdfpy/fix-networkx-dependency.

Some info to reproduce the error:

python version: 3.8.10
pip version: 20.0.2
OS: ubuntu 20.04.4 LTS

steps:

python3 -m venv venv # create clean virtual env
source venv/bin/activate # activate virtual env
git clone [email protected]:maxspahn/gym_envs_urdf.git
cd gym_envs_urdf
pip3 install .

output:

(screenshot of the installation error output)

Tiago Integration

We should integrate the Tiago robot into the list of robots.
It should be a straightforward adaptation of the albert robot.

Multirobot environments

Some users might consider using this environment if it supported multi-robot systems.

I have started a branch on this and it seems feasible. The first idea is to make use of the generic urdf environment, but to pass a list of urdf files instead of a single one.
Let's discuss this further in this thread @c-salmi.

The branch name is ft-multi-robots.

Explanation for x, xdot, vel in code and docs

It is unclear what x, xdot, and vel actually mean. This should be improved by adding comments to the code, but also by updating the documentation accordingly.

See

# Shift the base yaw by -pi/2 and wrap it to [-pi, pi).
posBase[2] -= np.pi / 2.0
if posBase[2] < -np.pi:
    posBase[2] += 2 * np.pi
# Read wheel velocities and compute the forward and angular velocity of the base.
velWheels = p.getJointStates(self.robot, self.robot_joints)
v_right = velWheels[0][1]
v_left = velWheels[1][1]
vf = np.array([0.5 * (v_right + v_left) * self._r, (v_right - v_left) * self._r / self._l])
# Map [forward velocity, angular velocity] to the world-frame base velocity.
Jnh = np.array([[np.cos(posBase[2]), 0], [np.sin(posBase[2]), 0], [0, 1]])
velBase = np.dot(Jnh, vf)
# Get Joint Configurations
joint_pos_list = []
joint_vel_list = []
for i in range(2, self._n):
    pos, vel, _, _ = p.getJointState(self.robot, self.robot_joints[i])
    joint_pos_list.append(pos)
    joint_vel_list.append(vel)
joint_pos = np.array(joint_pos_list)
joint_vel = np.array(joint_vel_list)

Add basic lidar sensor

It would be good to integrate a module for a lidar sensor that could be used by all robots.

Structure:

  1. A folder with sensors next to the robots. In the long run, all sensors should be defined there.
  2. Then, every robot could add the lidar sensor, affecting the observation space.
  3. In the individual robot implementations (in resources), the get_observation method would loop through all sensors to add their observations.

For the lidar implementation, I suggest using pybullet's rayTest function; see the PyBullet Python API.
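A hedged sketch of how the batched ray test could be used inside such a sensor; ray_starts and ray_ends are assumed to be lists of 3D points computed per robot:

import pybullet as p

def lidar_distances(ray_starts, ray_ends, ray_length):
    # rayTestBatch returns, per ray, a tuple whose third entry is the hit
    # fraction in [0, 1] along the ray (1.0 means no hit).
    results = p.rayTestBatch(ray_starts, ray_ends)
    return [hit[2] * ray_length for hit in results]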

Use case multi robot carry

I have followed all the instructions and tried to run the code; however, I am getting this error:

warnings.warn(str(err))
Traceback (most recent call last):
  File "/home/josyula/Programs/MAS_Project/gym_envs_urdf/examples/multi_robot.py", line 51, in <module>
    run_multi_robot(render=True, obstacles=True, goal=True)
  File "/home/josyula/Programs/MAS_Project/gym_envs_urdf/examples/multi_robot.py", line 34, in run_multi_robot
    from examples.scene_objects.goal import dynamicGoal
  File "/home/josyula/Programs/MAS_Project/gym_envs_urdf/examples/scene_objects/goal.py", line 1, in <module>
    from MotionPlanningGoal.staticSubGoal import StaticSubGoal
ModuleNotFoundError: No module named 'MotionPlanningGoal'

I did a search and it looks like there isn't a MotionPlanningGoal module in the project. How can I resolve this error? Thank you!

Unify friction parameter

When torque control is used to control the robots, it is possible to set friction values for all joints.
Currently, there is a uniform friction value across all joints of one robot; for now, this is a sufficient solution.

However, the friction parameter must be accessible when initializing the environment. This also applies to the acceleration control environments that rely on inverse dynamics, such as the pandaReacher.

The friction parameter should be an optional argument to the init function for acc.py and tor.py in the pandaReacher.
To be verified for other robots.
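A hedged sketch of exposing the friction as an init argument; the class and attribute names are assumptions, and the velocity-controller trick is one common pybullet way to emulate joint friction, not necessarily the one used in this package:

import pybullet as p

class PandaRobot:
    def __init__(self, friction=0.0):
        # Friction becomes an optional argument instead of a hard-coded value.
        self._friction = friction

    def apply_friction(self, robot_id, joint_ids):
        # A velocity controller with zero target velocity and a small maximum
        # force acts like joint friction under torque control.
        for joint_id in joint_ids:
            p.setJointMotorControl2(
                robot_id,
                joint_id,
                controlMode=p.VELOCITY_CONTROL,
                targetVelocity=0.0,
                force=self._friction,
            )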

Casadi version 3.5.5.post

Casadi provides a newer version that is not accessible via pip.
This results in a failing installation.

Generic URDF environment

It would be great to have a fully generic environment to which you feed an arbitrary urdf file and the controlled joints.
The environment with the correct action and observation spaces would then be generated automatically.
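As a sketch of the desired usage (the environment id and keyword arguments are hypothetical, not an existing API):

import gym

# Hypothetical generic urdf environment: pass an arbitrary urdf file and the
# joints that should be actuated; action and observation spaces are derived.
env = gym.make(
    "urdf-env-v0",
    urdf="my_robot.urdf",
    actuated_joints=["joint_1", "joint_2", "joint_3"],
    render=True,
)
ob = env.reset()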

Installation error bdist_wheel

When installing the package fresh on a system where the python package wheel is not installed, an error message is displayed when running: pip3 install -e .

This seems to be an unresolved dependency; see the discussion on StackOverflow.

Although it does not affect the package itself, it is annoying.
It should be fixed by an explicit dependency.
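Until the dependency is declared explicitly, installing wheel first works around the error:

pip3 install wheel
pip3 install -e .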

Add some nice example videos in README

It would be nice to have some short gifs/videos on the main page.
Simply record a short video of some of the example files and add it to the README.
Ideally, there should be some captions and all environments should be displayed.

Supported Python versions

The pyproject.toml file suggests that python 3.6 through 3.10 are supported.
However, when installing with python 3.10, I run into trouble because of the numpy package.
Can anyone confirm this? Or maybe somebody has it running with python 3.10.

Documentation or improved naming

Currently, the naming is arbitrary and does not necessarily add readability.
This requires either adding some documentation, as suggested by @GijsGroote in #34:

Should we have a small description on top of the class diffDriveRobot?

or the naming should be improved.

Also, the structure of abstractRobot and urdfEnv is confusing and must be simplified.

improve pylint, update repo accordingly

  • only allow snake_case function/variable names
  • enforce docstrings, but allow no docstring on self-explanatory functions

Example of a bad, useless docstring (Java-style, for illustration):

/**
 * Sets the foo.
 * 
 * @param foo the foo to set
 */
public void setFoo(float foo);

Error dt not accessible in mobile_reacher example

When trying to run the example mobile_reacher.py, I receive the following error after the pybullet environment is built:

"attempted to get missing private attribute '{}'".format(name)
AttributeError: attempted to get missing private attribute '_dt'

Similar issue as in the gym_envs_planar issue.

This issue affects all examples and environments.

Keyboard controlled light source

A function for a light source was recently added, but for now the subgoal can only be static or pre-programmed dynamic. I would like to have keyboard input for the location of the subgoal.

createCollisionShape/VisualShape using wrong unique id

The add_shapes method of the UrdfEnv class uses the same unique id for baseCollisionShapeIndex and baseVisualShapeIndex, which can lead to problems if an obstacle or goal is defined beforehand.

An obstacle or goal creates only a visualShape id or only a collisionShape id. The add_shapes function used to create a wall, for example, will create a new collisionShape id and use that id for both the collisionShape and the visualShape, while the visualShape under that id may still be a previously created one (if it exists), showing the visual of the previously created goal.

Possible fixes: create both a visualShape and a collisionShape for the goal and the obstacle, for the shapes inside add_shapes, or for all of them.
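A hedged sketch of the first suggested fix: always create a matching collision and visual shape pair and hand both ids to createMultiBody (shape type and sizes are placeholders):

import pybullet as p

def add_box_shape(half_extents, position, color=(0.5, 0.5, 0.5, 1.0)):
    # Create dedicated collision and visual shapes so neither index can
    # accidentally point at a shape created for an earlier goal or obstacle.
    collision_id = p.createCollisionShape(p.GEOM_BOX, halfExtents=half_extents)
    visual_id = p.createVisualShape(p.GEOM_BOX, halfExtents=half_extents, rgbaColor=color)
    return p.createMultiBody(
        baseMass=0,
        baseCollisionShapeIndex=collision_id,
        baseVisualShapeIndex=visual_id,
        basePosition=position,
    )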

rewrite return structure of observation

Currently if a sensor is added to the environment, the observation looks like:

{'x': array([......]), 'vel': array([.......]), 'xdot': array([........]), 'obstacleSensor': {..........}}

The structure would be improved if each robot had its own key.
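For example, the observation could be nested under a per-robot key (the key name robot_0 is an assumption):

{'robot_0': {'x': array([......]), 'vel': array([.......]), 'xdot': array([........]), 'obstacleSensor': {..........}}}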

Velocity too high with braitenberg vehicle

If I run a simulation with the Braitenberg vehicle and it exceeds velocity = 4, the simulation stops and I get the following error:

File "C:\Users\rens_\Documents\AAWB3\BEP\gym_envs_urdf\urdfenvs\urdfCommon\urdf_env.py", line 106, in check_box
    if val < os_box.low[0]:
IndexError: too many indices for array: array is 0-dimensional, but 1 were indexed

This error disappears if I constrain the speed to 3.9.
