maxspahn / gym_envs_urdf
URDF environments for gym
Home Page: https://maxspahn.github.io/gym_envs_urdf/
License: GNU General Public License v3.0
For the acceleration controlled environments, the simulation relies on inverse dynamics provided by pybullet.
For mobile manipulators, this fails at the moment, see apply_acc_action
This can be reproduced by running examples/albert.py and replacing 'vel' with 'acc'.
During development, the following error is reported:
File "/home/skylove/gym_envs_urdf/examples/tiago.py", line 29, in main print("base: ", ob["x"][0:3]) KeyError: 'x'
There is no key 'x' in the object ob.
A temporary solution could be removing lines 28-32.
Currently, every robot has its own environment.
This results in a lot of copied code among the agents.
This should be unified.
On some machines pybullet 3.2.0 can be installed, but not a more recent version.
It might be useful to downgrade the minimum requirement to 3.2.0.
Line 9 in 273ccfc
The lidar sensor can actually yield negative values. The observation space should be adapted accordingly.
The overflow of warnings can be reproduced by running the point robot example.
@GijsGroote
gym_envs_urdf/urdfenvs/sensors/lidar.py
Line 32 in 273ccfc
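One way to adapt the space would be to widen the lower bound so negative readings stay inside it (a sketch using gym.spaces.Box; the function name and signature are assumptions, not the actual lidar.py API):

```python
import numpy as np
import gym

def lidar_observation_space(nb_rays: int, ray_length: float) -> gym.spaces.Box:
    """Observation space for a lidar with nb_rays beams. The lower bound is
    -ray_length instead of 0, since the raw sensor values can be negative
    (sketch; not the current urdfenvs implementation)."""
    return gym.spaces.Box(
        low=-ray_length, high=ray_length, shape=(nb_rays,), dtype=np.float64
    )
```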
Currently, the joint position/velocity/acceleration limits are hard-coded. We should add a setter method so they can be modified according to the needs of an application.
The joint indices for reading the limits, controlling the joints, and disabling the castor wheels are also hard-coded at the moment.
It would be much better if this were done automatically, or at least by name.
A starting point could be:
gym_envs_urdf/urdfenvs/tiagoReacher/resources/tiagoRobot.py
Lines 23 to 53 in d4d49dc
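A setter-based alternative could be sketched like this (pure-numpy illustration; class and method names are assumptions, not the current urdfenvs API):

```python
import numpy as np

class JointLimits:
    """Holder with a setter so applications can override hard-coded defaults
    (sketch; names are illustrative)."""

    def __init__(self, n_joints: int):
        # default: unbounded position limits, one [low, high] row per joint
        self.position = np.tile([-np.inf, np.inf], (n_joints, 1))

    def set_position_limits(self, lower, upper):
        lower = np.asarray(lower, dtype=float)
        upper = np.asarray(upper, dtype=float)
        if lower.shape != upper.shape or np.any(lower > upper):
            raise ValueError("lower must match upper element-wise")
        self.position = np.stack([lower, upper], axis=1)
```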
Import error when using the function initSim():
The function initSim() in generic_env of the pointRobotURDF environment is missing the import of pybullet in line 61.
Changing self._p = bullet_client.BulletClient(connection_mode=pybullet.GUI)
to self._p = bullet_client.BulletClient(connection_mode=p.GUI)
should resolve this.
Originally posted by @alxschwrz in #3 (comment)
Eventually, we should upload this package to the official pip repositories.
Then the installation would be even simpler and we could attract some more users.
Currently, state limits are not enforced.
Hence, it is possible for the robot to exceed state limits. We should enforce limits by either clipping the states or clipping the actions.
It might also be beneficial to stop the episode then, just as a real robot would turn off when joint limits are exceeded.
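The enforcement could be sketched as follows (numpy sketch; the function names are illustrative and the real enforcement would live in the environment's step function):

```python
import numpy as np

def clip_action(action, lower, upper):
    """Clip a commanded action to the limits before it is applied
    (sketch; not part of urdfenvs)."""
    return np.clip(action, lower, upper)

def limits_violated(state, lower, upper):
    """True if any state exceeds its limits, so the episode can be
    terminated, mimicking a real robot shutting down at joint limits."""
    return bool(np.any(state < lower) or np.any(state > upper))
```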
Currently, all robots in the resources folder share large parts of their code.
Similar to the environments, this should be unified and moved to the urdfCommon folder.
The individual robots would then inherit from the common robot.
In some setups, pybullet=^3.2.1 seems to cause problems.
If replaced with pybullet=^3.2.0, the problem is often resolved.
When the observation space and observations were updated to comply with the format ['robot_{i}']['joint_state'][..], the bicycle model was not updated accordingly, see the code below.
This is the reason why the corresponding tests have been skipped since #125. This must be fixed before the next release.
gym_envs_urdf/urdfenvs/urdf_common/bicycle_model.py
Lines 86 to 110 in 4c86b28
The reset function for most robots does not support passing an initial configuration, see albert, nLinkReacher.
This can be achieved with the function pybullet.resetJointState() provided by pybullet. See the implementation for the tiago robot.
This functionality should be added to the other robots to allow better integration into motion planning libraries for randomizing initial configurations.
A different approach to the reset function can be found in pandaReacher. This is not ideal, as it requires running several time steps before actually starting the simulation. When addressing this issue, the panda implementation should also be changed.
When installing the gym_envs_urdf dependencies through the setup.py file using pip3 install -e ., matplotlib is missing when running the examples. Adding matplotlib to setup.py should resolve this.
Cloning the gym_envs_urdf repository as git submodule:
git submodule add https://github.com/maxspahn/gym_envs_urdf
does not edit the import paths in the module. This results in a ModuleNotFoundError.
A possible temporary solution is adding the relative path from the working directory to the gym_envs_urdf/ directory:
import sys
sys.path.insert(0, "/home/gijs/Documents/semantic-thinking-robot/gym_envs_urdf/")
or editing the import paths in every file of the module; an example is changing:
from tiagoReacher.envs.tiagoReacherEnv import TiagoReacherEnv
to:
from gym_envs_urdf.tiagoReacher.envs.tiagoReacherEnv import TiagoReacherEnv
Here are the steps to run the code on Windows (with Spyder):
The joint velocity limits of the panda robot are set rather arbitrarily. It would be amazing to align them with the actual joint limits of the real panda robot. The joint position limits are all working and, I believe, extracted from the URDF, except for one joint, right @maxspahn?
Previously mentioned in #97 (review)
Currently, the materials of the robots are plain white.
This should be changed to the actual materials specified in the URDF files.
Using the workflow described here.
Currently, pull requests are checked manually using the examples.
This should be automated using either unit tests or simple bash scripts to test the individual agents.
Ideally, a GitHub hook can be used to run the tests automatically on new PRs.
Let me know if you need help with this issue.
The LiDAR rays are calculated from the x, y position of the LiDAR sensor towards a point at ray_length distance at the angles thetas. The resulting distance per ray is either ray_length, or a lower value if there is an object between the two points.
However, if the robot itself is rotated about the z-axis, this information is not considered when calculating the ray_end position. This means the LiDAR rays yield the same values when the robot rotates around the z-axis as when it does not: the sensor only translates along the x- and y-axes and does not rotate with the robot.
Tested using the point robot LiDAR example.
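The fix would be to offset each ray angle by the robot's current yaw before computing the end point. A minimal numpy sketch (the function name is illustrative, not the current lidar.py code):

```python
import numpy as np

def ray_endpoints(sensor_xy, yaw, thetas, ray_length):
    """Compute ray end positions taking the robot's z-rotation into account:
    each ray angle is offset by the current yaw (sketch of the proposed fix)."""
    angles = np.asarray(thetas) + yaw
    return np.stack(
        [sensor_xy[0] + ray_length * np.cos(angles),
         sensor_xy[1] + ray_length * np.sin(angles)],
        axis=1,
    )
```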
Currently, the naming of the observations is not self-explanatory. Additionally, observations with equal names have inconsistent structures:
x and obstaclesensor.obstacle_1.x have different structures, [x_pos, y_pos, theta_orientation] and [x_pos, y_pos, z_pos].
Observations should have self-explanatory names, and equal names should imply equal structures.
EDIT
I propose the following structure:
{"pose": {position, orientation}}
Position in Cartesian coordinates with shape (3, ) and orientation in quaternions with shape (4, )
{"twist": {linear, angular}}
Linear in Cartesian coordinates with shape (3, ) and angular in Cartesian coordinates with shape (3, )
{"base_state": {"pose_min": {position_min, orientation_min}, "twist_min": {linear_min, angular_min}, "base_output": {forward_velocity, angular_velocity}}}
position_min contains x and y positions, Cartesian coordinates, shape is (2, )
orientation_min contains the orientation around the vertical z-axis, shape is (1, ), the value will be between -pi and pi.
linear_min contains x and y velocities (that's Cartesian), shape is (2, )
angular_min contains the angular velocity around the vertical z-axis, shape is (1, )
base_output is the output of the base. For the robot pointRobotUrdf-vel-v0 this would be the array [forward_velocity, angular_velocity]; for pointRobotUrdf-acc-v0 this would be the array [forward_acceleration, angular_acceleration]. The shape is (2, ).
{"joint_state": {position, velocity}}
position contains the joint positions with the exception of the base
velocity contains the joint velocities with the exception of the base
The following piece will handle the joint_state:
for i in range(2, self._n):
    pos, vel, _, _ = p.getJointState(self._robot, self._robot_joints[i])
    joint_pos_list.append(pos)
    joint_vel_list.append(vel)
Questions:
The documentation should receive a review of its structure.
At the moment it contains "introduction" and "getting_started"; if done well this would not be an issue, but they contain redundant information.
All the functional documentation is in "developers.rst" and "introduction.rst", which are not self-explanatory names.
Which standard is best for explaining a function in the docs? And how can extra information be added to functions when the documentation is generated automatically?
Generating the site's .html pages offline works, but hosting them online lowers the threshold for using the docs.
Additionally, the README.md at the root of the project should link to the online documentation.
The tiago robot consists of several joints that can be roughly split into the following groups:
The ordering in the actuations and observations is different.
It must be consistent for many motion planners:
I suggest the following order: base, torso, arm_1, arm_2, head
Tested the pip install in a clean virtual environment, but the wheel package dependency is missing in urdfpy/fix-networkx-dependency.
Some info to reproduce the error:
python version: 3.8.10
pip version: 20.0.2
OS: ubuntu 20.04.4 LTS
steps:
python3 -m venv venv # create clean virtual env
source venv/bin/activate # activate virtual env
git clone git@github.com:maxspahn/gym_envs_urdf.git
cd gym_envs_urdf
pip3 install .
output:
Currently, there is a duplicate of this repository on Gitlab, see https://gitlab.tudelft.nl/mspahn/urdfenvs.
It is difficult to keep track of both.
Either remove the GitLab copy or find a better mirroring method.
When walls should be placed somewhere other than the default, it takes too long to understand how to place them.
We should add the Tiago robot to the list of robots.
This should be a straightforward adaptation of the albert robot.
Some users might consider using this environment if it supported multi-robot systems.
I have started a branch on this, and it seems feasible. The first idea is to make use of the generic urdf environment, but with a list of urdf files instead of a single urdf file.
Let's discuss this further in this thread @c-salmi.
The branch name is ft-multi-robots.
It is unclear what x, xdot and vel actually mean. This should be improved by adding comments to the code, but also by updating the documentation accordingly.
See
gym_envs_urdf/urdfenvs/urdfCommon/differentialDriveRobot.py
Lines 118 to 135 in d4d49dc
In many situations, it would be beneficial to install the package in editable mode.
This is the default when using poetry. However, it would be nice if pip3 install -e . were also supported.
According to https://stackoverflow.com/questions/64150719/how-to-write-a-minimally-working-pyproject-toml-file-that-can-install-packages, we only need to add a very small setup.py file in the root directory.
To be tested.
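For reference, the minimal setup.py shim suggested in the linked StackOverflow answer would look roughly like this (an untested sketch, sitting next to the poetry-managed pyproject.toml):

```python
# setup.py -- minimal shim so that `pip3 install -e .` works alongside
# the poetry-managed pyproject.toml (untested suggestion from the
# linked StackOverflow answer).
from setuptools import setup

if __name__ == "__main__":
    setup()
```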
It would be good to integrate a module for a lidar sensor that could be used by all robots.
Structure:
For the lidar implementation, I suggest using pybullet's rayTest function, see the Pybullet Python API.
I have followed all the instructions and tried to run the code; however, I am getting this error:
warnings.warn(str(err))
Traceback (most recent call last):
File "/home/josyula/Programs/MAS_Project/gym_envs_urdf/examples/multi_robot.py", line 51, in <module>
run_multi_robot(render=True, obstacles=True, goal=True)
File "/home/josyula/Programs/MAS_Project/gym_envs_urdf/examples/multi_robot.py", line 34, in run_multi_robot
from examples.scene_objects.goal import dynamicGoal
File "/home/josyula/Programs/MAS_Project/gym_envs_urdf/examples/scene_objects/goal.py", line 1, in <module>
from MotionPlanningGoal.staticSubGoal import StaticSubGoal
ModuleNotFoundError: No module named 'MotionPlanningGoal'
I did a search, and it looks like there isn't a MotionPlanningGoal module in the project. How can I resolve this error? Thank you!
When torque control is used to control the robots, it is possible to set friction values for all joints.
Currently, there is a uniform friction value across all joints of one robot; for now, this is a sufficient solution.
However, the friction parameter must be accessible when initializing the environment. This also applies to the acceleration control environments that rely on inverse dynamics, such as the pandaReacher.
The friction parameter should be an optional argument to the init function of acc.py and tor.py in the pandaReacher.
To be verified for other robots.
Casadi provides a newer version that is not accessible via pip.
This results in a failing installation.
It would be great to have a fully generic environment to which you feed an arbitrary urdf file and the controlled joints.
The environment with the correct action and observation spaces would then be generated automatically.
When installing the package fresh on a system where the python package wheel is not installed, an error message is displayed when running pip3 install -e .
This seems to be an unresolved dependency, see the discussion on StackOverflow.
Although it does not affect the package itself, it is annoying.
This should be fixed by an explicit dependency.
Obstacles should be integrated into the gym environment.
This will require a unification of the environments.
Ideally, it should be done using motion planning scenes.
It would be nice to have some short gifs/videos on the main page.
Simply record a short video of some of the example files and add it to the README.
Ideally, there should be some captions and all environments should be displayed.
The pyproject.toml file suggests that python 3.6 until 3.10 are supported.
However, when installing it with python 3.10, I run into trouble because of the numpy package.
Can anyone confirm this? Or maybe somebody has it running with python 3.10.
Currently, the naming is arbitrary and does not necessarily add readability.
This requires either adding some documentation, as suggested by @GijsGroote in #34:
Should we have a small description on top of the class diffDriveRobot?
or the naming should be improved.
Also, the structure of abstractRobot and urdfEnv is confusing and must be simplified.
Only allow snake_case function/variable names.
Enforce docstrings, but allow omitting the docstring on self-explanatory functions.
An example of a bad, useless docstring:
/**
* Sets the foo.
*
* @param foo the foo to set
*/
public void setFoo(float foo);
When trying to run the example mobile_reacher.py, I receive the following error after the pybullet environment is built:
"attempted to get missing private attribute '{}'".format(name)
AttributeError: attempted to get missing private attribute '_dt'
Similar issue as in the gym_envs_planar issue
This issue affects all examples and environments.
A function to add a light source was recently added, but currently a subgoal can only be static or dynamic and pre-programmed; I would like to have keyboard input for the location of the subgoal.
The method add_shapes of the class UrdfEnv uses the same unique id for the baseCollisionShapeIndex and baseVisualShapeIndex, which can lead to problems if an obstacle or goal is defined beforehand.
An obstacle or goal creates only a visualShape id or only a collisionShape id. The add_shapes function used to create a wall, for example, will create a new collisionShape id and use that id for both the collisionShape and the visualShape, while the visualShape under that id is still the previously created visualShape (if it exists), showing the visual of the previously created goal.
Possible fixes: create both a visualShape and a collisionShape for the goal and obstacle, both for the shapes inside add_shapes, or both for all.
With GitHub's citation feature, users of the repository can easily cite the gym_urdf repository. It has yet to be implemented for this repository.
Currently if a sensor is added to the environment, the observation looks like:
{'x': array([......]), 'vel': array([.......]), 'xdot': array([........]), 'obstacleSensor': {..........}}
The structure would be improved if each robot had its own key.
If I run a simulation with the Braitenberg vehicle and it exceeds velocity = 4, the simulation stops and I get the following error:
File "C:\Users\rens_\Documents\AAWB3\BEP\gym_envs_urdf\urdfenvs\urdfCommon\urdf_env.py", line 106, in check_box
if val < os_box.low[0]:
IndexError: too many indices for array: array is 0-dimensional, but 1 were indexed
This error disappears if I constrain the speed to 3.9.