nely-epfl / flygym
Gym environments for NeuroMechFly in various physics simulators
Home Page: https://neuromechfly.org/
License: Apache License 2.0
... so that we can feel free to call it as much as we want without running sensory preprocessing redundantly
Helpful tool: export Fusion objects as URDF files https://github.com/syuntoku14/fusion2urdf
doc/source/changelog.rst
setup.py
... as it is being deprecated. Use other ways to find data files.
Contact force readings are currently passed as a pointer, not a copy, so the readings at past steps change unless the user explicitly makes a copy. We should probably just return a copy.
Furthermore, it could be useful to return the contact forces as a (6, 3) array (6 legs, xyz) instead of an (18,) array.
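Both points above can be handled in one place. Here is a minimal sketch (the function name and the raw `(18,)` array are assumptions for illustration; the real attribute in the simulation class may differ):

```python
import numpy as np

def get_contact_forces(raw_forces: np.ndarray) -> np.ndarray:
    """Return a defensive copy of the contact force readings.

    `raw_forces` is the flat (18,) array read from the physics engine
    (hypothetical name). Copying prevents past observations from
    mutating as the simulation advances, and reshaping to (6, 3)
    gives one xyz row per leg.
    """
    return np.array(raw_forces, copy=True).reshape(6, 3)
```

With the copy, readings stored at past steps stay frozen even after the engine overwrites its internal buffer.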
I've been using jupyter nbconvert to convert the demo notebooks to RST files for the website. This works OK, but some manual fine-tuning of the RST files is required:
This is quite annoying. Maybe we can automate this somehow? How about jupyter book? This is low priority but let's leave this issue here.
save_video_with_vision_insets
and add the same logic to NeuroMechFly.save_video by default.

In the long term, should we move away from dm_control and use just the mujoco Python binding, at least for the core simulation?
This would allow us to use GPU/TPU for the physics simulation using MJX that's been available since MuJoCo 3.0.0 (there's no plan on the dm_control side to retrofit MJX into dm_control). I doubt a morphologically complex model like NeuroMechFly can be run efficiently on the GPU (esp. given my experience with Isaac Gym), but I'd be curious to find out.
On the other hand, if we stay, we can keep using dm_control's nice camera class for projecting from xyz coordinates to row-column coordinates on rendered images, dm_control.mjcf for modifying the XML files, and dm_control.viewer for adding an interactive viewer.
e.g. how direction is defined
... instead of passing a string like "flat" to NeuroMechFlyMuJoCo.__init__.
Gym actually specifies the format of .step()'s return values more strictly than I thought: https://gymnasium.farama.org/api/env/#gymnasium.Env.step
Namely it requires the following:
- observation (ObsType) – An element of the environment’s observation_space as the next observation due to the agent actions. An example is a numpy array containing the positions and velocities of the pole in CartPole.
- reward (SupportsFloat) – The reward as a result of taking the action.
- terminated (bool) – Whether the agent reaches the terminal state (as defined under the MDP of the task) which can be positive or negative. An example is reaching the goal state or moving into the lava from the Sutton and Barton, Gridworld. If true, the user needs to call reset().
- truncated (bool) – Whether the truncation condition outside the scope of the MDP is satisfied. Typically, this is a timelimit, but could also be used to indicate an agent physically going out of bounds. Can be used to end the episode prematurely before a terminal state is reached. If true, the user needs to call reset().
- info (dict) – Contains auxiliary diagnostic information (helpful for debugging, learning, and logging). This might, for instance, contain: metrics that describe the agent’s performance state, variables that are hidden from observations, or individual reward terms that are combined to produce the total reward. In OpenAI Gym <v26, it contains “TimeLimit.truncated” to distinguish truncation and termination, however this is deprecated in favour of returning terminated and truncated variables.
There are two solutions:

1. Return obs, {} where the empty dict can be extended to contain arbitrary info.
2. Return obs, 0, False, False, {}. As before, the user can extend this class to implement different reward/termination criteria if desired.

If we opt for option 2 (which I personally prefer), we should do it ASAP (but perhaps after COBAR) before it becomes even more annoying. What do you think @stimpfli ?
For Isaac Gym I'm definitely going to stick with Gym's specified API above.
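Option 2 could look roughly like the sketch below. This is a minimal stand-in, not the actual FlyGym implementation: the class name, the placeholder observation, and the time-limit logic are assumptions made for illustration. The point is the 5-tuple (obs, reward, terminated, truncated, info) that Gymnasium requires.

```python
import numpy as np

class NeuroMechFlyGymLike:
    """Sketch of a Gymnasium-compliant step() (option 2).

    The physics call is a placeholder; a real subclass would
    advance the simulator and override reward/termination logic.
    """

    def __init__(self, timestep=1e-4, run_time=1.0):
        self.timestep = timestep
        self.max_steps = int(run_time / timestep)
        self.curr_step = 0

    def step(self, action):
        # ... advance the physics with `action` here ...
        self.curr_step += 1
        obs = np.zeros(3)         # placeholder observation
        reward = 0.0              # trivial reward; override if desired
        terminated = False        # no terminal state by default
        truncated = self.curr_step >= self.max_steps  # time limit
        info = {}                 # extendable with arbitrary diagnostics
        return obs, reward, terminated, truncated, info
```

Users who don't care about RL semantics can simply ignore the middle three values; users who do get an API that works with standard Gymnasium wrappers.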
... return odor intensity reading at antennae
MuJoCo released its 3.0.0 version last week. There are a few API-breaking changes that made it incompatible with the MJCF file provided by FlyGym.
If you encounter an error that looks like the following upon loading the fly model, this is the reason:
self = MJCF Element: <option timestep="0.0001" gravity="0 0 -9810" integrator="Euler" solver="Newton" iterations="1000" tolerance="9.9999999999999998e-13" noslip_iterations="100" noslip_tolerance="1e-08" mpr_iterations="100"/>
attribute_name = 'collision'

    def _check_valid_attribute(self, attribute_name):
        if attribute_name not in self._spec.attributes:
>           raise AttributeError(
                '{!r} is not a valid attribute for <{}>'.format(
                    attribute_name, self._spec.name))
E           AttributeError: Line 4: error while parsing element <option>: 'collision' is not a valid attribute for <option>

../miniconda3/envs/nmf/lib/python3.11/site-packages/dm_control/mjcf/element.py:534: AttributeError
Generally, I prefer to keep FlyGym compatible with the most up-to-date versions of its core dependencies such as MuJoCo. However, since this is a major update (2.x.x -> 3.x.x), I would like to spend more time verifying the compatibility and defer this effort for now.
A new release has been made to address this issue.
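In the meantime, a workaround is to keep MuJoCo below 3.0 in the environment. This is a sketch; the exact version pins that dm_control accepts should be double-checked against its changelog:

```shell
# Keep MuJoCo on the 2.x series until FlyGym's compatibility
# with 3.x is verified (matching dm_control version assumed)
pip install "mujoco<3.0.0"
```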
I can do this unless anyone else wants to help
... similar to what Victor did in PyBullet.
... so the unit is in mm and not µm. The stiffness, damping, gravity, etc. might also need to be tuned accordingly.
I think it's a good idea to either
They make the rendering results and vision-related things just slightly different. In visual navigation tasks this makes the whole behavior stochastic (because the slightly different visual input leads to slightly different descending drive and the difference just accumulates).
What do you think? @stimpfli
It seems generally useful to be able to access the observation/state without having to supply an action and step the physics simulation. Let's make _get_observation a public method.
Refactor the code keeping in mind that the environment could accommodate:
Now that we're migrating away from PyBullet, I will only implement very limited functionality pro forma (just so it's easier to pick it up in case I need it in the future).
Upon creation of the NeuroMechFly instance, if the actuated_joints parameter does not contain all the controllable limb joints (when controlling a subset of joints is desired), a ValueError is raised in the line below. This variable seems to be used only for leg adhesion, so wrapping it in a conditional on whether leg adhesion is enabled fixes it, but shouldn't it handle the case of unactuated leg joints? Or is there a better way to control a subset of joints?
Lines 385 to 390 in d155616
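A sketch of the proposed fix follows. All names here are hypothetical stand-ins for the real FlyGym internals: the adhesion lookup is only built when adhesion is enabled, and unactuated leg joints are skipped instead of triggering a ValueError.

```python
def build_adhesion_map(actuated_joints, all_leg_joints, enable_adhesion):
    """Map each leg (e.g. "LF") to its actuated joints (hypothetical names).

    Legs whose joints are not in `actuated_joints` are simply omitted,
    so controlling a subset of joints no longer raises.
    """
    if not enable_adhesion:
        return {}  # adhesion disabled: nothing to look up
    leg_map = {}
    for joint in all_leg_joints:
        if joint in actuated_joints:
            leg = joint.split("_")[1][:2]  # "joint_LFCoxa" -> "LF"
            leg_map.setdefault(leg, []).append(joint)
    return leg_map
```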
... as is the case in Isaac Gym.
NeuroMechFly.vision_update_mask is supposed to be a 1D binary array of length num_sim_steps, indicating whether the visual input is updated at each physics step. However, the list that this mask array is generated from is currently appended to every time get_observation is called. Therefore, the length of the array is the number of times the observation was queried, not the number of times the physics engine was advanced.
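The fix is to append to the mask list once per physics step rather than in the observation getter. A minimal sketch (class and attribute names are placeholders, not the real FlyGym code):

```python
import numpy as np

class VisionMaskDemo:
    """Sketch: the update mask grows only when physics is stepped,
    so its length always equals the number of simulation steps."""

    def __init__(self):
        self._vision_update_mask = []

    def step_physics(self):
        # stand-in for advancing the simulator; whether vision was
        # refreshed would normally depend on the rendering fps
        vision_updated = True
        self._vision_update_mask.append(vision_updated)

    def get_observation(self):
        # reading the observation no longer mutates the mask
        return {"dummy": 0}

    @property
    def vision_update_mask(self):
        return np.array(self._vision_update_mask, dtype=bool)
```

Querying the observation repeatedly between steps then leaves the mask length unchanged.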
Looks like the fly model is not compatible with MuJoCo 3.0.0 and above because of a breaking change:
- Removed mjOption.collision and the associated option/collision attribute.
See the changelog here.
(this is a pro forma issue documenting a bug that's been fixed)
If the sliding friction (the first number) is too high, objects can pass through each other even if collision is enabled.
Define a new class Pose. NeuroMechFlyMuJoCo.__init__ would receive a Pose object instead of a string like "default".
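A possible shape for this class, sketched below; the field layout and the example joint names are assumptions, not the eventual FlyGym API:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Pose:
    """Hypothetical container for an initial pose: joint name -> angle
    in radians. Passing a Pose instead of a string like "default" lets
    users define arbitrary initial configurations."""

    joint_angles: Dict[str, float] = field(default_factory=dict)

    def __getitem__(self, joint: str) -> float:
        # joints not listed in the pose default to 0.0 rad
        return self.joint_angles.get(joint, 0.0)

# usage: a custom pose for two (hypothetical) coxa joints
stretched = Pose({"joint_LFCoxa": 0.4, "joint_RFCoxa": 0.4})
```

Named presets like "default" could then become module-level Pose constants, keeping backward compatibility cheap.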
... with a sample 3D terrain type?
NeuroMechFlyMuJoCo.observation_space