
robosuite

Gallery of environments

[Homepage] [White Paper] [Documentation] [ARISE Initiative]


Latest Updates

[02/17/2021] v1.2.0: Added observable sensor models 👀 and dynamics randomization 🎲

[12/17/2020] v1.1.0: Refactored infrastructure and standardized model classes for much easier environment prototyping 🔧


robosuite is a simulation framework powered by the MuJoCo physics engine for robot learning. It also offers a suite of benchmark environments for reproducible research. The current release (v1.2) features manipulation tasks with support for procedural generation, advanced controllers, teleoperation, and more. This project is part of the broader Advancing Robot Intelligence through Simulated Environments (ARISE) Initiative, with the aim of lowering the barriers of entry for cutting-edge research at the intersection of AI and Robotics.

Data-driven algorithms, such as reinforcement learning and imitation learning, provide a powerful and generic tool in robotics. These learning paradigms, fueled by new advances in deep learning, have achieved some exciting successes in a variety of robot control problems. However, the challenges of reproducibility and the limited accessibility of robot hardware (especially during a pandemic) have impaired research progress. The overarching goal of robosuite is to provide researchers with:

  • a standardized set of benchmarking tasks for rigorous evaluation and algorithm development;
  • a modular design that offers great flexibility to design new robot simulation environments;
  • a high-quality implementation of robot controllers and off-the-shelf learning algorithms to lower the barriers to entry.

This framework has been developed since late 2017 by researchers in the Stanford Vision and Learning Lab (SVL) as an internal tool for robot learning research. It is now actively maintained and used for robotics research projects in SVL and the UT-Austin Robot Perception and Learning Lab (RPL). We welcome community contributions to this project. For details please check out our contributing guidelines.

This release of robosuite contains seven robot models, eight gripper models, six controller modes, and nine standardized tasks. It also offers a modular design of APIs for building new environments with procedural generation. We highlight these primary features below:

  • standardized tasks: a set of standardized manipulation tasks of large diversity and varying complexity and RL benchmarking results for reproducible research;
  • procedural generation: modular APIs for programmatically creating new environments and new tasks as combinations of robot models, arenas, and parameterized 3D objects;
  • controller support: a selection of controller types to command the robots, such as joint-space velocity control, inverse kinematics control, operational space control, and 3D motion devices for teleoperation;
  • multi-modal sensors: heterogeneous types of sensory signals, including low-level physical states, RGB cameras, depth maps, and proprioception;
  • human demonstrations: utilities for collecting human demonstrations, replaying demonstration datasets, and leveraging demonstration data for learning.

Citations

Please cite robosuite if you use this framework in your publications:

@inproceedings{robosuite2020,
  title={robosuite: A Modular Simulation Framework and Benchmark for Robot Learning},
  author={Yuke Zhu and Josiah Wong and Ajay Mandlekar and Roberto Mart\'{i}n-Mart\'{i}n},
  booktitle={arXiv preprint arXiv:2009.12293},
  year={2020}
}


robosuite's Issues

issue_add_3rd_bin

Currently we only have two bins (one picking bin and one placing bin).

We need to add a third one to the left of the picking bin to implement ITER.


How are the bins loaded?

  1. In picking.py we need to import BinsArena from robosuite.models.arenas.
  2. In the arena XML (bins_arena.xml) we have table_full_size (3-tuple: x, y, z of the table), table_friction (3-tuple: the three MuJoCo friction parameters), and bin1_pos (3-tuple: absolute Cartesian coordinates).
  3. In Picking.__init__ we can initialize bin1_pos = (0.1, -0.25, 0.8) and set self.bin1_pos = np.array(bin1_pos).
  4. In Picking.get_placement_initializer we use reference_pos = self.bin1_pos for the pick objects.
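The geometry for a third bin can be sketched from the positions above. Only bin1_pos comes from the issue; the bin footprint, the gap, and the helper name third_bin_pos are assumptions for illustration, not the actual arena API:

```python
import numpy as np

# bin1_pos is taken from the issue; bin_full_size and the gap are assumed.
bin1_pos = np.array([0.1, -0.25, 0.8])        # picking bin (absolute coords)
bin_full_size = np.array([0.39, 0.49, 0.82])  # assumed bin footprint (x, y, z)

def third_bin_pos(pick_bin_pos, bin_size, gap=0.05):
    """Place a hypothetical third bin to the left (negative y) of the picking bin."""
    offset = np.array([0.0, -(bin_size[1] + gap), 0.0])
    return pick_bin_pos + offset
```

The resulting position could then be passed to the arena XML (or a new bin3_pos attribute) in the same way bin1_pos is wired through in steps 2-4.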

increasing memory consumption when using object_randomization

In picking.py,
When the self.object_randomization flag is set to True, _reset_internal() also turns the self.hard_reset flag on.

The flag is used in environments/base.py:reset().
When self.hard_reset is on, the following code is called, starting at L311:

        if self.hard_reset and not self.deterministic_reset: # TODO: investigate increasing memory consumption when calling this every rollout
            self._destroy_viewer()
            self._load_model()              # create the manipulation task object (arena/robot/objects/placements/goal objects)
            self._postprocess_model()
            self._initialize_sim()

When this is called at the end of every rollout, our memory usage grows significantly over the lifetime of the experiment, always ending in an out-of-RAM segfault.

If we only randomize every 200 epochs, memory usage stabilizes, but we have not found the underlying cause.
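The 200-epoch workaround can be expressed as a small scheduler that gates the expensive hard-reset path. This is a minimal sketch of the mitigation described above, not code from the repository; the class name and period default are hypothetical:

```python
class ResetScheduler:
    """Trigger an expensive hard reset only every `period` resets.

    Sketch of the workaround from the issue: the first reset is hard,
    then the next (period - 1) resets stay soft, bounding how often
    _load_model() / _initialize_sim() rebuild the simulation.
    """

    def __init__(self, period=200):
        self.period = period
        self.count = 0

    def should_hard_reset(self):
        hard = (self.count % self.period == 0)
        self.count += 1
        return hard
```

A caller would consult should_hard_reset() inside reset() before setting self.hard_reset, instead of hard-resetting on every rollout.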

issue_buried_objects

These objects end up buried in the table:

  1. o0027 plate: the bottom part of the plate is buried.

  2. o0035 padlock: the circular part of the lock (the lock head) is buried.

  3. o0054 dice: the die is almost completely buried.

We need to look deeper at z_offset in the _get_placement_initializer function in picking.py.
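One plausible fix is a per-object z_offset applied when computing the drop height, so thin or irregular meshes spawn above the table surface instead of inside it. The offsets, the Z_OFFSETS table, and the placement_z helper below are all assumptions for illustration; only the object ids come from the issue:

```python
# Hypothetical per-object vertical offsets (meters); ids mirror the issue.
Z_OFFSETS = {
    "o0027_plate": 0.02,
    "o0035_padlock": 0.03,
    "o0054_dice": 0.01,
}

def placement_z(table_top_z, obj_name, default_offset=0.005):
    """Drop height for an object: table top plus a per-object safety offset."""
    return table_top_z + Z_OFFSETS.get(obj_name, default_offset)
```

The real fix would feed such an offset into the sampler's z_offset argument inside _get_placement_initializer, tuned per object from its mesh bounding box.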

falling_objects_issue

Checking for Objects that fall

We need logic that:

  1. detects whether an object has fallen (i.e. whether its height is below the bin height);
  2. if so, removes the object from the lists where we track objects, such as sorted_object_list.
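The two steps above can be sketched as a pruning function. This is an illustrative sketch, not the environment's actual API; remove_fallen and its arguments are hypothetical names:

```python
def remove_fallen(object_heights, tracked_objects, bin_height):
    """Prune objects whose height has dropped below the bin height.

    object_heights: dict mapping object name -> current z coordinate.
    tracked_objects: list of names we still track (e.g. sorted_object_list).
    Returns (remaining tracked objects, sorted names of fallen objects).
    """
    fallen = {name for name, z in object_heights.items() if z < bin_height}
    kept = [name for name in tracked_objects if name not in fallen]
    return kept, sorted(fallen)
```

The same pruning would need to run against every bookkeeping list (sorted_object_list and friends) so no stale references survive a fall.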


Objects falling to floor: 2 cases

In the branch picking_environment_new_reset we introduced a new reset formulation: we do soft resets after one object is picked up and hard resets when all objects are picked or ill conditions appear.

Objects may fall in two situations:

  1. #model == #load
    In cases where the number of objects loaded equals the number of objects modeled, objects fall only from bin2, when a new object is placed onto bin2. This does not happen because the new object lands on top of the previous one; it must instead be related to the way object positions are maintained across resets.

  2. #model < #load
    In cases where the number of modeled objects is less than the number of loaded objects, objects tend to fall while doing a regular pick in bin1.

There must be some problem in the way an object's position is updated when it is in the self.not_yet_considered_objects list. Need to dig in here.

picking_issue_2

Integrate Baxter as a single-arm robot

  1. Modify the Baxter properties in robosuite/models/robots/baxter_robot.py to fit single-arm robot properties.
  2. Find the other locations that need changes to enable Baxter deployment in SingleArmEnv.

Issue 2

EEF limits or fallen object: EVAL done triggered every rollout if the EEF breaches its limits

Currently, when EEF limits and EVAL are both active and those limits are violated, every rollout in the EVAL is terminated early. This continues until the eval's number of steps/rollouts is completed.

The same thing happens if an object falls.

  • Should we accept this behavior? It should be common early on but improve as the robot learns.
  • Should we be able to turn these limits off with a flag (todo)?
  • The code does not crash, but you get a null evaluation result.
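The proposed flag could gate the early-termination check along these lines. This is a hypothetical sketch of the todo above; rollout_done and its parameters are illustrative names, not the environment's actual done logic:

```python
def rollout_done(eef_breach, object_fallen, task_done, enforce_limits=True):
    """Early-termination check with an opt-out flag.

    With enforce_limits=False, EEF-limit breaches and fallen objects no
    longer end the rollout early, so evaluations return full-length
    trajectories instead of a null result.
    """
    if task_done:
        return True
    if enforce_limits and (eef_breach or object_fallen):
        return True
    return False
```

Evaluation runs could then pass enforce_limits=False while training keeps the safety termination on.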

move_table_issue

Moving the Bins

  1. We can move the bins closer to the robot.
  2. If we move the bins, will any calculations fail?
  3. We could use a new branch for this check. This is important because if an object falls to the floor we will never be able to pick it up.

Empty goal returned after a fallen object is detected


Pull commit 3080530..9a6b193 (minor clearing of comments)
Bug:

  1. Objects are loaded and modeled with self.object_randomization=False (the same objects are used after reset).
  2. An object falls.
  3. During the next Picking._reset_internal(), we cannot generate a new goal because the object has been removed from self.object_names (or possibly not_yet_considered_object_names).

Brainstorming:

  • This bug probably does not depend on whether we are doing object_randomization: if we have only one object and remove it, we return an empty goal.
