assistive-gym's Issues

Clarification about robot_obs in custom Gym environments

Hello,

Thanks a lot for sharing this amazing code!

I am looking at the instructions for creating a custom environment here, and I noticed that the robot observations are generated as relative position vectors (for example gripper_pos - torso_pos).

robot_obs = np.concatenate([gripper_pos-torso_pos, gripper_pos-self.target_pos, robot_right_joint_positions, gripper_orient, head_orient, forces]).ravel()

Why are the robot observations created this way and not as a list of joint angle positions and velocities, which seems more intuitive to me? Is it more effective for training perhaps?

Also, why is self.target_pos used in the robot observation? The robot state does not depend on the target position. Shouldn't self.target_pos be used only in the reward computation?

This seems to be the case in most Assistive Gym environments, so there must be a good reason, but I can't find it.
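
For reference, here is a minimal runnable sketch contrasting the two observation styles being discussed (the dummy values simply stand in for state that the environment reads from PyBullet; variable names follow the snippet above):

import numpy as np

# Dummy values standing in for state read from PyBullet.
gripper_pos, torso_pos, target_pos = np.zeros(3), np.zeros(3), np.ones(3)
robot_right_joint_positions, joint_velocities = np.zeros(7), np.zeros(7)
gripper_orient, head_orient, forces = np.zeros(4), np.zeros(4), np.zeros(1)

# Relative (task-centric) observation, as in the tutorial snippet quoted above.
robot_obs_relative = np.concatenate([gripper_pos - torso_pos, gripper_pos - target_pos,
                                     robot_right_joint_positions, gripper_orient,
                                     head_orient, forces]).ravel()

# Joint-space alternative the question proposes.
robot_obs_joint_space = np.concatenate([robot_right_joint_positions, joint_velocities]).ravel()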

Thanks

TypeError: 'linkLowerLimits' is an invalid keyword argument for this function

I installed the environment according to the Install Guide:

pip3 install git+https://github.com/Zackory/bullet3.git
git clone https://github.com/Healthcare-Robotics/assistive-gym.git
cd assistive-gym
pip3 install .

Then I ran env_viewer.py, but I got the error below.
The assistive_gym version is 0.100 and the pybullet version is 2.6.0.

Traceback (most recent call last):
  File "E:/MySrc/assistive-gym/env_viewer.py", line 19, in <module>
    observation = env.reset()
  File "D:\Anaconda3\envs\assistivegym\lib\site-packages\gym\wrappers\time_limit.py", line 25, in reset
    return self.env.reset(**kwargs)
  File "E:\MySrc\assistive-gym\assistive_gym\envs\scratch_itch.py", line 94, in reset
    self.human, self.wheelchair, self.robot, self.robot_lower_limits, self.robot_upper_limits, self.human_lower_limits, self.human_upper_limits, self.robot_right_arm_joint_indices, self.robot_left_arm_joint_indices, self.gender = self.world_creation.create_new_world(furniture_type='wheelchair', static_human_base=True, human_impairment='random', print_joints=False, gender='random')
  File "E:\MySrc\assistive-gym\assistive_gym\envs\world_creation.py", line 66, in create_new_world
    human, human_lower_limits, human_upper_limits = self.init_human(static_human_base, self.human_limit_scale, print_joints, gender=gender)
  File "E:\MySrc\assistive-gym\assistive_gym\envs\world_creation.py", line 89, in init_human
    human = self.human_creation.create_human(static=static_human_base, limit_scale=limit_scale, specular_color=[0.1, 0.1, 0.1], gender=gender, config=self.config)
  File "E:\MySrc\assistive-gym\assistive_gym\envs\human_creation.py", line 265, in create_human
    human = p.createMultiBody(baseMass=0 if static else m*0.1, baseCollisionShapeIndex=chest_c, baseVisualShapeIndex=chest_v, basePosition=chest_p, baseOrientation=[0, 0, 0, 1], linkMasses=linkMasses, linkCollisionShapeIndices=linkCollisionShapeIndices, linkVisualShapeIndices=linkVisualShapeIndices, linkPositions=linkPositions, linkOrientations=linkOrientations, linkInertialFramePositions=linkInertialFramePositions, linkInertialFrameOrientations=linkInertialFrameOrientations, linkParentIndices=linkParentIndices, linkJointTypes=linkJointTypes, linkJointAxis=linkJointAxis, linkLowerLimits=linkLowerLimits, linkUpperLimits=linkUpperLimits, useMaximalCoordinates=False, flags=p.URDF_USE_SELF_COLLISION, physicsClientId=self.id)
TypeError: 'linkLowerLimits' is an invalid keyword argument for this function
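
A rough way to check which PyBullet build is actually being imported (this probe assumes the custom fork accepts the extra createMultiBody keywords used in human_creation.py, while stock PyBullet raises the TypeError above):

import pybullet as p

cid = p.connect(p.DIRECT)
try:
    p.createMultiBody(baseMass=0, linkMasses=[1], linkCollisionShapeIndices=[-1],
                      linkVisualShapeIndices=[-1], linkPositions=[[0, 0, 0]],
                      linkOrientations=[[0, 0, 0, 1]], linkInertialFramePositions=[[0, 0, 0]],
                      linkInertialFrameOrientations=[[0, 0, 0, 1]], linkParentIndices=[0],
                      linkJointTypes=[p.JOINT_REVOLUTE], linkJointAxis=[[0, 0, 1]],
                      linkLowerLimits=[-1], linkUpperLimits=[1], physicsClientId=cid)
    print('linkLowerLimits accepted: the custom fork is active')
except TypeError:
    print('linkLowerLimits rejected: stock PyBullet is loaded from', getattr(p, '__file__', '?'))

If the probe reports stock PyBullet, re-running pip3 install git+https://github.com/Zackory/bullet3.git after installing assistive-gym may help; a later install pulling a stock pybullet from PyPI and replacing the fork is an assumption about the cause, not a confirmed diagnosis.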

Suggestion about the install_requires

Since the tensorflow package updates very quickly and is not always backward compatible, it would be better to pin the TensorFlow version in setup.py.
In my case, the latest tf==2.3.0 causes errors.
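
For illustration, a minimal setup.py sketch with a pinned TensorFlow; the dependency list and the exact pin below are assumptions for the sketch, not copied from the repository, and should match whatever versions the project was actually tested against:

from setuptools import setup, find_packages

setup(
    name='assistive-gym',
    version='0.100',
    packages=find_packages(),
    install_requires=[
        'gym',
        'pybullet',
        'numpy',
        'keras',
        'tensorflow<2.0',  # assumed pin: avoids pulling in tf 2.3.0, which broke things here
    ],
)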

Thanks!

Transfer learned policy to actual robot

Hi,

I have been playing with assistive-gym for a while, and I just got my hands on an actual Sawyer robot, so I was wondering whether there is a way to transfer a model trained in simulation to the physical robot.

Let me know if you have any ideas on how to do so.

Run trained policies for active human environments on static human environments

Hi,

Would it be possible to run policies trained on active human environments in static human environments?

In other words, imagine if I trained a policy for the environment "FeedingJacoHuman-v1" and now I want to render this policy for the environment "FeedingJaco-v1".

How can I achieve this?

I tried changing the folder name for the trained policy from FeedingJacoHuman-v1 to FeedingJaco-v1 and running the following command:

python3 -m assistive_gym.learn --env "FeedingJaco-v1" --algo ppo --render --seed 0 --load-policy-path ./trained_models/ --render-episodes 10

However, I get the following error:

Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/gabrigoo/assistive-gym/assistive_gym/learn.py", line 226, in <module>
    render_policy(None, args.env, args.algo, checkpoint_path if checkpoint_path is not None else args.load_policy_path, coop=coop, colab=args.colab, seed=args.seed, n_episodes=args.render_episodes)
  File "/home/gabrigoo/assistive-gym/assistive_gym/learn.py", line 104, in render_policy
    test_agent, _ = load_policy(env, algo, env_name, policy_path, coop, seed, extra_configs)
  File "/home/gabrigoo/assistive-gym/assistive_gym/learn.py", line 58, in load_policy
    agent.restore(checkpoint_path)
  File "/home/gabrigoo/env/lib/python3.8/site-packages/ray/tune/trainable.py", line 388, in restore
    self.load_checkpoint(checkpoint_path)
  File "/home/gabrigoo/env/lib/python3.8/site-packages/ray/rllib/agents/trainer.py", line 818, in load_checkpoint
    self.__setstate__(extra_data)
  File "/home/gabrigoo/env/lib/python3.8/site-packages/ray/rllib/agents/trainer_template.py", line 289, in __setstate__
    Trainer.__setstate__(self, state)
  File "/home/gabrigoo/env/lib/python3.8/site-packages/ray/rllib/agents/trainer.py", line 1698, in __setstate__
    self.workers.local_worker().restore(state["worker"])
  File "/home/gabrigoo/env/lib/python3.8/site-packages/ray/rllib/evaluation/rollout_worker.py", line 1267, in restore
    self.sync_filters(objs["filters"])
  File "/home/gabrigoo/env/lib/python3.8/site-packages/ray/rllib/evaluation/rollout_worker.py", line 1229, in sync_filters
    assert all(k in new_filters for k in self.filters)
AssertionError

Thanks a lot.

About p.setGravity

 File "/home/dell/RL/assistive-gym/assistive_gym/envs/agents/agent.py", line 228, in set_gravity
    p.setGravity(ax, ay, az, body=self.body, physicsClientId=self.id)
TypeError: function takes at most 4 arguments (5 given)

Everything was normal before. After I modified the pybullet version, the above error occurred.

The PyBullet Quickstart Guide shows that setGravity takes at most 4 arguments, but I do need to set different gravity for different agents. How can I solve this problem?
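
A minimal compatibility sketch, assuming the per-body keyword only exists in the custom Assistive Gym fork of PyBullet; note that the fallback loses the per-agent gravity behavior, so reinstalling the fork is the real fix if that feature is needed:

import pybullet as p

def set_gravity_compat(ax, ay, az, body=None, physics_client_id=0):
    if body is None:
        p.setGravity(ax, ay, az, physicsClientId=physics_client_id)
        return
    try:
        # Per-body gravity: only the custom fork accepts the `body` keyword.
        p.setGravity(ax, ay, az, body=body, physicsClientId=physics_client_id)
    except TypeError:
        # Stock PyBullet (at most 4 arguments): fall back to global gravity.
        p.setGravity(ax, ay, az, physicsClientId=physics_client_id)

cid = p.connect(p.DIRECT)
set_gravity_compat(0, 0, -9.81, physics_client_id=cid)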

Hi, I can't run env_viewer.py

When I run env_viewer.py in /assistive_gym, the error log is as follows:

/home/zing/anaconda3/envs/python3.6/bin/python /home/zing/anaconda3/envs/python3.6/assistive-gym/assistive_gym/env_viewer.py
/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
Traceback (most recent call last):
  File "/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/gym/envs/registration.py", line 158, in spec
    return self.env_specs[id]
KeyError: 'ScratchItchJaco-v1'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/zing/anaconda3/envs/python3.6/assistive-gym/assistive_gym/env_viewer.py", line 40, in <module>
    viewer(args.env)
  File "/home/zing/anaconda3/envs/python3.6/assistive-gym/assistive_gym/env_viewer.py", line 17, in viewer
    env = make_env(env_name, coop=True) if coop else gym.make(env_name)
  File "/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/gym/envs/registration.py", line 235, in make
    return registry.make(id, **kwargs)
  File "/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/gym/envs/registration.py", line 128, in make
    spec = self.spec(path)
  File "/home/zing/anaconda3/envs/python3.6/lib/python3.6/site-packages/gym/envs/registration.py", line 203, in spec
    raise error.UnregisteredEnv("No registered env with id: {}".format(id))
gym.error.UnregisteredEnv: No registered env with id: ScratchItchJaco-v1

Process finished with exit code 1

I don't know what the cause is. Can someone give some ideas? Thank you!
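
One quick diagnostic sketch (assuming, as env_viewer.py does, that the environment ids are registered when the assistive_gym package is imported): run this in the same interpreter and check whether the id shows up.

import gym
import assistive_gym  # importing the package is what registers the Assistive Gym env ids

print('ScratchItchJaco-v1' in gym.envs.registry.env_specs)  # expect True if registration worked
env = gym.make('ScratchItchJaco-v1')

If it prints False, one possibility (an assumption, not confirmed) is that the installed assistive_gym is an older 0.x release, which registered the -v0 ids such as ScratchItchJaco-v0 rather than the -v1 ids.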

screeninfo error

File "/home/gberseth/playground/assistive-gym/assistive_gym/envs/env.py", line 9, in
from screeninfo import get_monitors
File "/home/gberseth/playground/env/lib/python3.5/site-packages/screeninfo/init.py", line 1, in
from .common import Enumerator, Monitor
File "/home/gberseth/playground/env/lib/python3.5/site-packages/screeninfo/common.py", line 10
x: int
^
SyntaxError: invalid syntax

I found that installing screeninfo==0.2 (pip3 install screeninfo==0.2) fixes the issue.

Error with building custom bullet3

Hi,

I tried installing the custom bullet3 with pip3 on macOS, and ran into this error: error: command 'gcc' failed with exit status 1.

So I tried cloning the custom bullet3 repo and building from there using ./build_cmake_pybullet_double.sh. I ran into the following error:

/Users/jerry/Dropbox/Projects/AssistRobotics/bullet3/test/SharedMemory/./test.c:58:10: error: no matching function for call to 'b3PhysicsParamSetGravity'
                        ret = b3PhysicsParamSetGravity(command, gravx, gravy, gravz);
                              ^~~~~~~~~~~~~~~~~~~~~~~~
/Users/jerry/Dropbox/Projects/AssistRobotics/bullet3/test/SharedMemory/../../examples/SharedMemory/PhysicsClientC_API.h:326:20: note: candidate function not viable:
      requires 5 arguments, but 4 were provided
        B3_SHARED_API int b3PhysicsParamSetGravity(b3SharedMemoryCommandHandle commandHandle, double gravx, double gravy, double gravz, int body);
                          ^

It looks like the added int body parameter is conflicting with the example test code. Have you run into this issue before? Thanks for your insight.

Strange issue with training cooperative scratch environment

When training a co-optimization policy in the scratch environment (python -m ppo.train_coop --env-name "ScratchItchJaco-v0" --num-env-steps ...), I ran into the error attached below. The strange thing is that it doesn't show up when training non-cooperative policies in the scratch environment, or when training cooperative policies in other tasks. It seems this could be an issue with the coop training script.

Any idea on why this happens?

Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/jerry/Projects/Assist/pytorch-a2c-ppo-acktr/ppo/train_coop.py", line 309, in <module>
    main()
  File "/home/jerry/Projects/Assist/pytorch-a2c-ppo-acktr/ppo/train_coop.py", line 109, in main
    actor_critic_human = Policy([obs_human_len], action_space_human,
  File "/home/jerry/Projects/Assist/pytorch-a2c-ppo-acktr/ppo/a2c_ppo_acktr/model.py", line 28, in __init__
    self.base = base(obs_shape[0], **base_kwargs)
  File "/home/jerry/Projects/Assist/pytorch-a2c-ppo-acktr/ppo/a2c_ppo_acktr/model.py", line 224, in __init__
    init_(nn.Linear(num_inputs, hidden_size)),
  File "/home/jerry/Projects/Assist/env/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 77, in __init__
    self.reset_parameters()
  File "/home/jerry/Projects/Assist/env/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 80, in reset_parameters
    init.kaiming_uniform_(self.weight, a=math.sqrt(5))
  File "/home/jerry/Projects/Assist/env/lib/python3.8/site-packages/torch/nn/init.py", line 324, in kaiming_uniform_
    std = gain / math.sqrt(fan)
ZeroDivisionError: float division by zero
Exception ignored in: <function SubprocVecEnv.__del__ at 0x7fde831be820>
Traceback (most recent call last):
  File "/home/jerry/Projects/Assist/env/lib/python3.8/site-packages/baselines/common/vec_env/subproc_vec_env.py", line 121, in __del__
    self.close()
  File "/home/jerry/Projects/Assist/env/lib/python3.8/site-packages/baselines/common/vec_env/vec_env.py", line 98, in close
    self.close_extras()
  File "/home/jerry/Projects/Assist/env/lib/python3.8/site-packages/baselines/common/vec_env/subproc_vec_env.py", line 104, in close_extras
    remote.send(('close', None))
  File "/usr/lib/python3.8/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/usr/lib/python3.8/multiprocessing/connection.py", line 411, in _send_bytes
    self._send(header + buf)
  File "/usr/lib/python3.8/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
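
The last frames narrow this down to the weight initialization of the first Linear layer. A minimal sketch of the arithmetic that fails; the guess that obs_human_len ends up as 0 for this particular configuration is an assumption, not confirmed:

import math

# torch/nn/init.py computes std = gain / math.sqrt(fan_in) in kaiming_uniform_.
# Policy([obs_human_len], ...) builds nn.Linear(obs_human_len, hidden_size), so if
# obs_human_len is 0 the fan-in is 0 and the division below raises exactly the error above.
gain = math.sqrt(2.0)
fan_in = 0
std = gain / math.sqrt(fan_in)   # ZeroDivisionError: float division by zero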

'Observation ({}) outside given space ({})!' error when trying to train model

Hi,

I don't have any trouble getting the env viewer to work, but when I try to actually train the models, I run into issues. When I run the command:

"python3 -m assistive_gym.learn --env "FeedingSawyerHuman-v1" --algo ppo --train --train-timesteps 100000 --save-dir ./trained_models/"

I get an error to do with ray[rllib] that looks like this:

ray.exceptions.RayTaskError(ValueError): ray::RolloutWorker.par_iter_next() (pid=71524, ip=10.40.193.201)
File "python/ray/_raylet.pyx", line 505, in ray._raylet.execute_task
File "python/ray/_raylet.pyx", line 449, in ray._raylet.execute_task.function_executor
File "/home/zoe/miniconda3/envs/assistivegymtwo/lib/python3.7/site-packages/ray/_private/function_manager.py", line 556, in actor_method_executor
return method(__ray_actor, *args, **kwargs)
File "/home/zoe/miniconda3/envs/assistivegymtwo/lib/python3.7/site-packages/ray/util/iter.py", line 1152, in par_iter_next
return next(self.local_it)
File "/home/zoe/miniconda3/envs/assistivegymtwo/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 332, in gen_rollouts
yield self.sample()
File "/home/zoe/miniconda3/envs/assistivegymtwo/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 706, in sample
batches = [self.input_reader.next()]
File "/home/zoe/miniconda3/envs/assistivegymtwo/lib/python3.7/site-packages/ray/rllib/evaluation/sampler.py", line 96, in next
batches = [self.get_data()]
File "/home/zoe/miniconda3/envs/assistivegymtwo/lib/python3.7/site-packages/ray/rllib/evaluation/sampler.py", line 223, in get_data
item = next(self.rollout_provider)
File "/home/zoe/miniconda3/envs/assistivegymtwo/lib/python3.7/site-packages/ray/rllib/evaluation/sampler.py", line 613, in _env_runner
sample_collector=sample_collector,
File "/home/zoe/miniconda3/envs/assistivegymtwo/lib/python3.7/site-packages/ray/rllib/evaluation/sampler.py", line 808, in _process_observations
policy_id).transform(raw_obs)
File "/home/zoe/miniconda3/envs/assistivegymtwo/lib/python3.7/site-packages/ray/rllib/models/preprocessors.py", line 187, in transform
self.check_shape(observation)
File "/home/zoe/miniconda3/envs/assistivegymtwo/lib/python3.7/site-packages/ray/rllib/models/preprocessors.py", line 68, in check_shape
observation, self._obs_space)
ValueError: ('Observation ({}) outside given space ({})!', array([ 0.74139816, -0.5481506 , 0.13728762, 0.67066866, -0.0354565 ,
-0.04076764, 0.73978668, -0.25926405, -0.50434195, -0.06268156,
-0.94625477, -0.31422273, -0.55584883, 0.89838678, -0.86592606,
-1.34115179, -1.86178359, 0.97019053, 0.03369198, 0.13699996,
-0.21314327, 0.21102498, 0.05298619, 0.95248669, 0. ]), Box(-1000000000.0, 1000000000.0, (25,), float32))
Exception ignored in: <function ActorHandle.__del__ at 0x7f675cd07f80>

I'm wondering if I'm using the wrong version of something like rllib or gym?

I'm using python==3.7.10, gym==0.23.1, ray[rllib]==1.3.0.

I've tried other versions of these and still get the same error. I'm a bit unsure how to fix it. Any help or guidance would be greatly appreciated!

Thanks

Issue Create New Environment

Hello,

I was following the "6. Creating a New Assistive Environment" tutorial on the wiki page.

However, when I get to the end and try to train the model with the following command, I get an error (I am working in a virtual env):

Command 1
python3 -m ppo.train --env-name "ReachingJaco-v0" --num-env-steps 1000000 --save-dir ./trained_models_new/

ERROR:
/home/gabrigoo/Assistive Gym Stuff/env/bin/python3: No module named ppo.train

I also tried the following command and I get this error:

Command 2:
python3 -m assistive_gym --env "ReachingJaco-v0"

ERROR:

pybullet build time: Oct 14 2021 09:51:13
Using TensorFlow backend.
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/gabrigoo/Assistive Gym Stuff/assistive-gym/assistive_gym/__main__.py", line 10, in <module>
    viewer(args.env)
  File "/home/gabrigoo/Assistive Gym Stuff/assistive-gym/assistive_gym/env_viewer.py", line 17, in viewer
    env = make_env(env_name, coop=True) if coop else gym.make(env_name)
  File "/home/gabrigoo/Assistive Gym Stuff/env/lib/python3.8/site-packages/gym/envs/registration.py", line 235, in make
    return registry.make(id, **kwargs)
  File "/home/gabrigoo/Assistive Gym Stuff/env/lib/python3.8/site-packages/gym/envs/registration.py", line 129, in make
    env = spec.make(**kwargs)
  File "/home/gabrigoo/Assistive Gym Stuff/env/lib/python3.8/site-packages/gym/envs/registration.py", line 90, in make
    env = cls(**_kwargs)
  File "/home/gabrigoo/Assistive Gym Stuff/assistive-gym/assistive_gym/envs/reaching_robots.py", line 8, in __init__
    super(ReachingJacoEnv, self).__init__(robot_type='jaco', human_control=False)
  File "/home/gabrigoo/Assistive Gym Stuff/assistive-gym/assistive_gym/envs/reaching.py", line 9, in __init__
    super(ReachingEnv, self).__init__(robot_type=robot_type, task='reaching', human_control=human_control, frame_skip=5, time_step=0.02, action_robot_len=7, action_human_len=(4 if human_control else 0), obs_robot_len=21, obs_human_len=(19 if human_control else 0))
TypeError: __init__() got an unexpected keyword argument 'robot_type'

Any idea why this happens and what the cause is?
Thanks in advance for your help.

After some tweaking, I think the issue stems from the fact that the tutorial is for version 0.1 while I am running version 1.0. Do you think the wiki could be updated for the newer version?
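
For what it's worth, in v1.0 the PPO training entry point used elsewhere in these issues is assistive_gym.learn rather than ppo.train, so (assuming the new environment is registered under a -v1 id) the equivalent training command would look something like:

python3 -m assistive_gym.learn --env "ReachingJaco-v1" --algo ppo --train --train-timesteps 1000000 --save-dir ./trained_models_new/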

Model (keras) error

Brilliant work!!
I am running env_viewer.py but get an error when loading the model. I think it is due to the Keras version.
Which versions of Keras and TensorFlow did you use?
FYI, here is the error log:

E:\anaconda\python.exe F:/assistive-gym-test/examples/random_actions.py
pybullet build time: Feb 18 2020 16:57:18
Using TensorFlow backend.
2020-02-18 17:19:39.678134: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
WARNING:tensorflow:From E:\anaconda\lib\site-packages\tensorflow_core\python\compat\v2_compat.py:65: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
Traceback (most recent call last):
  File "F:/assistive-gym-test/examples/random_actions.py", line 3, in <module>
    env = gym.make('FeedingPR2-v0')
  File "E:\anaconda\lib\site-packages\gym\envs\registration.py", line 156, in make
    return registry.make(id, **kwargs)
  File "E:\anaconda\lib\site-packages\gym\envs\registration.py", line 101, in make
    env = spec.make(**kwargs)
  File "E:\anaconda\lib\site-packages\gym\envs\registration.py", line 73, in make
    env = cls(**_kwargs)
  File "F:\assistive-gym-test\assistive_gym\envs\feeding_robots.py", line 5, in __init__
    super(FeedingPR2Env, self).__init__(robot_type='pr2', human_control=False)
  File "F:\assistive-gym-test\assistive_gym\envs\feeding.py", line 10, in __init__
    super(FeedingEnv, self).__init__(robot_type=robot_type, task='feeding', human_control=human_control, frame_skip=10, time_step=0.01, action_robot_len=7, action_human_len=(4 if human_control else 0), obs_robot_len=25, obs_human_len=(23 if human_control else 0))
  File "F:\assistive-gym-test\assistive_gym\envs\env.py", line 63, in __init__
    self.human_limits_model = load_model(os.path.join(self.world_creation.directory, 'realistic_arm_limits_model.h5'))
  File "E:\anaconda\lib\site-packages\keras\models.py", line 239, in load_model
    model = model_from_config(model_config, custom_objects=custom_objects)
  File "E:\anaconda\lib\site-packages\keras\models.py", line 313, in model_from_config
    return layer_module.deserialize(config, custom_objects=custom_objects)
  File "E:\anaconda\lib\site-packages\keras\layers\__init__.py", line 54, in deserialize
    printable_module_name='layer')
  File "E:\anaconda\lib\site-packages\keras\utils\generic_utils.py", line 139, in deserialize_keras_object
    list(custom_objects.items())))
  File "E:\anaconda\lib\site-packages\keras\models.py", line 1208, in from_config
    if 'class_name' not in config[0] or config[0]['class_name'] == 'Merge':
KeyError: 0

Reproducing Cooperative ItchScratch results

Hi,

I'm working on reproducing the cooperative ItchScratch results from the paper. I tried ItchScratchJacoHuman-v0 with the original hyperparameters and trained for 10M steps on my local 12-core machine. The training process took ~15 hours, yet the trained model isn't quite as good as the pretrained model/model in the paper (reward mean 443.2):

Reward Mean: -62.023032418203236 (from 100 rollouts)
Reward Std: 37.62848439453216
Task Success Mean: 0.0
Task Success Std: 0.0

I'm wondering if there are any hyperparameter settings or key steps that I missed. Thanks for your insight!

No friction on cloth

Hi, thanks for developing and sharing this interesting benchmark! I'm trying to further develop a cloth manipulation task based on your customized bullet environment. My problem is that the cloth is too slippery and seems to have no friction, so my PR2 robot will never be able to grasp it. I would really appreciate it if you could give me some suggestions. Thanks!

ValueError when using SAC with co-optimization

Thank you for sharing this wonderful repository. When I run experiments with co-optimization, PPO works fine, but when I try SAC there is a strange error.

ValueError: Have multiple policies {'human': <ray.rllib.policy.tf_policy_template.SACTFPolicy object at 0x7f8ec4436470>, 'robot': <ray.rllib.policy.tf_policy_template.SACTFPolicy object at 0x7f8ebc685ef0>}, but the env <NormalizeActionWrapper<FeedingSawyerHumanEnv instance>> is not a subclass of BaseEnv, MultiAgentEnv or ExternalMultiAgentEnv?

This seems to be related to this RLlib issue.

Unable to build Dockerfile

Hi, I have been trying to build the Dockerfile, but without success.
When running docker build -t test1 ., the terminal outputs this:

#0 177.1 Downloading decorator-4.4.2-py2.py3-none-any.whl (9.2 kB)
#0 177.4 WARNING: The candidate selected for download or install is a yanked version: 'protobuf' candidate (version 4.21.0 at https://files.pythonhosted.org/packages/27/82/986065ef305c0989c99d8ef3f29e58a03fac6e64bb2c36ffe64500cc6955/protobuf-4.21.0-py3-none-any.whl#sha256=4e78116673ba04e01e563f6a9cca2c72db0be8a3e1629094816357e81cc39d36 (from https://pypi.org/simple/protobuf/))
#0 177.4 Reason for being yanked: Required python version not configured correctly (protocolbuffers/protobuf#10076)
#0 177.4 Using legacy 'setup.py install' for screeninfo, since package 'wheel' is not installed.
#0 177.4 Using legacy 'setup.py install' for pybullet, since package 'wheel' is not installed.
#0 177.4 Using legacy 'setup.py install' for termcolor, since package 'wheel' is not installed.
#0 177.4 Using legacy 'setup.py install' for dm-tree, since package 'wheel' is not installed.
#0 177.4 Building wheels for collected packages: gym
#0 177.4 Building wheel for gym (pyproject.toml): started
#0 177.8 Building wheel for gym (pyproject.toml): finished with status 'done'
#0 177.8 Created wheel for gym: filename=gym-0.26.2-py3-none-any.whl size=827647 sha256=e9de058b37e5f7a970af3fb2188fe0b78380c763b4c0e1d520a73bbcc48d01ff
#0 177.8 Stored in directory: /home/ubuntu/.cache/pip/wheels/35/54/f1/608768a57e3b4c6d0c8dd7bc32f039903b0370712909ba5f99
#0 177.8 Successfully built gym
#0 178.6 Installing collected packages: zipp, typing-extensions, six, urllib3, setuptools, python-dateutil, pyrsistent, pyparsing, platformdirs, pillow, numpy, kiwisolver, importlib-resources, importlib-metadata, idna, frozenlist, filelock, distlib, decorator, cycler, charset-normalizer, certifi, attrs, wheel, werkzeug, virtualenv, tifffile, scipy, requests, pyyaml, PyWavelets, pytz, pygments, protobuf, packaging, networkx, msgpack, matplotlib, markdown, jsonschema, imageio, h5py, gymnasium-notices, grpcio, commonmark, cloudpickle, click, aiosignal, absl-py, wrapt, typer, torch, termcolor, tensorflow-estimator, tensorboardX, tensorboard, tabulate, scikit-image, rich, ray, pandas, lz4, keras-preprocessing, keras-applications, gymnasium, gym-notices, google-pasta, gast, dm-tree, astor, trimesh, tensorflow-probability, tensorflow, smplx, screeninfo, pybullet, numpngw, keras, gym, assistive-gym
#0 178.7 Attempting uninstall: setuptools
#0 178.7 Found existing installation: setuptools 39.0.1
#0 178.7 Uninstalling setuptools-39.0.1:
#0 178.8 Successfully uninstalled setuptools-39.0.1
#0 196.6 Running setup.py install for termcolor: started
#0 196.8 Running setup.py install for termcolor: finished with status 'done'
#0 202.5 Running setup.py install for dm-tree: started
#0 202.7 Running setup.py install for dm-tree: finished with status 'error'
#0 202.7 ERROR: Command errored out with exit status 1:
#0 202.7 command: /home/ubuntu/.pyenv/versions/3.6.5/bin/python3.6 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-zj4qmzy8/dm-tree_561d46f26e844af9adc7923cd590fd1f/setup.py'"'"'; file='"'"'/tmp/pip-install-zj4qmzy8/dm-tree_561d46f26e844af9adc7923cd590fd1f/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-box_b883/install-record.txt --single-version-externally-managed --compile --install-headers /home/ubuntu/.pyenv/versions/3.6.5/include/python3.6m/dm-tree
#0 202.7 cwd: /tmp/pip-install-zj4qmzy8/dm-tree_561d46f26e844af9adc7923cd590fd1f/
#0 202.7 Complete output (59 lines):
#0 202.7 running install
#0 202.7 /home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
#0 202.7 setuptools.SetuptoolsDeprecationWarning,
#0 202.7 running build
#0 202.7 running build_py
#0 202.7 creating build
#0 202.7 creating build/lib.linux-x86_64-3.6
#0 202.7 creating build/lib.linux-x86_64-3.6/tree
#0 202.7 copying tree/tree_test.py -> build/lib.linux-x86_64-3.6/tree
#0 202.7 copying tree/tree_benchmark.py -> build/lib.linux-x86_64-3.6/tree
#0 202.7 copying tree/__init__.py -> build/lib.linux-x86_64-3.6/tree
#0 202.7 copying tree/sequence.py -> build/lib.linux-x86_64-3.6/tree
#0 202.7 running build_ext
#0 202.7 Traceback (most recent call last):
#0 202.7 File "/tmp/pip-install-zj4qmzy8/dm-tree_561d46f26e844af9adc7923cd590fd1f/setup.py", line 77, in _check_build_environment
#0 202.7 subprocess.check_call(['cmake', '--version'])
#0 202.7 File "/home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/subprocess.py", line 286, in check_call
#0 202.7 retcode = call(*popenargs, **kwargs)
#0 202.7 File "/home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/subprocess.py", line 267, in call
#0 202.7 with Popen(*popenargs, **kwargs) as p:
#0 202.7 File "/home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/subprocess.py", line 709, in init
#0 202.7 restore_signals, start_new_session)
#0 202.7 File "/home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/subprocess.py", line 1344, in _execute_child
#0 202.7 raise child_exception_type(errno_num, err_msg, err_filename)
#0 202.7 FileNotFoundError: [Errno 2] No such file or directory: 'cmake': 'cmake'
#0 202.7
#0 202.7 The above exception was the direct cause of the following exception:
#0 202.7
#0 202.7 Traceback (most recent call last):
#0 202.7 File "", line 1, in
#0 202.7 File "/tmp/pip-install-zj4qmzy8/dm-tree_561d46f26e844af9adc7923cd590fd1f/setup.py", line 155, in
#0 202.7 keywords='tree nest flatten',
#0 202.7 File "/home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/site-packages/setuptools/init.py", line 153, in setup
#0 202.7 return distutils.core.setup(**attrs)
#0 202.7 File "/home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/distutils/core.py", line 148, in setup
#0 202.7 dist.run_commands()
#0 202.7 File "/home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/distutils/dist.py", line 955, in run_commands
#0 202.7 self.run_command(cmd)
#0 202.7 File "/home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/distutils/dist.py", line 974, in run_command
#0 202.7 cmd_obj.run()
#0 202.7 File "/home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/site-packages/setuptools/command/install.py", line 68, in run
#0 202.7 return orig.install.run(self)
#0 202.7 File "/home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/distutils/command/install.py", line 545, in run
#0 202.7 self.run_command('build')
#0 202.7 File "/home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/distutils/cmd.py", line 313, in run_command
#0 202.7 self.distribution.run_command(command)
#0 202.7 File "/home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/distutils/dist.py", line 974, in run_command
#0 202.7 cmd_obj.run()
#0 202.7 File "/home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/distutils/command/build.py", line 135, in run
#0 202.7 self.run_command(cmd_name)
#0 202.7 File "/home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/distutils/cmd.py", line 313, in run_command
#0 202.7 self.distribution.run_command(command)
#0 202.7 File "/home/ubuntu/.pyenv/versions/3.6.5/lib/python3.6/distutils/dist.py", line 974, in run_command
#0 202.7 cmd_obj.run()
#0 202.7 File "/tmp/pip-install-zj4qmzy8/dm-tree_561d46f26e844af9adc7923cd590fd1f/setup.py", line 70, in run
#0 202.7 self._check_build_environment()
#0 202.7 File "/tmp/pip-install-zj4qmzy8/dm-tree_561d46f26e844af9adc7923cd590fd1f/setup.py", line 82, in _check_build_environment
#0 202.7 ) from e
#0 202.7 RuntimeError: CMake must be installed to build the following extensions: _tree
#0 202.7 ----------------------------------------
#0 202.7 ERROR: Command errored out with exit status 1: /home/ubuntu/.pyenv/versions/3.6.5/bin/python3.6 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-zj4qmzy8/dm-tree_561d46f26e844af9adc7923cd590fd1f/setup.py'"'"'; file='"'"'/tmp/pip-install-zj4qmzy8/dm-tree_561d46f26e844af9adc7923cd590fd1f/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-box_b883/install-record.txt --single-version-externally-managed --compile --install-headers /home/ubuntu/.pyenv/versions/3.6.5/include/python3.6m/dm-tree Check the logs for full command output.

Dockerfile:18

16 | RUN pip3 install screeninfo
17 | # RUN pip3 install git+https://github.com/Zackory/bullet3.git
18 | >>> RUN git clone https://github.com/Healthcare-Robotics/assistive-gym.git && cd assistive-gym && pip3 install -e .
19 | RUN pip3 install git+https://github.com/Zackory/pytorch-a2c-ppo-acktr --no-cache-dir
20 | RUN pip3 install git+https://github.com/openai/baselines.git

ERROR: failed to solve: process "/bin/sh -c git clone https://github.com/Healthcare-Robotics/assistive-gym.git && cd assistive-gym && pip3 install -e ." did not complete successfully: exit code: 1

I tried various solutions but none of them work. Do you know what could be the cause and what I am doing wrong? Thank you in advance.

Google Colab

Is it possible to run it on Google Colab? PyBullet does not seem to work there, since it cannot connect to the X server.
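
For context, the usual workaround for a missing X server is PyBullet's headless DIRECT mode, which steps physics without opening a window; whether Assistive Gym's own rendering path works on Colab is a separate question. A minimal headless sketch:

import pybullet as p

cid = p.connect(p.DIRECT)   # no GUI window, so no X server is needed
p.setGravity(0, 0, -9.81, physicsClientId=cid)
p.stepSimulation(physicsClientId=cid)
p.disconnect(physicsClientId=cid)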

Your PyBullet vs Official PyBullet Implementations?

Hi @Zackory this is great work!

I have been working closely with PyBullet over the last few months, and am using the official PyBullet installed via pip install pybullet==3.0.4, which supports deformables. (It is in the official documentation now.) I see that your installation instructions https://github.com/Healthcare-Robotics/assistive-gym/wiki/1.-Install

specify your custom version of PyBullet. Your setup.py:

https://github.com/Zackory/bullet3/blob/master/setup.py

suggests you used PyBullet 2.4.8

  1. Is this correct, in that you forked off of code that was for the PyPI 2.4.8 version?

More broadly I seek to get a better understanding of the differences between the two implementations. So, I have two follow-up questions:

  1. Is the main difference between your code and PyBullet's is your way of handling gripping of deformables?
  2. Did you change any of the physics implementation for cloth simulation?

When I do "env = gym.make", I get an error "PicklingError: Could not pickle object as excessively deep recursion required.

Hello, I am currently trying to use Assistive Gym on Ubuntu 20.04.
When I call env = gym.make(...), I get the error PicklingError: Could not pickle object as excessively deep recursion required.
If I use sys.setrecursionlimit(3000), the error disappears, but env = gym.make(...) does not finish after 10 hours.
python3 -m assistive_gym --env "BedBathingSawyer-v1" also gives the same error. Is there any solution?

Thank you.

ValueError: high - low < 0 for env.reset()

Now and then when I run env.reset(), I get the error ValueError: high - low < 0. It is hard to reproduce; it occurs somewhat randomly, and I have no idea why. In particular, it seems to occur inside self.init_robot_pose during env.reset(). It also occurs when setting a random seed. I'm testing with the scratch-itch environment. Any help would be much appreciated.

Edit: This error does not occur in the Feeding environments. I have yet to test others, so it might be isolated to scratch itch.

Full error:

ValueError                                Traceback (most recent call last)
/var/folders/_l/435lwsyd56n753cy_66n91xh0000gn/T/ipykernel_9937/1077114707.py in <module>
     21 
     22         # TRY NOT TO MODIFY: execute the game and log data.
---> 23         next_obs, reward, terminations, infos = envs.step(action.cpu().numpy())
     24         # print(terminations)
     25         # print(f"next_obs: {next_obs} \n dtype: {next_obs.dtype}")

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/vector/vector_env.py in step(self, actions)
    110 
    111         self.step_async(actions)
--> 112         return self.step_wait()
    113 
    114     def call_async(self, name, *args, **kwargs):

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/vector/sync_vector_env.py in step_wait(self)
    139             if self._dones[i]:
    140                 info["terminal_observation"] = observation
--> 141                 observation = env.reset()
    142             observations.append(observation)
    143             infos.append(info)

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/core.py in reset(self, **kwargs)
    322 class RewardWrapper(Wrapper):
    323     def reset(self, **kwargs):
--> 324         return self.env.reset(**kwargs)
    325 
    326     def step(self, action):

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/core.py in reset(self, **kwargs)
    281 
    282     def reset(self, **kwargs) -> Union[ObsType, tuple[ObsType, dict]]:
--> 283         return self.env.reset(**kwargs)
    284 
    285     def render(self, mode="human", **kwargs):

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/core.py in reset(self, **kwargs)
    309             return self.observation(obs), info
    310         else:
--> 311             return self.observation(self.env.reset(**kwargs))
    312 
    313     def step(self, action):

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/wrappers/normalize.py in reset(self, **kwargs)
     69             obs, info = self.env.reset(**kwargs)
     70         else:
---> 71             obs = self.env.reset(**kwargs)
     72         if self.is_vector_env:
     73             obs = self.normalize(obs)

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/core.py in reset(self, **kwargs)
    335 class ActionWrapper(Wrapper):
    336     def reset(self, **kwargs):
--> 337         return self.env.reset(**kwargs)
    338 
    339     def step(self, action):

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/wrappers/record_episode_statistics.py in reset(self, **kwargs)
     20 
     21     def reset(self, **kwargs):
---> 22         observations = super().reset(**kwargs)
     23         self.episode_returns = np.zeros(self.num_envs, dtype=np.float32)
     24         self.episode_lengths = np.zeros(self.num_envs, dtype=np.int32)

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/core.py in reset(self, **kwargs)
    281 
    282     def reset(self, **kwargs) -> Union[ObsType, tuple[ObsType, dict]]:
--> 283         return self.env.reset(**kwargs)
    284 
    285     def render(self, mode="human", **kwargs):

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/core.py in reset(self, **kwargs)
    309             return self.observation(obs), info
    310         else:
--> 311             return self.observation(self.env.reset(**kwargs))
    312 
    313     def step(self, action):

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/wrappers/time_limit.py in reset(self, **kwargs)
     24     def reset(self, **kwargs):
     25         self._elapsed_steps = 0
---> 26         return self.env.reset(**kwargs)

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/wrappers/order_enforcing.py in reset(self, **kwargs)
     16     def reset(self, **kwargs):
     17         self._has_reset = True
---> 18         return self.env.reset(**kwargs)

~/Documents/1_PhD/assistive_gym/assistive-gym/assistive_gym/envs/scratch_itch.py in reset(self)
    116         target_ee_pos = np.array([-0.6, 0, 0.8]) + self.np_random.uniform(-0.05, 0.05, size=3)
    117         target_ee_orient = self.get_quaternion(self.robot.toc_ee_orient_rpy[self.task])
--> 118         self.init_robot_pose(target_ee_pos, target_ee_orient, [(target_ee_pos, target_ee_orient)], [(shoulder_pos, None), (elbow_pos, None), (wrist_pos, None)], arm='left', tools=[self.tool], collision_objects=[self.human, self.furniture])
    119 
    120         # Open gripper to hold the tool

~/Documents/1_PhD/assistive_gym/assistive-gym/assistive_gym/envs/env.py in init_robot_pose(self, target_ee_pos, target_ee_orient, start_pos_orient, target_pos_orients, arm, tools, collision_objects, wheelchair_enabled, right_side, max_iterations)
    294             elif self.robot.wheelchair_mounted and wheelchair_enabled:
    295                 # Use IK to find starting joint angles for mounted robots
--> 296                 self.robot.ik_random_restarts(right=(arm == 'right'), target_pos=target_ee_pos, target_orient=target_ee_orient, max_iterations=1000, max_ik_random_restarts=1000, success_threshold=0.01, step_sim=False, check_env_collisions=False, randomize_limits=True, collision_objects=collision_objects)
    297             else:
    298                 # Use TOC with JLWKI to find an optimal base position for the robot near the person

~/Documents/1_PhD/assistive_gym/assistive-gym/assistive_gym/envs/agents/robot.py in ik_random_restarts(self, right, target_pos, target_orient, max_iterations, max_ik_random_restarts, success_threshold, step_sim, check_env_collisions, randomize_limits, collision_objects)
     89         best_ik_distance = 0
     90         for r in range(max_ik_random_restarts):
---> 91             target_joint_angles = self.ik(self.right_end_effector if right else self.left_end_effector, target_pos, target_orient, ik_indices=self.right_arm_ik_indices if right else self.left_arm_ik_indices, max_iterations=max_iterations, half_range=self.half_range, randomize_limits=(randomize_limits and r >= 10))
     92             self.set_joint_angles(self.right_arm_joint_indices if right else self.left_arm_joint_indices, target_joint_angles)
     93             gripper_pos, gripper_orient = self.get_pos_orient(self.right_end_effector if right else self.left_end_effector)

~/Documents/1_PhD/assistive_gym/assistive-gym/assistive_gym/envs/agents/agent.py in ik(self, target_joint, target_pos, target_orient, ik_indices, max_iterations, half_range, use_current_as_rest, randomize_limits)
    253         if target_orient is not None and len(target_orient) < 4:
    254             target_orient = self.get_quaternion(target_orient)
--> 255         ik_lower_limits = self.ik_lower_limits if not randomize_limits else self.np_random.uniform(0, self.ik_lower_limits)
    256         ik_upper_limits = self.ik_upper_limits if not randomize_limits else self.np_random.uniform(0, self.ik_upper_limits)
    257         ik_joint_ranges = ik_upper_limits - ik_lower_limits

_generator.pyx in numpy.random._generator.Generator.uniform()

_common.pyx in numpy.random._common.cont()

_common.pyx in numpy.random._common.cont_broadcast_2()

_common.pyx in numpy.random._common.check_array_constraint()

ValueError: high - low < 0
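
A minimal reproduction of the final frames above; tying it to ik_lower_limits holding negative joint limits, and to newer Gym versions seeding the stricter numpy Generator instead of the legacy RandomState, is an assumption about the cause:

import numpy as np

# agent.py calls self.np_random.uniform(0, self.ik_lower_limits) when randomize_limits is
# True; with the new-style Generator, any element where high < low raises this ValueError.
rng = np.random.default_rng(0)
rng.uniform(0, np.array([-1.0, 0.5]))   # ValueError: high - low < 0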

Import Custom 3D Objects

Hello,

I was wondering if there is a way to import custom 3D objects. I created a simple table in Fusion 360 and exported it as .obj. I then used object2urdf to create the URDF file as well as the VHACD file; however, when I load this into the simulation it doesn't show up.

Any hints/ideas on how to do this? Thanks a lot
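
A minimal standalone loading sketch (the path, pose, and scaling below are hypothetical) that can help separate "the URDF fails to load" from "it loads but is outside the camera view or scaled wrong":

import pybullet as p

cid = p.connect(p.GUI)
# loadURDF raises an error if the URDF or the meshes it references cannot be resolved;
# mesh filenames inside the URDF are resolved relative to the URDF's own directory.
table = p.loadURDF('my_table/my_table.urdf', basePosition=[0.5, 0.0, 0.0],
                   baseOrientation=[0, 0, 0, 1], globalScaling=1.0,
                   useFixedBase=True, physicsClientId=cid)
print('loaded body id:', table)
print(p.getAABB(table, physicsClientId=cid))   # sanity-check the object's actual size and position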

Issue running Assistive_Gym_Basics.ipynb colab notebook

Hi, I have been trying to run the Assistive_Gym_Basics.ipynb colab notebook. I am able to run the first cell, but get a ModuleNotFoundError: No module named 'assistive_gym' error when I try to run the second cell.

Would this possibly be an issue with the python version? Thanks!

Add a new type of robot

Hello,

I tried to add a new robot to this simulator. First I added the robot's .urdf file in the corresponding path and initialized it, but it always shows the error Cannot load URDF file. Could you write a guide for adding a new type of robot? Thanks a lot!
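
Until a guide exists, a hypothetical standalone check (the path is a placeholder) can at least narrow down the Cannot load URDF file error before wiring the robot into Assistive Gym:

import pybullet as p

cid = p.connect(p.DIRECT)
urdf_path = 'assistive_gym/envs/assets/my_robot/my_robot.urdf'   # placeholder path
robot = p.loadURDF(urdf_path, useFixedBase=True, physicsClientId=cid)
# If the load succeeds, print the joint indices and names needed to define the robot's arm.
for j in range(p.getNumJoints(robot, physicsClientId=cid)):
    print(j, p.getJointInfo(robot, j, physicsClientId=cid)[1].decode())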

AttributeError: 'NoneType' object has no attribute 'BytesIO'

Hi,

I keep getting this error while training, right after the first results are printed:

Updates 0, num timesteps 12800, FPS 1323
Last 64 training episodes: mean/median reward -152.7/-153.6, min/max reward -257.6/-87.9

Exception ignored in: <bound method SubprocVecEnv.__del__ of <baselines.common.vec_env.subproc_vec_env.SubprocVecEnv object at 0x7f15d7111e10>>
Traceback (most recent call last):
File "/home/samira1/.local/lib/python3.6/site-packages/baselines/common/vec_env/subproc_vec_env.py", line 121, in del
File "/home/samira1/.local/lib/python3.6/site-packages/baselines/common/vec_env/vec_env.py", line 98, in close
File "/home/samira1/.local/lib/python3.6/site-packages/baselines/common/vec_env/subproc_vec_env.py", line 104, in close_extras
File "/usr/local/lib/python3.6/multiprocessing/connection.py", line 206, in send
File "/usr/local/lib/python3.6/multiprocessing/reduction.py", line 50, in dumps
AttributeError: 'NoneType' object has no attribute 'BytesIO'
