
mocapact's Issues

Unusual Trend in Validation Loss

Hi,

Thanks for your help in advance.
I tried to reproduce the motion completion results: I trained the GPT policy and visualized the action predictions in MuJoCo. I stopped training early, thinking I might have made a mistake, since the validation losses did not follow the usual decreasing trend and overfitting seemed possible.
However, the visualization in MuJoCo looked fine, indicating that the model had learned to predict the actions quite well.
Is there any reason why the validation losses are misleading? Or am I interpreting them wrong?

[Attached image: MotionCompletionGraph]

Please find the reproduced results attached to this message. The model was trained on the 600 GB dataset, and I have tried different values of the learning rate, batch size, and validation frequency, but the increasing trend appears in most runs.

Thanks again for your help.

Error while trying to fetch the experts

Hi, I seem to be running into an authentication error while trying to fetch the expert clips using
python -m mocapact.download_dataset -t experts -c CMU_009_12 -d ./data

The error that I get is:

Downloading experts dataset from: https://mocapact.blob.core.windows.net/public?sv=2020-10-02&si=public-1819108CAA5&sr=c&sig=Jw1zsVs%2BK2G6QP%2Bo%2FFPQb1rSUY8AL%2F24k4zhQuw5WPo%3D to ./data
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/rakesh/Desktop/project/MoCapAct/mocapact/download_dataset.py", line 107, in <module>
    download_dataset_from_url(DATASET_URL, blob_prefix, local_dest_path=args.dest_path, clips=expanded_clips)
  File "/home/rakesh/Desktop/project/MoCapAct/mocapact/download_dataset.py", line 25, in download_dataset_from_url
    for blob in expert_blobs:
  File "/home/rakesh/anaconda3/envs/monet/lib/python3.11/site-packages/azure/core/paging.py", line 123, in __next__
    return next(self._page_iterator)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rakesh/anaconda3/envs/monet/lib/python3.11/site-packages/azure/core/paging.py", line 75, in __next__
    self._response = self._get_next(self.continuation_token)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rakesh/anaconda3/envs/monet/lib/python3.11/site-packages/azure/storage/blob/_list_blobs_helper.py", line 79, in _get_next_cb
    process_storage_error(error)
  File "/home/rakesh/anaconda3/envs/monet/lib/python3.11/site-packages/azure/storage/blob/_shared/response_handlers.py", line 177, in process_storage_error
    exec("raise error from None")   # pylint: disable=exec-used # nosec
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<string>", line 1, in <module>
azure.core.exceptions.ClientAuthenticationError: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:6447f222-401e-001e-15fb-990a08000000
Time:2024-04-29T06:09:40.8774697Z
ErrorCode:AuthenticationFailed
authenticationerrordetail:Signature did not match. String to sign used was
/blob/mocapact/public
public-1819108CAA5
2020-10-02
c


I am not sure what the issue is here. About a month ago I managed to fetch the clips using this same command, so I don't know why it's failing now. Given the AuthenticationFailed / "Signature did not match" details, my guess is that the shared access signature for the container may have expired or been rotated on the server side.
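For what it's worth, the SAS token embedded in the download URL can be inspected with the standard library. A quick check (URL copied from the log above) shows it carries a stored-access-policy identifier (`si`) and no explicit expiry (`se`) field, so expiry and revocation would be controlled server-side by that policy:

```python
from urllib.parse import parse_qs, urlparse

# SAS URL copied from the download log above.
url = ("https://mocapact.blob.core.windows.net/public"
       "?sv=2020-10-02&si=public-1819108CAA5&sr=c"
       "&sig=Jw1zsVs%2BK2G6QP%2Bo%2FFPQb1rSUY8AL%2F24k4zhQuw5WPo%3D")
params = parse_qs(urlparse(url).query)

# `si` names a stored access policy; with no `se` (expiry) field in the
# URL itself, the token's validity is governed entirely by that policy.
print(sorted(params))    # ['si', 'sig', 'sr', 'sv']
print(params["si"][0])   # public-1819108CAA5
```

If that policy was changed or expired upstream, nothing on the client side will fix it; the maintainers would need to publish a fresh SAS URL.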

Thank you

Error while trying to run the Multiclip policy

Hi,

I used the multiclip policy provided in the storage explorer (both the locomotion and full-dataset versions).

I ran the following command:

python mocapact/distillation/evaluate.py --device cuda:0 --visualize --act_noise 0 --always_init_at_clip_start --ghost_offset 1 --policy_path multiclip_policy/locomotion_dataset/model/model.ckpt --clip_snippets CMU_016_22

This opens the visualizer, but when I hit play there is no movement and an error pops up:

ERROR:absl:dm_control viewer intercepted an environment error.
Original message: The observation provided is a dict but the obs space is Dict('walker/actuator_activation': Box(-inf, inf, (56,), float32), 'walker/appendages_pos': Box(-inf, inf, (15,), float32), 'walker/body_height': Box(-inf, inf, (1,), float32), 'walker/end_effectors_pos': Box(-inf, inf, (12,), float32), 'walker/gyro_anticlockwise_spin': Box(-inf, inf, (1,), float32), 'walker/gyro_backward_roll': Box(-inf, inf, (1,), float32), 'walker/gyro_control': Box(-inf, inf, (3,), float32), 'walker/gyro_rightward_roll': Box(-inf, inf, (1,), float32), 'walker/head_height': Box(-inf, inf, (1,), float32), 'walker/joints_pos': Box(-inf, inf, (56,), float32), 'walker/joints_vel': Box(-inf, inf, (56,), float32), 'walker/joints_vel_control': Box(-inf, inf, (56,), float32), 'walker/orientation': Box(-inf, inf, (9,), float32), 'walker/position': Box(-inf, inf, (3,), float32), 'walker/reference_appendages_pos': Box(-inf, inf, (75,), float32), 'walker/reference_ego_bodies_quats': Box(-inf, inf, (620,), float32), 'walker/reference_rel_bodies_pos_global': Box(-inf, inf, (465,), float32), 'walker/reference_rel_bodies_pos_local': Box(-inf, inf, (465,), float32), 'walker/reference_rel_bodies_quats': Box(-inf, inf, (620,), float32), 'walker/reference_rel_joints': Box(-inf, inf, (280,), float32), 'walker/reference_rel_root_pos_local': Box(-inf, inf, (15,), float32), 'walker/reference_rel_root_quat': Box(-inf, inf, (20,), float32), 'walker/sensors_accelerometer': Box(-inf, inf, (3,), float32), 'walker/sensors_gyro': Box(-inf, inf, (3,), float32), 'walker/sensors_torque': Box(-inf, inf, (6,), float32), 'walker/sensors_touch': Box(-inf, inf, (10,), float32), 'walker/sensors_velocimeter': Box(-inf, inf, (3,), float32), 'walker/time_in_clip': Box(-inf, inf, (1,), float32), 'walker/torso_xvel': Box(-inf, inf, (1,), float32), 'walker/torso_yvel': Box(-inf, inf, (1,), float32), 'walker/veloc_forward': Box(-inf, inf, (1,), float32), 'walker/veloc_strafe': Box(-inf, inf, (1,), float32), 
'walker/veloc_up': Box(-inf, inf, (1,), float32), 'walker/velocimeter_control': Box(-inf, inf, (3,), float32), 'walker/world_zaxis': Box(-inf, inf, (3,), float32))
Traceback:
  File "/home/rakesh/anaconda3/envs/newenv/lib/python3.10/site-packages/dm_control/viewer/runtime.py", line 251, in _step
    action = self._policy(self._time_step)
  File "/home/rakesh/anaconda3/envs/newenv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/rakesh/Desktop/project/MoCapAct/main.py", line 123, in policy_fn
    action, state = policy.predict(env.get_observation(time_step), state, deterministic= deterministic)
  File "/home/rakesh/Desktop/project/MoCapAct/mocapact/distillation/model.py", line 194, in predict
    observation, vectorized_env = self.obs_to_tensor(observation)
  File "/home/rakesh/anaconda3/envs/newenv/lib/python3.10/site-packages/stable_baselines3/common/policies.py", line 247, in obs_to_tensor
    assert isinstance(

I think there is a mismatch in the observations, since the first line of the error says the observation provided is a dict but it doesn't match the expected obs space listed above.
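Just as a hedged guess (not MoCapAct's actual fix): stable-baselines3's `obs_to_tensor` raises this assertion when the observation dict's keys don't line up with the policy's `observation_space`. A minimal sketch of the kind of key filtering that would reconcile them, with plain dicts standing in for the real env/policy objects (all names here are hypothetical):

```python
# `policy_keys` stands in for policy.observation_space.spaces.keys(),
# and `obs` for the dict the environment actually returns.
policy_keys = {"walker/joints_pos", "walker/joints_vel", "walker/time_in_clip"}
obs = {
    "walker/joints_pos": [0.0] * 56,
    "walker/joints_vel": [0.0] * 56,
    "walker/time_in_clip": [0.0],
    "walker/extra_sensor": [0.0],  # key the policy was not trained on
}

# Drop keys the policy does not expect before calling predict().
filtered_obs = {k: v for k, v in obs.items() if k in policy_keys}
print(sorted(filtered_obs))  # ['walker/joints_pos', 'walker/joints_vel', 'walker/time_in_clip']
```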

Error while running the evaluate policy

Hi, I ran the evaluation script as mentioned:
python mocapact/clip_expert/evaluate.py --policy_root data/experts/CMU_009_12-165-363/eval_rsi/model --always_init_at_clip_start --device cpu --ghost_offset 1 --act_noise 0
This throws the following error:

ialization.py", line 1025, in load
    raise pickle.UnpicklingError(UNSAFE_MESSAGE + str(e)) from None
_pickle.UnpicklingError: Weights only load failed. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you get the file from a trusted source. WeightsUnpickler error: Unsupported class numpy.core.multiarray.scalar
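This error reproduces outside MoCapAct whenever a pickled checkpoint contains NumPy scalars, which recent PyTorch (where `weights_only=True` is the default in `torch.load`) refuses to unpickle. A minimal sketch of the workaround, using an in-memory buffer instead of the real `model.ckpt`:

```python
import io

import numpy as np
import torch

# A checkpoint bundling a NumPy scalar, as SB3-style checkpoints often
# do, trips the weights_only safeguard in recent PyTorch.
buf = io.BytesIO()
torch.save({"learning_rate": np.float64(3e-4)}, buf)

# Workaround: opt out of the safeguard. Only do this for checkpoints
# from a trusted source, since full unpickling can execute code.
buf.seek(0)
ckpt = torch.load(buf, weights_only=False)
print(float(ckpt["learning_rate"]))  # 0.0003
```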

I also tried the Python script:

from mocapact import observables
from mocapact.sb3 import utils

expert_path = 'data/experts/CMU_009_12-165-363/eval_rsi/model'
expert = utils.load_policy(expert_path, observables.TIME_INDEX_OBSERVABLES)


from mocapact.envs import tracking
from dm_control.locomotion.tasks.reference_pose import types

# Roll out the expert on its clip snippet; MocapTrackingGymEnv exposes
# the old gym API (reset() -> obs, step() -> 4-tuple).
dataset = types.ClipCollection(ids=['CMU_009_12'], start_steps=[165], end_steps=[363])
env = tracking.MocapTrackingGymEnv(dataset)
obs, done = env.reset(), False
while not done:
    action, _ = expert.predict(obs, deterministic=True)
    obs, rew, done, _ = env.step(action)
    print(rew)

This still gave the same error:
[error screenshot]

What might be the cause of this?

edit: I tried setting `weights_only` to `False` and running it again. It still showed an error, this time regarding gym. My guess is that since gym is deprecated and the developers have shifted to gymnasium, we should be doing the same. Not sure how to accomplish that though; any ideas or suggestions on what might be done?
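In case it helps anyone else stuck at the same point, one stopgap is a thin adapter between the old `gym` signatures and the `gymnasium` ones. All names below are a hypothetical sketch, not MoCapAct code:

```python
# Hypothetical adapter: wraps an env with the old gym API
# (reset() -> obs, step() -> (obs, rew, done, info)) so that callers
# expecting the gymnasium API (5-tuple step, (obs, info) reset) work.
class OldGymAdapter:
    def __init__(self, env):
        self.env = env

    def reset(self, **kwargs):
        obs = self.env.reset()
        return obs, {}  # gymnasium reset() returns (obs, info)

    def step(self, action):
        obs, rew, done, info = self.env.step(action)
        # gymnasium splits `done` into terminated/truncated.
        return obs, rew, done, False, info


# Tiny fake env with the old API, just to exercise the adapter.
class FakeOldEnv:
    def reset(self):
        return 0

    def step(self, action):
        return 1, 0.5, True, {}


env = OldGymAdapter(FakeOldEnv())
obs, info = env.reset()
obs, rew, terminated, truncated, info = env.step(None)
print(obs, rew, terminated, truncated)  # 1 0.5 True False
```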

edit 2: Okay, I managed to get past the gym issue as well, but when trying to visualize the expert it throws errors while opening a window.

I used the following command to try and visualize:

python mocapact/clip_expert/evaluate.py --act_noise 0. --visualize --termination_error_threshold 1000000 --ghost_offset 1 --always_init_at_clip_start --policy_root data/experts/CMU_009_12-0-198/eval_rsi/model

It gave me an error like this:
[error screenshot]

When running the Python script above instead, it just prints the reward numbers and no visualization window is opened.

Thank you

train models?

This is very interesting work. How can we train our own robot models?

MuJoCo version pinning?

Hi folks,

I'm one of the MuJoCo developers. We read your note and are a bit puzzled by it:

Note: All included policies (experts, multi-clip, etc.) will only work with MuJoCo 2.1.5 or earlier. MuJoCo 2.2.0 uses analytic derivatives in place of finite-difference derivatives to determine actuator forces, which effectively changes the transition function of the simulator. Accordingly, MoCapAct installs MuJoCo 2.1.5 and dm_control 1.0.2. The rollout datasets were also generated under MuJoCo 2.1.5.

As far as we know, no such change occurred between 2.1.5 and 2.2.0 (the subsequent release). We did add some analytical derivatives, but they are only used by the (non-default) implicit integrator. We'd be happy to help you diagnose the cause of the change you are seeing. For this and any other questions, please feel free to contact us at github.com/deepmind/mujoco.

Cheers!
