
Comments (24)

ChengshuLi avatar ChengshuLi commented on August 15, 2024

Hi @liuqi8827

Sorry for my belated reply, and thank you for your question. gibson2learning is a leftover from legacy code, and I have cleaned it up.

Feel free to follow the latest README.md and let me know if you are able to run the training code.

Thank you!

from hrl4in.

liuqi8827 avatar liuqi8827 commented on August 15, 2024

@ChengshuLi
Thanks for your reply.

There is a python multiprocessing connection problem.

  1. I had followed the latest README.md
    1.1 Install iGibson with the hrl4in branch
    1.2 Download iGibson assets
    1.3 Install HRL4IN
    1.4 Copy the updated JR URDF file from this repo to iGibson's asset folder
    1.5 Download iGibson dataset and put it into /home/hitsz/iGibson/gibson2/assets/dataset
  2. When I ran ./run_train.sh, the terminal printed the following parameters:
hrl_reward_dense_pos_fixed_sgm_arm_world_irs_30.0_sgr_0.0_lr_1e-4_meta_lr_1e-5_fr_lr_0_death_30.0_init_std_0.6_0.6_0.1_failed_pnt_0.0_nsteps_1024_ext_col_0.0_6x6_from_scr_exp_run_0
using asset: path /home/hitsz/iGibson/gibson2/assets dataset path: /home/hitsz/iGibson/gibson2/assets/dataset
pybullet build time: May 13 2020 20:28:46
2020-05-14 19:22:39,250 action_init_std_dev: 0.3333333333333333
2020-05-14 19:22:39,250 action_min_std_dev: 0.1
2020-05-14 19:22:39,250 action_std_dev_anneal_schedule: None
2020-05-14 19:22:39,250 action_timestep: 0.1
2020-05-14 19:22:39,251 arena: complex_hl_ll
2020-05-14 19:22:39,251 checkpoint_index: -1
2020-05-14 19:22:39,251 checkpoint_interval: 10
2020-05-14 19:22:39,251 clip_param: 0.1
2020-05-14 19:22:39,251 config_file: jr_interactive_nav.yaml
2020-05-14 19:22:39,251 entropy_coef: 0.01
2020-05-14 19:22:39,251 env_mode: headless
2020-05-14 19:22:39,251 env_type: interactive_gibson
2020-05-14 19:22:39,251 eps: 1e-05
2020-05-14 19:22:39,251 eval_interval: 50
2020-05-14 19:22:39,251 eval_only: False
2020-05-14 19:22:39,251 experiment_folder: ckpt/hrl_reward_dense_pos_fixed_sgm_arm_world_irs_30.0_sgr_0.0_lr_1e-4_meta_lr_1e-5_fr_lr_0_death_30.0_init_std_0.6_0.6_0.1_failed_pnt_0.0_nsteps_1024_ext_col_0.0_6x6_from_scr_exp_run_0
2020-05-14 19:22:39,251 extrinsic_collision_reward_weight: 0.0
2020-05-14 19:22:39,251 extrinsic_reward_weight: 0.0
2020-05-14 19:22:39,251 freeze_lr_n_updates: 0
2020-05-14 19:22:39,251 gamma: 0.99
2020-05-14 19:22:39,251 hidden_size: 512
2020-05-14 19:22:39,251 intrinsic_reward_scaling: 30.0
2020-05-14 19:22:39,252 log_interval: 1
2020-05-14 19:22:39,252 lr: 0.0001
2020-05-14 19:22:39,252 max_grad_norm: 0.5
2020-05-14 19:22:39,252 meta_agent_normalize_advantage: True
2020-05-14 19:22:39,252 meta_gamma: 0.99
2020-05-14 19:22:39,252 meta_lr: 1e-05
2020-05-14 19:22:39,252 num_eval_episodes: 1
2020-05-14 19:22:39,252 num_eval_processes: 1
2020-05-14 19:22:39,252 num_mini_batch: 1
2020-05-14 19:22:39,252 num_steps: 1024
2020-05-14 19:22:39,252 num_train_processes: 1
2020-05-14 19:22:39,252 num_updates: 50000
2020-05-14 19:22:39,252 perf_window_size: 50
2020-05-14 19:22:39,252 physics_timestep: 0.025
2020-05-14 19:22:39,252 ppo_epoch: 4
2020-05-14 19:22:39,252 pth_gpu_id: 0
2020-05-14 19:22:39,252 random_height: False
2020-05-14 19:22:39,252 random_position: False
2020-05-14 19:22:39,252 seed: 100
2020-05-14 19:22:39,252 sim_gpu_id: 0
2020-05-14 19:22:39,253 subgoal_achieved_reward: 0.0
2020-05-14 19:22:39,253 subgoal_failed_penalty: 0.0
2020-05-14 19:22:39,253 subgoal_init_std_dev: [0.6, 0.6, 0.1]
2020-05-14 19:22:39,253 subgoal_min_std_dev: [0.05, 0.05, 0.05]
2020-05-14 19:22:39,253 summary_interval: 1
2020-05-14 19:22:39,253 tau: 0.95
2020-05-14 19:22:39,253 time_scale: 50
2020-05-14 19:22:39,253 use_action_hindsight: False
2020-05-14 19:22:39,253 use_action_masks: True
2020-05-14 19:22:39,253 use_gae: True
2020-05-14 19:22:39,253 use_linear_clip_decay: True
2020-05-14 19:22:39,253 use_linear_lr_decay: True
2020-05-14 19:22:39,253 value_loss_coef: 0.5
2020-05-14 19:22:39,276 scene: stadium
2020-05-14 19:22:39,276 robot: JR2_Kinova
2020-05-14 19:22:39,276 wheel_velocity: 0.025
2020-05-14 19:22:39,276 arm_velocity: 0.005
2020-05-14 19:22:39,276 arm_reset_noise_in_pi: 0.0
2020-05-14 19:22:39,276 task: pointgoal
2020-05-14 19:22:39,276 fisheye: False
2020-05-14 19:22:39,276 door_angle: 90
2020-05-14 19:22:39,276 initial_pos: [0, -5, 0.0]
2020-05-14 19:22:39,276 initial_orn: [0.0, 0.0, 0.0]
2020-05-14 19:22:39,276 target_pos: [0, 5, 0.0]
2020-05-14 19:22:39,276 target_orn: [0.0, 0.0, 0.0]
2020-05-14 19:22:39,276 is_discrete: False
2020-05-14 19:22:39,276 additional_states_dim: 3
2020-05-14 19:22:39,276 auxiliary_sensor_dim: 66
2020-05-14 19:22:39,276 normalize_observation: True
2020-05-14 19:22:39,277 observation_normalizer: {'sensor': [[-3.0, -3.0, 0.0], [3.0, 6.0, 1.3]], 'auxiliary_sensor': [[-3.0, -3.0, -0.001, -0.2, -0.7, 0.0, -0.6, -0.12, -0.05, -3.141592653589793, -3.141592653589793, -3.141592653589793, -6.0, -130.0, -1.0, -1.0, -3.141592653589793, -6.0, -130.0, -1.0, -1.0, -3.141592653589793, -1.0, -100.0, -1.0, -1.0, -3.141592653589793, -1.2, -45.0, -1.0, -1.0, -3.141592653589793, -1.2, -10.0, -1.0, -1.0, -3.141592653589793, -1.5, -2.5, -1.0, -1.0, -3.141592653589793, -1.3, -1.5, -1.0, -1.0, -0.15, -0.15, -2.0, -3.141592653589793, -1.0, -1.0, -3.141592653589793, -1.0, -1.0, -1.0, -3.0, -3.0, -0.01, -6.0, -6.0, -0.03, -9.0, -9.0, -0.02, -1.0], [3.0, 6.0, 0.001, 0.8, 0.0, 1.3, 0.6, 0.12, 0.05, 3.141592653589793, 3.141592653589793, 3.141592653589793, 6.0, 130.0, 1.0, 1.0, 3.141592653589793, 6.0, 130.0, 1.0, 1.0, 3.141592653589793, 1.0, 100.0, 1.0, 1.0, 3.141592653589793, 1.2, 10.0, 1.0, 1.0, 3.141592653589793, 1.2, 10.0, 1.0, 1.0, 3.141592653589793, 1.5, 2.5, 1.0, 1.0, 3.141592653589793, 1.3, 1.5, 1.0, 1.0, 0.15, 0.15, 2.0, 3.141592653589793, 1.0, 1.0, 3.141592653589793, 1.0, 1.0, 1.0, 3.0, 6.0, 0.01, 6.0, 6.0, -0.01, 9.0, 9.0, 0.02, 1.0]], 'rgb': [0.0, 1.0], 'depth': [0.0, 5.0], 'scan': [0.0, 5.0]}
2020-05-14 19:22:39,277 reward_type: dense
2020-05-14 19:22:39,277 success_reward: 50.0
2020-05-14 19:22:39,277 slack_reward: -0.01
2020-05-14 19:22:39,277 potential_reward_weight: 2.0
2020-05-14 19:22:39,277 electricity_reward_weight: -0.001
2020-05-14 19:22:39,277 stall_torque_reward_weight: 0.0
2020-05-14 19:22:39,277 collision_reward_weight: -0.01
2020-05-14 19:22:39,277 collision_ignore_body_ids: [0, 1, 2, 3]
2020-05-14 19:22:39,277 discount_factor: 0.99
2020-05-14 19:22:39,277 dist_tol: 0.5
2020-05-14 19:22:39,277 max_step: 1000
2020-05-14 19:22:39,277 output: ['sensor', 'auxiliary_sensor', 'depth']
2020-05-14 19:22:39,277 resolution: 64
2020-05-14 19:22:39,277 fov: 150
2020-05-14 19:22:39,277 n_horizontal_rays: 128
2020-05-14 19:22:39,277 n_vertical_beams: 1
2020-05-14 19:22:39,277 use_filler: True
2020-05-14 19:22:39,278 display_ui: False
2020-05-14 19:22:39,278 show_diagnostics: False
2020-05-14 19:22:39,278 ui_num: 2
2020-05-14 19:22:39,278 ui_components: ['RGB_FILLED', 'DEPTH']
2020-05-14 19:22:39,278 random: {'random_initial_pose': False, 'random_target_pose': False, 'random_init_x_range': [-0.1, 0.1], 'random_init_y_range': [-0.1, 0.1], 'random_init_z_range': [-0.1, 0.1], 'random_init_rot_range': [-0.1, 0.1]}
2020-05-14 19:22:39,278 speed: {'timestep': 0.001, 'frameskip': 10}
2020-05-14 19:22:39,278 mode: web_ui
2020-05-14 19:22:39,278 verbose: False
2020-05-14 19:22:39,278 fast_lq_render: True
2020-05-14 19:22:39,278 visual_object_at_initial_target_pos: True
2020-05-14 19:22:39,278 target_visual_object_visible_to_agent: False
2020-05-14 19:22:39,278 debug: False
  3. However, the code just waited here and did nothing.
  4. Then I pressed Ctrl+C in the terminal, and it printed:
^CTraceback (most recent call last):
  File "train_hrl_gibson.py", line 1246, in <module>
    main()
  File "train_hrl_gibson.py", line 439, in main
    train_envs = ParallelNavEnvironment(train_envs, blocking=False)
  File "/home/hitsz/iGibson/gibson2/envs/parallel_env.py", line 45, in __init__
    self.start()
  File "/home/hitsz/iGibson/gibson2/envs/parallel_env.py", line 54, in start
    env.start()
  File "/home/hitsz/iGibson/gibson2/envs/parallel_env.py", line 186, in start
    result = self._conn.recv()
  File "/home/hitsz/anaconda3/envs/py3-igibson/lib/python3.6/multiprocessing/connection.py", line 254, in recv
    buf = self._recv_bytes()
  File "/home/hitsz/anaconda3/envs/py3-igibson/lib/python3.6/multiprocessing/connection.py", line 421, in _recv_bytes
    buf = self._recv(4)
  File "/home/hitsz/anaconda3/envs/py3-igibson/lib/python3.6/multiprocessing/connection.py", line 390, in _recv
    chunk = read(handle, remaining)
KeyboardInterrupt

I know this is a python multiprocessing connection problem. However, I can't fix it.
Can you give me some suggestions to solve this problem?
Thanks a lot!

from hrl4in.

ChengshuLi avatar ChengshuLi commented on August 15, 2024

Hi @liuqi8827,

This might be because it will take a few minutes to load the iGibson environments.

I would suggest doing the following:

  1. Try to run run_train_toy_env.sh. This should load the environments relatively fast and start printing out training losses, success rates, etc.
  2. If that goes well, try to run run_train_gibson.sh again. Go grab a coffee and give it 5-10 mins.

If it still doesn't load and gets stuck, let me know.

Note: this is probably not a Python multiprocessing connection problem. If I press Ctrl+C in the middle of the environment loading, I see the same error messages.
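For example, this minimal standalone sketch (plain Python multiprocessing, nothing iGibson-specific) blocks on recv() just like ParallelNavEnvironment.start() does while the environments load, and pressing Ctrl+C while it waits produces the same kind of KeyboardInterrupt inside connection.py:

import time
from multiprocessing import Pipe, Process

def worker(conn):
    # stands in for the slow environment construction in the child process
    time.sleep(600)
    conn.send('ready')

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    result = parent_conn.recv()  # blocks here until the child finishes "loading"
    print(result)
    p.join()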

from hrl4in.

liuqi8827 avatar liuqi8827 commented on August 15, 2024

Hi @ChengshuLi ,

It doesn't work.

1. When I ran run_train_toy_env.sh, I got the following error in the terminal:

Traceback (most recent call last):
  File "train_hrl_toy_env.py", line 1157, in <module>
    main()
  File "train_hrl_toy_env.py", line 424, in main
    train_envs = ParallelNavEnvironment(train_envs, blocking=False)
NameError: name 'ParallelNavEnvironment' is not defined
  2. To solve the error in step 1:
    I added from gibson2.envs.parallel_env import ParallelNavEnvironment to train_hrl_toy_env.py.
    The error from step 1 was solved.
  3. I ran run_train_toy_env.sh again and got the following error in the terminal:
Traceback (most recent call last):
  File "train_hrl_toy_env.py", line 1160, in <module>
    main()
  File "train_hrl_toy_env.py", line 876, in main
    (1 - masks).byte()], dim=1)  # episode is done
RuntimeError: Expected object of scalar type Bool but got scalar type Byte for sequence element 2 in sequence argument at position #1 'tensors'
  4. I can't fix the error that occurred in step 3.
  5. I ran run_train_gibson.sh and gave it 30 minutes;
    the code got stuck and did nothing, which is the same problem I described in my previous comment.

My environment is:
Ubuntu 16.04
Nvidia GeForce RTX 2070 GPU with 8 GB of memory
Nvidia driver 430
CUDA 10.0, CuDNN v7
conda create -n py3-igibson python=3.6 anaconda
torch == 1.2.0, torchvision == 0.4

from hrl4in.

ChengshuLi avatar ChengshuLi commented on August 15, 2024

@liuqi8827 Thanks for pointing it out.

  1. I deleted from gibson2.envs.parallel_env import ParallelNavEnvironment by mistake. I have added it back.

  2. RuntimeError: Expected object of scalar type Bool but got scalar type Byte for sequence element 2 in sequence argument at position #1 'tensors': this is caused by different torch versions. Could you downgrade to torch==1.1.0 and torchvision==0.2.2 and try again? Alternatively, you could change those tensors' types from Byte to Bool to be compatible with the newer version of torch (a minimal sketch is shown after this list).

  3. Could you copy & paste the stdout when you run run_train_gibson.sh, in addition to the output when you run nvidia-smi? If everything goes as expected, it should instantiate one instance of Gibson environment in GPU 0.
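For item 2, here is a minimal sketch of the Bool fix for newer torch versions (the tensors below are hypothetical stand-ins; the real ones are built in train_hrl_toy_env.py around the line in your traceback):

import torch

masks = torch.zeros(4, 1)               # placeholder for the real masks tensor
subgoal_done = torch.rand(4, 1) > 0.5   # stand-ins for the first two cat elements,
action_done = torch.rand(4, 1) > 0.5    # which are already Bool in torch >= 1.2

# torch 1.1 accepted mixing Bool and Byte here; torch >= 1.2 raises the error you saw:
#   torch.cat([subgoal_done, action_done, (1 - masks).byte()], dim=1)
# casting the last element to Bool keeps all three elements the same dtype:
done = torch.cat([subgoal_done, action_done, (1 - masks).bool()], dim=1)
print(done.dtype)  # torch.bool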

Thanks!

from hrl4in.

liuqi8827 avatar liuqi8827 commented on August 15, 2024

@ChengshuLi Hi,

Thanks for your help!

run_train_gibson.sh still doesn't work. (I waited for 1 hour.)
run_train_toy_env.sh works well.

1. My environment is:
Ubuntu 16.04
torch==1.1.0, torchvision==0.2.2
nvidia-smi
Screenshot from 2020-05-23 17-17-16
2. The output of run_train_toy_env.sh is:

/home/hitsz/HRL4IN/hrl4in/utils/utils.py:11: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config_data = yaml.load(f)
2020-05-23 16:41:18,960 width: 11
2020-05-23 16:41:18,960 height: 11
2020-05-23 16:41:18,960 door_row: 5
2020-05-23 16:41:18,960 door_col: 5
2020-05-23 16:41:18,960 door_max_state: 5
2020-05-23 16:41:18,960 sparse_reward: True
2020-05-23 16:41:18,960 outputs: ['sensor', 'auxiliary_sensor']
2020-05-23 16:41:18,960 local_map_range: 5
2020-05-23 16:41:18,960 max_step: 500
Dict(sensor:Box(4,), auxiliary_sensor:Box(9,)) MultiDiscrete([4 3])
2020-05-23 16:41:21,829 agent number of parameters: 1192968
2020-05-23 16:41:21,830 meta agent number of parameters: 1190424
2020-05-23 16:42:39,819 update: 1	env_steps: 65536	env_steps_per_sec: 840.750	env-time: 6.856s	pth-time: 71.088s
2020-05-23 16:42:39,820 update: 1	env_steps: 65536	value_loss: 0.447	action_loss: -0.018	dist_entropy: 1.236
2020-05-23 16:42:39,820 update: 1	env_steps: 65536	meta_value_loss: 0.004	subgoal_loss: 0.001	meta_dist_entropy: 0.673
2020-05-23 16:42:39,821 average window size 2	reward: -0.492280	success_rate: 0.000000	episode length: 500.000000
2020-05-23 16:42:39,822 window_size: 2	subgoal_reward: -0.561884	subgoal_success_rate: 0.335836	subgoal_length: 3.075912
2020-05-23 16:43:18,823 update: 2	env_steps: 98304	env_steps_per_sec: 840.541	env-time: 10.315s	pth-time: 106.628s
2020-05-23 16:43:18,823 update: 2	env_steps: 98304	value_loss: 0.368	action_loss: -0.021	dist_entropy: 1.228
2020-05-23 16:43:18,823 update: 2	env_steps: 98304	meta_value_loss: 0.054	subgoal_loss: -0.001	meta_dist_entropy: 0.673
2020-05-23 16:43:18,825 average window size 3	reward: -0.406249	success_rate: 0.007812	episode length: 497.546875
2020-05-23 16:43:18,826 window_size: 3	subgoal_reward: -0.512655	subgoal_success_rate: 0.337223	subgoal_length: 3.072132
2020-05-23 16:43:58,648 update: 3	env_steps: 131072	env_steps_per_sec: 836.036	env-time: 13.816s	pth-time: 142.946s
2020-05-23 16:43:58,648 update: 3	env_steps: 131072	value_loss: 0.313	action_loss: -0.022	dist_entropy: 1.215
2020-05-23 16:43:58,648 update: 3	env_steps: 131072	meta_value_loss: 0.003	subgoal_loss: 0.000	meta_dist_entropy: 0.673
2020-05-23 16:43:58,650 average window size 4	reward: -0.426426	success_rate: 0.005208	episode length: 498.364594
2020-05-23 16:43:58,650 window_size: 4	subgoal_reward: -0.473913	subgoal_success_rate: 0.344023	subgoal_length: 3.054877
2020-05-23 16:44:38,112 update: 4	env_steps: 163840	env_steps_per_sec: 834.886	env-time: 17.251s	pth-time: 178.970s
2020-05-23 16:44:38,113 update: 4	env_steps: 163840	value_loss: 0.269	action_loss: -0.021	dist_entropy: 1.200
2020-05-23 16:44:38,113 update: 4	env_steps: 163840	meta_value_loss: 0.004	subgoal_loss: 0.000	meta_dist_entropy: 0.673
2020-05-23 16:44:38,115 average window size 5	reward: -0.435655	success_rate: 0.003906	episode length: 498.773438
2020-05-23 16:44:38,116 window_size: 5	subgoal_reward: -0.429547	subgoal_success_rate: 0.352241	subgoal_length: 3.034960
2020-05-23 16:45:17,505 update: 5	env_steps: 196608	env_steps_per_sec: 834.376	env-time: 20.695s	pth-time: 214.912s
2020-05-23 16:45:17,505 update: 5	env_steps: 196608	value_loss: 0.242	action_loss: -0.019	dist_entropy: 1.184
2020-05-23 16:45:17,505 update: 5	env_steps: 196608	meta_value_loss: 0.004	subgoal_loss: 0.002	meta_dist_entropy: 0.672
2020-05-23 16:45:17,506 average window size 6	reward: -0.439233	success_rate: 0.003125	episode length: 499.018738
2020-05-23 16:45:17,507 window_size: 6	subgoal_reward: -0.391832	subgoal_success_rate: 0.355410	subgoal_length: 3.027756

3. The output of run_train_gibson.sh is:

hrl_reward_dense_pos_fixed_sgm_arm_world_irs_30.0_sgr_0.0_lr_1e-4_meta_lr_1e-5_fr_lr_0_death_30.0_init_std_0.6_0.6_0.1_failed_pnt_0.0_nsteps_1024_ext_col_0.0_6x6_from_scr_exp_run_0
using asset: path /home/hitsz/iGibson/gibson2/assets dataset path: /home/hitsz/iGibson/gibson2/assets/dataset
pybullet build time: May 13 2020 20:28:46
2020-05-23 17:12:42,721 action_init_std_dev: 0.3333333333333333
2020-05-23 17:12:42,721 action_min_std_dev: 0.1
2020-05-23 17:12:42,721 action_std_dev_anneal_schedule: None
2020-05-23 17:12:42,721 action_timestep: 0.1
2020-05-23 17:12:42,721 arena: complex_hl_ll
2020-05-23 17:12:42,721 checkpoint_index: -1
2020-05-23 17:12:42,721 checkpoint_interval: 10
2020-05-23 17:12:42,721 clip_param: 0.1
2020-05-23 17:12:42,721 config_file: jr_interactive_nav.yaml
2020-05-23 17:12:42,721 entropy_coef: 0.01
2020-05-23 17:12:42,721 env_mode: headless
2020-05-23 17:12:42,722 env_type: interactive_gibson
2020-05-23 17:12:42,722 eps: 1e-05
2020-05-23 17:12:42,722 eval_interval: 50
2020-05-23 17:12:42,722 eval_only: False
2020-05-23 17:12:42,722 experiment_folder: ckpt/hrl_reward_dense_pos_fixed_sgm_arm_world_irs_30.0_sgr_0.0_lr_1e-4_meta_lr_1e-5_fr_lr_0_death_30.0_init_std_0.6_0.6_0.1_failed_pnt_0.0_nsteps_1024_ext_col_0.0_6x6_from_scr_exp_run_0
2020-05-23 17:12:42,722 extrinsic_collision_reward_weight: 0.0
2020-05-23 17:12:42,722 extrinsic_reward_weight: 0.0
2020-05-23 17:12:42,722 freeze_lr_n_updates: 0
2020-05-23 17:12:42,722 gamma: 0.99
2020-05-23 17:12:42,722 hidden_size: 512
2020-05-23 17:12:42,722 intrinsic_reward_scaling: 30.0
2020-05-23 17:12:42,722 log_interval: 1
2020-05-23 17:12:42,722 lr: 0.0001
2020-05-23 17:12:42,722 max_grad_norm: 0.5
2020-05-23 17:12:42,722 meta_agent_normalize_advantage: True
2020-05-23 17:12:42,722 meta_gamma: 0.99
2020-05-23 17:12:42,723 meta_lr: 1e-05
2020-05-23 17:12:42,723 num_eval_episodes: 1
2020-05-23 17:12:42,723 num_eval_processes: 1
2020-05-23 17:12:42,723 num_mini_batch: 1
2020-05-23 17:12:42,723 num_steps: 1024
2020-05-23 17:12:42,723 num_train_processes: 1
2020-05-23 17:12:42,723 num_updates: 50000
2020-05-23 17:12:42,723 perf_window_size: 50
2020-05-23 17:12:42,723 physics_timestep: 0.025
2020-05-23 17:12:42,723 ppo_epoch: 4
2020-05-23 17:12:42,723 pth_gpu_id: 0
2020-05-23 17:12:42,723 random_height: False
2020-05-23 17:12:42,723 random_position: False
2020-05-23 17:12:42,723 seed: 100
2020-05-23 17:12:42,723 sim_gpu_id: 0
2020-05-23 17:12:42,723 subgoal_achieved_reward: 0.0
2020-05-23 17:12:42,723 subgoal_failed_penalty: 0.0
2020-05-23 17:12:42,723 subgoal_init_std_dev: [0.6, 0.6, 0.1]
2020-05-23 17:12:42,723 subgoal_min_std_dev: [0.05, 0.05, 0.05]
2020-05-23 17:12:42,723 summary_interval: 1
2020-05-23 17:12:42,723 tau: 0.95
2020-05-23 17:12:42,724 time_scale: 50
2020-05-23 17:12:42,724 use_action_hindsight: False
2020-05-23 17:12:42,724 use_action_masks: True
2020-05-23 17:12:42,724 use_gae: True
2020-05-23 17:12:42,724 use_linear_clip_decay: True
2020-05-23 17:12:42,724 use_linear_lr_decay: True
2020-05-23 17:12:42,724 value_loss_coef: 0.5
2020-05-23 17:12:42,746 scene: stadium
2020-05-23 17:12:42,746 robot: JR2_Kinova
2020-05-23 17:12:42,746 wheel_velocity: 0.025
2020-05-23 17:12:42,746 arm_velocity: 0.005
2020-05-23 17:12:42,746 arm_reset_noise_in_pi: 0.0
2020-05-23 17:12:42,746 task: pointgoal
2020-05-23 17:12:42,746 fisheye: False
2020-05-23 17:12:42,747 door_angle: 90
2020-05-23 17:12:42,747 initial_pos: [0, -5, 0.0]
2020-05-23 17:12:42,747 initial_orn: [0.0, 0.0, 0.0]
2020-05-23 17:12:42,747 target_pos: [0, 5, 0.0]
2020-05-23 17:12:42,747 target_orn: [0.0, 0.0, 0.0]
2020-05-23 17:12:42,747 is_discrete: False
2020-05-23 17:12:42,747 additional_states_dim: 3
2020-05-23 17:12:42,747 auxiliary_sensor_dim: 66
2020-05-23 17:12:42,747 normalize_observation: True
2020-05-23 17:12:42,747 observation_normalizer: {'sensor': [[-3.0, -3.0, 0.0], [3.0, 6.0, 1.3]], 'auxiliary_sensor': [[-3.0, -3.0, -0.001, -0.2, -0.7, 0.0, -0.6, -0.12, -0.05, -3.141592653589793, -3.141592653589793, -3.141592653589793, -6.0, -130.0, -1.0, -1.0, -3.141592653589793, -6.0, -130.0, -1.0, -1.0, -3.141592653589793, -1.0, -100.0, -1.0, -1.0, -3.141592653589793, -1.2, -45.0, -1.0, -1.0, -3.141592653589793, -1.2, -10.0, -1.0, -1.0, -3.141592653589793, -1.5, -2.5, -1.0, -1.0, -3.141592653589793, -1.3, -1.5, -1.0, -1.0, -0.15, -0.15, -2.0, -3.141592653589793, -1.0, -1.0, -3.141592653589793, -1.0, -1.0, -1.0, -3.0, -3.0, -0.01, -6.0, -6.0, -0.03, -9.0, -9.0, -0.02, -1.0], [3.0, 6.0, 0.001, 0.8, 0.0, 1.3, 0.6, 0.12, 0.05, 3.141592653589793, 3.141592653589793, 3.141592653589793, 6.0, 130.0, 1.0, 1.0, 3.141592653589793, 6.0, 130.0, 1.0, 1.0, 3.141592653589793, 1.0, 100.0, 1.0, 1.0, 3.141592653589793, 1.2, 10.0, 1.0, 1.0, 3.141592653589793, 1.2, 10.0, 1.0, 1.0, 3.141592653589793, 1.5, 2.5, 1.0, 1.0, 3.141592653589793, 1.3, 1.5, 1.0, 1.0, 0.15, 0.15, 2.0, 3.141592653589793, 1.0, 1.0, 3.141592653589793, 1.0, 1.0, 1.0, 3.0, 6.0, 0.01, 6.0, 6.0, -0.01, 9.0, 9.0, 0.02, 1.0]], 'rgb': [0.0, 1.0], 'depth': [0.0, 5.0], 'scan': [0.0, 5.0]}
2020-05-23 17:12:42,747 reward_type: dense
2020-05-23 17:12:42,747 success_reward: 50.0
2020-05-23 17:12:42,747 slack_reward: -0.01
2020-05-23 17:12:42,747 potential_reward_weight: 2.0
2020-05-23 17:12:42,747 electricity_reward_weight: -0.001
2020-05-23 17:12:42,747 stall_torque_reward_weight: 0.0
2020-05-23 17:12:42,747 collision_reward_weight: -0.01
2020-05-23 17:12:42,748 collision_ignore_body_ids: [0, 1, 2, 3]
2020-05-23 17:12:42,748 discount_factor: 0.99
2020-05-23 17:12:42,748 dist_tol: 0.5
2020-05-23 17:12:42,748 max_step: 1000
2020-05-23 17:12:42,748 output: ['sensor', 'auxiliary_sensor', 'depth']
2020-05-23 17:12:42,748 resolution: 64
2020-05-23 17:12:42,748 fov: 150
2020-05-23 17:12:42,748 n_horizontal_rays: 128
2020-05-23 17:12:42,748 n_vertical_beams: 1
2020-05-23 17:12:42,748 use_filler: True
2020-05-23 17:12:42,748 display_ui: False
2020-05-23 17:12:42,748 show_diagnostics: False
2020-05-23 17:12:42,748 ui_num: 2
2020-05-23 17:12:42,748 ui_components: ['RGB_FILLED', 'DEPTH']
2020-05-23 17:12:42,748 random: {'random_initial_pose': False, 'random_target_pose': False, 'random_init_x_range': [-0.1, 0.1], 'random_init_y_range': [-0.1, 0.1], 'random_init_z_range': [-0.1, 0.1], 'random_init_rot_range': [-0.1, 0.1]}
2020-05-23 17:12:42,748 speed: {'timestep': 0.001, 'frameskip': 10}
2020-05-23 17:12:42,748 mode: web_ui
2020-05-23 17:12:42,748 verbose: False
2020-05-23 17:12:42,748 fast_lq_render: True
2020-05-23 17:12:42,749 visual_object_at_initial_target_pos: True
2020-05-23 17:12:42,749 target_visual_object_visible_to_agent: False
2020-05-23 17:12:42,749 debug: False

Thanks!

from hrl4in.

ChengshuLi avatar ChengshuLi commented on August 15, 2024

Hi @liuqi8827
Glad to hear that run_train_toy_env.sh works well now.

Let's see if you are able to run iGibson by itself correctly.

  1. cd $HOME/iGibson and make sure you are on hrl4in branch when you did pip install -e .
  2. Copy the updated JR URDF file from this repo to iGibson's asset folder
cp $HOME/HRL4IN/hrl4in/envs/gibson/jr2_kinova.urdf $HOME/iGibson/gibson2/assets/models/jr2_urdf/jr2_kinova.urdf
  3. Run the environment
cd $HOME/iGibson/gibson2/envs
python locomotor_env.py -m headless -c ../../examples/configs/jr_interactive_nav.yaml -r jr --env_type interactive

At the end of the STDOUT, you should see something like this:

Episode: 0
Episode: 1
Episode: 2
Episode: 3

Let me know if you encounter any errors.
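If you want to double-check that the URDF copy in step 2 landed in the right place before running the environment, here is a quick check (paths taken from the cp command above; adjust them if your iGibson lives elsewhere):

import os

# hypothetical $HOME expansion; this mirrors the destination of the cp command above
urdf_path = os.path.expanduser('~/iGibson/gibson2/assets/models/jr2_urdf/jr2_kinova.urdf')
print(urdf_path, 'exists:', os.path.exists(urdf_path))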

from hrl4in.

gchal avatar gchal commented on August 15, 2024

Hi @liuqi8827

Sorry for my belated reply, and thank you for your question. gibson2learning is a leftover from legacy code, and I have cleaned it up.

Feel free to follow the latest README.md and let me know if you are able to run the training code.

Thank you!

Dear @ChengshuLi
I am also having trouble running the code due to gibson2learning.
I am not sure what the solution is, as I have already followed the README file. Note that I had already installed iGibson2 previously and I have successfully switched to the hrl4in branch.
Thank you in advance

from hrl4in.

ChengshuLi avatar ChengshuLi commented on August 15, 2024

@gchal

My bad. I have cleaned up the remaining reference to gibson2learning. Feel free to pull and try again. Let me know if it works.

Thanks!

from hrl4in.

gchal avatar gchal commented on August 15, 2024

@gchal

My bad. I have cleaned up the remaining reference to gibson2learning. Feel free to pull and try again. Let me know if it works.

Thanks!

@ChengshuLi thanks for the quick reply. Now the toy env launches but it crashes with the following error:

Traceback (most recent call last):
  File "train_hrl_toy_env.py", line 1159, in <module>
    main()
  File "train_hrl_toy_env.py", line 875, in main
    (1 - masks).byte()], dim=1)  # episode is done
RuntimeError: Expected object of scalar type Bool but got scalar type Byte for sequence element 2 in sequence argument at position #1 'tensors'

from hrl4in.

liuqi8827 avatar liuqi8827 commented on August 15, 2024

@gchal
My bad. I have cleaned up the remaining reference to gibson2learning. Feel free to pull and try again. Let me know if it works.
Thanks!

@ChengshuLi thanks for the quick reply. Now the toy env launches but it crashes with the following error:
Traceback (most recent call last):
  File "train_hrl_toy_env.py", line 1159, in <module>
    main()
  File "train_hrl_toy_env.py", line 875, in main
    (1 - masks).byte()], dim=1)  # episode is done
RuntimeError: Expected object of scalar type Bool but got scalar type Byte for sequence element 2 in sequence argument at position #1 'tensors'

@gchal Hi,
I met this problem too.
As @ChengshuLi replied above: maybe you can check the versions of your torch and torchvision.
torch==1.1.0 and torchvision==0.2.2 work well.
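You can confirm what you actually have installed with a quick check:

import torch
import torchvision

print('torch:', torch.__version__)
print('torchvision:', torchvision.__version__)
print('CUDA available:', torch.cuda.is_available())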

from hrl4in.

gchal avatar gchal commented on August 15, 2024

Ah, I hadn't thought about that. Thank you for your reply.

from hrl4in.

ChengshuLi avatar ChengshuLi commented on August 15, 2024

@gchal Let me know if torch==1.1.0 and torchvision=0.2.2 works for you.
@liuqi8827 Thanks for helping out. Did you have any luck running the iGibson training? Just curious. Let me know if I can be of any help. Thanks!

from hrl4in.

gchal avatar gchal commented on August 15, 2024

@gchal Let me know if torch==1.1.0 and torchvision=0.2.2 works for you.
@liuqi8827 Thanks for helping out. Did you have any luck running the iGibson training? Just curious. Let me know if I can be of any help. Thanks!

Hi! I managed to run the toy_env today. Thank you. I will do the gibson training today too. In case I run into problems I will let you know.

from hrl4in.

gchal avatar gchal commented on August 15, 2024

Hi @liuqi8827
Glad to hear that run_train_toy_env.sh works well now.

Let's see if you are able to run iGibson by itself correctly.

  1. cd $HOME/iGibson and make sure you are on hrl4in branch when you did pip install -e .
  2. Copy the updated JR URDF file from this repo to iGibson's asset folder
cp $HOME/HRL4IN/hrl4in/envs/gibson/jr2_kinova.urdf $HOME/iGibson/gibson2/assets/models/jr2_urdf/jr2_kinova.urdf
  3. Run the environment
cd $HOME/iGibson/gibson2/envs
python locomotor_env.py -m headless -c ../../examples/configs/jr_interactive_nav.yaml -r jr --env_type interactive

At the end of the STDOUT, you should see something like this:

Episode: 0
Episode: 1
Episode: 2
Episode: 3

Let me know if you encounter any errors.

OK, I still have errors. When I run run_train_gibson.sh I get:

  File "train_hrl_gibson.py", line 21, in <module>
    from gibson2.envs.locomotor_env import NavigateEnv, NavigateRandomEnv, InteractiveNavigateEnv
ImportError: cannot import name 'InteractiveNavigateEnv'

I looked it up in gibson2, and apparently there is no class InteractiveNavigateEnv there.
I am definitely on the hrl4in branch in iGibson; I checked it many times.

from hrl4in.

ChengshuLi avatar ChengshuLi commented on August 15, 2024

@gchal

You might need to do pip install -e . again in iGibson when you are on hrl4in branch.

InteractiveNavigateEnv is defined here: https://github.com/StanfordVL/iGibson/blob/hrl4in/gibson2/envs/locomotor_env.py
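One quick way to check which copy of gibson2 Python is actually importing, and whether it contains the class:

import gibson2.envs.locomotor_env as locomotor_env

print(locomotor_env.__file__)  # should point into your iGibson checkout on the hrl4in branch
print('has InteractiveNavigateEnv:', hasattr(locomotor_env, 'InteractiveNavigateEnv'))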

Let me know if this solves your issue. Thanks

from hrl4in.

liuqi8827 avatar liuqi8827 commented on August 15, 2024

@ChengshuLi Hi
Sorry for my belated reply.

Hi @liuqi8827
Glad to hear that run_train_toy_env.sh works well now.

Let's see if you are able to run iGibson by itself correctly.

  1. cd $HOME/iGibson and make sure you are on hrl4in branch when you did pip install -e .
  2. Copy the updated JR URDF file from this repo to iGibson's asset folder
cp $HOME/HRL4IN/hrl4in/envs/gibson/jr2_kinova.urdf $HOME/iGibson/gibson2/assets/models/jr2_urdf/jr2_kinova.urdf
  3. Run the environment
cd $HOME/iGibson/gibson2/envs
python locomotor_env.py -m headless -c ../../examples/configs/jr_interactive_nav.yaml -r jr --env_type interactive

At the end of the STDOUT, you should see something like this:

Episode: 0
Episode: 1
Episode: 2
Episode: 3

Let me know if you encounter any errors.

It doesn't work. The error is Segmentation fault (core dumped).
1. I followed your instructions.
However, when I ran python locomotor_env.py -m headless -c ../../examples/configs/jr_interactive_nav.yaml -r jr --env_type interactive,
the STDOUT was:
Screenshot from 2020-06-04 09-20-00

2. I downloaded the iGibson dataset (fully annotated environment: Rs_interactive) from http://svl.stanford.edu/igibson/docs/intro.html and put it into /home/hitsz/iGibson/gibson2/assets/dataset.
Is the dataset that I downloaded the right one?

3. However, the STDOUT showed Segmentation fault (core dumped).

Can you give me some suggestions to solve the Segmentation fault (core dumped) problem?
Thanks!

from hrl4in.

ChengshuLi avatar ChengshuLi commented on August 15, 2024

@liuqi8827

Did you also download the assets? http://svl.stanford.edu/igibson/docs/installation.html#downloading-the-assets

I am not sure what causes this seg fault. It might be because of missing assets but I think the chance is low.

One thing you can try is to re-build iGibson.

  1. cd iGibson; git checkout master; ./clean.sh
  2. git checkout hrl4in; pip install -e .
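After step 2, you can quickly confirm that the editable install points at the checkout you expect:

import gibson2

# for your setup this should be something like /home/hitsz/iGibson/gibson2/__init__.py
print(gibson2.__file__)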

Let me know if this solves your problem.

from hrl4in.

gchal avatar gchal commented on August 15, 2024

@gchal

You might need to do pip install -e . again in iGibson when you are on hrl4in branch.

InteractiveNavigateEnv is defined here: https://github.com/StanfordVL/iGibson/blob/hrl4in/gibson2/envs/locomotor_env.py

Let me know if this solves your issue. Thanks

Hi @ChengshuLi
No, unfortunately not. I have already done these things, and I tried again just now:

  • checked out the hrl4in branch
  • pulled just in case
  • did pip install -e .
  • went back to HRL4IN and tried ./run_train_gibson.sh
  • got the same error again:
~/HRL4IN/hrl4in$ ./run_train_gibson.sh 
hrl_reward_dense_pos_fixed_sgm_arm_world_irs_30.0_sgr_0.0_lr_1e-4_meta_lr_1e-5_fr_lr_0_death_30.0_init_std_0.6_0.6_0.1_failed_pnt_0.0_nsteps_1024_ext_col_0.0_6x6_from_scr_exp_run_0
INFO:root:Importing iGibson (gibson2 module)
INFO:root:Assets path: /home/euclid/iGibson/gibson2/assets
INFO:root:Dataset path: /home/euclid/iGibson/gibson2/dataset
pybullet build time: Jun  2 2020 06:47:43
Traceback (most recent call last):
  File "train_hrl_gibson.py", line 21, in <module>
    from gibson2.envs.locomotor_env import NavigateEnv, NavigateRandomEnv, InteractiveNavigateEnv
ImportError: cannot import name 'InteractiveNavigateEnv'

And when I go back to iGibson/gibson2/envs/locomotor_env.py, the class is still not there. It is as if it is not on the branch.

I will hard-code it there by copy-pasting from your link, though I know this is not a good thing to do :)
I will let you know

from hrl4in.

gchal avatar gchal commented on August 15, 2024

@gchal
You might need to do pip install -e . again in iGibson when you are on hrl4in branch.
InteractiveNavigateEnv is defined here: https://github.com/StanfordVL/iGibson/blob/hrl4in/gibson2/envs/locomotor_env.py
Let me know if this solves your issue. Thanks

Hi @ChengshuLi
No, unfortunately not. I have already done these things, and I tried again just now:

  • checked out the hrl4in branch
  • pulled just in case
  • did pip install -e .
  • went back to HRL4IN and tried ./run_train_gibson.sh
  • got the same error again:
~/HRL4IN/hrl4in$ ./run_train_gibson.sh 
hrl_reward_dense_pos_fixed_sgm_arm_world_irs_30.0_sgr_0.0_lr_1e-4_meta_lr_1e-5_fr_lr_0_death_30.0_init_std_0.6_0.6_0.1_failed_pnt_0.0_nsteps_1024_ext_col_0.0_6x6_from_scr_exp_run_0
INFO:root:Importing iGibson (gibson2 module)
INFO:root:Assets path: /home/euclid/iGibson/gibson2/assets
INFO:root:Dataset path: /home/euclid/iGibson/gibson2/dataset
pybullet build time: Jun  2 2020 06:47:43
Traceback (most recent call last):
  File "train_hrl_gibson.py", line 21, in <module>
    from gibson2.envs.locomotor_env import NavigateEnv, NavigateRandomEnv, InteractiveNavigateEnv
ImportError: cannot import name 'InteractiveNavigateEnv'

And when I go back to iGibson/gibson2/envs/locomotor_env.py, the class is still not there. It is as if it is not on the branch.

I will hard-code it there by copy-pasting from your link, though I know this is not a good thing to do :)
I will let you know

I deleted my iGibson and reinstalled it according to your README file, and it is finally working. However, I cannot figure out why it wasn't correctly checking out the hrl4in branch when iGibson had already been installed before. Anyway, for now I will work with this setup. Thank you for your work and effort. I hope I can give you a citation soon :)

from hrl4in.

ChengshuLi avatar ChengshuLi commented on August 15, 2024

@gchal I am so glad it worked out. I think the reason it didn't work previously might be that your local hrl4in branch didn't track the remote hrl4in branch correctly. But anyway, let me know if you have any further questions! Thanks!

from hrl4in.

ding15963 avatar ding15963 commented on August 15, 2024

@liuqi8827 Have you solved the segmentation fault problem? I encountered the same issue as you. Could you help me? Thanks!

@ChengshuLi I think the segfault problem lies in /iGibson/gibson2/core/render/mesh_renderer/CppMeshRenderer.cpython-36m-x86_64-linux-gnu.so. It is used at line 271 of /iGibson/gibson2/core/render/mesh_renderer/mesh_renderer_cpu.py, where self.r.init() causes this segmentation error. Could you help me? Thanks!
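For reference, this is roughly how I isolated it (a minimal sketch; I am assuming here that MeshRenderer takes width/height keyword arguments and runs self.r.init() in its constructor, as mesh_renderer_cpu.py suggests):

from gibson2.core.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer

# constructing the renderer reaches self.r.init() and crashes with
# "Segmentation fault (core dumped)" on my machine
renderer = MeshRenderer(width=256, height=256)
print('renderer initialized')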

from hrl4in.

liuqi8827 avatar liuqi8827 commented on August 15, 2024

@liuqi8827 Have you solved the segmentation fault problem? I encountered the same issue as you. Could you help me? Thanks!

@ChengshuLi I think the segfault problem lies in /iGibson/gibson2/core/render/mesh_renderer/CppMeshRenderer.cpython-36m-x86_64-linux-gnu.so. It is used at line 271 of /iGibson/gibson2/core/render/mesh_renderer/mesh_renderer_cpu.py, where self.r.init() causes this segmentation error. Could you help me? Thanks!

Hi,
I could not solve the segmentation fault problem.
As a result, I did not manage to run the project successfully.

from hrl4in.

fxia22 avatar fxia22 commented on August 15, 2024

@liuqi8827 for the segmentation fault issue, can you check it against this troubleshooting list? http://svl.stanford.edu/igibson/docs/issues.html

Thanks

from hrl4in.
