stanfordvl / gibsonenv

Gibson Environments: Real-World Perception for Embodied Agents

Home Page: http://gibsonenv.stanford.edu/

License: MIT License

Topics: computer-vision, robotics, simulator, sim2real, deep-learning, deep-reinforcement-learning, research, ros, reinforcement-learning, cvpr2018

gibsonenv's Introduction

GIBSON ENVIRONMENT for Embodied Active Agents with Real-World Perception

You shouldn't play video games all day, and neither should your AI! We built a virtual environment simulator, Gibson, that offers real-world experience for learning perception.

Summary: Perception and being active (i.e. having a certain level of motion freedom) are closely tied. Learning active perception and sensorimotor control in the physical world is cumbersome: existing algorithms are too slow to learn efficiently in real time, and robots are fragile and costly. This has spurred learning in simulation, which in turn raises the question of how to transfer to the real world. We developed the Gibson environment with the following primary characteristics:

I. being from the real world and reflecting its semantic complexity through virtualizing real spaces,
II. having a baked-in mechanism for transferring to the real world (the Goggles function), and
III. embodying the agent and subjecting it to the constraints of space and physics via an integrated physics engine (Bullet Physics).

Naming: The Gibson environment is named after James J. Gibson, author of "The Ecological Approach to Visual Perception" (1979). "We must perceive in order to move, but we must also move in order to perceive" – JJ Gibson

Please see the website (http://gibsonenv.stanford.edu/) for more technical details. This repository is intended for distribution of the environment and installation/running instructions.

Paper

"Gibson Env: Real-World Perception for Embodied Agents", in CVPR 2018 [Spotlight Oral].

Gibson summary video

Release

This is the 0.3.1 release. Bug reports, suggestions for improvement, and community contributions are encouraged and appreciated. See the change log file.

Database

The full database includes 572 spaces and 1440 floors and can be downloaded here. A diverse set of visualizations of all spaces in Gibson can be seen here. To keep the core assets download package light, we include only a small subset (39) of the spaces; users can download the rest of the spaces and add them to the assets folder. We also integrated Stanford 2D3DS and Matterport 3D as separate datasets if one wishes to use Gibson's simulator with those datasets (access here).

Table of contents

Installation

Installation Method

There are two ways to install Gibson: A. using our docker image (recommended), or B. building from source.

System requirements

The minimum system requirements are the following:

For docker installation (A):

  • Ubuntu 16.04
  • Nvidia GPU with VRAM > 6.0GB
  • Nvidia driver >= 384
  • CUDA >= 9.0, CuDNN >= v7

For building from the source(B):

  • Ubuntu >= 14.04
  • Nvidia GPU with VRAM > 6.0GB
  • Nvidia driver >= 375
  • CUDA >= 8.0, CuDNN >= v5

Download data

First, our environment core assets are available here. Follow the installation guide below to download and set them up properly. The gibson/assets folder stores the data (agent models, environments, etc.) needed to run the Gibson environment. Users can add more environment files to gibson/assets/dataset to run Gibson on more environments. Visit the database readme to download more spaces. Please sign the license agreement before using Gibson's database.

A. Quick installation (docker)

We use docker to distribute our software; you need to install docker and nvidia-docker2.0 first.

Run docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi to verify your installation.

You can either 1. pull from our docker image (recommended) or 2. build your own docker image.

  1. Pull from our docker image (recommended)
# download the dataset from https://storage.googleapis.com/gibson_scenes/dataset.tar.gz
docker pull xf1280/gibson:0.3.1
xhost +local:root
docker run --runtime=nvidia -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v <host path to dataset folder>:/root/mount/gibson/gibson/assets/dataset xf1280/gibson:0.3.1
  2. Build your own docker image
git clone https://github.com/StanfordVL/GibsonEnv.git
cd GibsonEnv
./download.sh # this script downloads the assets data file and decompresses it into the gibson/assets folder
docker build . -t gibson ### build the docker image; note that by default the dataset is not included in the image
xhost +local:root ## enable display from docker

If the installation is successful, you should be able to run docker run --runtime=nvidia -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v <host path to dataset folder>:/root/mount/gibson/gibson/assets/dataset gibson to create a container. Note that we don't include the dataset files in the docker image to keep it slim, so you will need to mount the dataset folder when you start a container.

Notes on deployment on a headless server

Gibson Env supports deployment on a headless server and remote access with x11vnc. You can build your own docker image with the docker file Dockerfile as above. Instructions to run gibson on a headless server (requires X server running):

  1. Install nvidia-docker2 dependencies following the starter guide. Install x11vnc with sudo apt-get install x11vnc.
  2. Have xserver running on your host machine, and run x11vnc on DISPLAY :0.
  3. docker run --runtime=nvidia -ti --rm -e DISPLAY -v /tmp/.X11-unix/X0:/tmp/.X11-unix/X0 -v <host path to dataset folder>:/root/mount/gibson/gibson/assets/dataset <gibson image name>
  4. Run gibson with python <gibson example or training> inside docker.
  5. Visit your host:5900 and you should be able to see the GUI.

If you don't have X server running, you can still run gibson, see this guide for more details.

B. Building from source

If you don't want to use our docker image, you can also install gibson locally. This will require some dependencies to be installed.

First, make sure you have Nvidia driver and CUDA installed. If you install from source, CUDA 9 is not necessary, as that is for nvidia-docker 2.0. Then, let's install some dependencies:

apt-get update 
apt-get install libglew-dev libglm-dev libassimp-dev xorg-dev libglu1-mesa-dev libboost-dev \
		mesa-common-dev freeglut3-dev libopenmpi-dev cmake golang libjpeg-turbo8-dev wmctrl \
		xdotool libzmq3-dev zlib1g-dev

Install the required deep learning libraries. Using Python 3.5 is recommended; you can create a python3.5 environment first:

conda create -n py35 python=3.5 anaconda 
source activate py35 # the rest of the steps need to be performed in the conda environment
conda install -c conda-forge opencv
pip install http://download.pytorch.org/whl/cu90/torch-0.3.1-cp35-cp35m-linux_x86_64.whl 
pip install torchvision==0.2.0
pip install tensorflow==1.3

Clone the repository, download data and build

git clone https://github.com/StanfordVL/GibsonEnv.git
cd GibsonEnv
./download.sh # this script downloads the assets data file and decompresses it into the gibson/assets folder
./build.sh build_local ### build C++ and CUDA files
pip install -e . ### Install python libraries

Install OpenAI baselines if you need to run the training demo.

git clone https://github.com/fxia22/baselines.git
pip install -e baselines

Uninstalling

Uninstalling Gibson is easy. If you installed with docker, just run docker images -a | grep "gibson" | awk '{print $3}' | xargs docker rmi to clean up the image. If you installed from source, uninstall with pip uninstall gibson.

Quick Start

First run xhost +local:root on your host machine to enable display. You may need to run export DISPLAY=:0 first. After getting into the docker container with docker run --runtime=nvidia -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v <host path to dataset folder>:/root/mount/gibson/gibson/assets/dataset gibson, you will get an interactive shell. Now you can run a few demos.

If you installed from source, you can run those directly using the following commands without using docker.

python examples/demo/play_husky_nonviz.py ### Use ASWD keys on your keyboard to control a car to navigate around Gates building

You will be able to use ASWD keys on your keyboard to control a car to navigate around Gates building. A camera output will not be shown in this particular demo.

python examples/demo/play_husky_camera.py ### Use ASWD keys on your keyboard to control a car to navigate around Gates building, while RGB and depth camera outputs are also shown.

You will be able to use ASWD keys on your keyboard to control a car to navigate around Gates building. You will also be able to see the RGB and depth camera outputs.

python examples/train/train_husky_navigate_ppo2.py ### Use PPO2 to train a car to navigate down the hallway in Gates building, using visual input from the camera.

By running this command you will start training a husky robot to navigate in Gates building and go down the corridor with RGBD input. You will see some RL-related statistics in the terminal after each episode.

python examples/train/train_ant_navigate_ppo1.py ### Use PPO1 to train an ant to navigate down the hallway in Gates building, using visual input from the camera.

By running this command you will start training an ant to navigate in Gates building and go down the corridor with RGBD input. You will see some RL-related statistics in the terminal after each episode.

Gibson Framerate

Below is Gibson Environment's framerate benchmarked on different platforms. Please refer to the fps branch for the code to reproduce the results.

Tested on Intel E5-2697 v4 + NVIDIA Tesla V100:

| Resolution [n x n]          | 128   | 256   | 512   |
|-----------------------------|-------|-------|-------|
| RGBD, pre network f         | 109.1 | 58.5  | 26.5  |
| RGBD, post network f        | 77.7  | 30.6  | 14.5  |
| RGBD, post small network f  | 87.4  | 40.5  | 21.2  |
| Depth only                  | 253.0 | 197.9 | 124.7 |
| Surface Normal only         | 207.7 | 129.7 | 57.2  |
| Semantic only               | 190.0 | 144.2 | 55.6  |
| Non-Visual Sensory          | 396.1 | 396.1 | 396.1 |

We also tested on Intel I7 7700 + NVIDIA GeForce GTX 1070Ti and Intel I7 6580k + NVIDIA GTX 1080Ti platforms. The FPS difference is within 10% on each task.

Multi-process FPS tested on Intel E5-2697 v4 + NVIDIA Tesla V100:

| Configuration | 512x512 episode sync | 512x512 frame sync | 256x256 episode sync | 256x256 frame sync | 128x128 episode sync | 128x128 frame sync |
|---------------|----------------------|--------------------|----------------------|--------------------|----------------------|--------------------|
| 1 process     | 12.8                 | 12.02              | 32.98                | 32.98              | 52                   | 52                 |
| 2 processes   | 23.4                 | 20.9               | 60.89                | 53.63              | 86.1                 | 101.8              |
| 4 processes   | 42.4                 | 31.97              | 105.26               | 76.23              | 97.6                 | 145.9              |
| 8 processes   | 72.5                 | 48.1               | 138.5                | 97.72              | 113                  | 151                |

Web User Interface

When running Gibson, you can start a web user interface with python gibson/utils/web_ui.py 5552. This is helpful when you cannot physically access the machine running Gibson or you are running on a headless cloud environment. You need to change mode in the configuration file to web_ui to use the web user interface.
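As a small illustration, the sketch below switches an existing configuration file to web_ui mode programmatically. It is only a sketch: it assumes PyYAML is installed and uses examples/configs/husky_navigate.yaml as the config to edit; you can of course make the same change by hand in a text editor.

```python
# Sketch: switch a Gibson config to web_ui mode (assumes PyYAML is available).
import yaml

config_path = "examples/configs/husky_navigate.yaml"  # any config from examples/configs
with open(config_path) as f:
    cfg = yaml.safe_load(f)

cfg["mode"] = "web_ui"  # render to the web UI server instead of a local window

with open(config_path, "w") as f:
    yaml.safe_dump(cfg, f, default_flow_style=False)
```

After that, start the environment as usual and run python gibson/utils/web_ui.py 5552 to serve the visuals.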

Rendering Semantics

Gibson can provide pixel-wise frame-by-frame semantic masks when the model is semantically annotated. As of now, we have incorporated models from the Stanford 2D-3D-Semantics Dataset and Matterport 3D for this purpose. You can access them within Gibson here. We refer you to the original datasets' references for the list of their semantic classes and annotations.

For detailed instructions of rendering semantics in Gibson, see semantic instructions. As one example in the starter dataset that comes with installation, space7 includes Stanford 2D-3D-Semantics style annotation.

Robotic Agents

Gibson provides a base set of agents. See videos of these agents and their corresponding perceptual observation here.

To optionally abstract away low-level control and robot dynamics for high-level tasks, we also provide a set of practical and ideal controllers for each agent.

| Agent Name      | DOF | Information              | Controller                 |
|-----------------|-----|--------------------------|----------------------------|
| Mujoco Ant      | 8   | OpenAI Link              | Torque                     |
| Mujoco Humanoid | 17  | OpenAI Link              | Torque                     |
| Husky Robot     | 4   | ROS, Manufacturer        | Torque, Velocity, Position |
| Minitaur Robot  | 8   | Robot Page, Manufacturer | Sine Controller            |
| JackRabbot      | 2   | Stanford Project Link    | Torque, Velocity, Position |
| TurtleBot       | 2   | ROS, Manufacturer        | Torque, Velocity, Position |
| Quadrotor       | 6   | Paper                    | Position                   |

Starter Code

More demonstration examples can be found in the examples/demo folder.

| Example | Explanation |
|---------|-------------|
| play_ant_camera.py | Use 1234567890qwerty keys on your keyboard to control an ant to navigate around Gates building, while RGB and depth camera outputs are also shown. |
| play_ant_nonviz.py | Use 1234567890qwerty keys on your keyboard to control an ant to navigate around Gates building. |
| play_drone_camera.py | Use ASWDZX keys on your keyboard to control a drone to navigate around Gates building, while RGB and depth camera outputs are also shown. |
| play_drone_nonviz.py | Use ASWDZX keys on your keyboard to control a drone to navigate around Gates building. |
| play_humanoid_camera.py | Use 1234567890qwertyui keys on your keyboard to control a humanoid to navigate around Gates building. Just kidding, controlling a humanoid with the keyboard is too difficult, you can only watch it fall. Press R to reset. RGB and depth camera outputs are also shown. |
| play_humanoid_nonviz.py | Watch a humanoid fall. Press R to reset. |
| play_husky_camera.py | Use ASWD keys on your keyboard to control a car to navigate around Gates building, while RGB and depth camera outputs are also shown. |
| play_husky_nonviz.py | Use ASWD keys on your keyboard to control a car to navigate around Gates building. |

More training code can be found in the examples/train folder.

| Example | Explanation |
|---------|-------------|
| train_husky_navigate_ppo2.py | Use PPO2 to train a car to navigate down the hallway in Gates building, using RGBD input from the camera. |
| train_husky_navigate_ppo1.py | Use PPO1 to train a car to navigate down the hallway in Gates building, using RGBD input from the camera. |
| train_ant_navigate_ppo1.py | Use PPO1 to train an ant to navigate down the hallway in Gates building, using visual input from the camera. |
| train_ant_climb_ppo1.py | Use PPO1 to train an ant to climb down the stairs in Gates building, using visual input from the camera. |
| train_ant_gibson_flagrun_ppo1.py | Use PPO1 to train an ant to chase a target (a red cube) in Gates building. Every time the ant reaches the target (or times out), the target changes position. |
| train_husky_gibson_flagrun_ppo1.py | Use PPO1 to train a car to chase a target (a red cube) in Gates building. Every time the car reaches the target (or times out), the target changes position. |

ROS Configuration

We provide examples of configuring Gibson with ROS here, using turtlebot as an example: after a policy is trained in Gibson, it requires minimal changes to deploy onto a turtlebot. See the README for more details.

Coding Your RL Agent

You can code your RL agent following our convention. The interface with our environment is very simple (see some examples at the end of this section).

First, you can create an environment by creating an instance of one of the classes in the gibson/core/envs folder.

env = AntNavigateEnv(is_discrete=False, config = config_file)

Then step the simulation with env.step() and reset with env.reset():

obs, rew, env_done, info = env.step(action)

obs gives the robot's observation. It is a dictionary with one entry per component; its keys are specified by the user in the config file. E.g. obs['nonviz_sensor'] is proprioceptive sensor data and obs['rgb_filled'] is RGB camera data.

rew is the defined reward. env_done marks the end of one episode, for example when the robot dies. info gives some additional information about this step; sometimes we use this to pass additional non-visual sensor values.

We mostly followed the OpenAI Gym convention when designing the interface between RL algorithms and the environment. To help users get started with the environment more quickly, we provide some examples at examples/train. The RL algorithms that we use are from OpenAI baselines with some adaptation to work with hybrid visual and non-visual sensory data. In particular, we used PPO and a speed-optimized version of PPO.
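Putting the pieces above together, here is a minimal sketch of the interaction loop. It is illustrative only: it assumes a Husky navigation config such as examples/configs/husky_navigate.yaml and that the environment exposes the Gym-style action_space described above; a real agent would replace the random action with its policy.

```python
# Minimal interaction-loop sketch (illustrative, not the official training code).
from gibson.envs.husky_env import HuskyNavigateEnv

env = HuskyNavigateEnv(config="examples/configs/husky_navigate.yaml")
obs = env.reset()

for _ in range(100):
    action = env.action_space.sample()            # replace with your policy
    obs, rew, env_done, info = env.step(action)
    # obs keys follow the 'output' field of the config file,
    # e.g. obs['nonviz_sensor'], obs['rgb_filled'], obs['depth']
    if env_done:
        obs = env.reset()
```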

Environment Configuration

Each environment is configured with a yaml file. Examples can be found in the examples/configs folder. The parameters are explained below. For more information specific to the Bullet physics engine, see the documentation here.

| Argument name | Example value | Explanation |
|---------------|---------------|-------------|
| envname | AntClimbEnv | Environment name; make sure it is the same as the class name of the environment |
| model_id | space1-space8 | Scene id; in the beta release, choose from space1-space8 |
| target_orn | [0, 0, 3.14] | Euler angles (in radians) of the target orientation for navigating; the reference frame is the world frame. For non-navigation tasks, this parameter is ignored. |
| target_pos | [-7, 2.6, -1.5] | Target position (in meters) for navigating; the reference frame is the world frame. For non-navigation tasks, this parameter is ignored. |
| initial_orn | [0, 0, 3.14] | Initial orientation (in radians) for navigating; the reference frame is the world frame |
| initial_pos | [-7, 2.6, 0.5] | Initial position (in meters) for navigating; the reference frame is the world frame |
| fov | 1.57 | Field of view of the camera, in radians |
| use_filler | true/false | Use the neural network filler or not. It is recommended to leave this argument as true. See the Gibson Environment website for more information. |
| display_ui | true/false | Gibson has two ways of showing visual output: in multiple windows, or aggregated into a single pygame window. This argument determines whether to show the pygame UI. In a production environment (training), turn this off. |
| show_diagnostics | true/false | Show diagnostics (including fps, robot position and orientation, accumulated rewards) overlaid on the RGB image |
| ui_num | 2 | How many UI components to show; this should be the length of ui_components |
| ui_components | [RGB_FILLED, DEPTH] | Which UI components to show; choose from [RGB_FILLED, DEPTH, NORMAL, SEMANTICS, RGB_PREFILLED] |
| output | [nonviz_sensor, rgb_filled, depth] | Output of the environment to the robot; choose from [nonviz_sensor, rgb_filled, depth]. These values are independent of ui_components: ui_components determines what is shown, output determines what the robot receives. |
| resolution | 512 | Resolution of the rgb/depth image; choose from [128, 256, 512] |
| speed: timestep | 0.01 | Length of one physics simulation step in seconds (as defined in Bullet). For example, if timestep=0.01 sec, frameskip=10, and the environment is running at 100 fps, it will run at 10x real time. Note: setting timestep above 0.1 can cause instability in the current version of the Bullet simulator, since an object should not travel faster than its own radius within one timestep. You can keep timestep at a low value but increase frameskip to simulate at a faster speed. See the Bullet guide under "discrete collision detection" for more info. |
| speed: frameskip | 10 | How many timesteps to skip when rendering frames. See the row above for an example. For tasks that do not require high-frequency control, you can set frameskip to a larger value to gain further speedup. |
| mode | gui/headless/web_ui | gui or headless; in a production environment (training), set this to headless. In gui mode there is visual output; in headless mode there is none. If you set mode to web_ui, it behaves like headless mode but the visuals are rendered to a web UI server (more information). |
| verbose | true/false | Show diagnostics in the terminal |
| fast_lq_render | true/false | If fast_lq_render is present in the yaml file, Gibson will use a smaller filler network. This renders faster but generates slightly lower quality camera output. This option is useful for training RL agents quickly. |
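To make the speed semantics concrete, here is a small illustrative sketch that reads the speed block from a config file and reproduces the arithmetic in the table above (timestep 0.01 s with frameskip 10 at 100 environment steps per second gives 10x real time). It assumes PyYAML and the example config path; it is not part of the Gibson codebase.

```python
# Illustrative sketch: how fast does the simulation run relative to real time,
# given the speed settings described in the table above (assumes PyYAML).
import yaml

with open("examples/configs/husky_navigate.yaml") as f:
    cfg = yaml.safe_load(f)

timestep = cfg["speed"]["timestep"]    # seconds of simulated time per physics step
frameskip = cfg["speed"]["frameskip"]  # physics steps per environment step

sim_seconds_per_env_step = timestep * frameskip
env_steps_per_second = 100             # hypothetical throughput of your machine
print("real-time factor:", sim_seconds_per_env_step * env_steps_per_second)
# e.g. 0.01 * 10 * 100 = 10x real time, matching the example in the table
```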

Making Your Customized Environment

Gibson provides a set of methods for you to define your own environments. You can follow the existing environments inside gibson/core/envs.

| Method name | Usage |
|-------------|-------|
| robot.render_observation(pose) | Render new observations based on pose; returns a dictionary. |
| robot.get_observation() | Get the observation at the current pose. Needs to be called after robot.render_observation(pose). This does not induce extra computation. |
| robot.get_position() | Get the current robot position. |
| robot.get_orientation() | Get the current robot orientation. |
| robot.eyes.get_position() | Get the current position of the robot's perceptive camera. |
| robot.eyes.get_orientation() | Get the current orientation of the robot's perceptive camera. |
| robot.get_target_position() | Get the robot target position. |
| robot.apply_action(action) | Apply an action to the robot. |
| robot.reset_new_pose(pos, orn) | Reset the robot to any pose. |
| robot.dist_to_target() | Get the current distance from the robot to the target. |
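As a hedged illustration of how these helpers might be combined, the sketch below computes a simple progress-based reward from an existing environment instance. It is not the library's reward implementation; `env` is assumed to have been created as in the "Coding Your RL Agent" section, and the method names are taken from the table above.

```python
# Illustrative sketch: a simple progress-based reward using the robot helpers above.
# Assumes `env` was created as in the "Coding Your RL Agent" section.

def progress_reward(env, prev_dist):
    """Reward the decrease in distance to the target since the previous step."""
    dist = env.robot.dist_to_target()   # current distance from robot to target
    return prev_dist - dist, dist       # positive when the robot got closer

prev_dist = env.robot.dist_to_target()
obs, rew, env_done, info = env.step(env.action_space.sample())
shaped_rew, prev_dist = progress_reward(env, prev_dist)
print("position:", env.robot.get_position(), "shaped reward:", shaped_rew)
```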

Goggles: transferring the agent to real-world

Gibson includes a baked-in domain adaptation mechanism, named Goggles, for when an agent trained in Gibson is going to be deployed in the real world (i.e. operate based on images coming from an onboard camera). The mechanism is essentially a learned inverse function that alters the frames coming from a real camera to look like what Gibson would have rendered, and hence dissolves the domain gap.

More details: With all the imperfections in point cloud rendering, it has proven difficult to get completely photo-realistic rendering with neural network fixes. The remaining issues create a domain gap between the synthesized and real images. Therefore, rather than trying to (unsuccessfully) render images that are identical to real ones, we formulate the rendering problem as forming a joint space that ensures a correspondence between rendered and real images. This provides a deterministic pathway for traversing across these domains and hence undoing the gap. We add another network "u" for the target image (I_t) and define the rendering loss to minimize the distance between f(I_s) and u(I_t), where "f" and "I_s" represent the filler neural network and the point cloud rendering output, respectively (see the loss in the figure above). We use the same network structure for f and u. The function u(I) is trained to alter a real-world observation, I_t, to look like the corresponding I_s and consequently dissolve the gap. We named the u network Goggles, as it resembles corrective lenses for the agent when deployed in the real world. Detailed formulation and discussion of the mechanism can be found in the paper. You can download the function u and apply it when you deploy your trained agent in the real world.

In order to use Goggles, you will preferably need a camera with a depth sensor; we provide an example here for Kinect. The trained goggle functions are stored in assets/unfiller_{resolution}.pth, and each one is paired with one filler function. You need to use the correct one depending on which filler function is used. If you don't have a camera with a depth sensor, we also provide an RGB-only example here.
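For orientation only, a very rough sketch of applying the goggles function to a single frame is shown below. It is not the official pipeline (see the Kinect and RGB-only examples linked above for that); it assumes, purely for illustration, that the checkpoint at gibson/assets/unfiller_256.pth loads as a complete torch module taking a normalized 1x3x256x256 RGB tensor, and that a recent PyTorch is used.

```python
# Rough sketch only: apply the goggles ("unfiller") network u to one real camera frame.
# Assumptions (see lead-in): the .pth loads as a full module; input is 256x256 RGB.
import numpy as np
import torch

unfiller = torch.load("gibson/assets/unfiller_256.pth")  # match your filler/resolution
unfiller.eval()

def apply_goggles(rgb_frame):
    """rgb_frame: 256x256x3 uint8 image from the onboard camera."""
    x = torch.from_numpy(rgb_frame.transpose(2, 0, 1)[None]).float() / 255.0
    with torch.no_grad():
        y = unfiller(x)                       # u(I_t): Gibson-style frame
    y = y[0].clamp(0, 1).cpu().numpy().transpose(1, 2, 0)
    return (y * 255).astype(np.uint8)
```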

Citation

If you use Gibson Environment's software or database, please cite:

@inproceedings{xiazamirhe2018gibsonenv,
  title={Gibson {Env}: real-world perception for embodied agents},
  author={Xia, Fei and R. Zamir, Amir and He, Zhi-Yang and Sax, Alexander and Malik, Jitendra and Savarese, Silvio},
  booktitle={Computer Vision and Pattern Recognition (CVPR), 2018 IEEE Conference on},
  year={2018},
  organization={IEEE}
}

gibsonenv's People

Contributors

amir32002, francis-lewis, fxia22, hzyjerry, ir0


gibsonenv's Issues

Spotty Camera Images

I am running play_drone_camera.py through Allensville, with orientation [0, 0, 1.75] and position [2.6, 2, 0.4], and this is the output I get with rgb_prefilled, rgb_filled, and depth. It seems like there are a lot of missing spots in rgb_prefilled. Am I running something incorrectly, or are there certain viewpoints in the models where there are a lot of holes?

screenshot from 2018-07-26 12-07-48

Depth render fails on large models

Error message:

depth_render: malloc.c:2394: sysmalloc: Assertion `(old_top == initial_top (av) && old_size == 0) || ((unsigned long) (old_size) >= MINSIZE && prev_inuse (old_top) && ((unsigned long) old_end & (pagesize - 1)) == 0)' failed.

Potentially due to memory overflow

Frameskip doesn't work as expected with Minitaur

When adjusting frameskip for minitaur, any settings besides timestep: 0.001 and frameskip: 1 leads to erratic minitaur leg behavior when running examples/demo/controller_minitaur_nonviz.py.

Very low fps

I run play_husky_camera.py on a GTX 1070 but can only obtain an fps of 9, far less than what you reported in the paper.
Do you know what the problem is?

Interaction with objects

Is there an option to spawn custom objects (e.g. cubes, balls...) with which our agents would be able to interact, fully leveraging Bullet engine?

Scaling Physics Models

Hi, I am trying to run a scaled version of the quadrotor in GibsonEnv. How can I do this?

I tried changing the dimensions of the quadrotor in quadrotor_base.obj, and changing the scale in quadrotor.urdf, but it didn't change anything when I ran the simulator. I also tried changing self.mjcf_scaling in robot_locomotors, but when running play_drone_nonviz.py it changed the view of the environment, and I could no longer see the quadrotor.

Thanks!

Gibson Goggle

Can you guys provide more instructions on how to use Gibson's Goggle?
I am trying to convert RGB images to Gibson style to train an image classifier for identifying chairs.

I am running Gibson headless in a docker container.
Thanks,

Change the camera height, do not affect physics

Hi, is it possible to change the camera height of the robot without changing the underlying physics of the robot? Could you please point me in the right direction on which file to change?

Thanks!

Gazebo integration

Looks fantastic. Very well done guys. Is Gazebo integration planned anytime soon?

Seems that this could add tons of value on top of ODE or Bullet.

Thanks,

Hazy Camera Image

I am trying to run Gibson Env in space3 initial position [-0.6897, 9.0862, 1.5] and orientation [0, 0, -1.57], with camera fov 0.66666 and resolution 128. The images I am getting for rgb_filled turn out to be very hazy, and I just get a black screen for rgb_prefilled (see attached images). Is there some way I can fix this?

I also tried it on space7, and the rgb_filled looks good, but the rgb_prefilled image is still just black.

Thanks!
screenshot from 2018-07-19 14-44-48
screenshot from 2018-07-19 14-46-48

train_husky_navigate_ppo2 episodes to converge

Hi, I was able to run the Docker version successfully.

When I run the script train_husky_navigate_ppo2.py it runs correctly but the robot barely moves. When I run the other script like train_husky_gibson_flagrun_ppo1.py, it runs much faster.

Is it expected? Do you have any numbers on how many epochs the agents start to converge or move around?

I believe it can take very long to converge, so I'd just like to confirm. Let me know if I'm missing something.

Thanks in advance!

env.reset() and env.step() not working

Just did a fresh install of Gibson from source. When I run the following code:

from gibson.envs.husky_env import HuskyNavigateEnv

env = HuskyNavigateEnv(gpu_count=1, config=<path to examples/configs/husky_navigate.yaml>)
env.reset()

or

env.step(0)

I get a not implemented error.

When I try doing env._step(0), I get:

~/svl/GibsonEnv/gibson/envs/husky_env.py in _rewards(self, action, debugmode)
     55     def _rewards(self, action=None, debugmode=False):
     56         a = action
---> 57         potential_old = self.potential
     58         self.potential = self.robot.calc_potential()
     59         progress = float(self.potential - potential_old)

AttributeError: 'HuskyNavigateEnv' object has no attribute 'potential'

Not quite sure what to do. I just upgraded OpenAI gym from 0.9.4 to 0.10.5. Perhaps this is the issue? Still, shouldn't env have step and reset methods that work?

Gibson on remote server hangs when loading

When running on Deep Learning AMI (Ubuntu) Version 12.0 on a p2.xlarge AWS machine, gibson hangs at line 128: self.socket_mist.connect("tcp://localhost:{}").format(self.port-1) in file gibson/core/render/pcrender.py. Any tips on what may be wrong?

An Issue about Environment on my Computer

Hello everyone,

I am a novice Ubuntu and GitHub user. I am currently studying for a Master's degree in EEE at my university, and I want to use GibsonEnv for my final thesis. I installed the packages and requirements according to the instructions and could run the first example. Unfortunately, the other examples, which involve the camera and RGB output, don't work and close themselves. I've read some issues about the cuda renderer and related problems.

The first example works as follows:
working_1

The other examples have problems; the output is given below:

(py35) deepsrv@deepsrv-System-Product-Name:~/GibsonEnv$ python examples/demo/play_husky_camera.py
Error: cuda renderer is not loaded, rendering will not work
pybullet build time: Oct 17 2018 10:24:42
pygame 1.9.4
Hello from the pygame community. https://www.pygame.org/contribute.html
/home/deepsrv/GibsonEnv/examples/demo/../configs/play/play_husky_camera.yaml
startThreads creating 1 threads.
starting thread 0
started thread 0
argc=2
argv[0] = --unused
argv[1] = --start_demo_name=Physics Server
ExampleBrowserThreadFunc started
X11 functions dynamically loaded using dlopen/dlsym OK!
Creating context
Created GL 3.0 context
Direct GLX rendering context obtained
Making context current
GL_VENDOR=NVIDIA Corporation
GL_RENDERER=GeForce GTX 780/PCIe/SSE2
GL_VERSION=3.2.0 NVIDIA 410.48
GL_SHADING_LANGUAGE_VERSION=1.50 NVIDIA via Cg compiler
pthread_getconcurrency()=0
Version = 3.2.0 NVIDIA 410.48
Vendor = NVIDIA Corporation
Renderer = GeForce GTX 780/PCIe/SSE2
b3Printf: Selected demo: Physics Server
startThreads creating 1 threads.
starting thread 0
started thread 0
MotionThreadFunc thread started
ven = NVIDIA Corporation
killing None
Error in sys.excepthook:
Traceback (most recent call last):
File "/home/deepsrv/GibsonEnv/gibson/envs/env_modalities.py", line 642, in camera_multi_excepthook
self.r_camera_mul.terminate()
AttributeError: 'NoneType' object has no attribute 'terminate'

Original exception was:
Traceback (most recent call last):
File "examples/demo/play_husky_camera.py", line 17, in
env = HuskyNavigateEnv(config=args.config, gpu_idx = args.gpu)
File "/home/deepsrv/GibsonEnv/gibson/envs/husky_env.py", line 40, in init
self.robot_introduce(Husky(self.config, env=self))
File "/home/deepsrv/GibsonEnv/gibson/envs/env_modalities.py", line 349, in robot_introduce
self.setup_rendering_camera()
File "/home/deepsrv/GibsonEnv/gibson/envs/env_modalities.py", line 375, in setup_rendering_camera
self.setup_camera_multi()
File "/home/deepsrv/GibsonEnv/gibson/envs/env_modalities.py", line 671, in setup_camera_multi
self.r_camera_mul = subprocess.Popen(shlex.split(render_main), shell=False)
File "/home/deepsrv/anaconda3/envs/py35/lib/python3.5/subprocess.py", line 676, in init
restore_signals, start_new_session)
File "/home/deepsrv/anaconda3/envs/py35/lib/python3.5/subprocess.py", line 1289, in _execute_child
raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: './depth_render'
numActiveThreads = 0
stopping threads
stopThreads: Thread 0 used: 1
Thread with taskId 0 exiting
Thread TERMINATED
destroy semaphore
semaphore destroyed
destroy main semaphore
main semaphore destroyed
finished
numActiveThreads = 0
btShutDownExampleBrowser stopping threads
stopThreads: Thread 0 used: 1
Thread with taskId 0 exiting
Thread TERMINATED
destroy semaphore
semaphore destroyed
destroy main semaphore
main semaphore destroyed
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/home/deepsrv/anaconda3/envs/py35/lib/python3.5/site-packages/gym/utils/closer.py", line 67, in close
closeable.close()
File "/home/deepsrv/GibsonEnv/gibson/envs/env_modalities.py", line 487, in _close
self.r_camera_mul.terminate()
AttributeError: 'NoneType' object has no attribute 'terminate'

depth render error when rendering semantics

GibsonEnv works fine on my machine when the output is just RGB and depth.
When I tried to output Semantics, I got this error.

/home/reza/Datasets/GibsonEnv/gibson/assets/dataset/space7/semantic.obj: size of temp vertices 193957, vertex indices 1686 out vertices 1686
From ply loaded total of 4 vertices
Semantic.obj file was loaded with success.
Parsing /home/reza/Datasets/GibsonEnv/gibson/assets/dataset/space7/semantic.mtl file for material textures.
  1%|▎                                                    | 1/190 [00:02<06:35,  2.09s/it]Number of loaded materials: 1686
Texture file was loaded with success, total: 1686
Indexing VBO total groups: 1686
 16%|████████▍                                           | 31/190 [00:05<00:27,  5.83it/s]Finished indexing vertices v 473694 uvs 473694 normals 473694 semantics 473694
Semantics 
 19%|██████████▏                                         | 37/190 [00:06<00:26,  5.81it/s]*** 
Error in `./depth_render': corrupted double-linked list (not small): 0x00000000020b9750 ***

And the process got stuck at,

Semantic.obj file was loaded with success.
Parsing /home/reza/Datasets/GibsonEnv/gibson/assets/dataset/space7/semantic.mtl file for material textures.
Number of loaded materials: 1686
Texture file was loaded with success, total: 1686
filename: /home/reza/Datasets/GibsonEnv/gibson/assets/dataset/space7/mesh_z_up.obj
Episode: steps:0 score:0
Episode count: 0

I'm wondering if there is a way to make the rendering work.

NameError: name 'delta_pos' is not defined

Hi there! First of all, thanks for this awesome environment. It looks great and I'm very much looking forward to using it.

It looks like there's an error in gibson/core/physics/robot_bases.py:

In reset_random_pos(), the variable 'delta_pos' is not defined. I fixed it by replacing lines 133-136:

new_pos = [ pos[0] + self.np_random.uniform(low=-delta_pos, high=delta_pos),
                    pos[1] + self.np_random.uniform(low=-delta_pos, high=delta_pos),
                    pos[2] + self.np_random.uniform(low=0, high=delta_pos)]
new_orn = quaternions.qmult(quaternions.axangle2quat([1, 0, 0], self.np_random.uniform(low=-delta_orn, high=delta_orn)), orn)

with

x_range = self.config["random"]["random_init_x_range"]
y_range = self.config["random"]["random_init_y_range"]
z_range = self.config["random"]["random_init_z_range"]
r_range = self.config["random"]["random_init_rot_range"]

new_pos = [ pos[0] + self.np_random.uniform(low=x_range[0], high=x_range[1]),
                    pos[1] + self.np_random.uniform(low=y_range[0], high=y_range[1]),
                    pos[2] + self.np_random.uniform(low=z_range[0], high=z_range[1])]
new_orn = quaternions.qmult(quaternions.axangle2quat([1, 0, 0], self.np_random.uniform(low=r_range[0], high=r_range[1])), orn)

You can reproduce this issue by running husky_navigate.yaml with random_initial_pose = true.

deployment on a headless server problem

When I run

/opt/websockify/run 5901 --web=/opt/noVNC --wrap-mode=ignore -- vncserver :1 -securitytypes otp -otp -noxstartup

this command, I get the following output:

WebSocket server settings:

  • Listen on :5901
  • Flash security policy server
  • Web server. Web root: /opt/noVNC
  • SSL/TLS support
  • proxying from :5901 to 'vncserver :1 -securitytypes otp -otp -noxstartup' (port 33951)
    Starting 'vncserver :1 -securitytypes otp -otp -noxstartup'

Desktop 'TurboVNC: unix:1 ()' started on display unix:1

One-Time Password authentication enabled. Generating initial OTP ...
Full control one-time password: 23411271
Run 'vncpasswd -o' from within the TurboVNC session or
'vncpasswd -o -display unix:1' from within this shell
to generate additional OTPs
Log file is /root/.vnc/unix:1.log

Then, how can I run

Run gibson with DISPLAY=:1 vglrun python xxxx

this command? The shell is not interactive any more; it is listening for connections and printing log info.

Can anyone help?

Thanks

shader loading error for play_drone_camera.py

I'm trying to run play_drone_camera.py, and although it renders the nonviz version correctly, there's some error with loading the shaders. It only completes partially, so I get this as the output -

screenshot from 2018-11-30 13-40-35

the error is -

terminate called after throwing an instance of 'zmq::error_t

Here's the complete error dump -

pybullet build time: Nov  9 2018 22:05:20
pygame 1.9.4
Hello from the pygame community. https://www.pygame.org/contribute.html
/root/mount/gibson/examples/demo/../configs/play/play_drone_camera.yaml
DroneNavigateEnv
startThreads creating 1 threads.
starting thread 0
started thread 0 
argc=2
argv[0] = --unused
argv[1] = --start_demo_name=Physics Server
ExampleBrowserThreadFunc started
X11 functions dynamically loaded using dlopen/dlsym OK!
Creating context
Created GL 3.0 context
Direct GLX rendering context obtained
Making context current
GL_VENDOR=NVIDIA Corporation
GL_RENDERER=TITAN Xp/PCIe/SSE2
GL_VERSION=3.2.0 NVIDIA 410.73
GL_SHADING_LANGUAGE_VERSION=1.50 NVIDIA via Cg compiler
pthread_getconcurrency()=0
Version = 3.2.0 NVIDIA 410.73
Vendor = NVIDIA Corporation
Renderer = TITAN Xp/PCIe/SSE2
b3Printf: Selected demo: Physics Server
startThreads creating 1 threads.
starting thread 0
started thread 0 
MotionThreadFunc thread started
ven = NVIDIA Corporation
Processing the data:
Total 1 scenes 0 train 1 test
Indexing
  0%|                                                                               | 0/1 [00:00<?, ?it/s]number of devices found 1
Loaded EGL 1.5 after reload.
100%|#######################################################################| 1/1 [00:00<00:00,  5.29it/s]
  0%|                                                                             | 0/190 [00:00<?, ?it/s]GL_VENDOR=NVIDIA Corporation
GL_RENDERER=TITAN Xp/PCIe/SSE2
GL_VERSION=4.6.0 NVIDIA 410.73
GL_SHADING_LANGUAGE_VERSION=4.60 NVIDIA
finish loading shaders
 15%|##########                                                          | 28/190 [00:01<01:15,  2.15it/s]terminate called after throwing an instance of 'zmq::error_t'
  what():  Address already in use
100%|###################################################################| 190/190 [00:09<00:00, 20.57it/s]
{'use_filler': True, 'random': {'random_init_z_range': [-0.1, 0.1], 'random_initial_pose': False, 'random_init_rot_range': [-0.1, 0.1], 'random_init_y_range': [-0.1, 0.1], 'random_init_x_range': [-0.1, 0.1], 'random_target_pose': False}, 'model_id': 'space7', 'target_pos': [-14.3, 45.07, 0.5], 'speed': {'timestep': 0.01, 'frameskip': 1}, 'display_ui': True, 'envname': 'DroneNavigateEnv', 'show_diagnostics': False, 'is_discrete': True, 'mode': 'gui', 'output': ['nonviz_sensor', 'rgb_filled', 'depth'], 'resolution': 256, 'ui_num': 2, 'verbose': False, 'initial_orn': [0, 0, 4.71], 'initial_pos': [-14.3, 5, 1.2], 'target_orn': [0, 0, 1.57], 'fov': 1.57, 'ui_components': ['RGB_FILLED', 'DEPTH']}
Episode: steps:0 score:0
Episode count: 0
render to ui
Play Env: step: complete: 3.92 fps, 0.25502 seconds
Play mode: reward 0.000000
render to ui
Play Env: step: complete: 7.32 fps, 0.13668 seconds
Play mode: reward 0.000000
render to ui
Play Env: step: complete: 7.34 fps, 0.13622 seconds
Play mode: reward 0.000000
render to ui
Play Env: step: complete: 7.40 fps, 0.13508 seconds
Play mode: reward 0.000000
render to ui

Headless server deployment error

Hi, I followed "Notes on deployment on a headless server" but I'm getting an error:

root@feb123451b79:~/mount/gibson# DISPLAY=:1 vglrun python examples/demo/play_husky_nonviz.py
pybullet build time: Jun 18 2018 19:32:26
/root/mount/gibson/examples/demo/../configs/play/play_husky_nonviz.yaml
startThreads creating 1 threads.
starting thread 0
started thread 0
argc=2
argv[0] = --unused
argv[1] = --start_demo_name=Physics Server
ExampleBrowserThreadFunc started
X11 functions dynamically loaded using dlopen/dlsym OK!
No protocol specified
[VGL] ERROR: Could not open display :0.

Output from /opt/websockify/run:

WebSocket server settings:
  - Listen on :5901
  - Flash security policy server
  - Web server. Web root: /opt/noVNC
  - SSL/TLS support
  - proxying from :5901 to 'vncserver :1 -securitytypes otp -otp -noxstartup' (port 35923)
Starting 'vncserver :1 -securitytypes otp -otp -noxstartup'
xauth:  file /root/.Xauthority does not exist

Desktop 'TurboVNC: unix:1 ()' started on display unix:1

One-Time Password authentication enabled.  Generating initial OTP ...
Full control one-time password: 14299659
Run 'vncpasswd -o' from within the TurboVNC session or
    'vncpasswd -o -display unix:1' from within this shell
    to generate additional OTPs
Log file is /root/.vnc/unix:1.log

Log file contents:

TurboVNC Server (Xvnc) 64-bit v2.1.2 (build 20170925)
Copyright (C) 1999-2017 The VirtualGL Project and many others (see README.txt)
Visit http://www.TurboVNC.org for more information on TurboVNC

18/06/2018 19:41:07 Using auth configuration file /etc/turbovncserver-security.conf
18/06/2018 19:41:07 Enabled authentication method 'otp'
18/06/2018 19:41:07 Advertising security type 'vncauth' to viewers
_XSERVTransmkdir: Mode of /tmp/.X11-unix should be set to 0777
_XSERVTransmkdir: this may cause subsequent errors
18/06/2018 19:41:07 Desktop name 'TurboVNC: unix:1 ()' (feb123451b79:1)
18/06/2018 19:41:07 Protocol versions supported: 3.3, 3.7, 3.8, 3.7t, 3.8t
18/06/2018 19:41:07 Listening for VNC connections on TCP port 5901
18/06/2018 19:41:07   Interface 127.0.0.1
18/06/2018 19:41:07 NOTICE: HTTP server disabled per system policy
18/06/2018 19:41:07 Framebuffer: BGRX 8/8/8/8
18/06/2018 19:41:07 Maximum clipboard transfer size: 1048576 bytes
18/06/2018 19:41:07 VNC extension running!

Could you help me resolve this problem?

Position control for Husky

Right now I only see the HuskyNavigateSpeedControlEnv. Is there an ideal position controller environment for Husky so we can abstract away the low level dynamics?

Top-down view of Gibson

How can I get a top-down view of the whole Gibson map, like the one shown here?
image

Thank you!

robot.reset_new_pos() causes segmentation fault

I'm currently trying to use ROS to move the robot position via reset_new_pos(). To do this, I publish a PoseStamped message, which is received by examples/ros/gibson-ros/turtlebot_rgbd.py. turtlebot_rgbd.py then calls env.robot.reset_new_pos(data.pose.position, data.pose.orientation). An example message is the following:

header: 
  seq: 1
  stamp: 
    secs: 1526417273
    nsecs: 781210899
  frame_id: "map"
pose: 
  position: 
    x: -10.4499994278
    y: 5.69999990463
    z: 0.5
  orientation: 
    x: 0.0
    y: 0.0
    z: 0.0
    w: 1.0

However, when this is done, I get the following message:

[INFO] [1526417273.781602]: Teleporting robot
Fatal Python error: (pygame parachute) Segmentation Fault
[ INFO] [1526417273.983097364]: Got new plan
[turtlebot_gibson_sim-3] process has died [pid 4376, exit code -6, cmd ~/catkin_ws/src/gibson-ros/turtlebot_rgbd.py __name:=turtlebot_gibson_sim __log:=~/.ros/log/13cc672c-5881-11e8-80f1-305a3a540f92/turtlebot_gibson_sim-3.log].
log file: ~/.ros/log/13cc672c-5881-11e8-80f1-305a3a540f92/turtlebot_gibson_sim-3*.log
[ WARN] [1526417274.247098312]: Costmap2DROS transform timeout. Current time: 1526417274.2471, global_pose stamp: 1526417273.7352, tolerance: 0.5000
[ WARN] [1526417274.247133164]: Could not get robot pose, cancelling reconfiguration
[ERROR] [1526417274.489536942]: Could not get robot pose

X server error

I followed the instructions in your readme and was successfully able to setup and run the docker environment. However I get this error

Invalid MIT-MAGIC-COOKIE-1 keyxhost: unable to open display ":0.0"

on running the xhost +local:root command

Successfully built 62993fc08d1b
Successfully tagged gibson:latest
drparadox30@mae2:/media/drparadox30/Data/GibsonEnv$ sudo docker run --runtime=nvidia -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v '/media/drparadox30/Data/GibsonEnv/gibson/assets/dataset':/root/mount/gibson/gibson/assets/dataset gibson
[sudo] password for drparadox30:
root@adb10486efed:~/mount/gibson# exit
exit
drparadox30@mae2:/media/drparadox30/Data/GibsonEnv$ export DISPLAY=:0.0
drparadox30@mae2:/media/drparadox30/Data/GibsonEnv$ xhost +local:root
Invalid MIT-MAGIC-COOKIE-1 keyxhost: unable to open display ":0.0"

What could be the problem?

512 resolution environment gives error

(py35) XXXXX@XXXXXX:~/codes/GibsonEnv$ python examples/demo/play_turtlebot_camera.py
pybullet build time: Jan 10 2019 16:01:23
pygame 1.9.4
Hello from the pygame community. https://www.pygame.org/contribute.html
/home/tushar/codes/GibsonEnv/examples/demo/../configs/play/play_turtlebot_camera.yaml
startThreads creating 1 threads.
starting thread 0
started thread 0
argc=2
argv[0] = --unused
argv[1] = --start_demo_name=Physics Server
ExampleBrowserThreadFunc started
X11 functions dynamically loaded using dlopen/dlsym OK!
Creating context
Created GL 3.0 context
Direct GLX rendering context obtained
Making context current
GL_VENDOR=NVIDIA Corporation
GL_RENDERER=GeForce GTX TITAN X/PCIe/SSE2
GL_VERSION=3.2.0 NVIDIA 396.54.09
GL_SHADING_LANGUAGE_VERSION=1.50 NVIDIA via Cg compiler
pthread_getconcurrency()=0
Version = 3.2.0 NVIDIA 396.54.09
Vendor = NVIDIA Corporation
Renderer = GeForce GTX TITAN X/PCIe/SSE2
b3Printf: Selected demo: Physics Server
startThreads creating 1 threads.
starting thread 0
started thread 0
MotionThreadFunc thread started
ven = NVIDIA Corporation
Processing the data:
Total 1 scenes 0 train 1 test
Indexing
0%| | 0/1 [00:00<?, ?it/s]number of devices found 1
Loaded EGL 1.4 after reload.
GL_VENDOR=NVIDIA Corporation
GL_RENDERER=GeForce GTX TITAN X/PCIe/SSE2
GL_VERSION=4.6.0 NVIDIA 396.54.09
GL_SHADING_LANGUAGE_VERSION=4.60 NVIDIA
finish loading shaders
100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2.69it/s]
13%|████████████ | 24/190 [00:01<00:23, 7.12it/s] 15%|██████████████▌ | 29/190 [00:02<00:15, 10.40it/s]100%|██████████████████████████████████████████████████████████████████████████████████████████████| 190/190 [00:10<00:00, 18.29it/s]
{'initial_pos': [-14.3, 5, 1.2], 'ui_num': 2, 'fov': 1.57, 'model_id': 'space7', 'target_orn': [0, 0, 1.57], 'verbose': False, 'display_ui': True, 'show_diagnostics': True, 'mode': 'gui', 'initial_orn': [0, 0, 4.71], 'speed': {'frameskip': 1, 'timestep': 0.01}, 'target_pos': [-14.3, 45.07, 0.5], 'is_discrete': True, 'envname': 'TurtlebotNavigateEnv', 'random': {'random_init_rot_range': [-0.1, 0.1], 'random_target_range': 0.1, 'random_init_z_range': [-0.1, 0.1], 'random_init_x_range': [-0.1, 0.1], 'random_init_y_range': [-0.1, 0.1], 'random_target_pose': False, 'random_initial_pose': False}, 'ui_components': ['RGB_FILLED', 'DEPTH'], 'resolution': 512, 'output': ['nonviz_sensor', 'rgb_filled', 'depth'], 'use_filler': True}
Episode: steps:0 score:0
Episode count: 0
THCudaCheck FAIL file=/pytorch/torch/lib/THC/generic/THCTensorCopy.c line=20 error=77 : an illegal memory access was encountered
killing <subprocess.Popen object at 0x7f3ad486a2b0>
File "examples/demo/play_turtlebot_camera.py", line 16, in
File "/home/tushar/codes/GibsonEnv/gibson/utils/play.py", line 107, in play
File "/home/tushar/codes/GibsonEnv/gibson/envs/env_modalities.py", line 391, in _reset
File "/home/tushar/codes/GibsonEnv/gibson/envs/env_modalities.py", line 540, in render_observations
File "/home/tushar/codes/GibsonEnv/gibson/core/render/pcrender.py", line 493, in renderOffScreen
File "/home/tushar/codes/GibsonEnv/gibson/core/render/pcrender.py", line 437, in render
File "/home/tushar/miniconda3/envs/py35/lib/python3.5/site-packages/torch/autograd/variable.py", line 298, in cuda
File "/home/tushar/miniconda3/envs/py35/lib/python3.5/site-packages/torch/autograd/_functions/tensor.py", line 201, in forward
File "/home/tushar/miniconda3/envs/py35/lib/python3.5/site-packages/torch/_utils.py", line 69, in _cuda
RuntimeError: cuda runtime error (77) : an illegal memory access was encountered at /pytorch/torch/lib/THC/generic/THCTensorCopy.c:2
0
numActiveThreads = 0
stopping threads
stopThreads: Thread 0 used: 1
Thread with taskId 0 exiting
Thread TERMINATED
destroy semaphore
semaphore destroyed
destroy main semaphore
main semaphore destroyed
finished
numActiveThreads = 0
btShutDownExampleBrowser stopping threads
stopThreads: Thread 0 used: 1
Thread with taskId 0 exiting
Thread TERMINATED
destroy semaphore
semaphore destroyed
destroy main semaphore
main semaphore destroyed

Add cube as noninteractive visual to scene

Is there a way we could drop a red cube or other visual object into the scene as only a visual (not necessarily supporting physical interaction)? We want to use a visual navigation target for our experiments.

Detecting collisions

I'm using Gibson to run quadrotor simulations. I wanted to know how I could detect collisions between the agent and the environment. So far I've been using approximate methods to do that but they're not completely accurate. Is there an accurate way of doing it maybe by using the underlying pybullet engine or do you already have a higher level function which I can simply call?

Running on Google Cloud

I am new to this; does anyone have instructions on how to set up Gibson on a GCloud instance, and how to view the web UI for simulations on a GCloud instance?

Much appreciated.

View Synthesis

What should I do if I just want to do the "View Synthesis" part?

FileNotFoundError: [Errno 2] No such file or directory: ../coord.npy

When I execute this command:

python examples/demo/play_husky_nonviz.py

It shows me this error:

Traceback (most recent call last):
  File "play_husky_nonviz.py", line 1, in <module>
    from gibson.envs.husky_env import HuskyNavigateEnv
  File "/home/amax/Python/GibsonEnv/gibson/envs/husky_env.py", line 1, in <module>
    from gibson.envs.env_modalities import CameraRobotEnv, BaseRobotEnv, SemanticRobotEnv
  File "/home/amax/Python/GibsonEnv/gibson/envs/env_modalities.py", line 2, in <module>
    from gibson.core.render.pcrender import PCRenderer
  File "/home/amax/Python/GibsonEnv/gibson/core/render/pcrender.py", line 32, in <module>
    coords  = np.load(os.path.join(assets_file_dir, 'coord.npy'))
  File "/root/anaconda3/envs/py35/lib/python3.5/site-packages/numpy/lib/npyio.py", line 372, in load
    fid = open(file, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '/home/amax/Python/GibsonEnv/gibson/assets/coord.npy'

Randomize initial robot position

Is there a way to randomize the initial robot position that respects the constraints of the environment?

When I enable random initial position, it currently seems to spawn the robot into walls, outside the environment, etc.

ValueError: cannot reshape array of size 262144 into shape (64,64)

Something about image resolutions appears to be broken. Running husky_navigate.yaml with resolutions other than 512 is causing a numpy reshape error for me:

killing <subprocess.Popen object at 0x7fbff94daa20>
   File "./run", line 51, in <module>
   File "/usr/local/lib/python3.5/dist-packages/gym/core.py", line 104, in reset
   File "/home/jake/local-workspace/GibsonEnv/gibson/envs/husky_env.py", line 155, in _reset
   File "/home/jake/local-workspace/GibsonEnv/gibson/envs/env_modalities.py", line 334, in _reset
   File "/home/jake/local-workspace/GibsonEnv/gibson/envs/env_modalities.py", line 144, in _reset
   File "/home/jake/local-workspace/GibsonEnv/gibson/envs/env_modalities.py", line 457, in render_observations
   File "/home/jake/local-workspace/GibsonEnv/gibson/core/render/pcrender.py", line 476, in renderOffScreen
   File "/home/jake/local-workspace/GibsonEnv/gibson/core/render/pcrender.py", line 351, in render
 ValueError: cannot reshape array of size 262144 into shape (64,64)

I wasn't able to track down the root cause of this issue, although it only started happening after I started trying out different resolutions in the config file. Maybe some sort of caching problem?

About gibson.utils.pposgd_fuse.py

At line 88 of gibson.utils.pposgd_fuse.py, I think it should be
yield {"ob": obs, "ob_sensor": obs_sensor, "rew": rews, "vpred": vpreds, "new": news,

camera image shows abnormally

When I run the command "roslaunch gibson-ros turtlebot_gmapping.launch" in ROS, the camera image looks like this:
screenshot from 2018-12-03 15-04-59

But when I just run Gibson without ROS, e.g. examples/demo/play_husky_camera.py, the camera image is good.
Did I install Gibson incorrectly?
How can I deal with it?

Some Questions About GibsonEnv

  • where in the code can we access the u function for the real world image goggles?
  • is the camera field of view parameter in the yaml referring to the diagonal of the image or the side?
  • when setting the position of a robot, what is the unit of the position?

Thanks!

turtlebot_gmapping.launch failed

When I want to use gibson-ros, I see that you said Gibson should be built from source with Python 2.7. However, I cannot open the link [Install gibson from source following installation guide in python2.7.] in README.md. I built Gibson with Python 3.5, and gibson-ros cannot roslaunch. So I'd like to know how to install Gibson from source with Python 2.7. Would you show the steps?

Inconsistency in number of models

I've been trying to use the navigation metrics to support initializing in multiple environments.

How many models of building spaces does gibson_full have? It seems to have only 360, however there are 500 json files in the navigation metrics for gibson_full. It seems the other 140 are not in the gibson_full tar file.

Projection matrix for depth images?

Where can I find the K matrix for projecting the depth to points? I am working with the raw dataset for now, and if I recall correctly, Matterport exports individual K matrices for each viewpoint.

Downloading only semantics dataset

If I'm interested only in the semantically segmented parts of the dataset:

Which dataset should I download?
Where is the format specification? (I guess it's just a PNG with color encoding according to what is specified in semantic_color.hpp?)

python examples/train/enjoy_husky_navigate_ppo1.py

********** Iteration 0 ************
Episode: steps:0 score:0
Episode count: 0
killing <subprocess.Popen object at 0x7f8492f790f0>
File "examples/train/enjoy_husky_navigate_ppo1.py", line 97, in
File "examples/train/enjoy_husky_navigate_ppo1.py", line 84, in main
File "examples/train/enjoy_husky_navigate_ppo1.py", line 70, in train
File "/home/gantao/workspaces/GibsonEnv-master/gibson/utils/pposgd_simple.py", line 403, in enjoy
File "/home/gantao/workspaces/GibsonEnv-master/gibson/utils/pposgd_simple.py", line 57, in traj_segment_generator
File "/home/gantao/anaconda3/envs/gibson/lib/python3.6/site-packages/gym/core.py", line 104, in reset
File "/home/gantao/workspaces/GibsonEnv-master/gibson/utils/monitor.py", line 56, in _reset
File "/home/gantao/workspaces/GibsonEnv-master/gibson/envs/env_modalities.py", line 384, in _reset
File "/home/gantao/workspaces/GibsonEnv-master/gibson/envs/env_modalities.py", line 166, in _reset
File "/home/gantao/workspaces/GibsonEnv-master/gibson/envs/env_modalities.py", line 529, in render_observations
File "/home/gantao/workspaces/GibsonEnv-master/gibson/core/render/pcrender.py", line 493, in renderOffScreen
File "/home/gantao/workspaces/GibsonEnv-master/gibson/core/render/pcrender.py", line 379, in render
ValueError: cannot reshape array of size 65536 into shape (128,128)
numActiveThreads = 0
stopping threads
stopThreads: Thread 0 used: 1
Thread with taskId 0 exiting
Thread TERMINATED
destroy semaphore
semaphore destroyed
destroy main semaphore
main semaphore destroyed
finished
numActiveThreads = 0
btShutDownExampleBrowser stopping threads
stopThreads: Thread 0 used: 1
Thread with taskId 0 exiting
destroy semaphore
semaphore destroyed
Thread TERMINATED
destroy main semaphore
main semaphore destroyed

Accessing depth sensor data

What are the different ways in which I can access depth data for my agent? If my agent is a drone, how can I access the depth sensor data. Is it from obs['nonviz_sensor'] or something similar? I tried using obs['depth'] but it seems that isn't a valid key.

Also, if I want to find the depth by ray casting, should I directly use the pybullet engine, or is there another existing method?
