Comments (15)

oawiles commented on August 12, 2024

OK I figured this out.

So the commits at which I checked out these two repos were:

  • habitat-sim: d383c2011bf1baab2ce7b3cd40aea573ad2ddf71
  • habitat-api: e94e6f3953fcfba4c29ee30f65baa52d6cea716e

Also, you were right that I changed vector_env.py in the habitat-api code. Apologies, I completely forgot I did this. I've included the modified version below.

Copy-paste the following into habitat-api/habitat/core/vector_env.py:


# Copyright (c) Facebook, Inc. and its affiliates.
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.

import multiprocessing as mp
from multiprocessing.connection import Connection
from queue import Queue
from threading import Thread
from typing import Any, Callable, Iterable, List, Optional, Set, Tuple, Union

import gym
import numpy as np
from gym.spaces.dict_space import Dict as SpaceDict

import habitat
from habitat.config import Config
from habitat.core.env import Env, Observations
from habitat.core.logging import logger
from habitat.core.utils import tile_images

STEP_COMMAND = "step"
RESET_COMMAND = "reset"
RENDER_COMMAND = "render"
CLOSE_COMMAND = "close"
OBSERVATION_SPACE_COMMAND = "observation_space"
ACTION_SPACE_COMMAND = "action_space"
CALL_COMMAND = "call"
# The commands below are synsin-specific additions (not in stock habitat-api).
NAVIGABLE_COMMAND = "navigate"
OBSERVATIONS = "observations"
AGENT_STATE_COMMAND = "get_agent_state"

def _make_env_fn(
    config: Config, dataset: Optional[habitat.Dataset] = None, rank: int = 0
) -> Env:
    r"""Constructor for default habitat Env.

    Args:
        config: configuration for environment.
        dataset: dataset for environment.
        rank: rank for setting seed of environment

    Returns:
        ``Env``/``RLEnv`` object
    """
    habitat_env = Env(config=config, dataset=dataset)
    habitat_env.seed(config.SEED + rank)
    return habitat_env


class VectorEnv:
    r"""Vectorized environment which creates multiple processes where each
    process runs its own environment. All the environments are synchronized
    on step and reset methods.

    Args:
        make_env_fn: function which creates a single environment. An
            environment can be of type Env or RLEnv
        env_fn_args: tuple of tuple of args to pass to the make_env_fn.
        auto_reset_done: automatically reset the environment when
            done. This functionality is provided for seamless training
            of vectorized environments.
        multiprocessing_start_method: the multiprocessing method used to
            spawn worker processes. Valid methods are
            ``{'spawn', 'forkserver', 'fork'}``. ``'forkserver'`` is the
            recommended method as it works well with CUDA. If
            ``'fork'`` is used, the subprocess must be started before
            any other GPU usage.
    """

    observation_spaces: SpaceDict
    action_spaces: SpaceDict
    _workers: List[Union[mp.Process, Thread]]
    _is_waiting: bool
    _num_envs: int
    _auto_reset_done: bool
    _mp_ctx: mp.context.BaseContext
    _connection_read_fns: List[Callable[[], Any]]
    _connection_write_fns: List[Callable[[Any], None]]

    def __init__(
        self,
        make_env_fn: Callable[..., Env] = _make_env_fn,
        env_fn_args: Optional[Tuple[Tuple]] = None,
        auto_reset_done: bool = True,
        multiprocessing_start_method: str = "forkserver",
    ) -> None:

        self._is_waiting = False
        self._is_closed = True

        assert (
            env_fn_args is not None and len(env_fn_args) > 0
        ), "number of environments to be created should be greater than 0"

        self._num_envs = len(env_fn_args)

        assert multiprocessing_start_method in self._valid_start_methods, (
            "multiprocessing_start_method must be one of {}. Got '{}'"
        ).format(self._valid_start_methods, multiprocessing_start_method)
        self._auto_reset_done = auto_reset_done
        self._mp_ctx = mp.get_context(multiprocessing_start_method)
        self._workers = []
        (
            self._connection_read_fns,
            self._connection_write_fns,
        ) = self._spawn_workers(  # noqa
            env_fn_args, make_env_fn
        )

        self._is_closed = False

        for write_fn in self._connection_write_fns:
            write_fn((OBSERVATION_SPACE_COMMAND, None))
        self.observation_spaces = [
            read_fn() for read_fn in self._connection_read_fns
        ]
        for write_fn in self._connection_write_fns:
            write_fn((ACTION_SPACE_COMMAND, None))
        self.action_spaces = [
            read_fn() for read_fn in self._connection_read_fns
        ]
        self._paused = []

    @property
    def num_envs(self):
        r"""
        Returns:
             number of individual environments.
        """
        return self._num_envs - len(self._paused)

    @staticmethod
    def _worker_env(
        connection_read_fn: Callable,
        connection_write_fn: Callable,
        env_fn: Callable,
        env_fn_args: Tuple[Any],
        auto_reset_done: bool,
        child_pipe: Optional[Connection] = None,
        parent_pipe: Optional[Connection] = None,
    ) -> None:
        r"""process worker for creating and interacting with the environment.
        """
        env = env_fn(*env_fn_args)
        if parent_pipe is not None:
            parent_pipe.close()
        try:
            command, data = connection_read_fn()
            while command != CLOSE_COMMAND:
                if command == STEP_COMMAND:
                    # different step methods for habitat.RLEnv and habitat.Env
                    if isinstance(env, habitat.RLEnv) or isinstance(
                        env, gym.Env
                    ):
                        # habitat.RLEnv
                        observations, reward, done, info = env.step(data)
                        if auto_reset_done and done:
                            observations = env.reset()
                        connection_write_fn((observations, reward, done, info))
                    elif isinstance(env, habitat.Env):
                        # habitat.Env
                        observations = env.step(data)
                        if auto_reset_done and env.episode_over:
                            observations = env.reset()
                        connection_write_fn(observations)
                    else:
                        raise NotImplementedError

                elif command == RESET_COMMAND:
                    observations = env.reset()
                    connection_write_fn(observations)

                elif command == RENDER_COMMAND:
                    connection_write_fn(env.render(*data[0], **data[1]))

                elif (
                    command == OBSERVATION_SPACE_COMMAND
                    or command == ACTION_SPACE_COMMAND
                ):
                    connection_write_fn(getattr(env, command))

                elif command == CALL_COMMAND:
                    function_name, function_args = data
                    if function_args is None or len(function_args) == 0:
                        result = getattr(env, function_name)()
                    else:
                        result = getattr(env, function_name)(*function_args)
                    connection_write_fn(result)
                elif command == NAVIGABLE_COMMAND:
                    # synsin addition: sample a random navigable point
                    # from the simulator.
                    location = env.sim.sample_navigable_point()
                    connection_write_fn(location)
                elif command == OBSERVATIONS:
                    # synsin addition: render observations at an arbitrary
                    # pose without stepping the env.
                    position, rotation = data
                    observations = env.sim.get_observations_at(
                        position=position,
                        rotation=rotation,
                        keep_agent_at_new_pose=True,
                    )
                    connection_write_fn(observations)
                elif command == AGENT_STATE_COMMAND:
                    # synsin addition: return the pose of the depth sensor
                    # (assumes a sensor named 'depth' is configured).
                    agent_state = env.sim.get_agent_state().sensor_states["depth"]
                    rotation = np.array(
                        [
                            agent_state.rotation.w,
                            agent_state.rotation.x,
                            agent_state.rotation.y,
                            agent_state.rotation.z,
                        ]
                    )
                    connection_write_fn((agent_state.position, rotation))
                else:
                    raise NotImplementedError

                command, data = connection_read_fn()

            if child_pipe is not None:
                child_pipe.close()
        except KeyboardInterrupt:
            logger.info("Worker KeyboardInterrupt")
        finally:
            env.close()

    def _spawn_workers(
        self,
        env_fn_args: Iterable[Tuple[Any, ...]],
        make_env_fn: Callable[..., Env] = _make_env_fn,
    ) -> Tuple[List[Callable[[], Any]], List[Callable[[Any], None]]]:
        parent_connections, worker_connections = zip(
            *[self._mp_ctx.Pipe(duplex=True) for _ in range(self._num_envs)]
        )
        self._workers = []
        for worker_conn, parent_conn, env_args in zip(
            worker_connections, parent_connections, env_fn_args
        ):
            ps = self._mp_ctx.Process(
                target=self._worker_env,
                args=(
                    worker_conn.recv,
                    worker_conn.send,
                    make_env_fn,
                    env_args,
                    self._auto_reset_done,
                    worker_conn,
                    parent_conn,
                ),
            )
            self._workers.append(ps)
            ps.daemon = True
            ps.start()
            worker_conn.close()
        return (
            [p.recv for p in parent_connections],
            [p.send for p in parent_connections],
        )

    def reset(self):
        r"""Reset all the vectorized environments

        Returns:
            list of outputs from the reset method of envs.
        """
        self._is_waiting = True
        for write_fn in self._connection_write_fns:
            write_fn((RESET_COMMAND, None))
        results = []
        for read_fn in self._connection_read_fns:
            results.append(read_fn())
        self._is_waiting = False
        return results

    def reset_at(self, index_env: int):
        r"""Reset in the index_env environment in the vector.

        Args:
            index_env: index of the environment to be reset

        Returns:
            list containing the output of reset method of indexed env.
        """
        self._is_waiting = True
        self._connection_write_fns[index_env]((RESET_COMMAND, None))
        results = [self._connection_read_fns[index_env]()]
        self._is_waiting = False
        return results

    def step_at(self, index_env: int, action: int):
        r"""Step in the index_env environment in the vector.

        Args:
            index_env: index of the environment to be stepped into
            action: action to be taken

        Returns:
            list containing the output of step method of indexed env.
        """
        self._is_waiting = True
        self._connection_write_fns[index_env]((STEP_COMMAND, action))
        results = [self._connection_read_fns[index_env]()]
        self._is_waiting = False
        return results

    def async_step(self, actions: List[int]) -> None:
        r"""Asynchronously step in the environments.

        Args:
            actions: actions to be performed in the vectorized envs.
        """
        self._is_waiting = True
        for write_fn, action in zip(self._connection_write_fns, actions):
            write_fn((STEP_COMMAND, action))

    def wait_step(self) -> List[Observations]:
        r"""Wait until all the asynchronized environments have synchronized.
        """
        observations = []
        for read_fn in self._connection_read_fns:
            observations.append(read_fn())
        self._is_waiting = False
        return observations

    def step(self, actions: List[int]):
        r"""Perform actions in the vectorized environments.

        Args:
            actions: list of size _num_envs containing action to be taken
                in each environment.

        Returns:
            list of outputs from the step method of envs.
        """
        self.async_step(actions)
        return self.wait_step()

    def get_observations_at(
        self, index: int, position: List[float], rotation: List[float]
    ):
        r"""synsin addition: render observations at an arbitrary pose in the
        indexed env without stepping it.
        """
        self._is_waiting = True
        self._connection_write_fns[index]((OBSERVATIONS, (position, rotation)))
        observations = self._connection_read_fns[index]()
        self._is_waiting = False
        return observations

    def sample_navigable_point(self, index: int):
        r"""synsin addition: sample a random navigable point from the indexed
        env's simulator.
        """
        self._is_waiting = True
        self._connection_write_fns[index]((NAVIGABLE_COMMAND, None))
        location = self._connection_read_fns[index]()
        self._is_waiting = False
        return location

    def get_agent_state(self, index: int):
        r"""synsin addition: return the (position, rotation) of the depth
        sensor in the indexed env.
        """
        self._is_waiting = True
        self._connection_write_fns[index]((AGENT_STATE_COMMAND, None))
        agent_state = self._connection_read_fns[index]()
        self._is_waiting = False
        return agent_state

    def close(self) -> None:
        if self._is_closed:
            return

        if self._is_waiting:
            for read_fn in self._connection_read_fns:
                read_fn()

        for write_fn in self._connection_write_fns:
            write_fn((CLOSE_COMMAND, None))

        for _, _, write_fn, _ in self._paused:
            write_fn((CLOSE_COMMAND, None))

        for process in self._workers:
            process.join()

        for _, _, _, process in self._paused:
            process.join()

        self._is_closed = True

    def pause_at(self, index: int) -> None:
        r"""Pauses computation on this env without destroying the env. This is
        useful for not needing to call steps on all environments when only
        some are active (for example during the last episodes of running
        eval episodes).

        Args:
            index: which env to pause. All indexes after this one will be
                shifted down by one.
        """
        if self._is_waiting:
            for read_fn in self._connection_read_fns:
                read_fn()
        read_fn = self._connection_read_fns.pop(index)
        write_fn = self._connection_write_fns.pop(index)
        worker = self._workers.pop(index)
        self._paused.append((index, read_fn, write_fn, worker))

    def resume_all(self) -> None:
        r"""Resumes any paused envs.
        """
        for index, read_fn, write_fn, worker in reversed(self._paused):
            self._connection_read_fns.insert(index, read_fn)
            self._connection_write_fns.insert(index, write_fn)
            self._workers.insert(index, worker)
        self._paused = []

    def call_at(
        self,
        index: int,
        function_name: str,
        function_args: Optional[List[Any]] = None,
    ) -> Any:
        r"""Calls a function (which is passed by name) on the selected env and
        returns the result.

        Args:
            index: which env to call the function on.
            function_name: the name of the function to call on the env.
            function_args: optional function args.

        Returns:
            result of calling the function.
        """
        self._is_waiting = True
        self._connection_write_fns[index](
            (CALL_COMMAND, (function_name, function_args))
        )
        result = self._connection_read_fns[index]()
        self._is_waiting = False
        return result

    def call(
        self,
        function_names: List[str],
        function_args_list: Optional[List[Any]] = None,
    ) -> List[Any]:
        r"""Calls a list of functions (which are passed by name) on the
        corresponding env (by index).

        Args:
            function_names: the name of the functions to call on the envs.
            function_args_list: list of function args for each
                function. If provided, ``function_args_list`` should
                have the same length as ``function_names``.

        Returns:
            result of calling the function.
        """
        self._is_waiting = True
        if function_args_list is None:
            function_args_list = [None] * len(function_names)
        assert len(function_names) == len(function_args_list)
        func_args = zip(function_names, function_args_list)
        for write_fn, func_args_on in zip(
            self._connection_write_fns, func_args
        ):
            write_fn((CALL_COMMAND, func_args_on))
        results = []
        for read_fn in self._connection_read_fns:
            results.append(read_fn())
        self._is_waiting = False
        return results

    def render(
        self, mode: str = "human", *args, **kwargs
    ) -> Union[np.ndarray, None]:
        r"""Render observations from all environments in a tiled image.
        """
        for write_fn in self._connection_write_fns:
            write_fn((RENDER_COMMAND, (args, {"mode": "rgb", **kwargs})))
        images = [read_fn() for read_fn in self._connection_read_fns]
        tile = tile_images(images)
        if mode == "human":
            import cv2

            cv2.imshow("vecenv", tile[:, :, ::-1])
            cv2.waitKey(1)
            return None
        elif mode == "rgb_array":
            return tile
        else:
            raise NotImplementedError

    @property
    def _valid_start_methods(self) -> Set[str]:
        return {"forkserver", "spawn", "fork"}

    def __del__(self):
        self.close()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()


class ThreadedVectorEnv(VectorEnv):
    r"""Provides same functionality as ``VectorEnv``, the only difference is it
    runs in a multi-thread setup inside a single process. ``VectorEnv`` runs
    in a multi-proc setup. This makes it much easier to debug when using 
    ``VectorEnv`` because you can actually put break points in the environment 
    methods. It should not be used for best performance.
    """

    def _spawn_workers(
        self,
        env_fn_args: Iterable[Tuple[Any, ...]],
        make_env_fn: Callable[..., Env] = _make_env_fn,
    ) -> Tuple[List[Callable[[], Any]], List[Callable[[Any], None]]]:
        parent_read_queues, parent_write_queues = zip(
            *[(Queue(), Queue()) for _ in range(self._num_envs)]
        )
        self._workers = []
        for parent_read_queue, parent_write_queue, env_args in zip(
            parent_read_queues, parent_write_queues, env_fn_args
        ):
            thread = Thread(
                target=self._worker_env,
                args=(
                    parent_write_queue.get,
                    parent_read_queue.put,
                    make_env_fn,
                    env_args,
                    self._auto_reset_done,
                ),
            )
            self._workers.append(thread)
            thread.daemon = True
            thread.start()
        return (
            [q.get for q in parent_read_queues],
            [q.put for q in parent_write_queues],
        )
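
For reference, a minimal usage sketch of the methods added above (untested; the config path is an example from habitat-api and assumes the corresponding pointnav dataset is downloaded, and the quaternion ordering expected by get_observations_at depends on your habitat version):

import habitat
from habitat.core.vector_env import VectorEnv

# Build two synchronized envs from the same task config.
configs = [habitat.get_config("configs/tasks/pointnav.yaml") for _ in range(2)]
envs = VectorEnv(env_fn_args=tuple((c,) for c in configs))
envs.reset()

# Sample a navigable point in env 0, then render observations at that pose.
position = envs.sample_navigable_point(0)
obs = envs.get_observations_at(0, position, rotation=[0.0, 0.0, 0.0, 1.0])

# (position, wxyz rotation) of env 0's depth sensor.
cam_position, cam_rotation = envs.get_agent_state(0)
envs.close()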

oawiles commented on August 12, 2024

I'd try KITTI first to verify whether the data loading or MP3D is causing issues. MP3D is messier, as the renderer runs on the GPU (I think I remember getting this error when I initially set up my environment). If MP3D is the culprit, then I can't be of much help -- the only thing I can say is that I remember having to set a library path (see ./submit_slurm_synsin.sh), which may help.

oawiles commented on August 12, 2024

I didn't make any changes to it. Unfortunately, I no longer have access to the code I used, but I will look into this tomorrow and try to locate the version I used. However, it may have been a master branch, as they were just bringing in vector environments.

oawiles commented on August 12, 2024

OK. I no longer have access to the original code, as I'm no longer at Facebook. I'll try to download and run it on my computer to diagnose the issue. This may take some time.

phongnhhn92 commented on August 12, 2024

It works now! Thanks a lot. I can train with the MP3D dataset. I will close this issue now.

phongnhhn92 commented on August 12, 2024

Thanks @oawiles for the quick reply! I have fixed the above error by changing --render_ids from 1 to 0. In my case, I am testing the code on a single GPU, so both --gpu_id and --render_ids must be 0, e.g.:
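
(Other train.py options omitted; the flag names are as given above.)

python train.py --gpu_id 0 --render_ids 0 ...
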
However, I am now getting a new error, and I think this one is directly related to the new Habitat API. This is my current error log:

Traceback (most recent call last):
  File "train.py", line 370, in <module>
    run(model, Dataset, log_path, plotter, CHECKPOINT_tempfile)
  File "train.py", line 265, in run
    epoch, train_data_loader, model, log_path, plotter, opts
  File "train.py", line 93, in train
    iter_data_loader, isval=False, num_steps=opts.num_accumulations
  File "/home/phong/data/Work/Paper3/Code/synsin/models/base_model.py", line 108, in __call__
    t_losses, output_images = self.model(next(dataloader))
  File "/home/phong/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/phong/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/phong/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/phong/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/phong/data/Work/Paper3/Code/synsin/data/habitat_data.py", line 126, in __getitem__
    data = self.image_generator.get_sample(item, self.num_views, self.train)
  File "/home/phong/data/Work/Paper3/Code/synsin/data/create_rgb_dataset.py", line 429, in get_sample
    return self.get_vector_sample(index, num_views, isTrain)
  File "/home/phong/data/Work/Paper3/Code/synsin/data/create_rgb_dataset.py", line 246, in get_vector_sample
    orig_location = np.array(self.env.sample_navigable_point(index))
AttributeError: 'VectorEnv' object has no attribute 'sample_navigable_point'
Exception ignored in: <bound method VectorEnv.__del__ of <habitat.core.vector_env.VectorEnv object at 0x7f7568053f28>>
Traceback (most recent call last):
  File "/home/phong/data/Work/Paper3/Libraries/habitat-api/habitat/core/vector_env.py", line 468, in __del__
    self.close()
  File "/home/phong/data/Work/Paper3/Libraries/habitat-api/habitat/core/vector_env.py", line 350, in close
    write_fn((CLOSE_COMMAND, None))
  File "/home/phong/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/home/phong/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/home/phong/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe

So basically there is no sample_navigable_point() method on the VectorEnv object. When I checked the source code of the Habitat API (https://github.com/facebookresearch/habitat-api/blob/4e7614f6b214aac99487bd9a172752891c7cd3ad/habitat/core/vector_env.py#L67), I also didn't find any sample_navigable_point() method. I guess this repo is using an old version of Habitat-API or something?
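
(To confirm the mismatch, you can check which habitat-api is installed; this assumes habitat-api exposes __version__, which recent versions do:)

import habitat
print(habitat.__version__)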

phongnhhn92 commented on August 12, 2024

I just want to update that I have no problem training with the KITTI dataset, so I guess the only problem left is Habitat-API :D

oawiles commented on August 12, 2024

Great! Good luck figuring out the MP3D problem! I'm going to close the issue for now.

oawiles commented on August 12, 2024

Actually I can't -- could you close it?

phongnhhn92 commented on August 12, 2024

Actually, the problem with MP3D is still bothering me, since I have no idea where to start. I have to be honest that Habitat-API is very poorly documented, and it seems like the developers keep changing the API. Can you at least tell me the version of habitat-api that you used to build this project? Did you make any internal changes to it?
It would be very helpful, because I think the most essential part of your paper is making it work on the MP3D and Replica datasets; the KITTI dataset is not that challenging, to be honest.

phongnhhn92 commented on August 12, 2024

@oawiles Thanks a lot!

phongnhhn92 commented on August 12, 2024

Hello,
I have to reopen this because I found out that if I set --use_semantics to True, the current code doesn't work. This is due to the missing function semantic_annotations(), which is called here:

for obj in self.env.sim.semantic_annotations().objects

Can you add it here as well?

Also, when I checked habitat-api, I found the same function: https://github.com/facebookresearch/habitat-api/blob/6091d0aedd41d348824252d74119d5d0d3355b8e/habitat/sims/habitat_simulator/habitat_simulator.py#L367. I wonder, are they the same, or are you actually calling it through vector_env.py?
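
For reference, the command pattern used in the modified vector_env.py above could be extended to expose this; a minimal, untested sketch (the SEMANTIC_COMMAND name and the client method are assumptions; env.sim.semantic_annotations() is from the linked habitat-api source):

# New command constant, alongside the others at the top of vector_env.py.
SEMANTIC_COMMAND = "semantic_annotations"

# Extra branch inside the command loop of VectorEnv._worker_env:
#     elif command == SEMANTIC_COMMAND:
#         # Note: the returned SemanticScene wraps C++ objects and may not
#         # pickle across processes; you may need to extract plain fields
#         # (object ids, category names, ...) here in the worker instead.
#         connection_write_fn(env.sim.semantic_annotations())

# Client-side method on VectorEnv, mirroring get_agent_state above:
def semantic_annotations(self, index: int):
    self._is_waiting = True
    self._connection_write_fns[index]((SEMANTIC_COMMAND, None))
    annotations = self._connection_read_fns[index]()
    self._is_waiting = False
    return annotations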

oawiles commented on August 12, 2024

I never used semantics, so no, that wouldn't work (as in, I never used that part of the code -- the whole loading part was taken from another repo, and I never used the semantic loading in my work). I would be happy to help, but I'm no longer at Facebook, so I can't access my code; unfortunately, you're going to have to figure out how to fix this yourself. But you may be right that it should be self.env.semantic_annotations() -- I'm not sure.

phongnhhn92 commented on August 12, 2024

Yeah, I understand.
It seems to be irrelevant to your project, so I guess I will close this issue for now. The problem here is finding a way to get the semantic maps through the vector environment. If you have time, I hope you can help me find a way to get them, since I am not familiar with habitat-api :(
Thanks a lot!

chenjiahes commented on August 12, 2024

Can you tell me how to install habitat-api and habitat-sim at these commits?

  • habitat-sim: d383c2011bf1baab2ce7b3cd40aea573ad2ddf71
  • habitat-api: e94e6f3953fcfba4c29ee30f65baa52d6cea716e
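
A minimal sketch of that checkout (untested; see each repo's README at those commits for the full build instructions, which depend on your setup):

git clone https://github.com/facebookresearch/habitat-sim.git
cd habitat-sim && git checkout d383c2011bf1baab2ce7b3cd40aea573ad2ddf71
pip install -r requirements.txt
python setup.py install --headless   # drop --headless if you have a display
cd ..

git clone https://github.com/facebookresearch/habitat-api.git
cd habitat-api && git checkout e94e6f3953fcfba4c29ee30f65baa52d6cea716e
pip install -r requirements.txt
python setup.py develop --all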
