
SoundSpaces Challenge 2021

This repository contains the starter code for the 2021 challenge, details of the task, and the training and evaluation setups. For an overview of the SoundSpaces Challenge, visit soundspaces.org/challenge.

This year, we are hosting a challenge on the audio-visual navigation task, where an agent is tasked with finding a sound-making object in unmapped 3D environments using visual and auditory perception.

AudioNav Task

In AudioNav, an agent is spawned at a random starting position and orientation in an unseen environment. A sound-emitting object is also spawned at a random location in the same environment. The agent receives one second of audio as a waveform at each time step and needs to navigate to the target location. No ground-truth map is available, and the agent must navigate using only its sensory input (audio and RGB-D).
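
As a rough illustration of what the agent receives, the sketch below inspects a per-step observation dictionary. The sensor names and shapes here (e.g. "rgb", "depth", "audiogoal") are assumptions for illustration only; consult the starter code and the SoundSpaces documentation for the exact sensor suite and task configuration.

    import numpy as np

    def describe_observations(observations):
        # Sensor names below are assumptions for illustration; check the task
        # config for the exact keys and shapes provided to your agent.
        rgb = observations.get("rgb")          # e.g. H x W x 3 color image
        depth = observations.get("depth")      # e.g. H x W x 1 depth map
        audio = observations.get("audiogoal")  # e.g. 2-channel, one-second waveform
        for name, value in (("rgb", rgb), ("depth", depth), ("audiogoal", audio)):
            if value is not None:
                print(name, np.asarray(value).shape)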

Dataset

We use Matterport3D for the challenge. For episodes, we use the train_multiple and val_multiple_unheard splits from the SoundSpaces repository, which are publicly available to all participants. The test episodes for the official challenge evaluation will not be provided to participants, and the sounds in the test set are also unheard during training, requiring the agent to generalize to new sounds.

Evaluation

After calling the STOP action, the agent is evaluated using the 'Success weighted by Path Length' (SPL) metric [2].

An episode is deemed successful if, upon calling the STOP action, the agent is within 0.36m (2x agent radius) of the goal position.
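
For reference, SPL can be computed from per-episode success indicators, shortest (geodesic) path lengths, and the path lengths the agent actually travelled. The sketch below follows the metric's definition in [2]; it is not the evaluation code used by the challenge.

    def spl(successes, shortest_path_lengths, agent_path_lengths):
        """Average Success weighted by Path Length over a set of episodes [2].

        successes:             1 if the episode succeeded (STOP called within
                               0.36m of the goal), else 0
        shortest_path_lengths: geodesic distance from the start to the goal
        agent_path_lengths:    length of the path the agent actually travelled
        """
        total = 0.0
        for s, l, p in zip(successes, shortest_path_lengths, agent_path_lengths):
            total += s * l / max(p, l)
        return total / len(successes)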

Participation Guidelines

Participate in the contest by registering on the EvalAI challenge page and creating a team. Participants will upload docker containers with their agents, which will be evaluated on an AWS GPU-enabled instance. Before pushing a submission for remote evaluation, participants should test the submission docker locally to make sure it works. Instructions for training, local evaluation, and online submission are provided below.

Local Evaluation

  1. Clone the challenge repository:

    git clone https://github.com/changanvr/soundspaces-challenge.git
    cd soundspaces-challenge
  2. Implement your own agent or try one of ours. We provide an agent in agent.py that takes random actions:

    import numpy

    import soundspaces


    class RandomAgent(soundspaces.Agent):
        def __init__(self, task_config):
            # Cache the action space defined by the task configuration.
            self._possible_actions = task_config.TASK.POSSIBLE_ACTIONS

        def reset(self):
            pass

        def act(self, observations):
            # Ignore the observations and pick a random action at each step.
            return {"action": numpy.random.choice(self._possible_actions)}


    def main():
        # `config` is the task configuration loaded elsewhere in agent.py
        # (argument parsing and config loading are omitted from this excerpt).
        agent = RandomAgent(task_config=config)
        challenge = soundspaces.Challenge()
        challenge.submit(agent)

    [Optional] Modify the submission.sh file if your agent needs any custom modifications (e.g. command-line arguments); otherwise, there is nothing to do. The default submission.sh simply runs the RandomAgent from agent.py.

  3. Install nvidia-docker v2 following instructions here: https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0). Note: only supports Linux; no Windows or MacOS.

  4. Modify the provided Dockerfile if you need custom modifications. For example, if your code needs PyTorch, install such dependencies with pip inside the conda environment called soundspaces that ships with our soundspaces/challenge docker image, as shown below:

    FROM soundspaces/challenge:2021
    
    # install dependencies in the soundspaces conda environment
    RUN /bin/bash -c ". activate soundspaces; pip install torch"
    
    ADD agent.py /agent.py
    ADD submission.sh /submission.sh

    Build your docker container with docker build . --file audionav.dockerfile -t audionav_submission. (Note: you may need sudo privileges to run this command.)

  5. Follow the instructions for downloading the SoundSpaces dataset and place all data under the data/ folder.

    Using Symlinks: If you used symlinks (i.e. ln -s) to link to already-downloaded data, there is an additional step. Make sure there is only one level of symlink (instead of a symlink to a symlink to a ... symlink) with

    ln -f -s $(realpath data/scene_datasets/mp3d) \
        data/scene_datasets/mp3d
  6. Evaluate your docker container locally:

    # Testing AudioNav
    ./test_locally_audionav_rgbd.sh --docker-name audionav_submission

    If the above command runs successfully you will get an output similar to:

    2019-02-14 21:23:51,798 initializing sim Sim-v0
    2019-02-14 21:23:52,820 initializing task Nav-v0
    2020-02-14 21:23:56,339 distance_to_goal: 5.205519378185272
    2020-02-14 21:23:56,339 spl: 0.0
    

    Note: this same command will be run to evaluate your agent for the leaderboard. Please submit your docker for remote evaluation (below) only if it runs successfully on your local setup.

Online submission

Follow the instructions in the submit tab of the EvalAI challenge page (coming soon) to submit your docker image. Note that you will need EvalAI version >= 1.3.5. Those instructions are pasted here for convenience:

# Installing EvalAI Command Line Interface
pip install "evalai>=1.3.5"

# Set EvalAI account token
evalai set_token <your EvalAI participant token>

# Push docker image to EvalAI docker registry
evalai push audionav_submission:latest --phase <phase-name>

Valid challenge phases are soundspaces21-audionav-{minival, test-std, test-ch}.

The challenge consists of the following phases:

  1. Minival phase: This split is the same as the one used in ./test_locally_audionav_rgbd.sh. The purpose of this phase/split is sanity checking -- to confirm that our remote evaluation reports the same result as the one you're seeing locally. Each team is allowed a maximum of 30 submissions per day for this phase, but please use them judiciously. We will block and disqualify teams that spam our servers.
  2. Test Standard phase: The purpose of this phase/split is to serve as the public leaderboard establishing the state of the art; this is what should be used to report results in papers. Each team is allowed a maximum of 10 submissions per day for this phase, but again, please use them judiciously. Don't overfit to the test set.
  3. Test Challenge phase: This phase/split will be used to decide the challenge winners. Each team is allowed a total of 5 submissions until the end of the challenge submission phase. The highest-performing of these 5 will be chosen automatically. Results on this split will not be made public until the final results are announced at the Embodied AI workshop at CVPR.

Note: Your agent will be evaluated on 1000-2000 episodes and will have a total of 24 hours to finish. Your submissions will be evaluated on an AWS EC2 p2.xlarge instance, which has a Tesla K80 GPU (12 GB memory), 4 CPU cores, and 61 GB RAM. If you need more time or resources to evaluate your submission, please get in touch. If you face any issues or have questions, you can ask them by opening an issue on this repository.

AudioNav Baselines and Starter Code

We have added a config in configs/ppo_pointnav.yaml for the av-nav baseline from SoundSpaces, which is trained with PPO.
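
To take a quick look at the baseline hyperparameters before training, you can load the config with PyYAML. This is only an inspection sketch; the keys printed depend on the actual contents of configs/ppo_pointnav.yaml.

    import yaml

    # Print the top-level entries of the provided baseline config.
    with open("configs/ppo_pointnav.yaml") as f:
        cfg = yaml.safe_load(f)

    for key, value in cfg.items():
        print(key, ":", value)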

Acknowledgments

We thank Oleksandr Maksymets and Rishabh Jain for their technical support, and the Habitat team for the challenge template.
