This repository contains our solution to the course project for NYU ROB-GY 6203 Robot Perception.
The flowchart above shows our pipeline for obtaining odometry for the robot.
- Environment Setup

Execute the following commands to set up the environment:

```bash
conda update conda
git clone https://github.com/ai4ce/vis_nav_player.git
cd vis_nav_player
conda env create -f environment.yaml
conda activate game
```
- Installing LightGlue

LightGlue can be installed by following the instructions in the official repository, or by executing these commands:

```bash
git clone https://github.com/cvg/LightGlue.git && cd LightGlue
python -m pip install -e .
```
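For reference, matching two frames with LightGlue follows the usage pattern from its README; the sketch below assumes SuperPoint features and placeholder image paths:

```python
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

# Load the feature extractor and matcher (CPU here; append .cuda() if a GPU is available)
extractor = SuperPoint(max_num_keypoints=2048).eval()
matcher = LightGlue(features='superpoint').eval()

# Placeholder paths: any two overlapping frames from the exploration phase
image0 = load_image('frame_0.png')
image1 = load_image('frame_1.png')

feats0 = extractor.extract(image0)   # extract local features
feats1 = extractor.extract(image1)
matches01 = matcher({'image0': feats0, 'image1': feats1})
feats0, feats1, matches01 = [rbd(x) for x in (feats0, feats1, matches01)]  # drop batch dim

matches = matches01['matches']                   # (K, 2) index pairs
points0 = feats0['keypoints'][matches[..., 0]]   # (K, 2) coordinates in image 0
points1 = feats1['keypoints'][matches[..., 1]]   # (K, 2) coordinates in image 1
```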
- Clone the repository

Clone the vis_slam_game repository:

```bash
git clone https://github.com/Harshit0803/vis_slam_game.git
```

Replace the default player.py with the modified player.py provided in this repository.
- Play using the default keyboard player

```bash
python player.py
```
- Exploring the Environment

Navigate the environment using the keyboard movement keys. The areas explored by the user are shown on the map.

- Completing the Exploration Phase

Press the ESC key to finish the exploration phase.

- Visual Place Recognition

The query images are processed with a Visual Place Recognition algorithm, specifically VLAD (see the sketch below).

- Identifying the Likely Location

The location with the minimum covariance is identified as the most probable location.

- Auto-Navigation to Final Destination

After pressing the ESC key, the AUTO_NAV feature in the code guides the player to the final destination.
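Below is a minimal sketch of the VLAD aggregation and nearest-neighbour lookup described above, assuming local descriptors (e.g. SIFT or SuperPoint) and a scikit-learn k-means visual vocabulary; the function names are illustrative, not the exact code in player.py:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, k=16):
    """Cluster local descriptors collected during exploration into k visual words."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_descriptors)

def vlad(descriptors, vocab):
    """Aggregate one image's local descriptors into a single VLAD vector."""
    k, d = vocab.n_clusters, descriptors.shape[1]
    words = vocab.predict(descriptors)
    v = np.zeros((k, d))
    for i in range(k):
        members = descriptors[words == i]
        if len(members):
            # Sum of residuals to the assigned cluster centre
            v[i] = (members - vocab.cluster_centers_[i]).sum(axis=0)
    v = np.sign(v) * np.sqrt(np.abs(v))        # power normalization
    v = v.flatten()
    return v / (np.linalg.norm(v) + 1e-12)     # L2 normalization

def most_probable_location(query_vlad, database_vlads):
    """Return the index of the database image closest to the query."""
    dists = np.linalg.norm(database_vlads - query_vlad, axis=1)
    return int(np.argmin(dists))
```

In this sketch the best match is simply the database image whose VLAD vector has the smallest L2 distance to the query.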
Note: The map displays probable locations as red dots based on the query image match. If the camera keeps colliding with walls, auto-navigation can be toggled in player.py:

```python
self.AUTO_NAV = True  # set to False to disable auto-navigation
```
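Conceptually, the flag selects where the next action comes from; the helper names below are hypothetical, and the actual logic lives in player.py:

```python
# Hypothetical sketch of how the AUTO_NAV flag gates the control loop;
# see player.py for the actual implementation.
def act(self):
    if self.AUTO_NAV:
        return self.next_planned_action()   # hypothetical helper: follow the planned path
    return self.read_keyboard_action()      # hypothetical helper: manual keyboard control
```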
Project Video:
Robot_Perception.mp4
The vis_nav_player game used in this project was created by Prof. Chen Feng (cfeng at nyu dot edu) of the AI4CE Lab.