General Navigation Models: GNM, ViNT and NoMaD
Contributors: Dhruv Shah, Ajay Sridhar, Nitish Dashora, Catherine Glossop, Kyle Stachowicz, Arjun Bhorkar, Kevin Black, Noriaki Hirose, Sergey Levine
Berkeley AI Research
Project Page | Citing | Pre-Trained Models
General Navigation Models are general-purpose goal-conditioned visual navigation policies trained on diverse, cross-embodiment training data, and can control many different robots in zero-shot. They can also be efficiently fine-tuned, or adapted, to new robots and downstream tasks. Our family of models is described in the following research papers (and growing):
- GNM: A General Navigation Model to Drive Any Robot (October 2022, presented at ICRA 2023)
- ViNT: A Foundation Model for Visual Navigation (June 2023, presented at CoRL 2023)
- NoMaD: Goal Masking Diffusion Policies for Navigation and Exploration (October 2023)
Overview
This repository contains code for training our family of models with your own data, pre-trained model checkpoints, as well as example code to deploy it on a TurtleBot2/LoCoBot robot. The repository follows the organization from GNM.
- `./train/train.py`: training script to train or fine-tune the ViNT model on your custom data.
- `./train/vint_train/models/`: contains model files for GNM, ViNT, and some baselines.
- `./train/process_*.py`: scripts to process rosbags or other formats of robot trajectories into training data.
- `./deployment/src/record_bag.sh`: script to collect a demo trajectory as a ROS bag in the target environment on the robot. This trajectory is subsampled to generate a topological graph of the environment.
- `./deployment/src/create_topomap.sh`: script to convert a ROS bag of a demo trajectory into a topological graph that the robot can use to navigate.
- `./deployment/src/navigate.sh`: script that deploys a trained GNM/ViNT/NoMaD model on the robot to navigate to a desired goal in the generated topological graph. Please see the relevant sections below for configuration settings.
- `./deployment/src/explore.sh`: script that deploys a trained NoMaD model on the robot to randomly explore its environment. Please see the relevant sections below for configuration settings.
Train
This subfolder contains code for processing datasets and training models from your own data.
Pre-requisites
The codebase assumes access to a workstation running Ubuntu (tested on 18.04 and 20.04), Python 3.7+, and a GPU with CUDA 10+. It also assumes access to conda, but you can modify it to work with other virtual environment packages, or a native setup.
Setup
Run the commands below inside the `vint_release/` (topmost) directory:
- Set up the conda environment: `conda env create -f train/train_environment.yml`
- Source the conda environment: `conda activate vint_train`
- Install the vint_train packages: `pip install -e train/`
- Install the `diffusion_policy` package from this repo:
  `git clone [email protected]:real-stanford/diffusion_policy.git`
  `pip install -e diffusion_policy/`
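After the install steps above, a quick optional sanity check is to confirm that PyTorch sees your GPU and that the editable packages import. This is not part of the official setup; the package names below are taken from the install commands above.

```python
# Optional sanity check for the training environment (run after `conda activate vint_train`).
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # expect True on a CUDA-capable workstation

# These imports assume the editable installs above succeeded; the package names
# come from the setup steps (vint_train) and the cloned diffusion_policy repo.
import vint_train        # noqa: F401
import diffusion_policy  # noqa: F401

print("vint_train and diffusion_policy imported successfully")
```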
Data-Wrangling
In the papers, we train on a combination of publicly available and unreleased datasets. Below is a list of publicly available datasets used for training; please contact the respective authors for access to the unreleased data.
We recommend downloading these (and any other datasets you may want to train on) and running the processing steps below.
Data Processing
We provide some sample scripts to process these datasets, either directly from a rosbag or from a custom format like HDF5s:
1. Run `process_bags.py` with the relevant args, or `process_recon.py` for processing RECON HDF5s. You can also manually add your own dataset by following our structure below (if you are adding a custom dataset, please check out the Custom Datasets section).
2. Run `data_split.py` on your dataset folder with the relevant args.
After step 1 of data processing, the processed dataset should have the following structure:
    ├── <dataset_name>
    │   ├── <name_of_traj1>
    │   │   ├── 0.jpg
    │   │   ├── 1.jpg
    │   │   ├── ...
    │   │   ├── T_1.jpg
    │   │   └── traj_data.pkl
    │   ├── <name_of_traj2>
    │   │   ├── 0.jpg
    │   │   ├── 1.jpg
    │   │   ├── ...
    │   │   ├── T_2.jpg
    │   │   └── traj_data.pkl
    │   ...
    │   └── <name_of_trajN>
    │       ├── 0.jpg
    │       ├── 1.jpg
    │       ├── ...
    │       ├── T_N.jpg
    │       └── traj_data.pkl
Each `*.jpg` file contains a forward-facing RGB observation from the robot, and the files are temporally labeled. The `traj_data.pkl` file contains the odometry data for the trajectory. It's a pickled dictionary with the keys:
- `"position"`: An np.ndarray [T, 2] of the xy-coordinates of the robot at each image observation.
- `"yaw"`: An np.ndarray [T,] of the yaws of the robot at each image observation.
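For a quick sanity check, the snippet below is a minimal sketch that loads one processed trajectory and verifies that the image count matches the odometry length; the folder path is a placeholder, and only the structure and keys described above are assumed.

```python
# Minimal sketch: inspect one processed trajectory folder.
# Only the structure and keys described above are assumed; the path is a placeholder.
import os
import pickle

traj_dir = "<dataset_name>/<name_of_traj1>"  # replace with a real trajectory folder

with open(os.path.join(traj_dir, "traj_data.pkl"), "rb") as f:
    traj_data = pickle.load(f)

positions = traj_data["position"]  # np.ndarray of shape [T, 2]: xy-coordinates
yaws = traj_data["yaw"]            # np.ndarray of shape [T,]: headings

num_images = len([fname for fname in os.listdir(traj_dir) if fname.endswith(".jpg")])
print(f"{num_images} images, {positions.shape[0]} odometry entries")
assert positions.shape[0] == yaws.shape[0], "position and yaw should share the same length T"
```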
After step 2 of data processing, the processed data-split should have the following structure inside `vint_release/train/vint_train/data/data_splits/`:

    ├── <dataset_name>
    │   ├── train
    │   │   └── traj_names.txt
    │   └── test
    │       └── traj_names.txt
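`data_split.py` generates these files for you. If you ever need to build a split by hand, the sketch below shows one way to do it, under the assumption (not stated explicitly above) that each `traj_names.txt` simply lists the trajectory folder names for that split, one per line; the paths are placeholders.

```python
# Hypothetical manual alternative to data_split.py, assuming each traj_names.txt
# lists one trajectory folder name per line (an assumption, not documented above).
import os
import random

dataset_dir = "<path_to_the_dataset>"                     # placeholder
split_dir = "vint_train/data/data_splits/<dataset_name>"  # placeholder
train_fraction = 0.9

traj_names = sorted(
    d for d in os.listdir(dataset_dir) if os.path.isdir(os.path.join(dataset_dir, d))
)
random.Random(0).shuffle(traj_names)  # fixed seed for a reproducible split
split_idx = int(train_fraction * len(traj_names))

for split, names in (("train", traj_names[:split_idx]), ("test", traj_names[split_idx:])):
    os.makedirs(os.path.join(split_dir, split), exist_ok=True)
    with open(os.path.join(split_dir, split, "traj_names.txt"), "w") as f:
        f.write("\n".join(names) + "\n")
```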
Training your General Navigation Models
Run this inside the `vint_release/train` directory:

`python train.py -c <path_of_train_config_file>`

The premade config yaml files are in the `train/config` directory.
Custom Config Files
You can use one of the premade yaml files as a starting point and change the values as you need. `config/vint.yaml` is a good choice since it has commented arguments. `config/defaults.yaml` contains the default config values (don't train directly with this config file since it does not specify any datasets for training).
Custom Datasets
Make sure your dataset and data-split directories follow the structures provided in the Data Processing section. Locate `train/vint_train/data/data_config.yaml` and append the following:
<dataset_name>:
metric_waypoints_distance: <average_distance_in_meters_between_waypoints_in_the_dataset>
Locate your training config file and add the following text under the `datasets` argument (feel free to change the values of `end_slack`, `goals_per_obs`, and `negative_mining`):
<dataset_name>:
data_folder: <path_to_the_dataset>
train: data/data_splits/<dataset_name>/train/
test: data/data_splits/<dataset_name>/test/
end_slack: 0 # how many timesteps to cut off from the end of each trajectory (in case many trajectories end in collisions)
goals_per_obs: 1 # how many goals are sampled per observation
negative_mining: True # negative mining from the ViNG paper (Shah et al.)
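Before launching a long run, it can help to confirm that every dataset entry in your config actually points at folders and split files that exist. The sketch below is only an optional pre-flight check, not part of `train.py`, and assumes nothing beyond the per-dataset keys shown above.

```python
# Optional pre-flight check for a training config (not part of train.py).
# Assumes only the per-dataset keys shown above: data_folder, train, test.
import os
import yaml

config_path = "config/vint.yaml"  # or your custom config file
with open(config_path) as f:
    config = yaml.safe_load(f)

for name, spec in config.get("datasets", {}).items():
    if not os.path.isdir(spec["data_folder"]):
        print(f"[{name}] missing data_folder: {spec['data_folder']}")
    for split in ("train", "test"):
        traj_file = os.path.join(spec[split], "traj_names.txt")
        if not os.path.isfile(traj_file):
            print(f"[{name}] missing split file: {traj_file}")
```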
Training your model from a checkpoint
Instead of training from scratch, you can also load an existing checkpoint from the published results.
Add `load_run: <project_name>/<log_run_name>` to your .yaml config file in `vint_release/train/config/`. The `*.pth` file you are loading must be saved in this file structure and renamed to "latest": `vint_release/train/logs/<project_name>/<log_run_name>/latest.pth`. This makes it easy to train from the checkpoint of a previous run, since logs are saved this way by default. Note: if you are loading a checkpoint from a previous run, check the name of the run in `vint_release/train/logs/<project_name>/`, since the code appends a date string to each `run_name` specified in the config yaml file to avoid duplicate run names.
If you want to use our checkpoints, you can download the `*.pth` files from this link.
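To confirm a downloaded checkpoint loads before wiring it into a run, the sketch below assumes it is an ordinary PyTorch `.pth` file; the exact dictionary contents depend on how the run was saved, and the path is a placeholder.

```python
# Minimal sketch: confirm a downloaded checkpoint loads, assuming an ordinary
# PyTorch .pth file. The exact dictionary contents depend on how the run was saved.
import torch

ckpt_path = "vint_release/train/logs/<project_name>/<log_run_name>/latest.pth"  # placeholder
checkpoint = torch.load(ckpt_path, map_location="cpu")

if isinstance(checkpoint, dict):
    print("checkpoint keys:", list(checkpoint.keys()))
else:
    print("loaded object of type:", type(checkpoint))
```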
Deployment
This subfolder contains code to load a pre-trained ViNT and deploy it on the open-source LoCoBot indoor robot platform with an NVIDIA Jetson Orin Nano. It can be easily adapted to run on other robots, and researchers have independently deployed it on the following robots: Clearpath Jackal, DJI Tello, Unitree A1, TurtleBot2, and Vizbot, as well as in simulated environments like CARLA.
LoCoBot Setup
This software was tested on a LoCoBot running Ubuntu 20.04.
Software Installation (in this order)
- ROS: ros-noetic
- ROS packages: `sudo apt-get install ros-noetic-usb-cam ros-noetic-joy`
- kobuki
- Conda
  - Install anaconda/miniconda/etc. for managing environments
  - Make the conda env with environment.yml (run this inside the `vint_release/` directory): `conda env create -f deployment/deployment_environment.yml`
  - Source the env: `conda activate vint_deployment`
  - (Recommended) add it to your `~/.bashrc`: `echo "conda activate vint_deployment" >> ~/.bashrc`
- Install the `vint_train` packages (run this inside the `vint_release/` directory): `pip install -e train/`
- Install the `diffusion_policy` package from this repo:
  `git clone [email protected]:real-stanford/diffusion_policy.git`
  `pip install -e diffusion_policy/`
- (Recommended) Install tmux if not present. Many of the bash scripts rely on tmux to launch multiple screens with different commands. This will be useful for debugging because you can see the output of each screen.
Hardware Requirements
- LoCoBot: http://locobot.org (just the navigation stack)
- A wide-angle RGB camera: Example. The `vint_locobot.launch` file uses camera parameters that work with cameras like the ELP fisheye wide angle; feel free to modify them for your own camera. Adjust the camera parameters in `vint_release/deployment/config/camera.yaml` for your camera accordingly (used for visualization).
- Joystick/keyboard teleop that works with Linux. Add the index mapping for the deadman_switch on the joystick to `vint_release/deployment/config/joystick.yaml`. You can find the mapping from buttons to indices for common joysticks in the wiki; a small helper for finding it empirically is sketched after this list.
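If you are unsure which button index your deadman switch maps to, a small hypothetical helper like the one below (not part of this repo) can print the pressed indices from the standard `sensor_msgs/Joy` messages on the `joy` topic:

```python
#!/usr/bin/env python
# Hypothetical helper (not part of this repo): print which joystick button
# indices are currently pressed, to find the deadman_switch index for joystick.yaml.
import rospy
from sensor_msgs.msg import Joy


def joy_callback(msg):
    pressed = [i for i, b in enumerate(msg.buttons) if b == 1]
    if pressed:
        rospy.loginfo("pressed button indices: %s", pressed)


if __name__ == "__main__":
    rospy.init_node("joy_index_finder")
    rospy.Subscriber("joy", Joy, joy_callback)
    rospy.spin()
```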
Loading the model weights
Save the model weights `*.pth` file in the `vint_release/deployment/model_weights` folder. Our models' weights are available at this link.
Collecting a Topological Map
Make sure to run these scripts inside the `vint_release/deployment/src/` directory.

This section discusses a simple way to create a topological map of the target environment for deployment. For simplicity, we will use the robot in "path-following" mode, i.e., given a single trajectory in an environment, the task is to follow the same trajectory to the goal. The environment may have new/dynamic obstacles, lighting variations, etc.
Record the rosbag:
./record_bag.sh <bag_name>
Run this command to teleoperate the robot with the joystick and camera. This command opens up three windows:
- `roslaunch vint_locobot.launch`: This launch file opens the `usb_cam` node for the camera, the joy node for the joystick, and nodes for the robot's mobile base.
- `python joy_teleop.py`: This python script starts a node that reads inputs from the joy topic and outputs them on topics that teleoperate the robot's base.
- `rosbag record /usb_cam/image_raw -o <bag_name>`: This command isn't run immediately (you have to hit Enter). It will be run in the `vint_release/deployment/topomaps/bags` directory, where we recommend you store your rosbags.
Once you are ready to record the bag, run the `rosbag record` script and teleoperate the robot along the path you want it to follow. When you are finished recording the path, kill the `rosbag record` command, and then kill the tmux session.
Make the topological map:
./create_topomap.sh <topomap_name> <bag_filename>
This command opens up 3 windows:
- `roscore`
- `python create_topomap.py --dt 1 --dir <topomap_dir>`: This command creates a directory in `/vint_release/deployment/topomaps/images` and saves an image as a node in the map every second the bag is played (a simplified sketch of this subsampling idea is shown below).
- `rosbag play -r 1.5 <bag_filename>`: This command plays the rosbag at 1.5x speed, so the python script is actually recording nodes 1.5 seconds apart. The `<bag_filename>` should be the entire bag name with the .bag extension. You can change this value in the `create_topomap.sh` file. The command does not run until you hit Enter, which you should only do once the python script gives its waiting message. Once you play the bag, move to the screen where the python script is running so you can kill it when the rosbag stops playing.
When the bag stops playing, kill the tmux session.
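For reference, the subsampling that `create_topomap.py` performs amounts to saving one camera frame every `--dt` seconds while the bag plays back. The sketch below is only a simplified illustration of that idea (topic name and message handling taken from the descriptions above), not the actual script; the output directory is a placeholder.

```python
#!/usr/bin/env python
# Simplified illustration of the topomap subsampling idea (not the actual
# create_topomap.py): save one frame from /usb_cam/image_raw every `dt` seconds.
import os

import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

DT = 1.0                                       # seconds between saved nodes (like --dt 1)
OUT_DIR = "../topomaps/images/<topomap_name>"  # placeholder output directory

bridge = CvBridge()
last_saved = None
node_idx = 0


def image_callback(msg):
    global last_saved, node_idx
    now = rospy.Time.now()
    if last_saved is None or (now - last_saved).to_sec() >= DT:
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        cv2.imwrite(os.path.join(OUT_DIR, f"{node_idx}.jpg"), frame)
        node_idx += 1
        last_saved = now


if __name__ == "__main__":
    os.makedirs(OUT_DIR, exist_ok=True)
    rospy.init_node("topomap_subsampler")
    rospy.Subscriber("/usb_cam/image_raw", Image, image_callback)
    rospy.spin()
```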
Running the model
Navigation
Make sure to run this script inside the `vint_release/deployment/src/` directory.

`./navigate.sh "--model <model_name> --dir <topomap_dir>"`
To deploy one of the models from the published results, we are releasing model checkpoints that you can download from this link.
The `<model_name>` is the name of the model in the `vint_release/deployment/config/models.yaml` file. In this file, you specify these parameters of the model for each model (defaults used):
- `config_path` (str): path of the *.yaml file in `vint_release/train/config/` used to train the model
- `ckpt_path` (str): path of the *.pth file in `vint_release/deployment/model_weights/`

Make sure these configurations match what you used to train the model. The configurations for the models we provide weights for are included in the yaml file for your reference.
The `<topomap_dir>` is the name of the directory in `vint_release/deployment/topomaps/images` that has the images corresponding to the nodes in the topological map. The images are ordered by name from 0 to N.
This command opens up 4 windows:
- `roslaunch vint_locobot.launch`: This launch file opens the usb_cam node for the camera, the joy node for the joystick, and several nodes for the robot's mobile base.
- `python navigate.py --model <model_name> --dir <topomap_dir>`: This python script starts a node that reads in image observations from the `/usb_cam/image_raw` topic, inputs the observations and the map into the model, and publishes actions to the `/waypoint` topic.
- `python joy_teleop.py`: This python script starts a node that reads inputs from the joy topic and outputs them on topics that teleoperate the robot's base.
- `python pd_controller.py`: This python script starts a node that reads messages from the `/waypoint` topic (waypoints from the model) and outputs velocities to navigate the robot's base; a rough sketch of this waypoint-tracking idea follows this subsection.
When the robot is finished navigating, kill the `pd_controller.py` script, and then kill the tmux session. If you want to take control of the robot while it is navigating, the `joy_teleop.py` script allows you to do so with the joystick.
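The waypoint-tracking step can be pictured roughly as below. This is a hypothetical sketch, not the repo's `pd_controller.py`: it assumes the waypoint arrives as a `[dx, dy]` offset in the robot frame published as a `Float32MultiArray`, and the velocity limits, gains, and output topic are placeholders (the real values live in `pd_controller.py` and `robot.yaml`).

```python
#!/usr/bin/env python
# Hypothetical sketch of waypoint tracking (not the repo's pd_controller.py).
# Assumes the /waypoint message is a [dx, dy] offset in the robot's frame,
# published as a Float32MultiArray; the real node defines its own message and gains.
import math

import rospy
from geometry_msgs.msg import Twist
from std_msgs.msg import Float32MultiArray

MAX_V = 0.4   # m/s, placeholder; see vint_release/deployment/config/robot.yaml
MAX_W = 0.8   # rad/s, placeholder

cmd_pub = None


def waypoint_callback(msg):
    dx, dy = msg.data[0], msg.data[1]
    cmd = Twist()
    # Simple proportional control: drive toward the waypoint, turn toward its heading.
    cmd.linear.x = max(-MAX_V, min(MAX_V, 0.5 * dx))
    cmd.angular.z = max(-MAX_W, min(MAX_W, 1.0 * math.atan2(dy, dx)))
    cmd_pub.publish(cmd)


if __name__ == "__main__":
    rospy.init_node("waypoint_tracker_sketch")
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)  # output topic is a placeholder
    rospy.Subscriber("/waypoint", Float32MultiArray, waypoint_callback)
    rospy.spin()
```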
Exploration
Make sure to run this script inside the `vint_release/deployment/src/` directory.

`./explore.sh "--model <model_name>"`
To deploy one of the models from the published results, we are releasing model checkpoints that you can download from this link.
The `<model_name>` is the name of the model in the `vint_release/deployment/config/models.yaml` file (note that only NoMaD works for exploration). In this file, you specify these parameters of the model for each model (defaults used):
- `config_path` (str): path of the *.yaml file in `vint_release/train/config/` used to train the model
- `ckpt_path` (str): path of the *.pth file in `vint_release/deployment/model_weights/`

Make sure these configurations match what you used to train the model. The configurations for the models we provide weights for are included in the yaml file for your reference.
This command opens up 4 windows:
- `roslaunch vint_locobot.launch`: This launch file opens the usb_cam node for the camera, the joy node for the joystick, and several nodes for the robot's mobile base.
- `python explore.py --model <model_name>`: This python script starts a node that reads in image observations from the `/usb_cam/image_raw` topic, inputs the observations into the model, and publishes exploration actions to the `/waypoint` topic.
- `python joy_teleop.py`: This python script starts a node that reads inputs from the joy topic and outputs them on topics that teleoperate the robot's base.
- `python pd_controller.py`: This python script starts a node that reads messages from the `/waypoint` topic (waypoints from the model) and outputs velocities to navigate the robot's base.
When the robot is finished navigating, kill the `pd_controller.py` script, and then kill the tmux session. If you want to take control of the robot while it is navigating, the `joy_teleop.py` script allows you to do so with the joystick.
Adapting this code to different robots
We hope that this codebase is general enough to allow you to deploy it to your favorite ROS-based robots. You can change the robot configuration parameters in `vint_release/deployment/config/robot.yaml`, such as the robot's max angular and linear velocities and the topics used to teleoperate and control it. Please feel free to create a GitHub Issue or reach out to the authors at [email protected].
Citing
@inproceedings{shah2022gnm,
author = {Dhruv Shah and Ajay Sridhar and Arjun Bhorkar and Noriaki Hirose and Sergey Levine},
title = {{GNM: A General Navigation Model to Drive Any Robot}},
booktitle = {International Conference on Robotics and Automation (ICRA)},
year = {2023},
url = {https://arxiv.org/abs/2210.03370}
}
@inproceedings{shah2023vint,
title = {Vi{NT}: A Foundation Model for Visual Navigation},
author = {Dhruv Shah and Ajay Sridhar and Nitish Dashora and Kyle Stachowicz and Kevin Black and Noriaki Hirose and Sergey Levine},
booktitle = {7th Annual Conference on Robot Learning},
year = {2023},
url = {https://arxiv.org/abs/2306.14846}
}
@article{sridhar2023nomad,
author = {Ajay Sridhar and Dhruv Shah and Catherine Glossop and Sergey Levine},
title = {{NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration}},
journal = {arXiv pre-print},
year = {2023},
url = {https://arxiv.org/abs/2310.xxxx}
}