A library for neuroscience-inspired navigation and decision making research.

Neuro-Nav

Example environments

Neuro-Nav is an open-source library for neurally plausible reinforcement learning (RL). It offers a set of standardized environments and RL algorithms drawn from canonical behavioral and neural studies in rodents and humans. The repository also contains a set of Jupyter notebooks that reproduce various experimental results from the literature.

Benchmark Environments

Contains two highly parameterizable environments: GridEnv and GraphEnv. Each comes with a variety of task templates, observation spaces, and other settings useful for research.

See neuronav/envs for more information.
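The exact constructor arguments of GridEnv and GraphEnv are documented in the repository; as a general illustration, environments of this kind typically follow a Gym-style reset/step loop. The sketch below is a toy stand-in with hypothetical names, not the neuronav API:

```python
class ToyGridEnv:
    """A tiny 5x5 gridworld with a goal in one corner (illustrative only)."""

    def __init__(self, size=5):
        self.size = size
        self.goal = (size - 1, size - 1)
        self.pos = (0, 0)

    def reset(self):
        # Return the agent to the start state and emit the first observation.
        self.pos = (0, 0)
        return self.pos

    def step(self, action):
        # Actions: 0=up, 1=down, 2=left, 3=right; moves are clipped at walls.
        r, c = self.pos
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        self.pos = (min(max(r + dr, 0), self.size - 1),
                    min(max(c + dc, 0), self.size - 1))
        reward = 1.0 if self.pos == self.goal else 0.0
        done = self.pos == self.goal
        return self.pos, reward, done

env = ToyGridEnv()
obs = env.reset()
obs, reward, done = env.step(3)  # move right: (0, 0) -> (0, 1)
```

The parameterizable pieces in the real environments (task templates, observation spaces) would correspond to swapping out the reward logic and the observation returned here.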

Algorithm Toolkit

Contains artificial agents implementing over a dozen canonical reinforcement learning algorithms, including Temporal Difference (TD) and Dyna versions of Q-Learning, Successor Representation, and Actor-Critic algorithms.

See neuronav/agents for more information.
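To make the TD Q-Learning entry in that list concrete, here is a generic tabular sketch (not the neuronav agent API) that learns to walk right along a short chain with reward at the end:

```python
import random
from collections import defaultdict

def td_q_learning(n_states=5, n_episodes=500, alpha=0.1, gamma=0.9,
                  epsilon=0.1, seed=0):
    """Tabular TD Q-learning on a 1-D chain; reward 1.0 at the rightmost state."""
    rng = random.Random(seed)
    Q = defaultdict(lambda: [0.0, 0.0])  # Q[state] = [Q(left), Q(right)]
    for _ in range(n_episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection (random tie-breaking).
            if rng.random() < epsilon or Q[s][0] == Q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # TD update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = td_q_learning()
# The greedy policy should move right (action 1) toward the rewarded end.
policy = [0 if Q[s][0] > Q[s][1] else 1 for s in range(4)]
```

The Dyna variants mentioned above extend this same update with extra replayed updates from a learned model of the environment.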

Deep RL Algorithms

Contains a set of deep reinforcement learning algorithms implemented in PyTorch. These include Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC).

See neuronav/deep_agents for more information.
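PPO and SAC themselves are too involved to sketch here, but the actor-critic idea they build on fits in a few lines. This is a hypothetical toy (a 2-armed bandit, plain Python, no PyTorch), not the neuronav implementation: a softmax policy (actor) is updated by a policy gradient, using a learned value baseline (critic):

```python
import math
import random

def actor_critic_bandit(steps=3000, lr=0.1, seed=0):
    """Actor-critic on a 2-armed bandit: arm 0 pays 1.0, arm 1 pays 0.0."""
    rng = random.Random(seed)
    theta = [0.0, 0.0]  # policy logits (actor)
    v = 0.0             # value baseline (critic)
    for _ in range(steps):
        z = [math.exp(t) for t in theta]
        probs = [x / sum(z) for x in z]
        a = 0 if rng.random() < probs[0] else 1
        reward = 1.0 if a == 0 else 0.0
        advantage = reward - v  # one-step TD error in a bandit setting
        v += lr * advantage     # critic update toward observed reward
        for i in range(2):      # actor update: advantage * grad log softmax
            grad = (1.0 if i == a else 0.0) - probs[i]
            theta[i] += lr * advantage * grad
    return probs

probs = actor_critic_bandit()  # probs[0] should dominate after training
```

PPO adds a clipped surrogate objective on top of this gradient, and SAC adds entropy regularization and learned Q-functions, but both keep the same actor/critic division of labor.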

Experiment Notebooks

Neuro-Nav includes a number of interactive Jupyter notebooks featuring different experimental environments, tasks, and RL agent algorithms. You can use these notebooks to replicate various experiments from the literature, or simply to learn what is possible with the library. They include notebooks that replicate results from computational neuroscience, psychiatry, and machine learning.

See notebooks for more information.

Installation

The easiest way to install the neuronav package is by running the following command:

pip install 'git+https://github.com/awjuliani/neuro-nav'

This installs the environments and algorithms, but not the Jupyter notebooks. If you would also like the notebooks, download the repository locally and install neuronav by running the following command from the repository root:

pip install -e .

If you would like to use the experiment notebooks as well as the core library, run pip install -e '.[experiments_local]' from the repository root to install the additional dependencies (the quotes keep shells such as zsh from interpreting the square brackets).

It is also possible to access all notebooks using Google Colab. The links to the Colab notebooks can be found here.

Requirements

Requirements for the neuronav library can be found here.

Contributing

Neuro-Nav is an open-source project, and we actively encourage community contributions. These can take various forms: new environments, tasks, algorithms, bug fixes, documentation, citations of relevant work, or additional experiment notebooks. For a small contribution, feel free to open a pull request and we will review it. For a larger contribution, please open a GitHub issue first, so the contribution can be discussed and support provided if needed. If you have ideas for changes or features you would like to see in the library, but don't have the resources to contribute yourself, please open a GitHub issue describing the request.

Citing

If you use Neuro-Nav in your research or educational material, please cite the work as follows:

@inproceedings{neuronav2022,
  Author = {Juliani, Arthur and Barnett, Samuel and Davis, Brandon and Sereno, Margaret and Momennejad, Ida},
  Title = {Neuro-Nav: A Library for Neurally-Plausible Reinforcement Learning},
  Year = {2022},
  BookTitle = {The 5th Multidisciplinary Conference on Reinforcement Learning and Decision Making},
}

The research paper corresponding to the above citation can be found here.

License

Apache License 2.0
