  • Stars: 886
  • Rank: 51,520 (Top 2%)
  • Language: Python
  • License: MIT License
  • Created: about 8 years ago
  • Updated: almost 3 years ago


Repository Details

Reinforcement learning environments with musculoskeletal models

NeurIPS 2019: Learn to Move - Walk Around

This repository contains the software required to participate in the NeurIPS 2019 Challenge: Learn to Move - Walk Around. See more details about the challenge here, and the full documentation of our reinforcement learning environment here. This document covers the basic steps to get you set up for the challenge.

Your task is to develop a controller for a physiologically plausible 3D human model to walk or run following velocity commands with minimum effort. You are provided with a human musculoskeletal model and a physics-based simulation environment, OpenSim. There will be three tracks:

  1. Best performance
  2. Novel ML solution
  3. Novel biomechanical solution

Winners will be awarded in each of the three tracks.

To model physics and biomechanics we use OpenSim - a biomechanical physics environment for musculoskeletal simulations.

What's new compared to NIPS 2017: Learning to run?

We took into account feedback from the last challenge and made several changes:

  • You can use experimental data (to greatly speed up the learning process)
  • We released the third dimension (the model can fall sideways)
  • We added a prosthetic leg -- the goal is to solve a medical challenge of modeling how walking changes after getting a prosthesis. Your work can speed up the design, prototyping, or tuning of prosthetics!

You haven't heard of NIPS 2017: Learning to run? Watch this video!

HUMAN environment

Getting started

Anaconda is required to run our simulations. Anaconda creates a virtual environment with all the necessary libraries, avoiding conflicts with libraries in your operating system. You can get Anaconda here: https://docs.anaconda.com/anaconda/install/. In the following instructions we assume that Anaconda is successfully installed.

For the challenge we prepared OpenSim binaries as a conda environment to make the installation straightforward.

We support Windows, Linux, and Mac OSX (all in 64-bit). To install our simulator, you first need to create a conda environment with the OpenSim package.

On Windows, open a command prompt and type:

conda create -n opensim-rl -c kidzik -c conda-forge opensim python=3.6.1
activate opensim-rl
pip install osim-rl

On Linux/OSX, run:

conda create -n opensim-rl -c kidzik -c conda-forge opensim python=3.6.1
source activate opensim-rl
pip install osim-rl

These commands will create a virtual environment on your computer with the necessary simulation libraries installed. If the command python -c "import opensim" runs smoothly, you are done! Otherwise, please refer to our FAQ section.

Note that source activate opensim-rl activates the Anaconda virtual environment. You need to type it every time you open a new terminal.

Basic usage

To execute 200 iterations of the simulation, enter the Python interpreter and run the following:

from osim.env import L2M2019Env

# Create the environment with visualization enabled
env = L2M2019Env(visualize=True)
observation = env.reset()
for i in range(200):
    # Apply a random muscle-activation vector at each step
    observation, reward, done, info = env.step(env.action_space.sample())

Random walk

The function env.action_space.sample() returns a random vector for muscle activations, so, in this example, muscles are activated randomly (red indicates an active muscle and blue an inactive muscle). Clearly with this technique we won't go too far.
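For intuition, random exploration simply draws each muscle activation uniformly from [0, 1]. A minimal sketch of what env.action_space.sample() is doing, without the simulator (the muscle count of 22 is an assumption for illustration, not read from the environment):

```python
import random

def sample_random_action(n_muscles=22, rng=random):
    """Return a list of muscle activations drawn uniformly from [0, 1],
    mimicking env.action_space.sample(). The dimension (22) is an
    assumed value for illustration only."""
    return [rng.uniform(0.0, 1.0) for _ in range(n_muscles)]

action = sample_random_action()
```

Because each component is drawn independently, successive actions are uncorrelated, which is why a random policy jitters in place rather than producing coordinated gait.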

Your goal is to construct a controller, i.e. a function from the state space (current positions, velocities, and accelerations of joints) to the action space (muscle excitations), that will enable the model to travel as far as possible in a fixed amount of time. Suppose you have trained a neural network mapping observations (the current state of the model) to actions (muscle excitations), i.e. you have a function action = my_controller(observation). Then:

# ...
total_reward = 0.0
for i in range(200):
    # make a step given by the controller and record the state and the reward
    observation, reward, done, info = env.step(my_controller(observation))
    total_reward += reward
    if done:
        break

# Your reward is
print("Total reward %f" % total_reward)

You can find details about the observation object here.
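The loop above assumes you already have a my_controller function. As a hypothetical stand-in (not a trained policy and not part of the osim-rl API), a constant-excitation controller that ignores the observation and clips to the valid [0, 1] range could look like this; the muscle count of 22 is again an assumption for illustration:

```python
def my_controller(observation, excitation=0.5, n_muscles=22):
    """Hypothetical placeholder controller: ignore the observation and
    excite every muscle at the same constant level, clipped to [0, 1].
    The muscle count (22) is an assumed value for illustration only."""
    level = min(max(excitation, 0.0), 1.0)
    return [level] * n_muscles
```

Any real controller would replace this with a function of the observation, e.g. a neural network's forward pass, but the input/output contract stays the same: one observation in, one activation vector in [0, 1] out.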

Submission

In order to make a submission to AIcrowd, please refer to this page.

Rules

Organizers reserve the right to modify challenge rules as required.

Read more in the official documentation

Contributions of participants

Partners

More Repositories

  1. opencap-core: Main OpenCap processing pipeline (Python, 149 stars)
  2. mobile-gaitlab (Jupyter Notebook, 81 stars)
  3. opencap-processing: Utilities for processing OpenCap data (Python, 64 stars)
  4. mobilize-tutorials: Mobilize Center Tutorials (Jupyter Notebook, 26 stars)
  5. osimpipeline: Python framework for generating scientific workflows with the OpenSim musculoskeletal modeling and simulation software package. Built on Python DoIt (http://pydoit.org/), osimpipeline handles the organization of input and output files for generating simulations and results in a clean, repeatable manner. (Python, 13 stars)
  6. sit2stand-analysis (HTML, 11 stars)
  7. mocopaper: Generate the results for the publication on OpenSim Moco (TeX, 11 stars)
  8. predictKAM: Predict the knee adduction moment using motion capture marker positions (Jupyter Notebook, 10 stars)
  9. MatlabStaticOptimization: Custom static optimization implementation that allows for flexible cost terms, such as EMG tracking, as well as the incorporation of passive muscle forces and tendon compliance (MATLAB, 9 stars)
  10. imu-fog-detection (Python, 8 stars)
  11. video-pipelines (Makefile, 8 stars)
  12. kneenet-docker (Python, 7 stars)
  13. knee_OA_staging (Python, 7 stars)
  14. coupled-exo-sim: Simulations of single and multi-joint assistive devices to reduce the metabolic cost of walking (TeX, 5 stars)
  15. opencap-api (Python, 5 stars)
  16. balance-exo-sim (Python, 4 stars)
  17. PassiveMuscleForceCalibration: Calibrates the passive muscle forces in an OpenSim model based on experimentally-collected passive joint moments from Silder et al. 2007 (MATLAB, 4 stars)
  18. opencap-viewer (JavaScript, 3 stars)
  19. addbiomechanics-paper: Data and results for the manuscript associated with the AddBiomechanics automated data-processing tool (Python, 2 stars)
  20. opensim-taskspacecontrol: Task space control framework in OpenSim (C++, 2 stars)
  21. psim: Framework for conducting predictive simulations. Currently in development. (C++, 1 star)
  22. gaitlab (Python, 1 star)
  23. grf_filtering (C++, 1 star)
  24. kneenet-local-instructions (1 star)
  25. opencap-analysis (Python, 1 star)
  26. toilet-seat (C++, 1 star)
  27. shoulder-personalization: Python scripts to scale and personalize the Saul upper body model (Python, 1 star)