  • Stars: 143
  • Rank: 257,007 (top 6%)
  • Language: C#
  • License: Apache License 2.0
  • Created: over 6 years ago
  • Updated: about 2 years ago


Repository Details

Research into locomotion style transfer with Active Ragdolls (using MarathonEnvs + ml_agents)

NOTE: This project has now been integrated into MarathonEnvs. Please go there if you have any questions.


ActiveRagdollStyleTransfer

Research into using mocap (and, longer term, video) as a style reference for training Active Ragdolls / locomotion for video games

(using Unity ML-Agents + MarathonEnvs)
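
The core idea, in the spirit of the DeepMimic paper cited under StyleTransfer001 below, is to reward the physics-driven ragdoll for matching the pose of a mocap reference clip. A minimal C# sketch of such a pose-matching reward follows; the class, array names, and weighting constant are illustrative assumptions, not the project's actual reward code.

```csharp
using UnityEngine;

// Hypothetical sketch of a DeepMimic-style pose-matching reward: compare the
// ragdoll's joint rotations against the corresponding joints of the animated
// mocap reference. Names and the weighting constant are assumptions.
public static class StyleReward
{
    // ragdollJoints and referenceJoints are assumed to be corresponding
    // transforms on the physics character and the animated reference.
    public static float PoseMatchReward(Transform[] ragdollJoints, Transform[] referenceJoints)
    {
        float sumSqAngle = 0f;
        for (int i = 0; i < ragdollJoints.Length; i++)
        {
            // Angular difference (in radians) between the simulated joint
            // and the mocap reference joint.
            float angle = Quaternion.Angle(ragdollJoints[i].localRotation,
                                           referenceJoints[i].localRotation) * Mathf.Deg2Rad;
            sumSqAngle += angle * angle;
        }
        // Exponential of the negative squared error keeps the reward in (0, 1],
        // with 1 meaning a perfect match of the reference pose.
        return Mathf.Exp(-2f * sumSqAngle);
    }
}
```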


Goals


Using this repo

  • Make sure you are using a compatible version of Unity (tested with 2018.4 LTS and 2019.1)

  • To run trained models, make sure you add TensorFlowSharp to Unity

  • To try different moves: replace the reference MoCap in the animation tree and select the corresponding ML-Agents trained model (see the sketch after this list)

  • To re-train:

    • Make sure you've first installed this project's copy of ml-agents, via
pip install .
    • Set the LearnFromMocapBrain to External (see SetBrainType.png)

    • Build the project

    • From the root path, invoke the Python trainer like this: mlagents-learn config\style_transfer_config.yaml --train --env="\b\StyleTransfer002\Unity Environment.exe" --run-id=StyleTransfer002-145, where "\b\StyleTransfer002\Unity Environment.exe" points to the built project and StyleTransfer002-145 is a unique name for this run. (Note: use / path separators on macOS/Linux)

  • See the ML-Agents documentation for more details on using ML-Agents

  • Post an Issue if you are still stuck
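
A small sketch related to the "To try different moves" step above: scrubbing the reference Animator to a given phase of the mocap clip so the ragdoll can be compared against it. The state name and component layout here are assumptions, not the project's code.

```csharp
using UnityEngine;

// Hypothetical sketch: scrub the reference Animator to a chosen point in the
// mocap clip so the simulated character can be compared against that pose.
// The state name "Walking" is a placeholder for whatever state sits in the
// animation tree.
public class ReferencePoseSampler : MonoBehaviour
{
    public Animator referenceAnimator;   // assign the mocap reference in the Inspector
    public string stateName = "Walking"; // placeholder state name

    // phase is the normalized time through the clip, in the range [0, 1].
    public void SampleReferencePose(float phase)
    {
        referenceAnimator.Play(stateName, 0, phase);
        // Evaluate the Animator immediately so the reference skeleton's
        // transforms reflect the requested phase this frame.
        referenceAnimator.Update(0f);
    }
}
```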


Contributors


Download builds: Releases


StyleTransfer002

Preview animations: Backflip (002.144, 128m steps), Running (002.114), Walking (002.113)
  • Model: MarathonMan (modified MarathonEnv.DeepMindHumanoid)
  • Animation: Runningv2, Walking, Backflip
  • Hypothesis: Implement basic style transfer from mo-cap using the MarathonEnv model
  • Outcome: now training on Backflip
    • Initially was able to train walking but not running (16m steps / 3.2m observations)
    • After tweaking the model, was able to train running (32m steps / 6.4m observations)
    • Was struggling to train backflip, but it looks like it just needs a longer training run (current example is 48m steps / 9.6m observations)
    • Was able to train Backflip after updating to the Unity 2018.3 beta - it looks like updates to the PhysX engine improve stability
  • References:
  • Notes:
    • Needed to make many modifications to the model to improve training performance
    • Adding sensors to the feet improved training
    • Tweaking the joints improved training
    • Training time was ~7h for 16m steps (3.2m observations). TODO: check assumptions
    • New training time is roughly 2x that
    • ... Optimization: hacked the Academy to run 4 physics-only steps per ML step (see the sketch after this list)
    • ... Optimization: train with 64 agents
    • ... also found that training in headless mode (--no-graphics) helped
    • Updated to Unity 2018.3 Beta for PhysX improvements
    • See RawNotes.002 for details on each experiment
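
A minimal sketch of the physics sub-stepping optimization noted above, assuming plain Unity physics calls rather than the project's actual Academy hack (the component and field names are illustrative):

```csharp
using UnityEngine;

// Hypothetical sketch (not the project's actual Academy modification):
// run several physics-only sub-steps for every ML decision, so the policy
// is queried less often than the physics engine ticks.
public class PhysicsSubStepper : MonoBehaviour
{
    // Physics steps per ML step; the notes above use 4.
    public int subStepsPerDecision = 4;

    void Awake()
    {
        // Take over stepping from Unity so we control how many physics
        // ticks happen between agent decisions.
        Physics.autoSimulation = false;
    }

    // Call this once per agent action (e.g. after applying joint torques).
    public void StepPhysics()
    {
        for (int i = 0; i < subStepsPerDecision; i++)
        {
            Physics.Simulate(Time.fixedDeltaTime);
        }
    }
}
```

Querying the (relatively expensive) policy once per several cheap physics ticks reduces the per-observation overhead, which is where the speed-up comes from.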

StyleTransfer001


  • Model: U_Character_REFAvatar
  • Animation: HumanoidWalk
  • Hypothesis: Implement basic style transfer from mo-cap
  • Outcome: FAIL
    • U_Character_REFAvatar + HumanoidWalk has an issue whereby the feet collide. The RL does learn to avoid this, but it feels like it slows training down (see the sketch after this list)
  • References:
    • Inspiration: [DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills, arXiv:1804.02717 [cs.GR]](https://arxiv.org/abs/1804.02717)
  • Raw Notes:
    • Aug 27 2018: Migrated to a new repo and tidied up the code to make it open source
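
For illustration only, and not what this experiment did (the policy was left to learn around the contacts): one way to remove the foot-on-foot collisions described above is to tell PhysX to ignore contacts between the two foot colliders. The field names here are placeholders.

```csharp
using UnityEngine;

// Hypothetical sketch: disable collision between the two foot colliders so
// the policy no longer has to learn to avoid foot-on-foot contacts.
public class IgnoreFootCollisions : MonoBehaviour
{
    public Collider leftFoot;   // assign in the Inspector
    public Collider rightFoot;  // assign in the Inspector

    void Start()
    {
        Physics.IgnoreCollision(leftFoot, rightFoot, true);
    }
}
```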

More Repositories

  1. MujocoUnity (C#, 50 stars) - Reproducing MuJoCo benchmarks in a modern, commercial game/physics engine (Unity + PhysX).
  2. ActiveRagdollAssaultCourse (C#, 39 stars) - Research into an assault course for training Active Ragdolls (using MujocoUnity + ml_agents).
  3. ActiveRagdollControllers (C#, 34 stars) - Research into controllers for 2D and 3D Active Ragdolls (using MujocoUnity + ml_agents).
  4. MarathonEnvsBaselines (Python, 19 stars) - Experimental: using OpenAI Baselines with MarathonEnvs (ML-Agents).
  5. ppo-dash (Python, 16 stars) - PPO Dash: Improving Generalization in Deep Reinforcement Learning.
  6. UnityMQ (C#, 9 stars) - NetMQ + Unity3D, am I wasting my time?
  7. CocosSharp-Forms-sample (C#, 8 stars) - Creating a sample project which has both CocosSharp and Xamarin Forms in the same project.
  8. UnityRest (C#, 6 stars)
  9. ImagePickerCropAndResize (C#, 6 stars) - Simple Xamarin.Forms sample to test an image picker + downsampling.
  10. CLIP_visual-spatial-reasoning (Python, 5 stars)
  11. BabyDyna (C#, 2 stars)
  12. Machine-Learning-Foundations (Jupyter Notebook, 2 stars)
  13. Kibble (Objective-C, 2 stars) - R&D project that creates a tile UI over Objective-C programming using some funky reflection tricks.
  14. Getting-aHead (C#, 2 stars) - Global Game Jam 2020.
  15. many-worlds (C#, 2 stars)
  16. agent_lab (HTML, 2 stars)
  17. PD-controller (C#, 1 star)
  18. OnlyOne (ShaderLab, 1 star)
  19. MarathonEnvsPlanet (Python, 1 star)
  20. ml-agents-envs-python (Python, 1 star) - Custom ml-agents-envs-python.
  21. WatchTheBPM (C#, 1 star) - Simple Xamarin iOS watch app for tapping BPM on Apple Watch.
  22. UdacityDeepRL-Project2 (Python, 1 star) - Udacity Reinforcement Learning Nanodegree Project Two: Continuous Control.
  23. WatchTheBPM-Forms (C#, 1 star) - Simple Xamarin Forms + Apple Watch app for tapping BPM on Apple Watch.
  24. ActiveRagdollDeliberatePractice (C#, 1 star) - Experiments to see if implementing theories of Deliberate Practice can improve the authenticity of Active Ragdolls.
  25. UdacityDeepRL-Project3 (Python, 1 star) - Udacity Reinforcement Learning Nanodegree Project Three: Collaboration and Competition.
  26. MarkdownKaTeX (C#, 1 star) - Reader for Markdown + KaTeX.
  27. TicTacAlpha (C#, 1 star) - Attempt at implementing the Udacity Connect Four AlphaZero module in ml-agents.