Decision Making Under Uncertainty with POMDPs.jl
Introduction to the POMDPs.jl framework and its ecosystem.
The course covers how to build and solve decision making problems in uncertain environments using the POMDPs.jl ecosystem of Julia packages. Topics covered include:
- sequential decision making frameworks: Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs)
- running simulations
- online and offline solution methods: value iteration, Q-learning, SARSA, and Monte Carlo tree search
- reinforcement learning and deep reinforcement learning, including proximal policy optimization (PPO), deep Q-networks (DQN), and actor-critic methods
- imitation learning through behavior cloning of expert demonstrations
- state estimation through particle filtering
- belief updating and alpha vectors
- approximate methods, including grid interpolation for local approximation value iteration
- black-box stress testing to validate autonomous systems

The course is intended for a wide audience; no prior MDP/POMDP knowledge is expected.
Installation
- Install Julia (we used v1.6.2; other versions should work)
- Install Pluto.jl
- Clone this repo:
```
git clone https://github.com/JuliaAcademy/Decision-Making-Under-Uncertainty
```
- From the Julia REPL (`julia`), run Pluto (a web browser window will pop up):
```
julia> using Pluto
julia> Pluto.run()
```
Or you can simply run the following in a terminal:
```
julia -E "using Pluto; Pluto.run()"
```
- From Pluto, open one of the `.jl` notebook files located in the `Decision-Making-Under-Uncertainty/notebooks/` directory and enjoy!
Lectures
The lectures can be found on Julia Academy and YouTube. They are broken down as follows.
0. Introduction
Brief introduction to the content of this course.
1. MDPs: Markov Decision Processes
Introduction to MDPs using the Grid World problem.
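To get a feel for the API before opening the notebook, here is a minimal sketch that solves a grid world MDP offline with value iteration. It assumes the registered POMDPModels.jl and DiscreteValueIteration.jl packages; the lecture notebook may construct the problem differently.

```julia
using POMDPs, POMDPModels, DiscreteValueIteration

mdp = SimpleGridWorld()                            # grid world MDP from POMDPModels.jl
solver = ValueIterationSolver(max_iterations=100)  # offline dynamic programming solver
policy = solve(solver, mdp)                        # compute a policy over all states
a = action(policy, GWPos(4, 3))                    # query the policy at a grid state
```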
2. POMDPs: Partially Observable Markov Decision Processes
Introduction to POMDPs using the Crying Baby problem.
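To show how a POMDP is specified, below is a hedged sketch of a crying-baby-style problem using QuickPOMDPs.jl and POMDPModelTools.jl. The probabilities and rewards here are illustrative placeholders, not the exact values from the lecture.

```julia
using QuickPOMDPs, POMDPModelTools

pomdp = QuickPOMDP(
    states = [:hungry, :full],
    actions = [:feed, :ignore],
    observations = [:crying, :quiet],
    discount = 0.9,
    transition = function (s, a)
        if a == :feed
            return Deterministic(:full)                     # feeding always satisfies
        elseif s == :hungry
            return Deterministic(:hungry)                   # hunger persists if ignored
        else
            return SparseCat([:hungry, :full], [0.1, 0.9])  # may become hungry
        end
    end,
    observation = (a, sp) -> sp == :hungry ?
        SparseCat([:crying, :quiet], [0.8, 0.2]) :          # hungry babies usually cry
        SparseCat([:crying, :quiet], [0.1, 0.9]),
    reward = (s, a) -> (s == :hungry ? -10.0 : 0.0) + (a == :feed ? -5.0 : 0.0),
    initialstate = Deterministic(:full)
)
```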
3. State Estimation using Particle Filtering
Using beliefs to estimate the state of an agent through particle filtering.
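A bootstrap particle filter from ParticleFilters.jl can serve as the belief updater for a POMDP. A minimal sketch, assuming `pomdp` is a POMDP model such as the one sketched above:

```julia
using POMDPs, ParticleFilters

pf = BootstrapFilter(pomdp, 1_000)              # belief updater with 1,000 particles
b = initialize_belief(pf, initialstate(pomdp))  # sample from the initial state distribution
b′ = update(pf, b, :ignore, :crying)            # updated belief after an action and observation
```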
4. Approximate Methods
Approximating a continuous space using grid interpolation and value function approximation.
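The idea behind grid interpolation: represent a function over a continuous space by its values at grid vertices, then estimate it anywhere via multilinear interpolation. A small sketch with GridInterpolations.jl (the value data here is a made-up placeholder):

```julia
using GridInterpolations

grid = RectangleGrid(0:0.1:1, 0:0.1:1)                # discretize [0, 1]² into grid vertices
data = [sum(ind2x(grid, i)) for i in 1:length(grid)]  # placeholder values at each vertex
v = interpolate(grid, data, [0.25, 0.33])             # multilinear estimate at a query point
```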
5. Deep Reinforcement Learning
Introduction to deep reinforcement learning applied to the pendulum swing-up MDP.
6. Imitation Learning
Introduction to imitation learning using behavior cloning of expert demonstrations.
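Behavior cloning reduces to supervised learning on logged expert state-action pairs. A minimal sketch with Flux.jl, where `states` and `actions` are hypothetical stand-ins for expert demonstrations:

```julia
using Flux

# Hypothetical expert data: 2-D states and 1-D continuous actions, one column each.
states  = rand(Float32, 2, 1000)
actions = rand(Float32, 1, 1000)

policy = Chain(Dense(2, 32, relu), Dense(32, 1))  # policy network: state -> action
opt = ADAM(1e-3)
loss(x, y) = Flux.Losses.mse(policy(x), y)        # cloning = regression onto expert actions

for epoch in 1:100
    Flux.train!(loss, Flux.params(policy), [(states, actions)], opt)
end
```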
7. Black-Box Validation
Stress testing a black-box system using adaptive stress testing.
Created and taught by Robert Moss.