aima-haskell
Haskell implementations of algorithms from Artificial Intelligence: A Modern Approach (3rd edition) by Russell and Norvig.
Part I. Artificial Intelligence
2. Intelligent Agents
- Environment (Fig 2.1)
- Agent (Fig 2.1; sketched below)
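A minimal sketch of how the agent/environment coupling of Fig 2.1 can be expressed functionally; the types and field names below are illustrative, not necessarily this repository's:

```haskell
-- Illustrative types (not the repository's API): an agent maps a percept to
-- an action, and the environment says what is perceived in a state and how
-- the state changes after an action.
newtype Agent p a = Agent { act :: p -> a }

data Environment p a e = Environment
  { percept :: e -> p        -- what the agent senses in state e
  , step    :: e -> a -> e   -- how the world changes after an action
  }

-- Run the perceive/act loop for n steps, returning the visited states.
runSteps :: Int -> Environment p a e -> Agent p a -> e -> [e]
runSteps n env agent = take (n + 1) . iterate next
  where
    next e = step env e (act agent (percept env e))
```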
Part II. Problem Solving
3. Searching
Completed:
- Problem
- Node
- Tree Search (Fig 3.7)
- Graph Search (Fig 3.7)
- Breadth First Search (Fig 3.11; sketched below)
- Uniform Cost Search (Fig 3.14)
- Depth First Search
- Depth-Limited Search (Fig 3.17)
- Iterative Deepening Search (Fig 3.18)
- Greedy Best First Search
- A* Search
To do:
- Recursive Best First Search (Fig 3.26)
- Iterative-Deepening A*
- Memory-Bounded A* (MA*)
- Simplified MA*
- Bidirectional Search
- Eight Puzzle
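As a rough illustration of the Problem abstraction and the breadth-first strategy listed above, a sketch might look like the following; the field names and function are illustrative stand-ins, not necessarily this repository's actual API:

```haskell
import qualified Data.Set as Set

-- A hypothetical problem record; field names are illustrative.
data Problem s a = Problem
  { initial :: s             -- initial state
  , actions :: s -> [a]      -- legal actions in a state
  , result  :: s -> a -> s   -- transition model
  , isGoal  :: s -> Bool     -- goal test
  }

-- Breadth-first graph search with a FIFO frontier: returns the action
-- sequence to the first goal found, if any.
breadthFirstSearch :: Ord s => Problem s a -> Maybe [a]
breadthFirstSearch p = go (Set.singleton (initial p)) [(initial p, [])]
  where
    go _ [] = Nothing
    go explored ((s, path) : frontier)
      | isGoal p s = Just (reverse path)
      | otherwise  = go explored' (frontier ++ new)
      where
        children  = [ (result p s a, a : path) | a <- actions p s ]
        new       = [ c | c@(s', _) <- children, Set.notMember s' explored ]
        explored' = foldr (Set.insert . fst) explored new
```

A route-finding or Eight Puzzle instance would supply concrete state and action types for `s` and `a`.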
4. Beyond Classical Search
Completed:
- Hill-Climbing (Fig 4.2; sketched below)
- Simulated Annealing (Fig 4.5)
To do:
- Genetic Algorithm (Fig 4.8)
- And/Or Graph Search (Fig 4.11)
- Online Depth First Search (Fig 4.21)
- LRTA* (Fig 4.24)
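A minimal sketch of steepest-ascent hill climbing, assuming the neighbourhood and objective are passed in as functions; the names are illustrative, not the repository's API:

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Hill-Climbing (Fig 4.2): repeatedly move to the best neighbour and stop
-- when no neighbour improves on the current state (a local maximum).
hillClimbing :: (s -> [s]) -> (s -> Double) -> s -> s
hillClimbing neighbours value = go
  where
    go current
      | null ns                     = current
      | value best <= value current = current
      | otherwise                   = go best
      where
        ns   = neighbours current
        best = maximumBy (comparing value) ns
```

Simulated annealing has the same shape, but sometimes accepts a worse neighbour with a temperature-dependent probability.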
5. Adversarial Search
Completed:
- Minimax Search (Fig 5.3; sketched below)
- Alpha-Beta Search (Fig 5.7)
- Searching with Cutoff
To do:
- Stochastic Games
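For reference, a bare minimax decision procedure over a hypothetical game record might look like the sketch below; the Game fields are illustrative, and alpha-beta adds pruning bounds to the same recursion:

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- A hypothetical two-player, zero-sum game interface; names are illustrative.
-- Non-terminal states are assumed to have at least one legal move.
data Game s m = Game
  { moves    :: s -> [m]      -- legal moves in a state
  , play     :: s -> m -> s   -- apply a move
  , terminal :: s -> Bool     -- is the game over?
  , utility  :: s -> Double   -- value of a terminal state for MAX
  }

-- Minimax Search (Fig 5.3): MAX picks the move whose resulting state has the
-- highest minimax value, assuming MIN replies optimally.
minimaxDecision :: Game s m -> s -> Maybe m
minimaxDecision g s = case moves g s of
    [] -> Nothing
    ms -> Just (snd (maximumBy (comparing fst)
                       [ (minValue (play g s m), m) | m <- ms ]))
  where
    maxValue st
      | terminal g st = utility g st
      | otherwise     = maximum [ minValue (play g st m) | m <- moves g st ]
    minValue st
      | terminal g st = utility g st
      | otherwise     = minimum [ maxValue (play g st m) | m <- moves g st ]
```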
6. Constraint Satisfaction Problems
Completed:
- AC3 (Fig 6.3)
- Backtracking Search (Fig 6.5; sketched below)
To do:
- Min Conflicts (Fig 6.8)
- Tree CSP Solver (Fig 6.11)
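A stripped-down sketch of chronological backtracking over a hypothetical CSP record (no inference and no variable or value ordering); the field names are illustrative, not the repository's API:

```haskell
import Control.Monad (msum)
import qualified Data.Map as Map

type Assignment v = Map.Map String v

-- A hypothetical CSP: variables, their domains, and a consistency check on
-- partial assignments. Field names are illustrative.
data CSP v = CSP
  { variables  :: [String]
  , domain     :: String -> [v]
  , consistent :: Assignment v -> Bool
  }

-- Backtracking Search (Fig 6.5): assign one variable at a time, trying each
-- domain value and undoing any choice that cannot be extended to a solution.
backtrackingSearch :: CSP v -> Maybe (Assignment v)
backtrackingSearch csp = go (variables csp) Map.empty
  where
    go []       asgn = Just asgn
    go (x : xs) asgn =
      msum [ go xs asgn'
           | v <- domain csp x
           , let asgn' = Map.insert x v asgn
           , consistent csp asgn' ]
```

Map colouring, for instance, would use colours for `v` and a `consistent` check that no two adjacent regions share a colour.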
Part III. Knowledge, Reasoning and Planning
7. Logical Agents
Completed:
- TT-Entails (Fig 7.10; sketched below)
- PL-Resolution (Fig 7.12)
- PL-FC-Entails (Fig 7.15)
To do:
- DPLL-Satisfiable (Fig 7.17)
- WalkSAT (Fig 7.18)
- Wumpus World
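The model-checking idea behind TT-Entails can be sketched directly: enumerate every assignment to the proposition symbols and check that the query holds in every model of the knowledge base. The `Sentence` type below is an illustrative stand-in for the repository's representation:

```haskell
import Data.List (nub)
import qualified Data.Map as Map

-- A minimal propositional sentence type; constructors are illustrative.
data Sentence
  = Symbol String
  | Not Sentence
  | And Sentence Sentence
  | Or Sentence Sentence
  | Implies Sentence Sentence

type Model = Map.Map String Bool

-- All proposition symbols occurring in a sentence.
symbols :: Sentence -> [String]
symbols (Symbol s)    = [s]
symbols (Not p)       = symbols p
symbols (And p q)     = symbols p ++ symbols q
symbols (Or p q)      = symbols p ++ symbols q
symbols (Implies p q) = symbols p ++ symbols q

-- Truth of a sentence in a model.
eval :: Model -> Sentence -> Bool
eval m (Symbol s)    = Map.findWithDefault False s m
eval m (Not p)       = not (eval m p)
eval m (And p q)     = eval m p && eval m q
eval m (Or p q)      = eval m p || eval m q
eval m (Implies p q) = not (eval m p) || eval m q

-- TT-Entails (Fig 7.10): KB entails alpha iff alpha is true in every model
-- of KB, checked by enumerating all truth assignments.
ttEntails :: Sentence -> Sentence -> Bool
ttEntails kb alpha = all holds models
  where
    syms    = nub (symbols kb ++ symbols alpha)
    models  = map (Map.fromList . zip syms)
                  (sequence (replicate (length syms) [False, True]))
    holds m = not (eval m kb) || eval m alpha
```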
8-9. First-Order Logic
Completed:
- Unify (Fig 9.1; sketched below)
- FOL-FC-Ask (Fig 9.3)
To do:
- FOL-BC-Ask (Fig 9.6)
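Unification is the workhorse of both forward and backward chaining; a compact sketch over an illustrative term type, including the occurs check, might look like this:

```haskell
import Control.Monad (foldM)
import qualified Data.Map as Map

-- A minimal first-order term type; constants are 0-ary functions.
-- The constructors are illustrative, not the repository's representation.
data Term
  = Var String
  | Fn String [Term]
  deriving (Eq, Show)

type Subst = Map.Map String Term

-- Apply a substitution, following chains of variable bindings.
applySubst :: Subst -> Term -> Term
applySubst s t@(Var x)   = maybe t (applySubst s) (Map.lookup x s)
applySubst s (Fn f args) = Fn f (map (applySubst s) args)

-- Unify (Fig 9.1): compute a most general unifier of two terms, if one exists.
unify :: Term -> Term -> Maybe Subst
unify t1 t2 = go t1 t2 Map.empty
  where
    go x y s = case (applySubst s x, applySubst s y) of
      (Var a, Var b) | a == b -> Just s
      (Var a, t)              -> bind a t s
      (t, Var a)              -> bind a t s
      (Fn f as, Fn g bs)
        | f == g && length as == length bs
          -> foldM (\s' (a, b) -> go a b s') s (zip as bs)
        | otherwise -> Nothing
    -- Occurs check: never bind a variable to a term that contains it.
    bind a t s
      | occurs a t = Nothing
      | otherwise  = Just (Map.insert a t s)
    occurs a (Var b)   = a == b
    occurs a (Fn _ ts) = any (occurs a) ts
```

On the textbook example, unifying Knows(John, x) with Knows(y, Mother(y)) yields the binding {y/John, x/Mother(John)}.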
10. Classical Planning
11. Planning and Acting in the Real World
12. Knowledge Representation
Part IV. Uncertain Knowledge and Reasoning
14. Probabilistic Reasoning
Completed:
- Enumeration-Ask (Fig 14.9)
- Elimination-Ask (Fig 14.11)
- Prior-Sample (Fig 14.13; sketched below)
- Rejection-Sampling (Fig 14.14)
- Likelihood-Weighting (Fig 14.15)
To do:
- Gibbs-Ask (Fig 14.16)
- Fit Bayes Networks from data
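A rough sketch of Prior-Sample for Boolean networks, assuming the nodes are supplied in topological order and that the `random` package is available; the BNode record is illustrative, not the repository's representation:

```haskell
import Control.Monad (foldM)
import qualified Data.Map as Map
import System.Random (randomRIO)

type Assignment = Map.Map String Bool

-- A hypothetical Boolean Bayes-net node: a variable, its parents, and a CPT
-- given as a function from the parents' sampled values to P(variable = True).
data BNode = BNode
  { nodeVar  :: String
  , parents  :: [String]
  , probTrue :: [Bool] -> Double
  }

-- Prior-Sample (Fig 14.13): walk the nodes in topological order, sampling each
-- variable conditioned on the already-sampled values of its parents.
priorSample :: [BNode] -> IO Assignment
priorSample = foldM sampleNode Map.empty
  where
    sampleNode asgn node = do
      let parentVals = map (asgn Map.!) (parents node)
      r <- randomRIO (0.0, 1.0)
      pure (Map.insert (nodeVar node) (r < probTrue node parentVals) asgn)
```

Rejection sampling and likelihood weighting are thin layers on top of this: the former discards samples inconsistent with the evidence, while the latter fixes the evidence variables and weights each sample instead.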
15. Probabilistic Reasoning Over Time
To do:
- Kalman Filter
- Particle Filter (Fig 15.17)
16-17. Making Simple and Complex Decisions
Completed:
- Value Iteration (Fig 17.4; sketched below)
- Policy Iteration (Fig 17.7)
To do:
- POMDP Value Iteration (Fig 17.9)
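As a point of reference, value iteration over a hypothetical MDP record might look like the sketch below; the field names are illustrative, and it assumes a discount factor strictly between 0 and 1:

```haskell
import qualified Data.Map as Map

-- A hypothetical MDP record; field names are illustrative, not this
-- repository's actual types. Assumes 0 < gamma < 1 and a non-empty state set.
data MDP s a = MDP
  { states      :: [s]
  , actionsFor  :: s -> [a]
  , transitions :: s -> a -> [(s, Double)]   -- successors with probabilities
  , reward      :: s -> Double
  , gamma       :: Double                    -- discount factor
  }

-- Value Iteration (Fig 17.4): apply the Bellman update to every state until
-- the largest change in utility falls below the epsilon-based threshold.
valueIteration :: Ord s => MDP s a -> Double -> Map.Map s Double
valueIteration mdp eps = go (Map.fromList [ (s, 0) | s <- states mdp ])
  where
    go u
      | delta < eps * (1 - gamma mdp) / gamma mdp = u'
      | otherwise                                 = go u'
      where
        u'    = Map.fromList [ (s, bellman s) | s <- states mdp ]
        delta = maximum [ abs (u' Map.! s - u Map.! s) | s <- states mdp ]
        bellman s = reward mdp s + gamma mdp * best
          where
            qs   = [ sum [ p * (u Map.! s') | (s', p) <- transitions mdp s a ]
                   | a <- actionsFor mdp s ]
            best = if null qs then 0 else maximum qs
```

Policy iteration alternates policy evaluation with greedy policy improvement instead of iterating the Bellman optimality update directly.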
Part V. Learning
18. Learning from Examples
Completed:
- Decision Tree Learning (Fig 18.5)
- Cross-Validation (Fig 18.8)
- Linear Regression (sketched below)
- Logistic Regression
To do:
- Decision List Learning (Fig 18.11)
- Artificial Neural Networks
- Back Prop Learning (Fig 18.24)
- Nearest Neighbour
- Nonparametric Regression
- Regression Trees
- Support Vector Machines
- AdaBoost (Fig 18.34)
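Univariate linear regression, for instance, has a closed-form least-squares solution; the following is a self-contained sketch rather than the repository's implementation:

```haskell
-- Ordinary least squares for y = a + b * x; returns (a, b).
-- Assumes at least two points with distinct x values.
linearRegression :: [(Double, Double)] -> (Double, Double)
linearRegression pts = (a, b)
  where
    n  = fromIntegral (length pts)
    mx = sum (map fst pts) / n
    my = sum (map snd pts) / n
    b  = sum [ (x - mx) * (y - my) | (x, y) <- pts ]
       / sum [ (x - mx) * (x - mx) | (x, _) <- pts ]
    a  = my - b * mx

-- For example, linearRegression [(1,1),(2,2),(3,3)] gives (0.0, 1.0).
```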
20. Statistical Learning
To do:
- Naive Bayes
21. Reinforcement Learning
To do:
- TD-Learning
- Q-Learning
- SARSA