Some resources for ML research
Personal and biased selection of ML resources.
Disclaimer: I'm a novice in ML research, and I have read only a few of the resources on this list.
Table of Contents
- Beginner's Guide
- Machine Learning
- Deep Learning
- Generative Model
- Reinforcement Learning
- Graphical Model
- Optimization
- Learning Theory
- Statistics
- Topics in Machine Learning
- Math Backgrounds
- Blogs
Beginner's Guide
Must Read
- Machine Learning: A Probabilistic Perspective (Murphy)
- Deep Learning (Goodfellow et al.)
- Reinforcement Learning: An Introduction (Sutton & Barto)
Recommended
- Convex Optimization (Boyd & Vandenberghe)
- Graphical Models, Exponential Families, and Variational Inference (Wainwright & Jordan)
- Learning from Data (Abu-Mostafa) -> for those interested in learning theory
Recent Topics
- Read research blogs (e.g., OpenAI, BAIR, CMU)
- Read lectures from Berkeley, Stanford, CMU or UofT (e.g., unsupervised learning)
- There are many other good sources, but I have stopped keeping this list up to date
Machine Learning
There are many ML books, but most of them are encyclopedic.
I recommend taking a course that uses the Murphy or Bishop book.
Textbook
- Machine Learning: A Probabilistic Perspective (Murphy) ✨
- Pattern Recognition and Machine Learning (Bishop) ✨
- The Elements of Statistical Learning (Hastie et al.)
- Pattern Classification (Duda et al.)
- Bayesian Reasoning and Machine Learning (Barber)
Lecture
- Stanford CS229 Machine Learning ✨
- CMU 10701 Machine Learning
- CMU 10702 Statistical Machine Learning
- Oxford Machine Learning
Deep Learning
Goodfellow et al. is the new classic.
For vision and NLP, Stanford lectures would be helpful.
Textbook
- Deep Learning (Goodfellow et al.) ✨
Lecture (Practice)
- Deep Learning book ✨
- Stanford CS231n Convolutional Neural Networks for Visual Recognition ✨
- Stanford CS224d Deep Learning for Natural Language Processing
- Stanford CS224s Spoken Language Processing
- Oxford Deep Natural Language Processing
- CMU 11747 Neural Networks for NLP
Lecture (Theory)
Generative Model
I separated generative models into an independent topic,
since I think it is a large and important area.
Lecture
- Toronto CSC2541 Differentiable Inference and Generative Models
- Toronto CSC2547 Learning Discrete Latent Structure
- Toronto CSC2541 Scalable and Flexible Models of Uncertainty
Reinforcement Learning
For classic (non-deep) RL, Sutton & Barto is the standard text.
For deep RL, the lectures from Berkeley/CMU look good.
Textbook
- Reinforcement Learning: An Introduction (Sutton & Barto) ✨
Lecture
- UCL Reinforcement Learning ✨
- Berkeley CS294 Deep Reinforcement Learning ✨
- CMU 10703 Deep Reinforcement Learning and Control
Graphical Model
Koller & Friedman is comprehensive, but too encyclopedic.
I recommend taking an introductory course that uses the Koller & Friedman book.
Wainwright & Jordan focuses only on variational inference,
but it gives really good intuition for probabilistic models.
Textbook
- Probabilistic Graphical Models: Principles and Techniques (Koller & Friedman)
- Graphical Models, Exponential Families, and Variational Inference (Wainwright & Jordan) ✨
Lecture
Optimization
Boyd & Vandenberghe is the classic, but I think it's too boring.
Reading chapters 2-5 would be enough.
Bertsekas concentrates more on convex analysis,
while Nocedal & Wright concentrates more on optimization algorithms.
Textbook
- Convex Optimization (Boyd & Vandenberghe) ✨
- Convex Optimization Theory (Bertsekas)
- Numerical Optimization (Nocedal & Wright)
Lecture
- Stanford EE364a Convex Optimization I ✨
- Stanford EE364b Convex Optimization II
- MIT 6.253 Convex Analysis and Optimization
Learning Theory
In my understanding, there are two major topics in learning theory:
- Learning Theory: VC-dimension, PAC-learning
- Online Learning: regret bound, multi-armed bandit
For learning theory, Kearns & Vazirani is the classic, but it's somewhat old-fashioned.
Abu-Mostafa is a good introductory book, and I think it's enough for most people.
For online learning, Cesa-Bianchi & Lugosi is the classic.
For multi-armed bandit, Bubeck & Cesa-Bianchi provides a good survey.
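To make the distinction concrete, here is a minimal sketch in my own notation (not taken from any particular book above) of the central quantity in each area: a PAC-style generalization bound for a finite hypothesis class, and the regret of an online learner.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% PAC-style generalization: for a finite hypothesis class $H$ and $n$ i.i.d. samples,
% with probability at least $1-\delta$, every $h \in H$ satisfies
\[
  R(h) \le \widehat{R}(h) + \sqrt{\frac{\ln|H| + \ln(2/\delta)}{2n}},
\]
% where $R$ is the true risk and $\widehat{R}$ the empirical risk
% (Hoeffding's inequality plus a union bound over $H$).
%
% Online learning: after $T$ rounds with losses $\ell_t$ and actions $a_t$,
\[
  \mathrm{Regret}_T = \sum_{t=1}^{T} \ell_t(a_t) - \min_{a} \sum_{t=1}^{T} \ell_t(a),
\]
% and the goal is a bound that is sublinear in $T$.
\end{document}
```

Roughly, VC-dimension replaces the $\ln|H|$ term when $H$ is infinite, and the multi-armed bandit setting is the special case where only the loss of the chosen action $\ell_t(a_t)$ is observed.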
Textbook (Learning Theory)
- Learning from Data (Abu-Mostafa) ✨
- Foundations of Machine Learning (Mohri et al.)
- An Introduction to Computational Learning Theory (Kearns & Vazirani)
Textbook (Online Learning)
- Prediction, Learning, and Games (Cesa-Bianchi & Lugosi)
- Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems (Bubeck & Cesa-Bianchi)
Lecture
- Caltech Learning from Data ✨
- CMU 15859 Machine Learning Theory
- Berkeley CS281b/Stat241b Statistical Learning Theory
- MIT 9.520 Statistical Learning Theory and Applications
Statistics
Statistics is a broad area, so I have listed only a few topics.
For advanced topics, the lectures from Berkeley/Stanford/CMU/MIT look really cool.
Textbook (Statistical Inference)
- All of Statistics (Wasserman)
- Computer Age Statistical Inference (Efron & Hastie) ✨
- Time Series Analysis and Its Applications: With R Examples (Shumway & Stoffer)
Textbook (Nonparametrics)
- All of Nonparametric Statistics (Wasserman)
- Introduction to Nonparametric Estimation (Tsybakov)
- Gaussian Processes for Machine Learning (Rasmussen & Williams) ✨
- Bayesian Nonparametrics (Ghosh & Ramamoorthi) ✨
Textbook (Advanced Topics)
- High-Dimensional Statistics: A Non-Asymptotic Viewpoint (Wainwright) ✨
- Statistics for High-Dimensional Data (Bühlmann & van de Geer)
- Asymptotic Statistics (van der Vaart)
- Empirical Processes in M-Estimation (van de Geer)
Lecture
- Berkeley Stat210a Theoretical Statistics I
- Berkeley Stat210b Theoretical Statistics II
- Stanford Stat300a Theory of Statistics
- Stanford CS369m Algorithms for Massive Data Set Analysis
- CMU 36755 Advanced Statistical Theory I
- MIT 18.S997 High-Dimensional Statistics
Topics in Machine Learning
Miscellaneous topics related to machine learning.
There are many more subfields, but I won't list them all.
Information Theory
- Elements of Information Theory (Cover & Thomas)
- Information Theory, Inference, and Learning Algorithms (MacKay)
Network Science
- Networks, Crowds, and Markets (Easley & Kleinberg)
- Social and Economic Networks (Jackson)
Markov Chain
- Markov Chains (Norris)
- Markov Chains and Mixing Times (Levin et al.)
Game Theory
- Algorithmic Game Theory (Nisan et al.)
- Multiagent Systems (Shoham & Leyton-Brown)
Combinatorics
- The Probabilistic Method (Alon & Spencer)
- A First Course in Combinatorial Optimization (Lee)
Algorithm
- Introduction to Algorithms (Cormen et al.)
- Randomized Algorithms (Motwani & Raghavan)
- Approximation Algorithms (Vazirani)
Geometric View
- Topological Data Analysis (Wasserman)
- Methods of Information Geometry (Amari & Nagaoka)
- Algebraic Geometry and Statistical Learning Theory (Watanabe)
Some Lectures
Math Backgrounds
I selected essential topics for machine learning.
Personally, I think more analysis, matrix theory, and geometry never hurts.
Probability
- Probability: Theory and Examples (Durrett)
- Theoretical Statistics (Keener)
- Stochastic Processes (Bass)
- Probability and Statistics Cookbook (Vallentin)
Linear Algebra
- Linear Algebra (Hoffman & Kunze)
- Matrix Analysis (Horn & Johnson)
- Matrix Computations (Golub & Van Loan)
- The Matrix Cookbook (Petersen & Pedersen)
Large Deviations
- Concentration Inequalities and Martingale Inequalities (Chung & Lu)
- An Introduction to Matrix Concentration Inequalities (Tropp)