# Kaggle solutions
I've been using Kaggle as an excuse to learn techniques in machine learning/artificial intelligence.
## Resources I've been learning from
Here are some primary resources I've been learning from (in rough chronological order). For reference, I started from an extensive programming background, a decent but rusty math background, and a rudimentary background in machine learning.
- http://karpathy.github.io/2015/05/21/rnn-effectiveness/: It was fun to play with the released code, even though I didn't yet know what many of the parameters meant.
- http://www.pyimagesearch.com/2014/09/22/getting-started-deep-learning-python/: Didn't worry too much about the details of Deep Belief Networks, since I've been told those aren't behind the most recent advances in deep learning. However, I found a lot of value in actually getting a not-completely-black-boxed neural network up and running.
- http://neuralnetworksanddeeplearning.com/: It wasn't until I worked through this book that I really felt like I understood what was going on (or at least knew enough to have a sense of what I didn't know). I highly recommend going through the entire thing. I was particularly impressed by the author's ability to anticipate my confusions and objections, and to convey intuitions and motivations.
- https://class.coursera.org/ml-005/lecture: Andrew Ng's Machine Learning course. Contains helpful details on a number of topics I hadn't seen before. The course moves more slowly than I was hoping, but with the right cherry-picking it felt pretty useful.
- http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf: Useful for learning practical tricks for getting better performance out of your neural network (one of them, input standardization, is sketched after this list). The first half is readable and immediately useful, though I didn't work through all of the math. The second half seems less so; as the authors themselves conclude, "Classical second-order methods are impractical in almost all useful cases".
- http://deeplearning.net/tutorial/lenet.html: A decent introduction to convolutional neural networks. I wasn't previously familiar with convolutions and didn't fully understand them until I'd read http://www.songho.ca/dsp/convolution/convolution.html (see the convolution sketch after this list).
- http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf: Ilya Sutskever's PhD thesis. Contains thorough background on recurrent neural networks, with many experiments and tricks for training your own (the basic forward pass is sketched after this list).
- http://andrew.gibiansky.com/blog/machine-learning/conjugate-gradient/: This whole blog is great; it has good exposition of some of the more mathematically involved techniques (a conjugate gradient sketch appears after this list).
- http://arxiv.org/pdf/1211.5063v2.pdf: "On the difficulty of training recurrent neural networks". Analyzes the exploding and vanishing gradient problems and proposes gradient clipping as a remedy (sketched after this list).
- http://deepdish.io/2015/02/24/network-initialization/: A more recent overview of initialization techniques (two standard schemes are sketched after this list).
- http://yyue.blogspot.com/2015/01/a-brief-overview-of-deep-learning.html: Great overview of how to think about deep neural networks, and how to train them in practice.
- https://christopherolah.wordpress.com/: Many amazing blog posts which explore deep concepts in accessible ways.
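
## Sketches of some techniques above

These are my own minimal sketches in numpy, not code from the resources themselves; names, shapes, and data are made up for illustration.

First, the input-standardization trick from "Efficient BackProp": rescale each input feature to zero mean and unit variance so that no single feature dominates the early gradients.

```python
import numpy as np

# Toy design matrix: rows are examples, columns are features.
# The second feature lives on a much smaller scale than the first.
X = np.array([[150.0, 0.2],
              [160.0, 0.4],
              [170.0, 0.9]])

# Standardize each feature (column) to zero mean and unit variance.
mean = X.mean(axis=0)
std = X.std(axis=0)
X_std = (X - mean) / std

print(X_std.mean(axis=0))  # approximately [0, 0]
print(X_std.std(axis=0))   # approximately [1, 1]
```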
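Next, the discrete convolution behind convolutional networks. Computing the 2D "valid" case by hand is what finally made it click for me; the kernel flip is what distinguishes convolution from plain cross-correlation.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """2D discrete convolution, "valid" mode: flip the kernel, then
    slide it over the image, taking a dot product at each position."""
    k = np.flipud(np.fliplr(kernel))  # flipping both axes is what makes
                                      # this a convolution, not a correlation
    kh, kw = k.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])
print(conv2d_valid(image, kernel))  # 3x3 output
```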
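The recurrence at the heart of the RNN material is compact enough to state directly: each hidden state is a function of the current input and the previous hidden state, h_t = tanh(Wxh·x_t + Whh·h_{t-1} + b). A sketch of the forward pass (the variable names and sizes are mine, not the thesis's):

```python
import numpy as np

def rnn_forward(xs, Wxh, Whh, bh, h0):
    """Run a vanilla RNN over a sequence of input vectors:
    h_t = tanh(Wxh @ x_t + Whh @ h_{t-1} + bh)."""
    h = h0
    hs = []
    for x in xs:
        h = np.tanh(Wxh @ x + Whh @ h + bh)
        hs.append(h)
    return hs

np.random.seed(0)
input_size, hidden_size = 3, 5
Wxh = 0.1 * np.random.randn(hidden_size, input_size)
Whh = 0.1 * np.random.randn(hidden_size, hidden_size)
bh = np.zeros(hidden_size)
xs = [np.random.randn(input_size) for _ in range(4)]  # a length-4 sequence
hs = rnn_forward(xs, Wxh, Whh, bh, h0=np.zeros(hidden_size))
print(len(hs), hs[-1].shape)  # 4 (5,)
```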
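Conjugate gradient in its linear form solves Ax = b for a symmetric positive-definite A by choosing each new search direction to be A-orthogonal ("conjugate") to the previous ones, so in exact arithmetic it converges in at most n steps. A sketch of the standard algorithm:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve Ax = b for symmetric positive-definite A."""
    n = b.shape[0]
    max_iter = max_iter if max_iter is not None else n
    x = np.zeros(n)
    r = b - A @ x          # residual (also the negative gradient)
    p = r.copy()           # first direction: steepest descent
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # new direction, conjugate to p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))  # approximately [0.0909, 0.6364]
```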
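The remedy the Pascanu et al. paper proposes for exploding gradients is norm clipping: whenever the gradient's norm exceeds a threshold, rescale it back down to that threshold. A sketch (the threshold value below is arbitrary):

```python
import numpy as np

def clip_gradient(grad, threshold):
    """If ||grad|| > threshold, rescale grad to have norm `threshold`;
    the direction is preserved, only the step size shrinks."""
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)
    return grad

g = np.array([3.0, 4.0])      # norm 5
print(clip_gradient(g, 1.0))  # [0.6, 0.8], norm 1
```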
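Finally, two standard initialization schemes from this literature: Glorot/Xavier initialization scales the weight variance by the layer's fan-in and fan-out, and He initialization scales by fan-in alone (with a factor of 2 to account for ReLUs). A sketch:

```python
import numpy as np

np.random.seed(0)

def glorot_uniform(fan_in, fan_out):
    """Glorot/Xavier: Var(W) = 2 / (fan_in + fan_out), drawn uniformly."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return np.random.uniform(-limit, limit, size=(fan_out, fan_in))

def he_normal(fan_in, fan_out):
    """He et al.: Var(W) = 2 / fan_in, suited to ReLU activations."""
    return np.random.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))

W1 = glorot_uniform(784, 100)
W2 = he_normal(100, 10)
print(W1.std(), W2.std())  # roughly sqrt(2/884) and sqrt(2/100)
```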