awesome-free-deep-learning-papers

Free deep learning papers

Survey / Review

  • Deep learning (2015), Yann LeCun, Yoshua Bengio and Geoffrey Hinton [pdf]
  • Deep learning in neural networks: An overview (2015), J. Schmidhuber [pdf]
  • Representation learning: A review and new perspectives (2013), Y. Bengio et al. [pdf]

Theory / Future

  • Distilling the knowledge in a neural network (2015), G. Hinton et al. [pdf]
  • Deep neural networks are easily fooled: High confidence predictions for unrecognizable images (2015), A. Nguyen et al. [pdf]
  • How transferable are features in deep neural networks? (2014), J. Yosinski et al. (Bengio) [pdf]
  • Why does unsupervised pre-training help deep learning? (2010), D. Erhan et al. (Bengio) [pdf]
  • Understanding the difficulty of training deep feedforward neural networks (2010), X. Glorot and Y. Bengio [pdf] (the initialization it proposes is sketched after this list)
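
The Glorot and Bengio (2010) entry above is best remembered for its initialization scheme. A minimal NumPy sketch of that scheme; the layer sizes are chosen purely for illustration:

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng=np.random.default_rng(0)):
    """Xavier/Glorot init: W ~ U(-a, a) with a = sqrt(6 / (fan_in + fan_out)).

    Scaling by fan-in and fan-out keeps activation and gradient variance
    roughly constant across layers, the training difficulty the paper
    diagnoses in deep feedforward networks.
    """
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W1 = glorot_uniform(784, 256)   # e.g. the first layer of an MNIST classifier
print(W1.std())                 # ~ sqrt(2 / (784 + 256)), about 0.044
```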

Optimization / Regularization

  • Taking the human out of the loop: A review of bayesian optimization (2016), B. Shahriari et al. [pdf]
  • Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (2015), S. Ioffe and C. Szegedy [pdf]
  • Delving deep into rectifiers: Surpassing human-level performance on imagenet classification (2015), K. He et al. [pdf]
  • Dropout: A simple way to prevent neural networks from overfitting (2014), N. Srivastava et al. (Hinton) [pdf]
  • Adam: A method for stochastic optimization (2014), D. Kingma and J. Ba [pdf] (the update rule is sketched after this list)
  • Regularization of neural networks using dropconnect (2013), L. Wan et al. (LeCun) [pdf]
  • Improving neural networks by preventing co-adaptation of feature detectors (2012), G. Hinton et al. [pdf]
  • Spatial pyramid pooling in deep convolutional networks for visual recognition (2014), K. He et al. [pdf]
  • Random search for hyper-parameter optimization (2012), J. Bergstra and Y. Bengio [pdf]
  • Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, by Shaoqing R., Kaiming H., Ross B. G. & Jian S. (2015) (Cited: 1,421) [pdf] In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position.
  • Asynchronous methods for deep reinforcement learning, by Volodymyr M., Adrià P. B., Mehdi M., Alex G., Tim H. et al. (2016) (Cited: 472) [pdf] The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
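
Several papers in this section boil down to a few update equations. As one example, a minimal NumPy sketch of the Adam update from Kingma and Ba (2014, above); the quadratic toy objective is purely illustrative:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: bias-corrected running moments of the gradient."""
    m = b1 * m + (1 - b1) * grad         # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2    # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)            # correct the bias from zero init
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(x) = ||x||^2, whose gradient is 2x.
x = np.array([3.0, -2.0])
m = v = np.zeros_like(x)
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.05)
print(x)   # close to [0, 0]
```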

Network Models

  • Deep residual learning for image recognition (2016), K. He et al. (Microsoft) [pdf] (the residual block is sketched after this list)
  • Going deeper with convolutions (2015), C. Szegedy et al. (Google) [pdf]
  • Fast R-CNN (2015), R. Girshick [pdf]
  • Very deep convolutional networks for large-scale image recognition (2014), K. Simonyan and A. Zisserman [pdf]
  • Fully convolutional networks for semantic segmentation (2015), J. Long et al. [pdf]
  • OverFeat: Integrated recognition, localization and detection using convolutional networks (2014), P. Sermanet et al. (LeCun) [pdf]
  • Visualizing and understanding convolutional networks (2014), M. Zeiler and R. Fergus [pdf]
  • Maxout networks (2013), I. Goodfellow et al. (Bengio) [pdf]
  • ImageNet classification with deep convolutional neural networks (2012), A. Krizhevsky et al. (Hinton) [pdf]
  • Large scale distributed deep networks (2012), J. Dean et al. [pdf]
  • Deep sparse rectifier neural networks (2011), X. Glorot et al. (Bengio) [pdf]
  • Human-level control through deep reinforcement learning, by Volodymyr M., Koray K., David S., Andrei A. R., Joel V et al (2015) (Cited: 2,086) [pdf] Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games.
  • Conditional Random Fields as Recurrent Neural Networks, by Shuai Z., Sadeep J., Bernardino R., Vibhav V. et al (2015) (Cited: 760) [pdf] We introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate mean-field approximate inference for the Conditional Random Fields with Gaussian pairwise potentials as Recurrent Neural Networks.
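
The residual learning idea behind He et al. (2016, above) fits in a few lines: each block learns a correction F(x) that is added back onto its input. A framework-free NumPy sketch, with fully connected layers standing in for the paper's convolutions and all sizes illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = relu(F(x) + x) with residual F(x) = W2 @ relu(W1 @ x).

    If the identity mapping is already near-optimal, the weights only have
    to push F(x) toward zero, which He et al. argue is easier to optimize
    than fitting the full mapping in very deep networks.
    """
    return relu(W2 @ relu(W1 @ x) + x)   # the skip connection adds x back

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=d)
W1 = rng.normal(scale=0.1, size=(d, d))
W2 = rng.normal(scale=0.1, size=(d, d))
print(residual_block(x, W1, W2).shape)   # (8,)
```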

Image

  • Imagenet large scale visual recognition challenge (2015), O. Russakovsky et al. [pdf]
  • Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks (2015), S. Ren et al. [pdf] (the NMS step these detectors share is sketched after this list)
  • DRAW: A recurrent neural network for image generation (2015), K. Gregor et al. [pdf]
  • Rich feature hierarchies for accurate object detection and semantic segmentation (2014), R. Girshick et al. [pdf]
  • Learning and transferring mid-Level image representations using convolutional neural networks (2014), M. Oquab et al. [pdf]
  • DeepFace: Closing the Gap to Human-Level Performance in Face Verification (2014), Y. Taigman et al. (Facebook) [pdf]
  • Decaf: A deep convolutional activation feature for generic visual recognition (2013), J. Donahue et al. [pdf]
  • Learning Hierarchical Features for Scene Labeling (2013), C. Farabet et al. (LeCun) [pdf]
  • Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis (2011), Q. Le et al. [pdf]
  • Learning mid-level features for recognition (2010), Y. Boureau (LeCun) [pdf]
  • Long-term recurrent convolutional networks for visual recognition and description, by Jeff D., Lisa Anne H., Sergio G., Marcus R., Subhashini V. et al. (2015) (Cited: 1,285) [pdf] In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they can be compositional in spatial and temporal “layers”.
  • U-Net: Convolutional Networks for Biomedical Image Segmentation, by Olaf R., Philipp F. & Thomas B. (2015) (Cited: 975) [pdf] There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently.
  • Image Super-Resolution Using Deep Convolutional Networks, by Chao D., Chen C., Kaiming H. & Xiaoou T. (2014) (Cited: 591) [pdf] Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one.
  • Salient Object Detection: A Discriminative Regional Feature Integration Approach, by Huaizu J., Jingdong W., Zejian Y., Yang W., Nanning Z. & Shipeng Li. (2013) (Cited: 518) [pdf] In this paper, we formulate saliency map computation as a regression problem. Our method, which is based on multi-level image segmentation, utilizes the supervised learning approach to map the regional feature vector to a saliency score.
  • Deep Learning Face Attributes in the Wild, by Ziwei L., Ping L., Xiaogang W. & Xiaoou T. (2015) (Cited: 401) [pdf] This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations.
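
The detection papers above (R-CNN, Faster R-CNN) score many overlapping box proposals and then prune them. A minimal NumPy sketch of intersection-over-union and the greedy non-maximum suppression step these pipelines share; the 0.5 threshold is a common but illustrative choice:

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one (x1, y1, x2, y2) box against many."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the best-scoring box, drop heavy overlaps, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        order = order[1:][iou(boxes[i], boxes[order[1:]]) < thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30.0]])
print(nms(boxes, np.array([0.9, 0.8, 0.7])))   # -> [0, 2]
```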

Caption

  • Show, attend and tell: Neural image caption generation with visual attention (2015), K. Xu et al. (Bengio) [pdf] (the soft-attention step is sketched after this list)
  • Show and tell: A neural image caption generator (2015), O. Vinyals et al. [pdf]
  • Long-term recurrent convolutional networks for visual recognition and description (2015), J. Donahue et al. [pdf]
  • Deep visual-semantic alignments for generating image descriptions (2015), A. Karpathy and L. Fei-Fei [pdf]
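
The attention captioner of Xu et al. (2015, above) hinges on one computation: a softmax over alignment scores turns per-region image features into a single context vector at each decoding step. A minimal NumPy sketch of that step; the bilinear score is a simpler stand-in for the paper's MLP score, and all dimensions are illustrative:

```python
import numpy as np

def soft_attention(features, query, W):
    """context = sum_i alpha_i * a_i with alpha = softmax over region scores."""
    scores = features @ W @ query        # one alignment score per image region
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                 # softmax attention weights
    return alpha @ features, alpha       # weighted average of region features

rng = np.random.default_rng(0)
a = rng.normal(size=(14 * 14, 512))      # 196 region features, as in the paper
h = rng.normal(size=256)                 # decoder hidden state
W = rng.normal(scale=0.01, size=(512, 256))
context, alpha = soft_attention(a, h, W)
print(context.shape, round(alpha.sum(), 6))   # (512,) 1.0
```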

Video / Human Activity

  • Large-scale video classification with convolutional neural networks (2014), A. Karpathy et al. (Fei-Fei) [pdf]
  • A survey on human activity recognition using wearable sensors (2013), O. Lara and M. Labrador [pdf]
  • 3D convolutional neural networks for human action recognition (2013), S. Ji et al. [pdf]
  • Deeppose: Human pose estimation via deep neural networks (2014), A. Toshev and C. Szegedy [pdf]
  • Action recognition with improved trajectories (2013), H. Wang and C. Schmid [pdf]
  • Beyond short snippets: Deep networks for video classification, by Joe Y. Ng, Matthew J. H., Sudheendra V., Oriol V., Rajat M. & George T. (2015) (Cited: 533) [pdf] In this work, we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted.

Word Embedding

  • Glove: Global vectors for word representation (2014), J. Pennington et al. [pdf]
  • Sequence to sequence learning with neural networks (2014), I. Sutskever et al. [pdf]
  • Distributed representations of sentences and documents (2014), Q. Le and T. Mikolov (Google) [pdf]
  • Distributed representations of words and phrases and their compositionality (2013), T. Mikolov et al. (Google) [pdf] (the negative-sampling update is sketched after this list)
  • Efficient estimation of word representations in vector space (2013), T. Mikolov et al. (Google) [pdf]
  • Word representations: a simple and general method for semi-supervised learning (2010), J. Turian (Bengio) [pdf]
  • Visual Madlibs: Fill in the Blank Description Generation and Question Answering, by Licheng Y., Eunbyung P., Alexander C. B. & Tamara L. B. (2015) (Cited: 510) [pdf] In this paper, we introduce a new dataset consisting of 360,001 focused natural language descriptions for 10,738 images. This dataset, the Visual Madlibs dataset, is collected using automatically produced fill-in-the-blank templates designed to gather targeted descriptions about: people and objects, their appearances, activities, and interactions, as well as inferences about the general scene or its broader context.
  • Character-level convolutional networks for text classification, by Xiang Z., Junbo Jake Z. & Yann L. (2015) (Cited: 401) [pdf] This article offers an empirical exploration on the use of character-level convolutional networks (ConvNets) for text classification. We constructed several largescale datasets to show that character-level convolutional networks could achieve state-of-the-art or competitive results.
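
The word2vec papers above (Mikolov et al., 2013) train embeddings so that a word's vector predicts its context words; with negative sampling, each (word, context) pair reduces to a logistic-regression update. A minimal NumPy sketch of one such update (vocabulary size, dimensions, and indices are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgns_update(W_in, W_out, center, context, negatives, lr=0.025):
    """One skip-gram negative-sampling step, updating both tables in place.

    Pushes the center word's vector toward its observed context vector
    (label 1) and away from a few randomly sampled words (label 0).
    """
    v = W_in[center]
    grad_v = np.zeros_like(v)
    for idx, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[idx]
        g = sigmoid(v @ u) - label   # gradient of the logistic loss
        grad_v += g * u
        W_out[idx] -= lr * g * v     # update the output ("context") vector
    W_in[center] -= lr * grad_v      # update the input ("word") vector

vocab, dim = 1000, 50
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(vocab, dim))
W_out = np.zeros((vocab, dim))
sgns_update(W_in, W_out, center=3, context=7, negatives=[42, 99])
```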

Machine Translation / QnA

  • Towards ai-complete question answering: A set of prerequisite toy tasks (2015), J. Weston et al. [pdf]
  • Neural machine translation by jointly learning to align and translate (2014), D. Bahdanau et al. (Bengio) [pdf]
  • Learning phrase representations using RNN encoder-decoder for statistical machine translation (2014), K. Cho et al. (Bengio) [pdf]
  • A convolutional neural network for modelling sentences (2014), N. Kalchbrenner et al. [pdf]
  • Convolutional neural networks for sentence classification (2014), Y. Kim [pdf]
  • The Stanford CoreNLP natural language processing toolkit (2014), C. Manning et al. [pdf]
  • Recursive deep models for semantic compositionality over a sentiment treebank (2013), R. Socher et al. [pdf]
  • Natural language processing (almost) from scratch (2011), R. Collobert et al. [pdf]
  • Recurrent neural network based language model (2010), T. Mikolov et al. [pdf] (one step of the recurrence is sketched after this list)
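
The Mikolov et al. (2010) entry above is the simplest model in this section: a hidden state summarizes the word history, and a softmax over it predicts the next word; the encoder-decoder translation models above build on the same recurrence. A minimal NumPy sketch of one step (all sizes illustrative):

```python
import numpy as np

def rnn_lm_step(x_id, h, Wxh, Whh, Why):
    """One step of a simple recurrent language model."""
    h = np.tanh(Wxh[:, x_id] + Whh @ h)   # column of Wxh = embedding of word x_id
    logits = Why @ h
    p = np.exp(logits - logits.max())
    return h, p / p.sum()                 # new state, next-word distribution

vocab, hidden = 5000, 128
rng = np.random.default_rng(0)
Wxh = rng.normal(scale=0.01, size=(hidden, vocab))
Whh = rng.normal(scale=0.01, size=(hidden, hidden))
Why = rng.normal(scale=0.01, size=(vocab, hidden))
h = np.zeros(hidden)
h, p = rnn_lm_step(42, h, Wxh, Whh, Why)
print(p.argmax(), round(p.sum(), 6))      # most likely next word id, 1.0
```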

Speech / Etc.

  • Speech recognition with deep recurrent neural networks (2013), A. Graves (Hinton) [pdf]
  • Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups (2012), G. Hinton et al. [pdf]
  • Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition (2012), G. Dahl et al. [pdf]

RL / Robotics

  • Mastering the game of Go with deep neural networks and tree search (2016), D. Silver et al. (DeepMind) [pdf]
  • Human-level control through deep reinforcement learning (2015), V. Mnih et al. (DeepMind) [pdf] (the TD target it regresses toward is sketched after this list)
  • Deep learning for detecting robotic grasps (2015), I. Lenz et al. [pdf]
  • Playing atari with deep reinforcement learning (2013), V. Mnih et al. (DeepMind) [pdf]
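
The DQN papers above (Mnih et al.) regress a network's Q-values toward a bootstrapped temporal-difference target. A minimal NumPy sketch of that target computation for a batch of transitions (all values illustrative):

```python
import numpy as np

def dqn_targets(q_next, rewards, dones, gamma=0.99):
    """TD targets y = r + gamma * max_a' Q(s', a'), dropping the bootstrap
    term on terminal transitions; q_next comes from the target network."""
    return rewards + gamma * q_next.max(axis=1) * (1.0 - dones)

q_next = np.array([[0.5, 1.2, 0.3],    # Q(s', .) for two sampled transitions
                   [2.0, 0.1, 0.4]])
rewards = np.array([1.0, 0.0])
dones = np.array([0.0, 1.0])           # the second transition ended the episode
print(dqn_targets(q_next, rewards, dones))   # -> [1 + 0.99 * 1.2, 0.0]
```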

Unsupervised

  • Building high-level features using large scale unsupervised learning (2013), Q. Le et al. [pdf]
  • Contractive auto-encoders: Explicit invariance during feature extraction (2011), S. Rifai et al. (Bengio) [pdf]
  • An analysis of single-layer networks in unsupervised feature learning (2011), A. Coates et al. [pdf]
  • Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion (2010), P. Vincent et al. (Bengio) [pdf] (a minimal denoising autoencoder is sketched after this list)
  • A practical guide to training restricted boltzmann machines (2010), G. Hinton [pdf]
  • Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, by Alec R., Luke M. & Soumith C. (2015) (Cited: 1,054) [pdf] In this work, we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning.
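
The denoising autoencoder of Vincent et al. (above) corrupts each input and trains the network to reconstruct the clean version, so the learned features must capture structure rather than copy pixels. A minimal NumPy sketch with masking noise; the toy data and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

d, k, lr = 32, 8, 0.5
W1, b1 = rng.normal(scale=0.1, size=(k, d)), np.zeros(k)
W2, b2 = rng.normal(scale=0.1, size=(d, k)), np.zeros(d)

X = (rng.random((200, d)) > 0.5).astype(float)   # toy binary data
for epoch in range(30):
    for x in X:
        x_tilde = x * (rng.random(d) > 0.3)      # masking noise: drop ~30% of inputs
        h = sigmoid(W1 @ x_tilde + b1)           # encode the corrupted input
        x_hat = sigmoid(W2 @ h + b2)             # decode
        g_out = x_hat - x                        # cross-entropy gradient at the logits
        g_h = (W2.T @ g_out) * h * (1 - h)       # backprop into the hidden layer
        W2 -= lr * np.outer(g_out, h); b2 -= lr * g_out
        W1 -= lr * np.outer(g_h, x_tilde); b1 -= lr * g_h

H = sigmoid(W1 @ X.T + b1[:, None])
print(np.mean((sigmoid(W2 @ H + b2[:, None]).T - X) ** 2))   # reconstruction MSE
```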

Hardware / Software

  • TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems (2016), M. Abadi et al. (Google) [pdf]
  • TensorFlow: a system for large-scale machine learning, by Martín A., Paul B., Jianmin C., Zhifeng C., Andy D. et al. (2016) (Cited: 2,227) [pdf] TensorFlow supports a variety of applications, with a focus on training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. A minimal usage sketch follows this list.
  • MatConvNet: Convolutional neural networks for matlab (2015), A. Vedaldi and K. Lenc [pdf] It exposes the building blocks of CNNs as easy-to-use MATLAB functions, providing routines for computing linear convolutions with filter banks, feature pooling, and many more. This document provides an overview of CNNs and how they are implemented in MatConvNet and gives the technical details of each computational block in the toolbox.
  • Caffe: Convolutional architecture for fast feature embedding (2014), Y. Jia et al. [pdf]
  • Theano: A Python framework for fast computation of mathematical expressions (2016), by Rami A., Guillaume A., Amjad A., Christof A. et al. (Cited: 451) [pdf] Theano is a Python library that allows one to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Since its introduction, it has been one of the most used CPU and GPU mathematical compilers, especially in the machine learning community, and has shown steady performance improvements.
  • Theano: new features and speed improvements (2012), F. Bastien et al. (Bengio) [pdf]
  • Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, by Christian S., Sergey I., Vincent V. & Alexander A A. (2017) (Cited: 520) [pdf] Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. With an ensemble of three residual and one Inception-v4, we achieve 3.08% top-5 error on the test set of the ImageNet classification (CLS) challenge.
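
For a feel of the TensorFlow papers above: the framework's core design is to describe a model as a dataflow computation and let the runtime place and execute it. A minimal sketch, assuming TensorFlow 2.x with its bundled Keras API; the toy regression task is purely illustrative:

```python
import numpy as np
import tensorflow as tf   # assumes TensorFlow 2.x

# Describe the computation first, then let the runtime execute it.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(256, 10).astype("float32")   # toy regression data
y = X.sum(axis=1, keepdims=True)
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:2], verbose=0))
```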

Free Deep Learning Books

This collection includes books on all aspects of deep learning. It begins with titles that cover the subject as a whole, before moving on to work that should help beginners expand their knowledge from machine learning to deep learning. The list concludes with books that discuss neural networks, both titles that introduce the topic and ones that go in depth, covering the architecture of such networks.

  • Deep Learning, by Ian Goodfellow, Yoshua Bengio and Aaron Courville. The Deep Learning textbook is a resource intended to help students and practitioners enter the field of machine learning in general and deep learning in particular. The online version of the book is now complete and will remain available online for free.
  • Deep Learning Tutorial, by LISA Lab, University of Montreal. Developed by the LISA lab at the University of Montreal, this free and concise tutorial, presented in the form of a book, explores the basics of machine learning. The book emphasizes using the Theano library (originally developed by the university itself) to build deep learning models in Python.
  • Deep Learning: Methods and Applications, by Li Deng and Dong Yu. This book provides an overview of general deep learning methodology and its applications to a variety of signal and information processing tasks.
  • First Contact with TensorFlow, get started with Deep Learning Programming, by Jordi Torres. This book is oriented to engineers with only some basic understanding of Machine Learning who want to expand their knowledge in the exciting world of Deep Learning, with a hands-on approach that uses TensorFlow.
  • Neural Networks and Deep Learning, by Michael Nielsen. This book teaches you about Neural networks, a beautiful biologically-inspired programming paradigm which enables a computer to learn from observational data. It also covers deep learning, a powerful set of techniques for learning in neural networks.
  • A Brief Introduction to Neural Networks, by David Kriesel. This title covers neural networks in depth. Neural networks are a bio-inspired mechanism of data processing that enables computers to learn in a way technically similar to a brain, and even to generalize once solutions to enough problem instances have been taught. Available in English and German.
  • Neural Network Design (2nd edition), by Martin T. Hagan, Howard B. Demuth, Mark H. Beale and Orlando De Jesús. Neural Network Design (2nd Edition) provides a clear and detailed survey of fundamental neural network architectures and learning rules. The authors emphasize a fundamental understanding of the principal neural networks and the methods for training them, and discuss applications of networks to practical engineering problems in pattern recognition, clustering, signal processing, and control systems. Readability and natural flow of material are emphasized throughout the text.
  • Neural Networks and Learning Machines (3rd edition), by Simon Haykin. This third edition of Simon Haykin’s book provides an up-to-date treatment of neural networks in a comprehensive, thorough and readable manner, split into three sections. The book begins with the classical approach to supervised learning, before continuing on to kernel methods based on radial-basis function (RBF) networks. The final part of the book is devoted to regularization theory, which is at the core of machine learning.

License

CC0
