Awesome Torch
A curated list of awesome Torch tutorials, projects and communities.
Table of Contents
Tutorials
- Applied Deep Learning for Computer Vision with Torch CVPR15 Tutorial [Slides]
- Machine Learning with Torch for IPAM Summer School on Deep Learning. [Code]
- Oxford Computer Science - Machine Learning 2015
- Implementing LSTMs with nngraph
- Community Wiki (Cheatsheet) for Torch
- Demos & Tutorials for Torch
- Learn Lua in 15 Minutes
- Torch Starter
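For readers new to both Lua and Torch, the basics covered by the starter links above amount to only a few lines. The snippet below is a generic warm-up (not taken from any particular tutorial) and assumes a stock Torch install run with `th`:

```lua
-- Generic Lua/Torch warm-up; assumes a standard Torch install (`th` interpreter).
require 'torch'

local a = torch.rand(3, 4)           -- 3x4 tensor of uniform random numbers
local b = torch.ones(4, 2)           -- 4x2 tensor of ones
local c = torch.mm(a, b)             -- matrix multiplication, result is 3x2

print(c:size())                      -- torch.LongStorage: 3 2
print(c:sum(), c:mean())             -- basic reductions

-- Plain Lua: tables and numeric for-loops
local squares = {}
for i = 1, 5 do squares[i] = i * i end
print(table.concat(squares, ', '))   -- 1, 4, 9, 16, 25
```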
Model Zoo
Code and related articles. A (#) marker means the authors of the code and of the paper are different.
Recurrent Networks
- SampleRNN (An Unconditional End-to-End Neural Audio Generation Model)
- Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio, SampleRNN: An Unconditional End-to-End Neural Audio Generation Model, [Paper]
- Learning Simple Algorithms from Examples
- Wojciech Zaremba, Tomas Mikolov, Armand Joulin, Rob Fergus, Learning Simple Algorithms from Examples, arXiv:1511.07275 [Paper]
- SCRNN (Structurally Constrained Recurrent Neural Network)
- Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, Marc'Aurelio Ranzato, Learning Longer Memory in Recurrent Neural Networks, arXiv:1412.7753 [Paper]
- Tree-LSTM (Tree-structured Long Short-Term Memory networks)
- Kai Sheng Tai, Richard Socher, Christopher D. Manning, Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks, ACL 2015 [Paper]
- LSTM language model with CNN over characters
- Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush, Character-Aware Neural Language Models, AAAI 2016, [Paper]
- LSTM, GRU, RNN for character-level language (char-rnn), word-rnn
- Andrej Karpathy, Justin Johnson, Li Fei-Fei, Visualizing and Understanding Recurrent Networks, ICLR 2016, [Paper]
- LSTM for word-level language model
- Wojciech Zaremba, Ilya Sutskever, Oriol Vinyals, Recurrent Neural Network Regularization, arXiv:1409.2329 [Paper]
- LSTM
- Wojciech Zaremba, Ilya Sutskever, Learning to Execute, arXiv:1410.4615 [Paper]
- NeuralTalk2 (Show and Tell)
- (#) Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan, Show and Tell: A Neural Image Caption Generator, CVPR 2015 [Paper]
- Seq2Seq
- (#) Ilya Sutskever, Oriol Vinyals, Quoc V. Le, Sequence to Sequence Learning with Neural Networks, NIPS 2014 [Paper]
- sentence2vec
- (#) Ilya Sutskever, Oriol Vinyals, Quoc V. Le, Sequence to Sequence Learning with Neural Networks, NIPS 2014 [Paper]
- LSTM (Sequence to Sequence Learning with Neural Networks)
- (#) Ilya Sutskever, Oriol Vinyals, Quoc V. Le, Sequence to Sequence Learning with Neural Networks, NIPS 2014 [Paper]
- Grid LSTM
- (#) Nal Kalchbrenner, Ivo Danihelka, Alex Graves, Grid Long Short-Term Memory, arXiv:1507.01526, [Paper]
- Recurrent Visual Attention Model
- (#) Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu, Recurrent Models of Visual Attention, NIPS 2014 [Paper]
- DRAW (Deep Recurrent Attentive Writer)
- (#) Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra, DRAW: A Recurrent Neural Network For Image Generation, arXiv:1502.04623 [Paper]
- Pixel RNN
- (#) Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu, Pixel Recurrent Neural Networks, arXiv:1601.06759 [Paper]
- Deeper LSTM+ normalized CNN for Visual Question Answering
- Aishwarya Agrawal, Jiasen Lu, Stanislaw Antol, Margaret Mitchell, C. Lawrence Zitnick, Dhruv Batra, Devi Parikh, VQA: Visual Question Answering, arXiv:1505.00468, [Paper]
- CTCSpeechRecognition
- Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, Zhenyao Zhu, Deep Speech 2: End-to-End Speech Recognition in English and Mandarin, arXiv:1512.02595, [Paper]
- DenseCap
- Justin Johnson, Andrej Karpathy, Li Fei-Fei, DenseCap: Fully Convolutional Localization Networks for Dense Captioning, CVPR 2016, [Paper]
- Sequence-to-Sequence Learning with Attentional Neural Networks
- (#) Minh-Thang Luong, Hieu Pham, Christopher D. Manning, Effective Approaches to Attention-based Neural Machine Translation, EMNLP 2015, [Paper]
- Recurrent Batch Normalization
- (#) Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre, Aaron Courville, Recurrent Batch Normalization, arXiv:1603.09025, [Paper]
- End-to-End Generative Dialogue
- Colton Gyulay, Michael Farrell, End-to-End Generative Dialogue, [Paper]
- ActivityNet
- Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem and Juan Carlos Niebles, Activitynet: A large-scale video benchmark for human activity understanding, CVPR 2015, [Paper]
- SCRNNs
- Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, Marc'Aurelio Ranzato, Learning Longer Memory in Recurrent Neural Networks, arXiv:1412.7753, [Paper]
- Hierarchical Question-Image Co-Attention for Visual Question Answering
- Jiasen Lu, Jianwei Yang, Dhruv Batra, Devi Parikh, Hierarchical Question-Image Co-Attention for Visual Question Answering, arXiv:1606.00061, [Paper]
- ConvLSTM
- Viorica Patraucean, Ankur Handa, Roberto Cipolla, Spatio-temporal video autoencoder with differentiable memory, ICLR 2016 Workshop, [Paper]
Convolutional Networks
- Crepe (Character-level Convolutional Networks for Text Classification)
- Xiang Zhang, Junbo Zhao, Yann LeCun, Character-level Convolutional Networks for Text Classification, NIPS 2015 [Paper]
- DCGAN (Deep Convolutional Generative Adversarial Networks)
- (#) Alec Radford, Luke Metz, Soumith Chintala, Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, arXiv:1511.06434v1 [Paper]
- Inception
- (#) Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich, Going Deeper with Convolutions, CVPR 2015 [Paper]
- inception-v3.torch
- (#) Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna, Rethinking the Inception Architecture for Computer Vision, arXiv:1512.00567, [Paper]
- The inception-resnet-v2 models trained from scratch via torch
- (#) Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, arXiv:1602.07261, [Paper]
- OpenFace (Face recognition with Google's FaceNet deep neural network)
- (#) Florian Schroff, Dmitry Kalenichenko, James Philbin, FaceNet: A Unified Embedding for Face Recognition and Clustering, CVPR 2015 [Paper]
- Neural Style, Neural Art
- (#) Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, A Neural Algorithm of Artistic Style, arXiv:1508.06576 [Paper]
- SRCNN (Super-Resolution Using Deep Convolutional Networks), waifu2x
- (#) Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Image Super-Resolution Using Deep Convolutional Networks, arXiv:1501.00092 [Paper]
- Overfeat
- (#) Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, Yann LeCun, OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks, arXiv:1312.6229 [Paper]
- Very Deep ConvNet (Very Deep Convolutional Networks)
- (#) K. Simonyan, A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, arXiv:1409.1556 [Paper]
- Alexnet, Overfeat, VGG in Torch on multiple GPUs over ImageNet
- Fast neural doodle
- (#) Alex J. Champandard, Semantic Style Transfer and Turning Two-Bit Doodles into Fine Artworks, arXiv:1603.01768 [Paper]
- Texture Networks: Feed-forward Synthesis of Textures and Stylized Images
- Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, Victor Lempitsky, Texture Networks: Feed-forward Synthesis of Textures and Stylized Images, arXiv:1603.03417 [Paper]
- Artistic style transfer for videos
- Manuel Ruder, Alexey Dosovitskiy, Thomas Brox, Artistic style transfer for videos, arXiv:1604.08610 [Paper]
- ResNet training in Torch
- (#) Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Deep Residual Learning for Image Recognition, arXiv:1512.03385, [Paper]
- Deep Networks with Stochastic Depth
- Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Weinberger, Deep Networks with Stochastic Depth, arXiv:1603.09382, [Paper]
- Sentence Convolution Code in Torch
- (#) Yoon Kim, Convolutional Neural Networks for Sentence Classification, arXiv:1408.5882, [Paper]
- MGANs
- Chuan Li, Michael Wand, Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks, arXiv:1604.04382, [Paper]
- Deep Residual Networks with 1K Layers
- Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Identity Mappings in Deep Residual Networks, arXiv:1603.05027, [Paper]
- Multi-Scale Context Aggregation by Dilated Convolutions
- (#) Fisher Yu, Vladlen Koltun, Multi-Scale Context Aggregation by Dilated Convolutions, [Paper]
- CNNMRF
- Chuan Li, Michael Wand, Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis, arXiv:1601.04589, [Paper]
- Stacked Hourglass Networks for Human Pose Estimation (Training Code)
- Alejandro Newell, Kaiyu Yang, Jia Deng, Stacked Hourglass Networks for Human Pose Estimation, arXiv:1603.06937, [Paper]
- Wide Residual Networks
- Sergey Zagoruyko, Nikos Komodakis, Wide Residual Networks, arXiv:1605.07146, [Paper]
- Joint Unsupervised Learning (JULE) of Deep Representations and Image Clusters
- Jianwei Yang, Devi Parikh, Dhruv Batra, Joint Unsupervised Learning of Deep Representations and Image Clusters, CVPR 2016, [Paper]
- Torch implementation of the Fast R-CNN
- (#) Ross Girshick, Fast R-CNN, ICCV 2015, [Paper]
- Learning Deep Representations of Fine-grained Visual Descriptions
- Scott Reed, Zeynep Akata, Honglak Lee, Bernt Schiele, Learning Deep Representations of Fine-grained Visual Descriptions, CVPR 2016, [Paper]
- Generative Adversarial Text-to-Image Synthesis
- Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee, Generative Adversarial Text to Image Synthesis, ICML 2016, [Paper]
- DarkForest, the Facebook Go engine
- Yuandong Tian, Yan Zhu, Better Computer Go Player with Neural Network and Long-term Prediction, ICLR 2016, [Paper]
- 3D CNN
- deepmask
- imagenet-multiGPU.torchnet
- imagenet-multiGPU.torch + fb.resnet.torch in torchnet
- cvpr2016_stylenet
- Edgar Simo-Serra, Hiroshi Ishikawa, Fashion Style in 128 Floats: Joint Ranking and Classification using Weak Data for Feature Extraction, CVPR 2016, [Paper]
- ENet
- Adam Paszke, Abhishek Chaurasia, Sangpil Kim, Eugenio Culurciello, ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation, arXiv:1606.02147, [Paper]
- Oriented Response Networks
- Yanzhao Zhou, Qixiang Ye, Qiang Qiu, Jianbin Jiao, Oriented Response Networks, CVPR 2017, [Paper]
Reinforcement Learning
- Deep Q-network, DeepMind-Atari-Deep-Q-Learner
- Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis, Human-Level Control through Deep Reinforcement Learning, Nature, [Paper]
- Deep Attention Recurrent Q-Network
- (#) Ivan Sorokin, Alexey Seleznev, Mikhail Pavlov, Aleksandr Fedorov, Anastasiia Ignateva, Deep Attention Recurrent Q-Network, NIPS 2015, [Paper]
- Grid World DQN using torch7
- (#) Marc G. Bellemare, Georg Ostrovski, Arthur Guez, Philip S. Thomas, Rémi Munos, Increasing the Action Gap: New Operators for Reinforcement Learning, arXiv:1512.04860, [Paper]
- Deep Q-Networks for Accelerating the Training of Deep Neural Networks
- Jie Fu, Zichuan Lin, Miao Liu, Nicholas Leonard, Jiashi Feng, Tat-Seng Chua, Deep Q-Networks for Accelerating the Training of Deep Neural Networks, arXiv:1606.01467, [Paper]
- ActorMimic
- Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov, Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning, ICLR 2016, [Paper]
- MazeBase: a sandbox for learning from games
- Sainbayar Sukhbaatar, Arthur Szlam, Gabriel Synnaeve, Soumith Chintala, Rob Fergus, MazeBase: A Sandbox for Learning from Games, arXiv:1511.07401, [Paper]
- mario-ai
- This project contains code to train a model that automatically plays the first level of Super Mario World using only raw pixels as input (no hand-engineered features). The technique used is deep Q-learning, as described in the Atari paper (Summary), combined with a Spatial Transformer.
- Deep Successor Reinforcement Learning (DSR)
- Tejas D. Kulkarni, Ardavan Saeedi, Simanta Gautam, Samuel J. Gershman, Deep Successor Reinforcement Learning, arXiv:1606.02396, [Paper]
- ViZDoom
- ViZDoom allows developing AI bots that play Doom using only visual information (the screen buffer). It is intended primarily for research in machine visual learning and, in particular, deep reinforcement learning.
- MIXER - Sequence Level Training with Recurrent Neural Networks
- Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, Wojciech Zaremba, Sequence Level Training with Recurrent Neural Networks, ICLR 2016, [Paper]
- TorchQLearning
- Implementation of a simple example of Q-learning in Torch (a generic sketch of the update appears at the end of this section).
- rltorch
- A reinforcement learning package written in Lua for Torch.
- Opponent Modeling in Deep Reinforcement Learning
- He He, Jordan Boyd-Graber, Kevin Kwok, Hal Daumé III, Opponent Modeling in Deep Reinforcement Learning, ICML 2016, [Paper]
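Several of the entries above (the DQN code, TorchQLearning, mario-ai) revolve around the same temporal-difference update, y = r + gamma * max_a' Q(s', a'). The sketch below is a generic, illustrative version of one such update using a small nn network as the Q-function; the names (`qnet`, `qLearningStep`) and sizes are invented for the example and are not taken from any repository listed here.

```lua
-- Hypothetical single Q-learning step with an nn-based Q-function (illustrative only).
require 'nn'

local nStates, nActions = 4, 2
local gamma, lr = 0.99, 0.01

local qnet = nn.Sequential()
  :add(nn.Linear(nStates, 16))
  :add(nn.ReLU())
  :add(nn.Linear(16, nActions))
local criterion = nn.MSECriterion()

-- state/nextState: 1D tensors of size nStates; action: integer in [1, nActions]
local function qLearningStep(state, action, reward, nextState, terminal)
  -- TD target: reward plus the discounted value of the best next action
  local target = qnet:forward(state):clone()
  local y = reward
  if not terminal then
    y = reward + gamma * qnet:forward(nextState):max()
  end
  target[action] = y

  -- One gradient step toward the target
  local output = qnet:forward(state)
  local loss = criterion:forward(output, target)
  qnet:zeroGradParameters()
  qnet:backward(state, criterion:backward(output, target))
  qnet:updateParameters(lr)
  return loss
end
```

Real DQN implementations add experience replay and a separate target network on top of this basic step.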
ETC
- Neural Attention Model for Abstractive Summarization
- Alexander M. Rush, Sumit Chopra, Jason Weston, A Neural Attention Model for Abstractive Summarization, EMNLP 2015 [Paper]
- Memory Networks
- Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Rob Fergus, End-To-End Memory Networks, arXiv:1503.08895, [Paper]
- Neural Turing Machine
- Alex Graves, Greg Wayne, Ivo Danihelka, Neural Turing Machines, arXiv:1410.5401 [Paper]
- Eyescream (Natural Image Generation using ConvNets)
- Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus, Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks, arXiv:1506.05751 [Paper]
- BNN (Bilingual Neural Networks) with LBL and CNN
- Ke Tran, Arianna Bisazza, Christof Monz, Word Translation Prediction for Morphologically Rich Languages with Bilingual Neural Networks, EMNLP 2014 [Paper]
- Net2Net
- (#) Tianqi Chen, Ian Goodfellow, Jonathon Shlens, Net2Net: Accelerating Learning via Knowledge Transfer, arXiv:1511.05641 [Paper]
- DSSM (Deep Structured Semantic Model)
- (#) Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, Larry Heck, Learning Deep Structured Semantic Models for Web Search using Clickthrough Data, CIKM 2013 [Paper]
- TensorNet (Tensor Train-layer for Neural Nets)
- (#) Alexander Novikov, Dmitry Podoprikhin, Anton Osokin, Dmitry Vetrov, Tensorizing Neural Networks, NIPS 2015 [Paper]
- TripletNet
- (#) Elad Hoffer, Nir Ailon, Deep metric learning using Triplet network, arXiv:1412.6622 [Paper]
- Word2Vec
- (#) Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, Efficient Estimation of Word Representations in Vector Space, ICLR 2013 [Paper]
- TripletLoss (used in Google's FaceNet)
- (#) Florian Schroff, Dmitry Kalenichenko, James Philbin, FaceNet: A Unified Embedding for Face Recognition and Clustering, CVPR 2015 [Paper]
- Let there be Color!: Automatic Colorization of Grayscale Images
- Satoshi Iizuka, Edgar Simo-Serra, Hiroshi Ishikawa, Let there be Color!: Joint End-to-end Learning of Global and Local Image Priors for Automatic Image Colorization with Simultaneous Classification, SIGGRAPH 2016, [Paper]
- Context Encoders: Feature Learning by Inpainting
- Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, Alexei A. Efros, Context Encoders: Feature Learning by Inpainting, CVPR 2016, [Paper]
- stnbhwd
- Max Jaderberg, Karen Simonyan, Andrew Zisserman, Koray Kavukcuoglu, Spatial Transformer Networks, arXiv:1506.02025, [Paper]
- DrMAD
- (#) Jie Fu, Hongyin Luo, Jiashi Feng, Kian Hsiang Low, Tat-Seng Chua, DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing Hyperparameters of Deep Neural Networks, arXiv:1601.00917, [Paper]
- Adaptive Neural Compilation
- Rudy Bunel, Alban Desmaison, Pushmeet Kohli, Philip H.S. Torr, M. Pawan Kumar, Adaptive Neural Compilation, arXiv:1605.07969, [Paper]
- fasttext_torch
- (#) Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov, Bag of Tricks for Efficient Text Classification, arXiv:1607.01759, [Paper]
- MemNN
- Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Rob Fergus, End-To-End Memory Networks, arXiv:1503.08895, [Paper]
- Variational Auto-encoder
- Diederik P Kingma, Max Welling, Auto-Encoding Variational Bayes, arXiv:1312.6114, [Paper]
- Multimodal Compact Bilinear Pooling for Torch7
- (#) Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, Marcus Rohrbach, Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding, [Paper]
- object-detection.torch
- Implementation of some object detection frameworks in Torch (Fast R-CNN, threaded R-CNN, etc.).
- N3: Newtonian Image Understanding: Unfolding the Dynamics of Objects in Static Images
- Roozbeh Mottaghi, Hessam Bagherinezhad, Mohammad Rastegari, Ali Farhadi, Newtonian Image Understanding: Unfolding the Dynamics of Objects in Static Images, CVPR 2016, [Paper]
Libraries
Model related
- nn : an easy and modular way to build and train simple or complex neural networks (see the short training-loop sketch after this list) [Code] [Documentation]
- dpnn : extensions to the nn lib, more modules [Code]
- nnx : extension to the nn lib, experimental neural network modules and criterions [Code]
- nninit : weight initialisation schemes [Code]
- rnn : Recurrent Neural Network library [Code]
- optim : A numeric optimization package for Torch [Code]
- dp : a deep learning library designed for streamlining research and development [Code] [Documentation]
- nngraph : provides graphical computation for nn library [Code] [Oxford Introduction]
- nnlr : Add layer-wise learning rate schemes to Torch [Code]
- optnet: Memory optimizations for torch neural networks. [Code]
- autograd : Autograd automatically differentiates native Torch code. [Code]
- torchnet : a framework for Torch that provides a set of abstractions encouraging code re-use and modular programming [Code] [Paper]
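The nn and optim entries above cover most day-to-day training code in Torch. Below is a minimal, illustrative sketch of how the two fit together; the model, data, and hyperparameters are arbitrary placeholders.

```lua
-- Minimal nn + optim training loop on random data (illustrative only).
require 'nn'
require 'optim'

local model = nn.Sequential()
  :add(nn.Linear(10, 32))
  :add(nn.Tanh())
  :add(nn.Linear(32, 1))
local criterion = nn.MSECriterion()

local params, gradParams = model:getParameters()
local optimState = { learningRate = 0.01 }

-- Random placeholder data: 8 examples, 10 features, 1 target each
local inputs, targets = torch.rand(8, 10), torch.rand(8, 1)

for epoch = 1, 5 do
  local function feval(p)
    if p ~= params then params:copy(p) end
    gradParams:zero()
    local outputs = model:forward(inputs)
    local loss = criterion:forward(outputs, targets)
    model:backward(inputs, criterion:backward(outputs, targets))
    return loss, gradParams
  end
  local _, fs = optim.sgd(feval, params, optimState)
  print(('epoch %d, loss %.4f'):format(epoch, fs[1]))
end
```

Swapping optim.sgd for another optimizer from the same package (e.g. optim.adam) only changes the call and the fields of optimState.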
GPU related
- distro-cl: An OpenCL distribution for Torch [Code]
- cutorch : A CUDA backend for Torch [Code]
- cudnn : Torch FFI bindings for NVIDIA CuDNN [Code]
- fbcunn : Facebook's extensions to torch/cunn [Code] [Documentation]
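A short, illustrative sketch of GPU usage with these packages; it assumes a CUDA-capable machine and additionally pulls in cunn (the GPU backend for nn modules), which is not listed above but ships with the standard Torch distro.

```lua
-- Illustrative cutorch/cunn/cudnn usage (assumes CUDA hardware and drivers).
require 'cutorch'
require 'cunn'      -- GPU implementations of nn modules
require 'cudnn'

print(cutorch.getDeviceCount())            -- number of visible GPUs

-- Tensors: create directly on the GPU, or copy from the CPU with :cuda()
local x = torch.CudaTensor(4, 3):fill(1)
local y = torch.rand(4, 3):cuda()
print(torch.sum(x + y))                    -- computed on the GPU

-- Models: move an nn model to the GPU, then swap supported layers to cuDNN ones
local model = nn.Sequential()
  :add(nn.SpatialConvolution(3, 16, 3, 3))
  :add(nn.ReLU())
model = model:cuda()
cudnn.convert(model, cudnn)

local input = torch.CudaTensor(1, 3, 32, 32):normal()
print(model:forward(input):size())         -- 1x16x30x30
```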
IDE related
- iTorch : IPython kernel for Torch with visualization and plotting [Code]
- Lua Development Tools (LDT) : based on Eclipse [Code]
- zbs-torch : A lightweight Lua-based IDE for Lua with code completion, syntax highlighting, live coding, remote debugger, and code analyzer [Code]
ETC
- fblualib : Facebook libraries and utilities for Lua [Code]
- loadcaffe : Load Caffe networks in Torch (see the short example after this list) [Code]
- Purdue e-lab lib : A collection of snippets and libraries [Code]
- torch-android : Torch for Android [Code]
- torch-models : Implementation of state-of-art models in Torch. [Code]
- lutorpy : Lutorpy is a library built for deep learning with Torch in Python. [Code]
- CoreNLP.lua : Lua client for Stanford CoreNLP. [Code]
- Torchlib: Data structures and libraries for Torch. [Code]
- THFFmpeg: Torch bindings for FFmpeg (reading videos only) [Code]
- tunnel: Data Driven Framework for Distributed Computing in Torch 7, [Code]
- pytorch: Python wrappers for torch and lua, [Code]
- torch-pcl: Point Cloud Library (PCL) bindings for Torch, [Code]
- Moses: A Lua utility-belt library for functional programming. It complements the built-in Lua table library, making operations on arrays, lists, and collections easier. [Code]
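As a concrete example from this list, loading a pretrained Caffe network with loadcaffe usually amounts to a single call; the file names and input size below are placeholders for whatever network you have downloaded.

```lua
-- Illustrative loadcaffe usage; 'deploy.prototxt' / 'weights.caffemodel' are placeholders.
require 'nn'
require 'loadcaffe'

-- The third argument selects the backend for the converted layers: 'nn', 'cudnn' or 'ccn2'.
local model = loadcaffe.load('deploy.prototxt', 'weights.caffemodel', 'nn')
model:evaluate()                       -- inference mode (affects dropout / batch norm)

-- Dummy forward pass; 3x224x224 is typical for ImageNet-style networks
local input = torch.rand(1, 3, 224, 224)
print(model:forward(input):size())
```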