Awesome Deep Learning Projects
The Gallery by Weights & Biases features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
Check out CONTRIBUTING.md to learn more.
awesome-dl-projects by wandb is a collection of the code that accompanies the reports.
Report | Description | Author |
---|---|---|
Survival of the Fittest CNN Model | Optimize the hyperparameters of a CNN with Variable Length Genetic Algorithm. | Aritra Roy Gosthipaty |
Under the hood of RNNs | A NumPy implementation of an RNN. Gradients and connectivity are visualized as well. | Aritra Roy Gosthipaty |
Under the hood of LSTMs | A NumPy implementation of LSTMs. Architectural differences and the advantages of LSTMs are discussed. | Aritra Roy Gosthipaty |
Get Started with TensorFlow Lite Examples Using Android Studio | A step-by-step guide towards running example apps on your phone using Android Studio, TensorFlow Lite, and USB debugging. | Ivan Goncharov |
Part 2: Deep Representations, a way towards neural style transfer | A top-down approach to understanding neural style transfer. | Aritra Roy Gosthipaty and Devjyoti Chakraborty |
Part 1: Deep Representations, a way towards neural style transfer | A top-down approach to understanding neural style transfer. | Aritra Roy Gosthipaty and Devjyoti Chakraborty |
Hyperparameter Optimization for Huggingface Transformers | The report explains three strategies for hyperparameter optimization for Huggingface Transformers. | Ayush Chaurasia |
Ray Tune: Distributed Hyperparameter Optimization at Scale | How to use Ray Tune with W&B to run an effective distributed hyperparameter optimization pipeline at scale. | Ayush Chaurasia and Lavanya Shukla |
Modern Scalable Hyperparameter Tuning Methods | A comparison of Random search, Bayesian search using HyperOpt, Bayesian search combined with Asynchronous Hyperband, and Population Based Training. | Ayush Chaurasia |
Train HuggingFace Models Twice As Fast | This report summarizes our 14 experiments + 5 reproducibility experiments regarding 2+1 optimizations to reduce training time. | Michaël Benesty |
Distilling Knowledge in Neural Networks | This report discusses knowledge distillation, a compelling model optimization technique, with code walkthroughs in TensorFlow. | Sayak Paul |
Using SimpleTransformers for Common NLP Applications | Explore Language Modeling, Named Entity Recognition, Question Answering with the SimpleTransformer library. | Ayush Chaurasia |
SimpleTransformers: Transformers Made Easy | Simple Transformers removes complexity and lets you get down to what matters: model training and experimenting with the Transformer model architectures. | Ayush Thakur |
Sentence Classification With Huggingface BERT and W&B | In this tutorial, we'll build a near state-of-the-art sentence classifier, leveraging the power of recent breakthroughs in the field of Natural Language Processing. | Ayush Chaurasia |
3D Image Inpainting | A novel way to convert a single RGB-D image into a 3D image. | Ayush Thakur |
Unsupervised Visual Representation Learning with SwAV | In this report, we explore the SwAV framework, as presented in the paper "Unsupervised Learning of Visual Features by Contrasting Cluster Assignments" by Caron et al. SwAV is currently the SoTA in self-supervised learning for visual recognition. We also address common problems in existing self-supervised methods. | Ayush Thakur and Sayak Paul |
DeepFaceDrawing: An Overview | An overview report breaking down the key components of DeepFaceDrawing. | Ayush Thakur |
Tips for Kagglers From Winning a Solo Silver Medal | How I got started with the competition, my struggles, journey to the final solution and everything I learned in between. | Aman Dalmia |
Part 2: Comparing Message Passing Based GNN Architectures | In this report, we shall review various message-passing based GNN architectures and compare them using Sweeps by Weights and Biases. | Yash Kotadia |
Part 1: Introduction to Graph Neural Networks with GatedGCN | This report summarizes the need for Graph Neural Networks and analyzes one particular architecture, the Gated Graph Convolutional Network. | Yash Kotadia |
An Introduction to Adversarial Examples in Deep Learning | This report provides an intuitive introduction to adversarial examples, discusses a wide variety of different adversarial attacks and, most notably, provides advice on defending against them. | Sayak Paul |
Text Recognition with CRNN-CTC Network | This report explains how to detect & recognize text from images. | Rajesh Shreedhar Bhat |
Visualize Scikit Models | Visualize your scikit-learn model's performance with just a few lines of code | Lavanya Shukla |
DeOldify | Understand how the DeOldify model works, reproduce and visualize the results obtained by the author. | Boris Dayma |
HuggingTweets - Train a model to generate tweets | In this project, we fine-tune a pre-trained transformer on tweets using HuggingFace Transformers, a popular library with pre-trained architectures and frameworks for NLP. | Boris Dayma |
Organize Your Machine Learning Pipelines with Artifacts | In this report, we will show you how to use W&B Artifacts to store and keep track of datasets, models, and evaluation results across machine learning pipelines. | Ayush Chaurasia |
Weights & Biases and Ray Tune: Distributed hyperparameter optimization at scale | How to use Ray Tune with W&B to run an effective distributed hyperparameter optimization pipeline at scale. | Ayush Chaurasia and Lavanya Shukla |
Towards Representation Learning for an Image Retrieval Task | This report explains self-supervised and regularized supervised image retrieval with the help of the latent space of an autoencoder. | Aritra Roy Gosthipaty and Souradip Chakraborty |
Build the World's Open Hedge Fund by Modeling the Stock Market | In this report, we show you how to get started with Numerai, a crowdsourced AI hedge fund, and compete in the hardest data science tournament on the planet using Weights & Biases. | Carlo Lepelaars |
Understanding the Effectivity of Ensembles in Deep Learning | The report explores the ideas presented in Deep Ensembles: A Loss Landscape Perspective by Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan. | Ayush Thakur and Sayak Paul |
Plunging into Model Pruning in Deep Learning | This report discusses pruning techniques in the context of deep learning. | Sayak Paul |
Visualizing Confusion Matrices with W&B | Using Keras with Weights & Biases, plot a confusion matrix at every step of model training and see where your algorithm is wrong. | Mathïs Fédérico |
Experiments with OpenAI Jukebox | Exploring generative models that create music based on raw audio. | Ishaan Malhi |
The Power of Random Features of a CNN | This report presents a number of experiments based on the ideas shown in https://arxiv.org/abs/2003.00152 by Frankle et al. | Sayak Paul |
The Al-Dente Neural Network: Part I | Much like making pasta, training a neural network is easy to learn but takes a lifetime to master. What follows is probably the best recipe to make your own Al-Dente Neural Net, courtesy of Andrej Karpathy. | Sairam Sundaresan |
A Comparative Study of Activation Functions | Walking through different activation functions and comparing their performance. | Sweta Shaw |
Generating Digital Painting Lighting Effects via RGB-space Geometry | Exploring the paper "Generating Digital Painting Lighting Effects via RGB-space Geometry" in which the authors propose an image processing algorithm to generate digital painting lighting effects from a single image. | Ayush Thakur |
Two Shots to Green Screen: Collage with Deep Learning | Train a deep net to extract foreground and background in natural images and videos | Stacey Svetlichnaya |
A Step by Step Guide to Tracking Hugging Face Model Performance | A quick tutorial for training NLP models with HuggingFace and visualizing their performance with Weights & Biases. | Jack Morris |
A Tale of Model Quantization in TF Lite | Model optimization strategies and quantization techniques to help deploy machine learning models in resource constrained environments. | Sayak Paul |
Drought Watch Benchmark Progress | Developing the baseline and exploring submissions to the Drought Watch benchmark | Stacey Svetlichnaya |
Who is Them? Text Disambiguation with Transformers | Using HuggingFace to explore models for natural language understanding | Stacey Svetlichnaya |
Lightning Kitti | Semantic segmentation on the Kitti dataset with PyTorch Lightning. | Boris Dayma |
Interpretability in Deep Learning with W&B - CAM and GradCAM | This report will review how Grad-CAM counters the common criticism that neural networks are not interpretable. We'll review feature visualization and class activation maps, and implement a custom callback that you can use in your own projects. | Ayush Thakur |
Adversarial Policies in Multi-Agent Settings | One way to win is not to play the game | Stacey Svetlichnaya |
Bounding Boxes for Object Detection | How to log and explore bounding boxes | Stacey Svetlichnaya |
Deep Q Networks with the Cartpole Environment | A brief explanation of the DQN algorithm for reinforcement learning, focusing on the Cartpole-v1 environment from OpenAI gym. | Jari |
Using simpleTransformer on common NLP applications | Explore Language Modeling, Named Entity Recognition, and Question Answering with DistilBERT from the simpleTransformer library. | Ayush Chaurasia |
Transfer Learning with EfficientNet family of models | Learn to use the EfficientNet family of models for transfer learning in TensorFlow using TFHub. | Sayak Paul |
Automate Kaggle model training with Skorch and W&B | Skorch combines the simplicity of scikit-learn with the power of PyTorch, making it a great framework for Kaggle competitions. | Ayush Chaurasia |
EvoNorm layers in TensorFlow 2 | Experimental summary of my implementation of EvoNorm layers proposed in https://arxiv.org/pdf/2004.02967.pdf. | Sayak Paul |
When Inception-ResNet-V2 is too slow | Some versions of Inception parallelize better than others | Stacey Svetlichnaya |
Using W&B in a Kaggle Competition | In this tutorial, we'll see how you can use W&B in a Kaggle competition. We'll also see how W&B's Scikit-learn integration enables you to visualize performance metrics for your model with a single line of code. Finally, we'll run a hyperparameter sweep to pick the best model. | Ayush Chaurasia |
Image Masks for Semantic Segmentation | How to log and explore semantic segmentation masks | Stacey Svetlichnaya |
COVID-19 research using PyTorch and W&B | How to train T5 on SQuAD with Transformers and the nlp library. | Ayush Chaurasia |
Video to 3D: Depth Perception for Self-Driving Cars | Unsupervised learning of depth perception from dashboard cameras | Stacey Svetlichnaya |
The View from the Driver's Seat | Semantic segmentation for scene parsing on Berkeley Deep Drive 100K | Stacey Svetlichnaya |
Meaning and Noise in Hyperparameter Search | How do we distinguish signal from pareidolia (imaginary patterns)? | Stacey Svetlichnaya |
Kaggle Starter Kernel - Jigsaw Multilingual Toxic Comment Classification | This report presents a comparison between three models, trained to compete on Kaggle's Jigsaw Multilingual Toxic Comment Classification. | Sayak Paul |
Sentence classification with Huggingface BERT and W&B | Build a near state-of-the-art sentence classifier with Huggingface BERT and track your experiments with W&B. | Ayush Chaurasia |
Visualizing and Debugging Neural Networks with PyTorch and W&B | In this post, we'll see what makes a neural network underperform and ways we can debug this by visualizing the gradients and other parameters associated with model training. We'll also discuss the problem of vanishing and exploding gradients and methods to overcome them. | Ayush Thakur |
Track Model Performance | In this report, I'll show you how you can visualize any model's performance with Weights & Biases. We'll see how to log metrics from vanilla for loops, boosting models (xgboost & lightgbm), sklearn, and neural networks. | Lavanya Shukla |
Log ROC, PR curves and Confusion Matrices with W&B | You can now log precision-recall curves, ROC curves, and confusion matrices natively using Weights & Biases. You can also use our heatmaps to create attention maps. | Lavanya Shukla |
Visualize models in TensorBoard with Weights and Biases | In this article, we are going to see how to spin up and host a TensorBoard instance online with Weights and Biases. We'll end with visualizing a confusion matrix in TensorBoard. | Lavanya Shukla |
Visualizing and Debugging Neural Networks with PyTorch and W&B | In this post, I'll show you how to use wandb.Molecule to visualize molecular data with Weights and Biases. | Nicholas Bardy |
Visualize Model Predictions | In this report, I'll show you how to visualize a model's predictions with Weights & Biases: images, videos, audio, tables, HTML, metrics, plots, 3D objects, and point clouds. | Lavanya Shukla |
Use Pytorch Lightning with Weights & Biases | PyTorch Lightning lets you decouple the science code from engineering code. Try this quick tutorial to visualize Lightning models and optimize hyperparameters with an easy Weights & Biases integration. | Ayush Chaurasia |
Evaluating the Impact of Sequence Convolutions and Embeddings on Protein Structure Prediction | A vignette on recent work using deep learning for protein structure prediction. March 26, 2020. | Jonathan King |
Towards Deep Generative Modeling with W&B | In this report, we will learn about the evolution of generative modeling. We'll start with Autoencoders and Variational Autoencoders and then dive into Generative Adversarial Modeling. | Ayush Thakur |
Exploring ResNets With W&B | Explore ResNet models with Weights & Biases (post by Ayush Chaurasia). | Lavanya Shukla |
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis | In the Representing Scenes as Neural Radiance Fields for View Synthesis paper, the authors present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. | Lavanya Shukla |
How Efficient is EfficientNet? | Evaluating the EfficientNet Family on a Smaller ImageNet-like Dataset | Ajay Arasanipalai |
Distributed training in tf.keras with W&B | Explore the ways to distribute your training workloads with minimal code changes and analyze system metrics with Weights and Biases. | Sayak Paul |
Reproducible Models with W&B | Discover simple techniques to make your ML experiments as reproducible as possible. | Sayak Paul |
Effects of Weight Initialization on Neural Networks | In this article, we'll review and compare a plethora of weight initialization methods for neural nets. We will also outline a simple recipe for initializing the weights in a neural net. | Sayak Paul |
An Introduction to Image Inpainting using Deep Learning | In this report, we are going to learn how to do "image inpainting", i.e. fill in missing parts of images precisely using deep learning. | Ayush Thakur and Sayak Paul |
Distributed Training | Getting started with distributed training in Keras | Stacey Svetlichnaya |
Curriculum Learning in Nature | Applying human learning strategies to neural nets on iNaturalist 2017 | Stacey Svetlichnaya |
Fashion MNIST | Explore various hyperparameters of a CNN trained on Fashion MNIST to identify 10 types of clothing | Stacey Svetlichnaya |
Classify the Natural World | Training and fine-tuning convolutional networks to identify species beyond ImageNet | Stacey Svetlichnaya |
Colorizing Black and White Images | How can we add realistic color to black & white images? Explore the effect of up-convolutions, weight decay, and deeper architectures. | Nicholas Bardy |
Text Generation with Sequence Models | Explore network architectures and training settings for character-based text generation. Compare RNNs, GRUs, and LSTMs with different depths, layer widths, and dropout. Also consider the training data length, sequence length, and number of sequences per batch. | Carey |
RNNs for Video Understanding | Comparing various recurrent models in PyTorch on YouTube videos | rchavezj |
FastText Nanobot for the Transformer Age | Integrate FastText with W&B to visualize incredibly efficient natural language processing | Stacey Svetlichnaya |
Dropout in PyTorch: An Example | Regularize your PyTorch model with Dropout | Ayush Thakur |
Compare & monitor fastai2 models | Track, compare, and monitor fastai v2 models with Weights & Biases. | Boris Dayma |
Generate Meaningful Captions for Images with Attention Models | Image captioning has many use cases that include generating captions for Google image search and live video surveillance as well as helping visually impaired people to get information about their surroundings. | Rajesh Shreedhar Bhat and Souradip Chakraborty |
Use GPUs with Keras | A short tutorial on using GPUs for your deep learning models with Keras | Ayush Thakur |
Implementing and tracking the performance of a CNN in Pytorch - An Example | A guide to implementing and tracking the performance of a Convolutional Neural Network in PyTorch. | Ayush Thakur |
Measuring Mode Collapse in GANs | Evaluate and quantitatively measure the GAN failure case of mode collapse - when the model fails to generate diverse enough outputs. | Kevin Shen |
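
Many of the reports above track and visualize their experiments with Weights & Biases. For reference, here is a minimal sketch of the basic `wandb` logging loop that the accompanying code builds on; the project name, config values, and metric keys below are illustrative placeholders, not taken from any particular report.

```python
# Minimal sketch of the basic Weights & Biases logging loop.
# Assumes `pip install wandb` and `wandb login` have been run;
# the project name, config, and metric values are placeholders.
import math

import wandb

run = wandb.init(project="awesome-dl-projects-demo", config={"lr": 1e-3, "epochs": 5})

for epoch in range(run.config["epochs"]):
    # Substitute real training/validation metrics from your model here.
    train_loss = math.exp(-0.5 * epoch)
    val_accuracy = 1.0 - math.exp(-0.3 * (epoch + 1))
    wandb.log({"epoch": epoch, "train/loss": train_loss, "val/accuracy": val_accuracy})

run.finish()
```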