

Data Efficient Stagewise Knowledge Distillation

(Figure: Stagewise training procedure)


This repository presents the code implementation for Stagewise Knowledge Distillation, a technique for improving knowledge transfer between a teacher model and a student model.

Requirements

  • Install the dependencies using conda with the environment.yml file
    conda env create -f environment.yml
    
  • Set up the stagewise-knowledge-distillation package itself
    pip install -e .
    
  • Apart from the above-mentioned dependencies, an Nvidia GPU (CUDA-compatible) with at least 8 GB of video memory is recommended (most of the experiments will also work with 6 GB). However, the code works on CPU-only machines as well; a quick way to check what is available is sketched below.
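To confirm whether training will use the GPU or fall back to the CPU, a quick check along these lines can help (a minimal sketch, assuming PyTorch from the environment above):

    import torch

    # Report which device the experiments will run on; the training scripts
    # accept -g 'cpu' for CPU-only machines (see the argument lists below).
    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"Using GPU: {props.name} ({props.total_memory / 1e9:.1f} GB)")
    else:
        print("No CUDA-compatible GPU found; training will run on the CPU.")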

Image Classification

Introduction

In this work, ResNet architectures are used. In particular, we use ResNet10, 14, 18, 20 and 26 as student networks and ResNet34 as the teacher network. The datasets used are CIFAR10, Imagenette and Imagewoof. Note that Imagenette and Imagewoof are subsets of ImageNet.

Preparation

  • Before running any experiments, you need to download the data and the saved weights of the teacher model to the appropriate locations.

  • The following script

    • downloads the datasets
    • saves 10%, 20%, 30% and 40% splits of each dataset separately
    • downloads the teacher model weights for all 3 datasets
    # assuming you are in the root folder of the repository
    cd image_classification/scripts
    bash setup.sh
    

Experiments

For detailed information on the various experiments, refer to the paper. The following training arguments are common to all the image classification experiments, listed here with the values they can take:

  • dataset (-d) : imagenette, imagewoof, cifar10
  • model (-m) : resnet10, resnet14, resnet18, resnet20, resnet26, resnet34
  • number of epochs (-e) : integer value required
  • percentage of dataset (-p) : 10, 20, 30, 40 (omit this argument entirely for full-dataset experiments)
  • random seed (-s) : any integer seed (for reproducibility)
  • gpu (-g) : omit unless training on CPU, in which case use -g 'cpu'. On multi-GPU systems, set CUDA_VISIBLE_DEVICES=id in the terminal before running the experiment, where id is the ID of your GPU according to the nvidia-smi output (see the example after this list).
  • Comet ML API key (-a) (optional) : to track experiments with Comet ML, either pass your API key as the argument or set it as the default in the arguments.py file. Otherwise, this argument is not needed.
  • Comet ML workspace (-w) (optional) : to track experiments with Comet ML, either pass your workspace name as the argument or set it as the default in the arguments.py file. Otherwise, this argument is not needed.
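For example, on a multi-GPU machine, pinning an experiment to GPU 1 (an illustrative ID taken from nvidia-smi) looks like this:

CUDA_VISIBLE_DEVICES=1 python3 no_teacher.py -d imagenette -m resnet10 -e 100 -s 0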

In the following subsections, an example training command is given for each experiment.

No Teacher

Full Imagenette dataset, ResNet10

python3 no_teacher.py -d imagenette -m resnet10 -e 100 -s 0

Traditional KD (FitNets)

20% Imagewoof dataset, ResNet18

python3 traditional_kd.py -d imagewoof -m resnet18 -p 20 -e 100 -s 0
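FitNets-style KD matches an intermediate student feature map (the "guided" layer) to a teacher feature map (the "hint" layer) through a small regressor. A minimal sketch of the hint loss follows; the 1x1 regressor and the channel sizes are illustrative, not the repository's exact configuration:

    import torch.nn as nn

    # Illustrative 1x1 conv mapping student channels (256) to teacher channels (512)
    regressor = nn.Conv2d(in_channels=256, out_channels=512, kernel_size=1)
    mse = nn.MSELoss()

    def hint_loss(student_feat, teacher_feat):
        # FitNets hint loss: MSE between regressed student features and teacher features
        return mse(regressor(student_feat), teacher_feat)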

FSP KD

30% CIFAR10 dataset, ResNet14

python3 fsp_kd.py -d cifar10 -m resnet14 -p 30 -e 100 -s 0
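FSP KD matches "flow of solution procedure" (FSP) matrices, i.e., Gram-style matrices computed between the feature maps at the start and end of a stage, for teacher and student. A minimal sketch, with hypothetical feature-map arguments:

    import torch

    def fsp_matrix(feat_a, feat_b):
        # FSP matrix: channel-by-channel inner products between two feature maps
        # of the same spatial size, averaged over spatial positions.
        b, c1, h, w = feat_a.shape
        c2 = feat_b.shape[1]
        fa = feat_a.reshape(b, c1, h * w)
        fb = feat_b.reshape(b, c2, h * w)
        return torch.bmm(fa, fb.transpose(1, 2)) / (h * w)  # shape (b, c1, c2)

    def fsp_loss(s_a, s_b, t_a, t_b):
        # MSE between student and teacher FSP matrices for one stage
        return (fsp_matrix(s_a, s_b) - fsp_matrix(t_a, t_b)).pow(2).mean()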

Attention Transfer KD

10% Imagewoof dataset, ResNet26

python3 attention_transfer_kd.py -d imagewoof -m resnet26 -p 10 -e 100 -s 0
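Attention transfer distills spatial attention maps: each feature map is reduced over channels via squared activations, flattened and L2-normalized, and the student is penalized for deviating from the teacher's maps. A minimal sketch, with hypothetical feature-map arguments:

    import torch.nn.functional as F

    def attention_map(feat):
        # Mean of squared activations over channels, flattened and L2-normalized
        return F.normalize(feat.pow(2).mean(dim=1).flatten(1))

    def at_loss(student_feat, teacher_feat):
        return (attention_map(student_feat) - attention_map(teacher_feat)).pow(2).mean()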

Hinton KD

Full CIFAR10 dataset, ResNet14

python3 hinton_kd.py -d cifar10 -m resnet14 -e 100 -s 0
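Hinton KD combines cross-entropy on the hard labels with a KL term between temperature-softened teacher and student logits. A minimal sketch; the temperature and mixing weight here are illustrative, not the repository's defaults:

    import torch.nn.functional as F

    def hinton_kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        # KL between temperature-softened distributions, scaled by T^2
        # (Hinton et al.), plus standard cross-entropy on the hard labels.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard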

Simultaneous KD (Proposed Baseline)

40% Imagenette dataset, ResNet20

python3 simultaneous_kd.py -d imagenette -m resnet20 -p 40 -e 100 -s 0
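Simultaneous KD, the proposed baseline, applies the same stage-wise feature matching as the proposed method but optimizes all stage losses together with the classification loss in a single training run. A rough sketch of the combined objective, with hypothetical per-stage feature lists (the exact loss weighting in the repository may differ):

    import torch.nn as nn
    import torch.nn.functional as F

    mse = nn.MSELoss()

    def simultaneous_kd_loss(student_feats, teacher_feats, student_logits, labels):
        # Sum MSE over all corresponding stage feature maps, plus the usual
        # cross-entropy on the student's predictions, in one objective.
        feat_loss = sum(mse(s, t) for s, t in zip(student_feats, teacher_feats))
        return feat_loss + F.cross_entropy(student_logits, labels)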

Stagewise KD (Proposed Method)

Full CIFAR10 dataset, ResNet10

python3 stagewise_kd.py -d cifar10 -m resnet10 -e 100 -s 0
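Stagewise KD trains the student one stage at a time: stage k is optimized to reproduce the teacher's stage-k feature map while all earlier student stages are kept frozen, and only the final classifier part is trained on labels with cross-entropy. A minimal sketch of one stage's training loop, assuming the models are expressed as lists of stage modules (an assumption; the repository uses its own model classes):

    import torch
    import torch.nn as nn

    def train_stage(student_stages, teacher_stages, loader, k, optimizer, device="cuda"):
        # One epoch of stagewise KD for stage k. The optimizer should only
        # hold the parameters of student_stages[k]; earlier stages are frozen.
        mse = nn.MSELoss()
        for images, _ in loader:  # labels unused: feature matching is label-free
            images = images.to(device)
            with torch.no_grad():  # teacher is fixed throughout
                t = images
                for stage in teacher_stages[: k + 1]:
                    t = stage(t)
            s = images
            with torch.no_grad():  # earlier student stages stay frozen
                for stage in student_stages[:k]:
                    s = stage(s)
            s = student_stages[k](s)
            loss = mse(s, t)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()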

Semantic Segmentation

Introduction

In this work, ResNet backbones are used to construct symmetric U-Nets for semantic segmentation. In particular, we use ResNet10, 14, 18, 20 and 26 as the backbones for student networks and ResNet34 as the backbone for the teacher network. The dataset used is the Cambridge-driving Labeled Video Database (CamVid).
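To make the backbone-to-U-Net pairing concrete, here is an illustrative way to carve a ResNet into encoder stages that a symmetric U-Net decoder can attach skip connections to. This uses torchvision's ResNet34 purely for illustration; the repository builds its U-Nets from its own ResNet implementations:

    import torch.nn as nn
    from torchvision.models import resnet34

    backbone = resnet34(weights=None)  # illustrative; not the repository's model class
    encoder_stages = nn.ModuleList([
        nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu),  # stem
        nn.Sequential(backbone.maxpool, backbone.layer1),
        backbone.layer2,
        backbone.layer3,
        backbone.layer4,
    ])
    # A symmetric U-Net decoder would upsample from layer4 and concatenate
    # the skip connection from each earlier stage in reverse order.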

Preparation

  • The following script
    • downloads the data (and moves it to the appropriate folder)
    • saves 10%, 20%, 30% and 40% splits of the dataset separately
    • downloads the pretrained teacher weights to the appropriate folder
    # assuming you are in the root folder of the repository
    cd semantic_segmentation/scripts
    bash setup.sh
    

Experiments

For detailed information on the various experiments, refer to the paper. The following training arguments are common to all the semantic segmentation experiments, listed here with the values they can take:

  • dataset (-d) : camvid
  • model (-m) : resnet10, resnet14, resnet18, resnet20, resnet26, resnet34
  • number of epochs (-e) : integer value required
  • percentage of dataset (-p) : 10, 20, 30, 40 (omit this argument entirely for full-dataset experiments)
  • random seed (-s) : any integer seed (for reproducibility)
  • gpu (-g) : omit unless training on CPU, in which case use -g 'cpu'. On multi-GPU systems, set CUDA_VISIBLE_DEVICES=id in the terminal before running the experiment, where id is the ID of your GPU according to the nvidia-smi output (see the example in the image classification section).
  • Comet ML API key (-a) (optional) : to track experiments with Comet ML, either pass your API key as the argument or set it as the default in the arguments.py file. Otherwise, this argument is not needed.
  • Comet ML workspace (-w) (optional) : to track experiments with Comet ML, either pass your workspace name as the argument or set it as the default in the arguments.py file. Otherwise, this argument is not needed.

Note: Currently, there are no plans for adding Attention Transfer KD and FSP KD experiments for semantic segmentation.

In the following subsections, an example training command is given for each experiment.

No Teacher

Full CamVid dataset, ResNet10

python3 pretrain.py -d camvid -m resnet10 -e 100 -s 0

Traditional KD (FitNets)

20% CamVid dataset, ResNet18

python3 traditional_kd.py -d camvid -m resnet18 -p 20 -e 100 -s 0

Simultaneous KD (Proposed Baseline)

40% CamVid dataset, ResNet20

python3 simultaneous_kd.py -d camvid -m resnet20 -p 40 -e 100 -s 0

Stagewise KD (Proposed Method)

10% CamVid dataset, ResNet10

python3 stagewise_kd.py -d camvid -m resnet10 -p 10 -e 100 -s 0

Citation

If you use this code or method in your work, please cite it using:

@misc{kulkarni2020data,
      title={Data Efficient Stagewise Knowledge Distillation}, 
      author={Akshay Kulkarni and Navid Panchi and Sharath Chandra Raparthy and Shital Chiddarwar},
      year={2020},
      eprint={1911.06786},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Built by Akshay Kulkarni, Navid Panchi and Sharath Chandra Raparthy.
