  • Stars: 506
  • Rank: 87,236 (Top 2%)
  • Language: Python
  • License: MIT License
  • Created almost 6 years ago; last updated over 3 years ago


Repository Details

Continual-Learning-Benchmark

Evaluate three types of task shifting with popular continual learning algorithms.

This repository implements and modularizes the following algorithms with PyTorch:

  • EWC: code, paper (Overcoming catastrophic forgetting in neural networks)
  • Online EWC: code, paper
  • SI: code, paper (Continual Learning Through Synaptic Intelligence)
  • MAS: code, paper (Memory Aware Synapses: Learning what (not) to forget)
  • GEM: code, paper (Gradient Episodic Memory for Continual Learning)
  • (More are coming)
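Several of the methods above (EWC, SI, MAS) are regularization-based: they add a quadratic penalty that discourages parameters from drifting away from the values learned on previous tasks, weighted by a per-parameter importance estimate. The sketch below is illustrative only (not this repository's implementation); the function name and toy values are made up for the example.

```python
import numpy as np

def reg_penalty(theta, theta_prev, importance, reg_coef):
    """Quadratic drift penalty: reg_coef * sum_i importance_i * (theta_i - theta_prev_i)^2.

    In EWC the importance is the diagonal of the Fisher information;
    SI and MAS estimate it differently but plug into the same form.
    """
    return reg_coef * np.sum(importance * (theta - theta_prev) ** 2)

theta      = np.array([0.5, 1.0])   # current parameters (toy values)
theta_prev = np.array([0.0, 1.5])   # parameters learned on the previous task
importance = np.array([1.0, 2.0])   # per-parameter importance (toy values)
print(reg_penalty(theta, theta_prev, importance, reg_coef=400.0))  # -> 300.0
```

The methods differ mainly in how `importance` is computed, which is exactly what this repository modularizes.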

All of the above algorithms are compared to the following baselines with the same static memory overhead (the baseline list and key result tables are images in the original README; see the paper for the full results).

If this repository helps your work, please cite:

@inproceedings{Hsu18_EvalCL,
  title={Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines},
  author={Yen-Chang Hsu and Yen-Cheng Liu and Anita Ramasamy and Zsolt Kira},
  booktitle={NeurIPS Continual Learning Workshop},
  year={2018},
  url={https://arxiv.org/abs/1810.12488}
}

Preparation

This repository was tested with Python 3.6 and PyTorch 1.0.1.post2. Some of the cases were also tested with PyTorch 1.5.1 and give the same results.

pip install -r requirements.txt

Demo

The scripts for reproducing the results of this paper are under the scripts folder.

  • Example: Run all algorithms in the incremental domain scenario with split MNIST.
./scripts/split_MNIST_incremental_domain.sh 0
# The last number is gpuid
# Outputs will be saved in ./outputs
  • Example outputs: Summary of repeats
===Summary of experiment repeats: 3 / 3 ===
The regularization coefficient: 400.0
The last avg acc of all repeats: [90.517 90.648 91.069]
mean: 90.74466666666666 std: 0.23549144829955856
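The summary line is straightforward to reproduce from the per-repeat final average accuracies; note that the reported std matches NumPy's default population standard deviation (ddof=0), not the sample standard deviation. A quick check using the values from the example output:

```python
import numpy as np

# Final average accuracies of the three repeats, taken from the example output.
accs = np.array([90.517, 90.648, 91.069])

# numpy's .std() defaults to the population standard deviation (ddof=0),
# which is what the printed summary uses.
print("mean:", accs.mean(), "std:", accs.std())
```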
  • Example outputs: The grid search for the regularization coefficient
reg_coef: 0.1 mean: 76.08566666666667 std: 1.097717733400629
reg_coef: 1.0 mean: 77.59100000000001 std: 2.100847606721314
reg_coef: 10.0 mean: 84.33933333333334 std: 0.3592671553160509
reg_coef: 100.0 mean: 90.83800000000001 std: 0.6913701372395712
reg_coef: 1000.0 mean: 87.48566666666666 std: 0.5440161353816179
reg_coef: 5000.0 mean: 68.99133333333333 std: 1.6824762174313899
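Given grid-search output like the above, the chosen coefficient is simply the one with the highest mean accuracy. A minimal sketch (the dictionary below uses rounded values from the example output, purely for illustration):

```python
# Mean accuracy per regularization coefficient, rounded from the example output.
results = {0.1: 76.086, 1.0: 77.591, 10.0: 84.339,
           100.0: 90.838, 1000.0: 87.486, 5000.0: 68.991}

# Pick the coefficient whose mean accuracy is highest.
best = max(results, key=results.get)
print(best)  # -> 100.0
```

This matches the pattern in the example: accuracy improves as the coefficient grows, peaks, then degrades when the regularization is too strong.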

Usage

  • Enable the grid search for the regularization coefficient: pass the option a list of values, e.g. -reg_coef 0.1 1 10 100 ...
  • Repeat the experiment N times: Use the option -repeat N

Lookup available options:

python iBatchLearn.py -h

Other results

CIFAR-100 results are also available (shown as images in the original README). Please refer to the scripts for details.

More Repositories

  1. L2C (Python, 313 stars): Learning to Cluster. A deep clustering strategy.
  2. Awesome-LLM-Robotics (298 stars): A comprehensive list of papers using large language/multi-modal models for Robotics/RL, including papers, codes, and related websites.
  3. CODA-Prompt (Python, 125 stars): PyTorch code for the CVPR'23 paper: "CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning".
  4. AlwaysBeDreaming-DFCIL (Python, 59 stars): PyTorch code for the ICCV'21 paper: "Always Be Dreaming: A New Approach for Class-Incremental Learning".
  5. Xmodal-Ctx (Python, 46 stars): Official PyTorch implementation of our CVPR 2022 paper: "Beyond a Pre-Trained Object Detector: Cross-Modal Textual and Visual Context for Image Captioning".
  6. FeatMatch (Python, 43 stars): PyTorch code for the paper: "FeatMatch: Feature-Based Augmentation for Semi-Supervised Learning".
  7. robo-vln (Python, 34 stars): PyTorch code for the ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation".
  8. MultiAgentPerception (Python, 34 stars): Official source code for the CVPR'20 paper "When2com: Multi-Agent Perception via Communication Graph Grouping".
  9. UNO-IC (Python, 28 stars)
  10. DomainGeneralization-Stylization (24 stars): PyTorch code for "Frustratingly Simple Domain Generalization via Image Stylization".
  11. Geometric-Sensitivity-Decomposition (Python, 18 stars)
  12. DistillMatch-SSCL (Python, 12 stars): PyTorch code for the IJCNN'21 paper: "Memory-Efficient Semi-Supervised Continual Learning: The World is its Own Replay Buffer".
  13. FTP (Python, 10 stars): This repo hosts the code for the Fast Trainable Projection (FTP) project.
  14. FedFOR (2 stars): TO BE RELEASED. PyTorch code for the preprint: "FedFOR: Stateless Heterogeneous Federated Learning with First-Order Regularization".