• Stars: 743
  • Rank: 61,046 (Top 2%)
  • Language: Python
  • License: Apache License 2.0
  • Created: about 2 years ago
  • Updated: 5 months ago

Repository Details

Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models

This is an official PyTorch implementation of Adan. See the paper here. If you find Adan helpful or inspiring for your projects, please cite the paper and also star this repository. Thanks!

@article{xie2022adan,
  title={Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models},
  author={Xie, Xingyu and Zhou, Pan and Li, Huan and Lin, Zhouchen and Yan, Shuicheng},
  journal={arXiv preprint arXiv:2208.06677},
  year={2022}
}

News

  • 🔥 🔥 🔥 FusedAdan, with a smaller memory footprint, is released.
  • Adan is supported in the latest version of Timm.
  • Results on large language models, like GPT-2, are released.
  • Adan is chosen as the default optimizer in the text-to-3D DreamFusion project. See more results here.
  • A third-party TensorFlow implementation is available at DenisVorotyntsev/Adan.
  • A third-party JAX version is implemented and also supported in Deepmind/optax.
  • Adan is supported in MMClassification of the OpenMMLab project. The log and an example of using Adan to train ViT-B can be found here. Results on detection tasks are coming soon.

Installation

python3 -m pip install git+https://github.com/sail-sg/Adan.git

FusedAdan is installed by default. If you want to use the original (unfused) Adan, install it as follows:

git clone https://github.com/sail-sg/Adan.git
cd Adan
python3 setup.py install --unfused
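
After installation, a quick sanity check (a minimal sketch, not part of the repository's instructions) is to import the optimizer class used in the usage section below:

from adan import Adan  # should import without error after either install path
print(Adan)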

A brief comparison of peak GPU memory and wall-clock time for the optimizers is given below. The reported duration is the total time of 200 calls to optimizer.step(). We further compare Adam and FusedAdan in great detail on GPT-2; see more results here. A rough reproduction sketch of this timing protocol is given after the table.

Model Model Size (MB) Adam Peak (MB) Adan Peak (MB) FusedAdan Peak (MB) Adam Time (ms) Adan Time (ms) FusedAdan Time (ms)
ResNet-50 25 7142 7195 7176 9.0 4.2 1.9
ResNet-101 44 10055 10215 10160 17.5 7.0 3.4
ViT-B 86 9755 9758 9758 8.9 12.3 4.3
Swin-B 87 16118 16202 16173 17.9 12.8 4.9
ConvNext-B 88 17353 17389 17377 19.1 15.6 5.0
Swin-L 196 24299 24316 24310 17.5 28.1 10.1
ConvNext-L 197 26025 26055 26044 18.6 31.1 10.2
ViT-L 304 25652 25658 25656 18.0 43.2 15.1
GPT-2 758 25096 25406 25100 49.9 107.7 37.4
GPT-2 1313 34357 38595 34363 81.8 186.0 64.4
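
A rough reproduction sketch of the protocol above, assuming a CUDA device is available; the toy model and batch size are illustrative placeholders, not the exact benchmark configuration:

import time
import torch
from adan import Adan

device = "cuda"
model = torch.nn.Linear(1024, 1024).to(device)   # placeholder model
optimizer = Adan(model.parameters(), lr=1e-3)
x = torch.randn(64, 1024, device=device)         # placeholder batch

torch.cuda.reset_peak_memory_stats(device)
step_time = 0.0
for _ in range(200):                             # 200 optimizer.step() calls, as in the table
    optimizer.zero_grad()
    model(x).sum().backward()
    torch.cuda.synchronize(device)
    t0 = time.time()
    optimizer.step()
    torch.cuda.synchronize(device)
    step_time += time.time() - t0

print(f"total optimizer.step() time: {step_time * 1e3:.1f} ms")
print(f"peak memory: {torch.cuda.max_memory_allocated(device) / 2**20:.0f} MB")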

Usage

To make Adan easy to use, we briefly provide some intuitive instructions below, then give some general experimental tips, and finally provide more details (e.g., specific commands and hyper-parameters) for each experiment in the paper.

1) Two steps to use Adan

Step 1. Add the following Adan-specific hyper-parameters to the config:

import argparse

parser = argparse.ArgumentParser()  # or your existing config parser
parser.add_argument('--max-grad-norm', type=float, default=0.0, help='clip the gradient if its l2 norm is larger than this value (default: 0.0, no gradient clipping)')
parser.add_argument('--weight-decay', type=float, default=0.02, help='weight decay, similar to the one used in AdamW (default: 0.02)')
parser.add_argument('--opt-eps', default=None, type=float, metavar='EPSILON', help='optimizer epsilon to avoid the bad case where the second-order moment is zero (default: None, use the Adan default of 1e-8)')
parser.add_argument('--opt-betas', default=None, type=float, nargs='+', metavar='BETA', help='optimizer betas in Adan (default: None, use the Adan default of [0.98, 0.92, 0.99])')
parser.add_argument('--no-prox', action='store_true', default=False, help='whether to perform weight decay like AdamW (default: False)')

opt-betas: To keep consistent with our usage habits, the $\beta$'s in the paper are actually the $(1-\beta)$'s in the code. For example, the default code betas (0.98, 0.92, 0.99) correspond to (0.02, 0.08, 0.01) in the paper's notation.

foreach (bool): If True, Adan uses the torch._foreach implementation. It is faster but uses slightly more memory.

no-prox: It determines the update rule of parameters with weight decay. By default, Adan updates the parameters in the way presented in Algorithm 1 in the paper:

$$\boldsymbol{\theta}_{k+1} = (1+\lambda \eta)^{-1}\left[\boldsymbol{\theta}_k - \boldsymbol{\eta}_k \circ (\mathbf{m}_k+(1-{\color{blue}\beta_2})\mathbf{v}_k)\right],$$

However, one can also update the parameters in the AdamW style:

$$\boldsymbol{\theta}_{k+1} = (1-\lambda \eta)\boldsymbol{\theta}_k - \boldsymbol{\eta}_k \circ (\mathbf{m}_k+(1-{\color{blue}\beta_2})\mathbf{v}_k).$$

In all experiments in our paper, we set no-prox=False.
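
The difference between the two modes is only how the weight-decay term enters the update. A minimal sketch (not the repository's implementation), where update stands for the already-computed Adan step $\boldsymbol{\eta}_k \circ (\mathbf{m}_k+(1-\beta_2)\mathbf{v}_k)$:

import torch

def apply_weight_decay(theta, update, lr, weight_decay, no_prox):
    # theta: parameter tensor; update: eta_k * (m_k + (1 - beta2) * v_k)
    if no_prox:
        # AdamW-style decoupled decay: theta <- (1 - lr*wd) * theta - update
        return (1 - lr * weight_decay) * theta - update
    # Default (Algorithm 1 in the paper), proximal-style decay:
    # theta <- (theta - update) / (1 + lr*wd)
    return (theta - update) / (1 + lr * weight_decay)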

Step 2. Create the Adan optimizer as follows. In this step, we can directly replace the vanilla optimizer using the following command:

from adan import Adan
optimizer = Adan(param, lr=args.lr, weight_decay=args.weight_decay, betas=args.opt_betas, eps=args.opt_eps, max_grad_norm=args.max_grad_norm, no_prox=args.no_prox)
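
For context, a minimal end-to-end sketch of the two steps combined; the toy model, data, and hyper-parameter values below are illustrative assumptions, not settings from the paper:

import torch
from adan import Adan

model = torch.nn.Linear(10, 1)                   # toy model
optimizer = Adan(model.parameters(), lr=1e-2,
                 betas=(0.98, 0.92, 0.99), weight_decay=0.02,
                 max_grad_norm=0.0, no_prox=False)

x, y = torch.randn(32, 10), torch.randn(32, 1)   # toy data
for _ in range(5):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()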

2) Tips for Experiments

  • To keep Adan simple, in all experiments except Table 12 in the paper, we do not use the restart strategy in Adan. But Table 12 shows that the restart strategy can further slightly improve Adan's performance.
  • Adan often allows a large peak learning rate at which other optimizers, e.g., Adam and AdamW, often fail. For example, in all experiments except for MAE pre-training and LSTM, the learning rate used by Adan is 5-10 times larger than that in Adam/AdamW.
  • Adan is relatively robust to beta1, beta2, and beta3, especially beta2. If you want better performance, you can first tune beta3 and then beta1.
  • Interestingly, we found that weight_decay = 0.02 is suitable for all experiments in our paper.
  • Adan has a slightly higher GPU memory cost than Adam/AdamW on a single node. However, this can be addressed with the ZeroRedundancyOptimizer, which shares optimizer states across distributed data-parallel processes to reduce the per-process memory footprint (see the sketch after this list). Specifically, when using the ZeroRedundancyOptimizer on more than two GPUs, Adan and Adam consume almost the same amount of memory.
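
A minimal sketch of wrapping Adan with PyTorch's ZeroRedundancyOptimizer; it assumes the distributed process group is already initialized and is not taken from the paper's training scripts:

import torch
from torch.distributed.optim import ZeroRedundancyOptimizer
from adan import Adan

# Assumes torch.distributed.init_process_group(...) has been called
# and this process owns one GPU.
model = torch.nn.Linear(10, 1).cuda()
model = torch.nn.parallel.DistributedDataParallel(model)

# Shard Adan's optimizer states across the data-parallel ranks.
optimizer = ZeroRedundancyOptimizer(
    model.parameters(),
    optimizer_class=Adan,
    lr=1e-2,
    weight_decay=0.02,
)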

3) More detailed steps and results

Please refer to the following links for detailed steps. In these detailed steps, we also include the Docker images for reproducibility.

Model Zoo

Results on vision tasks

For convenience, we provide the configs and log files for the experiments on ImageNet-1k.

Model Epoch Training Setting Acc. (%) Config Batch Size Download
ViT-S 150 I 80.1 config 2048 log/model
ViT-S 150 II 79.6 config 2048 log/model
ViT-S 300 I 81.1 config 2048 log/model
ViT-S 300 II 80.7 config 2048 log/model
ViT-B 150 II 81.7 config 2048 log/model
ViT-B 300 II 82.6 config 2048 log/model
ResNet-50 100 I 78.1 config 2048 log/model
ResNet-50 200 I 79.7 config 2048 log/model
ResNet-50 300 I 80.2 config 2048 log/model
ResNet-101 100 I 80.0 config 2048 log/model
ResNet-101 200 I 81.6 config 2048 log/model
ResNet-101 300 I 81.9 config 2048 log/model
ConvNext-tiny 150 II 81.7 config 2048 log/model
ConvNext-tiny 300 II 82.4 config 2048 log/model
MAE-small 800+100 --- 83.8 config 4096/2048 log-pretrain/log-finetune/model
MAE-Large 800+50 --- 85.9 config 4096/2048 log-pretrain/log-finetune/model

Results on NLP tasks

BERT-base

We give the configs and log files of the BERT-base model pre-trained on the BookCorpus and Wikipedia datasets and fine-tuned on GLUE tasks. Note that we provide the config, log file, and detailed instructions for BERT-base in the folder ./NLP/BERT.

Pretraining Config Batch Size Log Model
Adan config 256 log model

Fine-tuning on GLUE-Task Metric Result Config
CoLA Matthew's corr. 64.6 config
SST-2 Accuracy 93.2 config
STS-B Pearson corr. 89.3 config
QQP Accuracy 91.2 config
MNLI Matched acc./Mismatched acc. 85.7/85.6 config
QNLI Accuracy 91.3 config
RTE Accuracy 73.3 config

For fine-tuning on GLUE tasks, see the total batch size in the corresponding config files.

Transformer-XL-base

We provide the config and log for Transformer-XL-base trained on the WikiText-103 dataset. The total batch size for this experiment is 60*4.

Model Steps Test PPL Download
Baseline (Adam) 200k 24.2 log&config
Transformer-XL-base 50k 26.2 log&config
Transformer-XL-base 100k 24.2 log&config
Transformer-XL-base 200k 23.5 log&config

Results on Large Language Models

GPT2-345m

We provide the config and log for GPT2-345m pre-trained on the dataset that comes from BigCode and evaluated on the HumanEval dataset by zero-shot learning. HumanEval is used to measure functional correctness for synthesizing programs from docstrings. It consists of 164 original programming problems, assessing language comprehension, algorithms, and simple mathematics, with some comparable to simple software interview questions. We set Temperature = 0.8 during evaluation.

Model Steps pass@1 pass@10 pass@100 Download
GPT2-345m (Adam) 300k 0.0840 0.209 0.360 log&config
GPT2-345m (Adan) 150k 0.0843 0.221 0.377 log&config

Adan obtains comparable results at only half the training cost.
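
The pass@k metric above is commonly computed with the unbiased estimator from the original HumanEval paper (Chen et al., 2021); a minimal sketch of that estimator, not necessarily the exact evaluation script used here:

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n samples per problem, c of them correct, budget k."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples per problem, 20 of them correct
# print(pass_at_k(200, 20, 10))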

Results on Diffusion Models

We show the results of the text-to-3D task supported by the DreamFusion project. More visualization results can be found here. Examples generated from the text prompt "Sydney opera house, aerial view" with Adam and Adan:

[Video: opera-adan.mp4]
[Video: opera-adam.mp4]

More Repositories

1. EditAnything (Python, 3,256 stars): Edit anything in images powered by segment-anything, ControlNet, StableDiffusion, etc. (ACM MM)
2. poolformer (Python, 1,290 stars): PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral)
3. envpool (C++, 1,084 stars): C++-based high-performance parallel environment execution engine (vectorized env) for general RL environments.
4. volo (Jupyter Notebook, 922 stars): VOLO: Vision Outlooker for Visual Recognition
5. MDT (Python, 494 stars): Masked Diffusion Transformer is the SOTA for image synthesis. (ICCV 2023)
6. metaformer (Python, 414 stars): MetaFormer Baselines for Vision (TPAMI 2024)
7. lorahub (Python, 380 stars): The official repository of the paper "LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition".
8. mvp (Python, 324 stars): NeurIPS-2021: Direct Multi-view Multi-person 3D Human Pose Estimation
9. CLoT (Python, 290 stars): CVPR'24, official codebase of our paper "Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation".
10. inceptionnext (Python, 245 stars): InceptionNeXt: When Inception Meets ConvNeXt (CVPR 2024)
11. iFormer (Python, 226 stars): iFormer: Inception Transformer
12. ptp (Python, 148 stars): [CVPR2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training"
13. BindDiffusion (Python, 140 stars): BindDiffusion: One Diffusion Model to Bind Them All
14. sailor-llm (Python, 87 stars): ⚓️ Sailor: Open Language Models for South-East Asia
15. FDM (Python, 83 stars): The official PyTorch implementation of Fast Diffusion Model
16. mugs (Python, 78 stars): A PyTorch implementation of Mugs proposed by our paper "Mugs: A Multi-Granular Self-Supervised Learning Framework".
17. Agent-Smith (Python, 69 stars): [ICML2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast
18. sdft (Shell, 67 stars): [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning".
19. symbolic-instruction-tuning (Python, 58 stars): The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning".
20. scaling-with-vocab (Python, 52 stars): 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623
21. ScaleLong (Python, 47 stars): The official repository of the paper "ScaleLong: Towards More Stable Training of Diffusion Model via Scaling Network Long Skip Connection" (NeurIPS 2023)
22. VGT (Python, 44 stars): Video Graph Transformer for Video Question Answering (ECCV'22)
23. jax_xc (Python, 43 stars): Exchange correlation functionals translated from libxc to jax
24. d4ft (Python, 40 stars): A JAX library for Density Functional Theory.
25. finetune-fair-diffusion (Python, 38 stars): Code of the paper: Finetuning Text-to-Image Diffusion Models for Fairness
26. dice (Python, 36 stars): Official implementation of Bootstrapping Language Models via DPO Implicit Rewards
27. ILD (Python, 33 stars): Imitation Learning via Differentiable Physics
28. GP-Nerf (Python, 33 stars): Official implementation for GP-NeRF (ECCV 2022)
29. Consistent3D (Python, 33 stars): The official PyTorch implementation of Consistent3D (CVPR 2024)
30. edp (Python, 32 stars): [NeurIPS 2023] Efficient Diffusion Policy
31. rosmo (Python, 28 stars): Codes for "Efficient Offline Policy Optimization with a Learned Model", ICLR2023
32. MMCBench (Python, 27 stars)
33. GDPO (Python, 24 stars): Graph Diffusion Policy Optimization
34. dualformer (Python, 23 stars)
35. hloenv (C++, 23 stars): An environment based on XLA for deep learning compiler optimization research.
36. DiffMemorize (Python, 21 stars): On Memorization in Diffusion Models
37. optim4rl (Python, 21 stars): Optim4RL is a Jax framework of learning to optimize for reinforcement learning.
38. TEC (Python, 15 stars)
39. numcc (Python, 12 stars): NU-MCC: Multiview Compressive Coding with Neighborhood Decoder and Repulsive UDF
40. PatchAIL (Python, 12 stars): Implementation of PatchAIL in the ICLR 2023 paper "Visual Imitation with Patch Rewards"
41. offbench (Python, 11 stars)
42. OPER (Jupyter Notebook, 11 stars): Code for the paper Offline Prioritized Experience Replay
43. win (Python, 4 stars)
44. P-DoS (Python, 4 stars): [ArXiv 2024] Denial-of-Service Poisoning Attacks on Large Language Models
45. sailcompass (Python, 3 stars)
46. SLRLA-optimizer (Python, 2 stars)
47. Cheating-LLM-Benchmarks (Jupyter Notebook, 2 stars)
48. I-FSJ (Python, 2 stars): Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses
49. MISA (Python, 1 star): [NeurIPS 2023] Mutual Information Regularized Offline Reinforcement Learning