# Nash-MTL
Official implementation of "Multi-Task Learning as a Bargaining Game".
## Setup environment
```bash
conda create -n nashmtl python=3.9.7
conda activate nashmtl
conda install pytorch==1.9.0 torchvision==0.10.0 cudatoolkit=10.2 -c pytorch
conda install pyg -c pyg -c conda-forge
```
Install the repo:
```bash
git clone https://github.com/AvivNavon/nash-mtl.git
cd nash-mtl
pip install -e .
```
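After installation, a quick sanity check can confirm that the pinned PyTorch build and PyTorch Geometric import correctly. This is a minimal sketch for verification only and is not part of the repo:

```python
# Environment sanity check (illustrative, not part of the repo).
import torch
import torch_geometric

print(torch.__version__)            # expected: 1.9.0
print(torch_geometric.__version__)  # installed via the pyg channel
print(torch.cuda.is_available())    # True only if the CUDA 10.2 build matches your driver
```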
## Run experiment
To run experiments:
```bash
cd experiments/<experiment name>
python trainer.py --method=nashmtl
```
Here `<experiment name>` is one of `[toy, quantum_chemistry, nyuv2]`. Follow the instructions in the corresponding experiment README file for more information, e.g., regarding datasets. You can also replace `nashmtl` with one of the MTL methods listed below.
We also support experiment tracking with Weights & Biases with two additional parameters:
```bash
python trainer.py --method=nashmtl --wandb_project=<project-name> --wandb_entity=<entity-name>
```
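For reference, logging per-task losses to Weights & Biases outside the trainer looks roughly like the sketch below. The project name, entity, and metric keys are hypothetical placeholders and do not reflect the repo's actual logging schema:

```python
# Minimal standalone W&B logging sketch (hypothetical names/keys, not the repo's schema).
import wandb

run = wandb.init(project="my-nashmtl-project", entity="my-entity")
for step in range(3):
    # Log illustrative per-task losses under made-up keys.
    wandb.log({"loss/task_0": 1.0 / (step + 1), "loss/task_1": 2.0 / (step + 1)}, step=step)
run.finish()
```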
## MTL methods
We support the following MTL methods with a unified API. To run an experiment with MTL method `X`, simply run:

```bash
python trainer.py --method=X
```
| Method (code name) | Paper (notes) |
| --- | --- |
| Nash-MTL (`nashmtl`) | Multi-Task Learning as a Bargaining Game |
| CAGrad (`cagrad`) | Conflict-Averse Gradient Descent for Multi-task Learning |
| PCGrad (`pcgrad`) | Gradient Surgery for Multi-Task Learning |
| IMTL-G (`imtl`) | Towards Impartial Multi-task Learning |
| MGDA (`mgda`) | Multi-Task Learning as Multi-Objective Optimization |
| DWA (`dwa`) | End-to-End Multi-Task Learning with Attention |
| Uncertainty weighting (`uw`) | Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics |
| Linear scalarization (`ls`) | - (equal weighting) |
| Scale-invariant baseline (`scaleinvls`) | - (see Nash-MTL paper for details) |
| Random Loss Weighting (`rlw`) | A Closer Look at Loss Weighting in Multi-Task Learning |
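To make the weighting baselines at the bottom of the table concrete, here is a minimal sketch of how linear scalarization (`ls`) and random loss weighting (`rlw`) combine per-task losses into a single objective. It is illustrative only and does not reproduce the repo's internal API:

```python
# Illustrative loss-weighting baselines (not the repo's implementation).
import torch
import torch.nn.functional as F

def ls_objective(task_losses):
    # `ls`: sum the task losses with equal weights.
    return torch.stack(task_losses).sum()

def rlw_objective(task_losses):
    # `rlw`: draw random weights each step; a softmax over Gaussian noise is one common choice.
    weights = F.softmax(torch.randn(len(task_losses)), dim=-1)
    return (weights * torch.stack(task_losses)).sum()

# Toy usage with two scalar "losses".
losses = [torch.tensor(0.7, requires_grad=True), torch.tensor(1.3, requires_grad=True)]
ls_objective(losses).backward()
```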
## Citation
If you find `Nash-MTL` to be useful in your own research, please consider citing the following paper:
```bibtex
@article{navon2022multi,
  title={Multi-Task Learning as a Bargaining Game},
  author={Navon, Aviv and Shamsian, Aviv and Achituve, Idan and Maron, Haggai and Kawaguchi, Kenji and Chechik, Gal and Fetaya, Ethan},
  journal={arXiv preprint arXiv:2202.01017},
  year={2022}
}
```