
LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition

The official repository containing the code and pre-trained models for our paper "LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition".

🔥 Updates

  • [2023-9-13]: LoraHub is now available for easy installation via pip install lorahub. For usage instructions regarding the interface, please refer to the example.py file.
  • [2023-8-29]: We released the full reproduction code in reproduce_bbh.py. Please check out the script to reproduce our results!
  • [2023-8-03]: Integrated into Replicate, check out the demo!
  • [2023-7-27]: We released our code and demo. Check it out!
  • [2023-7-26]: We released our paper.

Overview

Low-rank adaptation (LoRA) is a technique for fine-tuning large language models on new tasks. We propose LoraHub, a framework that composes multiple LoRA modules trained on different tasks. The goal is to achieve good performance on unseen tasks using just a few examples, without extra parameters or further training. We also aim to build a marketplace where users can share their trained LoRA modules, making it easy to apply these modules to new tasks.

The figure demonstrates zero-shot learning, few-shot in-context learning, and few-shot LoraHub learning (ours). Note that the Compose procedure is conducted per task rather than per example. Our method achieves inference throughput similar to zero-shot learning, yet on the BIG-Bench Hard (BBH) benchmark it clearly outperforms zero-shot learning and closely approaches the performance of in-context learning (ICL) in few-shot scenarios.


The figure shows the pipeline of LoraHub learning. Our method consists of two stages: the Compose stage and the Adapt stage. During the Compose stage, existing LoRA modules are merged into one unified module, using a set of weights, denoted as w, as coefficients. In the Adapt stage, the merged LoRA module is evaluated on a few examples from the unseen task, and a gradient-free algorithm is then applied to refine w. After K iterations, a highly adapted LoRA module is obtained, which can be combined with the LLM to perform the intended task.
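
To make the two stages concrete, below is a minimal, self-contained numerical sketch of the idea. The toy shapes, the synthetic few-shot loss, and the naive random-search optimizer are all illustrative stand-ins chosen for this sketch; the actual implementation operates on real LoRA modules and uses a dedicated gradient-free optimizer (see algorithm.py).

import numpy as np

rng = np.random.default_rng(0)

# Pretend we have N LoRA modules, each a low-rank update delta_i = B_i @ A_i (toy shapes).
N, d, r = 4, 16, 2
loras = [(rng.normal(size=(d, r)), rng.normal(size=(r, d))) for _ in range(N)]

def compose(w):
    # Compose stage: merge all modules into one unified update using coefficients w.
    return sum(w_i * (B @ A) for w_i, (B, A) in zip(w, loras))

def few_shot_loss(delta):
    # Stand-in for evaluating the merged module on a few examples from the unseen task.
    target = loras[0][0] @ loras[0][1] + 0.5 * (loras[1][0] @ loras[1][1])
    return float(np.mean((delta - target) ** 2))

def objective(w):
    # Few-shot loss plus the default L1 regularization term 0.05 * sum(|w_i|).
    return few_shot_loss(compose(w)) + 0.05 * np.abs(w).sum()

# Adapt stage: gradient-free refinement of w, here a naive random search over K iterations.
w_best = np.zeros(N)
loss_best = objective(w_best)
for _ in range(200):
    w_try = w_best + 0.1 * rng.normal(size=N)
    if objective(w_try) < loss_best:
        w_best, loss_best = w_try, objective(w_try)

print("learned weights:", np.round(w_best, 2), "objective:", round(loss_best, 4))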


⚡️ Quickstart

You can install lorahub using

pip install lorahub

Then you can run LoraHub learning by simply calling the function lorahub_learning:

from lorahub.algorithm import lorahub_learning, lorahub_inference
from lorahub.constant import LORA_MODULE_NAMES
import random


def get_examples_for_learning():
    """
    Get a few examples to learn to compose given LoRA modules
    """
    return [
        {"input":
            "Infer the date from context.\n\nQ: Jane is celebrating the last day of Jan 2012. What is the date tomorrow in MM/DD/YYYY?\nOptions:\n(A) 02/02/2012\n(B) 02/15/2012\n(C) 01/25/2012\n(D) 04/22/2012\n(E) 02/01/2012\n(F) 02/11/2012\nA:", "output": "(E)"}
    ]

def get_lora_module_list():
    """
    You can have a custom filtering strategy to select the modules to be used in the composition. Here we randomly select 20 modules.
    """
    random.seed(42)
    return random.sample(LORA_MODULE_NAMES, 20)


# get a list of modules to be used in the composition
modules = get_lora_module_list()
print("modules:", modules)

# construct input list and output list
example_inputs, example_outputs = [], []
for example in get_examples_for_learning():
    example_inputs.append(example["input"])
    example_outputs.append(example["output"])

# perform LoraHub learning
module_weights, model, tokenizer = lorahub_learning(lora_module_list=modules,
                                                    example_inputs=example_inputs,
                                                    example_outputs=example_outputs,
                                                    max_inference_step=40,
                                                    batch_size=1)

print("module_weights:", module_weights)

The lorahub_learning function has the following interface:

lorahub_learning(lora_module_list: List[str], # list of lora candidates
                 example_inputs: List[str],
                 example_outputs: List[str],
                 max_inference_step: int, 
                 model_name_or_path=None, # if not given, we will use the model_name_or_path in lora config
                 batch_size=None, 
                 get_loss=default_get_loss, # the function to get the objective for optimization; uses loss by default (can be changed to something like accuracy or similarity)
                 get_regular=default_l1_regularization,  # the function to get the regularization term for the weights; uses 0.05*|w_i| by default
                 seed=42)

A full example can be found in example.py.
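
As a quick follow-up, here is a hedged sketch of querying the adapted model returned above. It continues the quickstart snippet (model and tokenizer come from the lorahub_learning call), and it assumes they behave like standard Hugging Face seq2seq model/tokenizer objects; that is an assumption on our part, and the supported inference path is the lorahub_inference helper demonstrated in example.py.

# Continues the quickstart snippet above: `model` and `tokenizer` are the objects
# returned by lorahub_learning. Treating them as plain Hugging Face seq2seq objects
# is an assumption; see example.py for the lorahub_inference helper.
import torch

prompt = ("Infer the date from context.\n\n"
          "Q: Jane is celebrating the last day of Jan 2012. "
          "What is the date tomorrow in MM/DD/YYYY?\nA:")
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))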

🌲 Project Structure

The lorahub source code is organized as follows:

|-- lorahub
|   |-- algorithm.py   # main code for lorahub learning and inference
|   |-- constant.py    # lora candidate module names
|-- example.py         # usage code for demonstration purposes

Resource

LoRA Candidates

Our method requires a collection of LoRA modules trained on preceding tasks. For parity with Flan, we adopt the tasks used to instruct Flan-T5, incorporating nearly 196 distinct tasks and their corresponding instructions via https://huggingface.co/datasets/conceptofmind/FLAN_2022. We then trained several LoRA modules as candidates for composition. These LoRA modules can be accessed at https://huggingface.co/models?search=lorahub.
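
For illustration, here is a hedged sketch of loading one of these candidate modules with the Hugging Face peft library. The module id below is a placeholder (browse the link above for actual names), and the assumption that the candidates are standard peft LoRA adapters on a Flan-T5 base is ours, not a documented guarantee.

# Illustrative sketch only. Replace the placeholder id with a real module from
# https://huggingface.co/models?search=lorahub. Assumes the candidates are standard
# peft LoRA adapters trained on top of a Flan-T5 base model (an assumption).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
lora_model = PeftModel.from_pretrained(base_model, "lorahub/<candidate-module-name>")  # placeholder id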

💬 Citation

If you find our work useful, please consider citing our paper:

@misc{huang2023lorahub,
    title={LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition}, 
    author={Chengsong Huang and Qian Liu and Bill Yuchen Lin and Tianyu Pang and Chao Du and Min Lin},
    year={2023},
    eprint={2307.13269},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

More Repositories

1. EditAnything (Python, 3,256 stars): Edit anything in images powered by segment-anything, ControlNet, StableDiffusion, etc. (ACM MM)
2. poolformer (Python, 1,290 stars): PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral)
3. envpool (C++, 1,084 stars): C++-based high-performance parallel environment execution engine (vectorized env) for general RL environments.
4. volo (Jupyter Notebook, 922 stars): VOLO: Vision Outlooker for Visual Recognition
5. Adan (Python, 743 stars): Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models
6. MDT (Python, 494 stars): Masked Diffusion Transformer is the SOTA for image synthesis. (ICCV 2023)
7. metaformer (Python, 414 stars): MetaFormer Baselines for Vision (TPAMI 2024)
8. mvp (Python, 324 stars): NeurIPS-2021: Direct Multi-view Multi-person 3D Human Pose Estimation
9. CLoT (Python, 290 stars): CVPR'24, Official Codebase of our Paper: "Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation".
10. inceptionnext (Python, 245 stars): InceptionNeXt: When Inception Meets ConvNeXt (CVPR 2024)
11. iFormer (Python, 226 stars): iFormer: Inception Transformer
12. ptp (Python, 148 stars): [CVPR2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training"
13. BindDiffusion (Python, 140 stars): BindDiffusion: One Diffusion Model to Bind Them All
14. sailor-llm (Python, 87 stars): ⚓️ Sailor: Open Language Models for South-East Asia
15. FDM (Python, 83 stars): The official PyTorch implementation of Fast Diffusion Model
16. mugs (Python, 78 stars): A PyTorch implementation of Mugs proposed by our paper "Mugs: A Multi-Granular Self-Supervised Learning Framework".
17. Agent-Smith (Python, 69 stars): [ICML2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast
18. sdft (Shell, 67 stars): [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning".
19. symbolic-instruction-tuning (Python, 58 stars): The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning".
20. scaling-with-vocab (Python, 52 stars): 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623
21. ScaleLong (Python, 47 stars): The official repository of paper "ScaleLong: Towards More Stable Training of Diffusion Model via Scaling Network Long Skip Connection" (NeurIPS 2023)
22. VGT (Python, 44 stars): Video Graph Transformer for Video Question Answering (ECCV'22)
23. jax_xc (Python, 43 stars): Exchange correlation functionals translated from libxc to jax
24. d4ft (Python, 40 stars): A JAX library for Density Functional Theory.
25. finetune-fair-diffusion (Python, 38 stars): Code of the paper: Finetuning Text-to-Image Diffusion Models for Fairness
26. dice (Python, 36 stars): Official implementation of Bootstrapping Language Models via DPO Implicit Rewards
27. ILD (Python, 33 stars): Imitation Learning via Differentiable Physics
28. GP-Nerf (Python, 33 stars): Official implementation for GP-NeRF (ECCV 2022)
29. Consistent3D (Python, 33 stars): The official PyTorch implementation of Consistent3D (CVPR 2024)
30. edp (Python, 32 stars): [NeurIPS 2023] Efficient Diffusion Policy
31. rosmo (Python, 28 stars): Codes for "Efficient Offline Policy Optimization with a Learned Model", ICLR2023
32. MMCBench (Python, 27 stars)
33. GDPO (Python, 24 stars): Graph Diffusion Policy Optimization
34. dualformer (Python, 23 stars)
35. hloenv (C++, 23 stars): An environment based on XLA for deep learning compiler optimization research.
36. DiffMemorize (Python, 21 stars): On Memorization in Diffusion Models
37. optim4rl (Python, 21 stars): Optim4RL is a Jax framework of learning to optimize for reinforcement learning.
38. TEC (Python, 15 stars)
39. numcc (Python, 12 stars): NU-MCC: Multiview Compressive Coding with Neighborhood Decoder and Repulsive UDF
40. PatchAIL (Python, 12 stars): Implementation of PatchAIL in the ICLR 2023 paper "Visual Imitation with Patch Rewards"
41. offbench (Python, 11 stars)
42. OPER (Jupyter Notebook, 11 stars): Code for the paper Offline Prioritized Experience Replay
43. win (Python, 4 stars)
44. P-DoS (Python, 4 stars): [ArXiv 2024] Denial-of-Service Poisoning Attacks on Large Language Models
45. sailcompass (Python, 3 stars)
46. SLRLA-optimizer (Python, 2 stars)
47. Cheating-LLM-Benchmarks (Jupyter Notebook, 2 stars)
48. I-FSJ (Python, 2 stars): Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses
49. MISA (Python, 1 star): [NeurIPS 2023] Mutual Information Regularized Offline Reinforcement Learning