[TMLR 2024] Efficient Large Language Models: A Survey

Efficient Large Language Models: A Survey [arXiv] (Version 2 released on 12/23/2023; Version 1 released on 12/06/2023)

Zhongwei Wan1, Xin Wang1, Che Liu2, Samiul Alam1, Yu Zheng3, Jiachen Liu4, Zhongnan Qu5, Shen Yan6, Yi Zhu7, Quanlu Zhang8, Mosharaf Chowdhury4, Mi Zhang1

1The Ohio State University, 2Imperial College London, 3Michigan State University, 4University of Michigan, 5Amazon AWS AI, 6Google Research, 7Boson AI, 8Microsoft Research Asia

📌 What is This Survey About?

Large Language Models (LLMs) have demonstrated remarkable capabilities in many important tasks and have the potential to make a substantial impact on our society. Such capabilities, however, come with considerable resource demands, highlighting the strong need to develop effective techniques for addressing the efficiency challenges posed by LLMs. In this survey, we provide a systematic and comprehensive review of efficient LLMs research. We organize the literature in a taxonomy consisting of three main categories, covering distinct yet interconnected efficient LLMs topics from the model-centric, data-centric, and framework-centric perspectives, respectively. We hope our survey and this GitHub repository can serve as valuable resources to help researchers and practitioners gain a systematic understanding of the research developments in efficient LLMs and inspire them to contribute to this important and exciting field.

We will actively maintain this repository and update the survey by incorporating new research as it emerges.

🤔 Why Are Efficient LLMs Needed?

![Figure 1: Model performance versus training GPU hours (left) and versus inference throughput (right) for LLaMA-series models and Mistral-7B](img/image.jpg)

Although LLMs are leading the next wave of the AI revolution, their remarkable capabilities come at the cost of substantial resource demands. Figure 1 (left) illustrates the relationship between model performance and model training time, measured in GPU hours, for the LLaMA series, where the size of each circle is proportional to the number of model parameters. As shown, although larger models achieve better performance, the GPU hours required to train them grow dramatically as model size scales up. In addition to training, inference also contributes significantly to the operational cost of LLMs. Figure 1 (right) depicts the relationship between model performance and inference throughput. Similarly, scaling up the model size enables better performance but comes at the cost of lower inference throughput (higher inference latency), presenting challenges in extending these models to a broader customer base and diverse applications in a cost-effective way. The high resource demands of LLMs highlight the strong need to develop techniques to enhance their efficiency. As shown in Figure 1 (right), compared to LLaMA-1-33B, Mistral-7B, which uses grouped-query attention and sliding window attention to speed up inference, achieves comparable performance with much higher throughput. This superiority highlights the feasibility and significance of designing efficiency techniques for LLMs.
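
To make the inference-cost argument concrete, here is a back-of-the-envelope sketch (illustrative only, not from the survey) that estimates per-sequence KV-cache memory for a LLaMA-7B-like decoder with and without grouped-query attention; the layer count, head count, and head dimension are assumed values chosen to resemble such a model.

```python
# Illustrative KV-cache estimate for a LLaMA-7B-like decoder (assumed dimensions).
def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=32, head_dim=128, bytes_per_elem=2):
    # Two tensors (K and V) per layer, each of shape [seq_len, n_kv_heads, head_dim], in fp16.
    return 2 * n_layers * seq_len * n_kv_heads * head_dim * bytes_per_elem

mha = kv_cache_bytes(seq_len=4096, n_kv_heads=32)  # one K/V head per query head
gqa = kv_cache_bytes(seq_len=4096, n_kv_heads=8)   # 8 shared K/V heads (grouped-query attention)
print(f"MHA KV cache: {mha / 2**30:.2f} GiB, GQA KV cache: {gqa / 2**30:.2f} GiB")
```

Under these assumptions, grouped-query attention shrinks the per-sequence KV cache by 4x, and a sliding window of W tokens would further cap `seq_len` at W in the same formula, which is why such designs raise serving throughput.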

📖 Table of Contents

🤖 Model-Centric Methods

Model Compression

Quantization

Post-Training Quantization
Weight-Only Quantization
  • GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers, ICLR, 2023 [Paper] [Code]
  • QuIP: 2-Bit Quantization of Large Language Models With Guarantees, arXiv, 2023 [Paper] [Code]
  • AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration, arXiv, 2023 [Paper] [Code]
  • OWQ: Lessons Learned from Activation Outliers for Weight Quantization in Large Language Models, arXiv, 2023 [Paper] [Code]
  • SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression, arXiv, 2023 [Paper] [Code]
  • FineQuant: Unlocking Efficiency with Fine-Grained Weight-Only Quantization for LLMs, NeurIPS-ENLSP, 2023 [Paper]
  • LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale, NeurIPS, 2022 [Paper] [Code]
  • Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning, NeurIPS, 2022 [Paper] [Code]
Weight-Activation Co-Quantization
  • Intriguing Properties of Quantization at Scale, NeurIPS, 2023 [Paper]
  • ZeroQuant-V2: Exploring Post-training Quantization in LLMs from Comprehensive Study to Low Rank Compensation, arXiv, 2023 [Paper] [Code]
  • ZeroQuant-FP: A Leap Forward in LLMs Post-Training W4A8 Quantization Using Floating-Point Formats, NeurIPS-ENLSP, 2023 [Paper] [Code]
  • OliVe: Accelerating Large Language Models via Hardware-friendly Outlier-Victim Pair Quantization, ISCA, 2023 [Paper] [Code]
  • RPTQ: Reorder-based Post-training Quantization for Large Language Models, arXiv, 2023 [Paper] [Code]
  • Outlier Suppression+: Accurate Quantization of Large Language Models by Equivalent and Optimal Shifting and Scaling, arXiv, 2023 [Paper] [Code]
  • QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models, arXiv, 2023 [Paper]
  • SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models, ICML, 2023 [Paper] [Code]
  • ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers, NeurIPS, 2022 [Paper]
Quantization-Aware Training
  • BitNet: Scaling 1-bit Transformers for Large Language Models, arXiv, 2023 [Paper]
  • LLM-QAT: Data-Free Quantization Aware Training for Large Language Models, arXiv, 2023 [Paper] [Code]
  • Compression of Generative Pre-trained Language Models via Quantization, ACL, 2022 [Paper]
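
As a minimal illustration of the weight-only post-training quantization idea surveyed above, the sketch below applies simple group-wise round-to-nearest (RTN) INT4 quantization with per-group scales; it is a toy baseline for intuition, not the GPTQ, AWQ, or SpQR algorithms themselves.

```python
import numpy as np

def quantize_rtn_int4(w, group_size=128):
    """Toy group-wise round-to-nearest INT4 quantization (symmetric, per-group scale)."""
    groups = w.reshape(-1, group_size)                        # [n_groups, group_size]
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0   # map each group into the INT4 range
    q = np.clip(np.round(groups / scale), -8, 7)              # integer codes in [-8, 7]
    return q.astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale                       # approximate reconstruction

w = np.random.randn(4096 * 4096).astype(np.float32)           # a toy weight matrix, flattened
q, scale = quantize_rtn_int4(w)
err = np.abs(dequantize(q, scale) - w.reshape(-1, 128)).mean()
print(f"mean absolute reconstruction error: {err:.4f}")       # ~4 bits per weight plus one scale per group
```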

Parameter Pruning

Structured Pruning
  • LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery, arXiv, 2023 [Paper]
  • LLM-Pruner: On the Structural Pruning of Large Language Models, NeurIPS, 2023 [Paper] [Code]
  • Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning, NeurIPS-ENLSP, 2023 [Paper] [Code]
  • LoRAPrune: Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning, arXiv, 2023 [Paper]
Unstructured Pruning
  • SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot, ICML, 2023 [Paper] [Code]
  • A Simple and Effective Pruning Approach for Large Language Models, arXiv, 2023 [Paper] [Code]
  • One-Shot Sensitivity-Aware Mixed Sparsity Pruning for Large Language Models, arXiv, 2023 [Paper]

Low-Rank Approximation

  • The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction, arXiv, 2023 [Paper]
  • TensorGPT: Efficient Compression of the Embedding Layer in LLMs based on the Tensor-Train Decomposition, arXiv, 2023 [Paper]
  • LoSparse: Structured Compression of Large Language Models based on Low-Rank and Sparse Approximation, ICML, 2023 [Paper] [Code]

Knowledge Distillation

White-Box KD
  • Towards the Law of Capacity Gap in Distilling Language Models, arXiv, 2023 [Paper] [Code]
  • Baby Llama: Knowledge Distillation from an Ensemble of Teachers Trained on a Small Dataset with no Performance Penalty, arXiv, 2023 [Paper]
  • Knowledge Distillation of Large Language Models, arXiv, 2023 [Paper] [Code]
  • GKD: Generalized Knowledge Distillation for Auto-regressive Sequence Models, arXiv, 2023 [Paper]
  • Propagating Knowledge Updates to LMs Through Distillation, arXiv, 2023 [Paper] [Code]
  • Less is More: Task-aware Layer-wise Distillation for Language Model Compression, ICML, 2023 [Paper]
  • Token-Scaled Logit Distillation for Ternary Weight Generative Language Models, arXiv, 2023 [Paper]
Black-Box KD
  • Zephyr: Direct Distillation of LM Alignment, arXiv, 2023 [Paper]
  • Instruction Tuning with GPT-4, arXiv, 2023 [Paper] [Code]
  • Lion: Adversarial Distillation of Closed-Source Large Language Model, arXiv, 2023 [Paper] [Code]
  • Specializing Smaller Language Models towards Multi-Step Reasoning, ICML, 2023 [Paper] [Code]
  • Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes, ACL, 2023 [Paper]
  • Large Language Models Are Reasoning Teachers, ACL, 2023 [Paper] [Code]
  • SCOTT: Self-Consistent Chain-of-Thought Distillation, ACL, 2023 [Paper] [Code]
  • Symbolic Chain-of-Thought Distillation: Small Models Can Also "Think" Step-by-Step, ACL, 2023 [Paper]
  • Distilling Reasoning Capabilities into Smaller Language Models, ACL, 2023 [Paper] [Code]
  • In-context Learning Distillation: Transferring Few-shot Learning Ability of Pre-trained Language Models, arXiv, 2022 [Paper]
  • Explanations from Large Language Models Make Small Reasoners Better, arXiv, 2022 [Paper]
  • DISCO: Distilling Counterfactuals with Large Language Models, arXiv, 2022 [Paper] [Code]
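
The white-box KD entries above generally train the student on the teacher's full output distribution rather than only its generated text. As a generic illustration (not the objective of any specific paper listed here), the sketch below computes a temperature-scaled KL distillation loss between teacher and student logits.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)                  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_kl(student_logits, teacher_logits, T=2.0):
    """Forward KL(teacher || student) over the vocabulary, averaged over positions."""
    p_t = softmax(teacher_logits, T)                       # soft targets from the teacher
    log_p_s = np.log(softmax(student_logits, T) + 1e-12)
    kl = (p_t * (np.log(p_t + 1e-12) - log_p_s)).sum(axis=-1)
    return (T * T) * kl.mean()                             # T^2 scaling, as in standard KD

student = np.random.randn(8, 32000)   # toy [positions, vocab_size] logits
teacher = np.random.randn(8, 32000)
print(f"distillation loss: {distillation_kl(student, teacher):.4f}")
```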

Efficient Pre-Training

Mixed Precision Acceleration

  • GACT: Activation Compressed Training for Generic Network Architectures, ICML, 2022 [Paper] [Code]
  • Mesa: A Memory-saving Training Framework for Transformers, arXiv, 2021 [Paper] [Code]
  • Bfloat16 Processing for Neural Networks, ARITH, 2019 [Paper]
  • A Study of BFLOAT16 for Deep Learning Training, arXiv, 2019 [Paper]
  • Mixed Precision Training, ICLR, 2018 [Paper]
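
The mixed-precision papers above keep most compute in 16-bit formats while protecting the optimizer state. Below is a minimal sketch of that pattern (fp16 compute, loss scaling, fp32 master weights), with a toy gradient standing in for a real backward pass.

```python
import numpy as np

def mixed_precision_step(w_fp32, scaled_grad_fn, lr=1e-2, loss_scale=1024.0):
    """Toy mixed-precision update: fp16 compute with loss scaling, fp32 master weights."""
    w_fp16 = w_fp32.astype(np.float16)                        # low-precision copy for forward/backward
    scaled_grad = scaled_grad_fn(w_fp16, loss_scale)          # gradient of the scaled loss, in fp16
    grad_fp32 = scaled_grad.astype(np.float32) / loss_scale   # unscale in full precision
    return w_fp32 - lr * grad_fp32                            # optimizer step on the fp32 master copy

w = np.random.randn(1024).astype(np.float32)
toy_scaled_grad = lambda w16, s: 2 * w16 * np.float16(s)      # d/dw of s * sum(w^2), a toy loss
w = mixed_precision_step(w, toy_scaled_grad)
print(w.dtype)                                                # float32 -- master weights stay in full precision
```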

Scaling Models

  • Learning to Grow Pretrained Models for Efficient Transformer Training, ICLR, 2023 [Paper] [Code]
  • 2x Faster Language Model Pre-training via Masked Structural Growth, arXiv, 2023 [Paper]
  • Reusing Pretrained Models by Multi-linear Operators for Efficient Training, NeurIPS, 2023 [Paper]
  • FLM-101B: An Open LLM and How to Train It with $100K Budget, arXiv, 2023 [Paper] [Code]
  • Knowledge Inheritance for Pre-trained Language Models, NAACL, 2022 [Paper] [Code]
  • Staged Training for Transformer Language Models, ICML, 2022 [Paper] [Code]

Initialization Techniques

  • DeepNet: Scaling Transformers to 1,000 Layers, arXiv, 2022 [Paper] [Code]
  • ZerO Initialization: Initializing Neural Networks with only Zeros and Ones, TMLR, 2022 [Paper] [Code]
  • Rezero is All You Need: Fast Convergence at Large Depth, UAI, 2021 [Paper] [Code]
  • Batch Normalization Biases Residual Blocks Towards the Identity Function in Deep Networks, NeurIPS, 2020 [Paper]
  • Improving Transformer Optimization Through Better Initialization, ICML, 2020 [Paper] [Code]
  • Fixup Initialization: Residual Learning without Normalization, ICLR, 2019 [Paper]
  • On Weight Initialization in Deep Neural Networks, arXiv, 2017 [Paper]

Optimization Strategies

  • Symbolic Discovery of Optimization Algorithms, arXiv, 2023 [Paper]
  • Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training, arXiv, 2023 [Paper] [Code]

Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning

Adapter-based Tuning
  • OpenDelta: A Plug-and-play Library for Parameter-efficient Adaptation of Pre-trained Models, ACL Demo, 2023 [Paper] [Code]
  • LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models, EMNLP, 2023 [Paper] [Code]
  • Compacter: Efficient Low-Rank Hypercomplex Adapter Layers, NeurIPS, 2023 [Paper] [Code]
  • Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning, NeurIPS, 2022 [Paper] [Code]
  • Meta-Adapters: Parameter Efficient Few-shot Fine-tuning through Meta-Learning, AutoML, 2022 [Paper]
  • AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning, EMNLP, 2022 [Paper] [Code]
  • SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters, EMNLP, 2022 [Paper] [Code]
Low-Rank Adaptation
  • LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning, arXiv, 2023 [Paper]
  • LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition, arXiv, 2023 [Paper] [Code]
  • LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models, arXiv, 2023 [Paper] [Code]
  • Multi-Head Adapter Routing for Cross-Task Generalization, NeurIPS, 2023 [Paper] [Code]
  • Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning, ICLR, 2023 [Paper]
  • DyLoRA: Parameter-Efficient Tuning of Pretrained Models using Dynamic Search-Free Low Rank Adaptation, EACL, 2023 [Paper] [Code]
  • Tied-LoRA: Enhancing Parameter Efficiency of LoRA with Weight Tying, arXiv, 2023 [Paper]
  • LoRA: Low-Rank Adaptation of Large Language Models, ICLR, 2022 [Paper] [Code]
Prefix Tuning
  • LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention, arXiv, 2023 [Paper] [Code]
  • Prefix-Tuning: Optimizing Continuous Prompts for Generation, ACL, 2021 [Paper] [Code]
Prompt Tuning
  • Compress, Then Prompt: Improving Accuracy-Efficiency Trade-off of LLM Inference with Transferable Prompt, arXiv, 2023 [Paper]
  • GPT Understands, Too, AI Open, 2023 [Paper] [Code]
  • Multi-Task Pre-Training of Modular Prompt for Few-Shot Learning, ACL, 2023 [Paper] [Code]
  • Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning, ICLR, 2023 [Paper]
  • PPT: Pre-trained Prompt Tuning for Few-shot Learning, ACL, 2022 [Paper] [Code]
  • Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers, EMNLP-Findings, 2022 [Paper] [Code]
  • P-Tuning v2: Prompt Tuning Can Be Comparable to Finetuning Universally Across Scales and Tasks, ACL-Short, 2022 [Paper] [Code]
  • The Power of Scale for Parameter-Efficient Prompt Tuning, EMNLP, 2021 [Paper]
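
Many of the parameter-efficient methods above, most prominently the LoRA family, attach small trainable modules to a frozen backbone. The sketch below is a toy LoRA-style linear layer (illustrative only, not the reference implementation): the frozen weight W is augmented with a low-rank update scaled by alpha / r, and only A and B would be trained.

```python
import numpy as np

class LoRALinear:
    """Toy LoRA layer: y = x @ W.T + (alpha / r) * x @ A.T @ B.T, with W frozen."""
    def __init__(self, d_in, d_out, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) * 0.02   # frozen pretrained weight
        self.A = rng.standard_normal((r, d_in)) * 0.01       # trainable low-rank factor
        self.B = np.zeros((d_out, r))                        # trainable, zero-initialized
        self.scaling = alpha / r

    def forward(self, x):
        return x @ self.W.T + self.scaling * (x @ self.A.T) @ self.B.T

layer = LoRALinear(d_in=4096, d_out=4096, r=8)
x = np.random.randn(2, 4096)
print(layer.forward(x).shape)   # (2, 4096); identical to the frozen layer at init since B is zero
```

Only A and B (2 x r x d parameters per adapted layer) are updated during fine-tuning, which is what makes such adapters cheap to train, store, and swap.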

Memory-Efficient Fine-Tuning

  • Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model, NeurIPS, 2023 [Paper] [Code]
  • Memory-Efficient Selective Fine-Tuning, ICML Workshop, 2023 [Paper]
  • Full Parameter Fine-tuning for Large Language Models with Limited Resources, arXiv, 2023 [Paper] [Code]
  • Fine-Tuning Language Models with Just Forward Passes, NeurIPS, 2023 [Paper] [Code]
  • Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization, NeurIPS, 2023 [Paper]
  • LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models, arXiv, 2023 [Paper] [Code]
  • QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models, arXiv, 2023 [Paper] [Code]
  • QLoRA: Efficient Finetuning of Quantized LLMs, NeurIPS, 2023 [Paper] [Code1] [Code2]

Efficient Inference

Speculative Decoding

  • PaSS: Parallel Speculative Sampling, NeurIPS Workshop, 2023 [Paper]
  • Accelerating Transformer Inference for Translation via Parallel Decoding, ACL, 2023 [Paper] [Code]
  • Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads, Blog, 2023 [Blog] [Code]
  • Fast Inference from Transformers via Speculative Decoding, ICML, 2023 [Paper]
  • Accelerating LLM Inference with Staged Speculative Decoding, ICML Workshop, 2023 [Paper]
  • Accelerating Large Language Model Decoding with Speculative Sampling, arXiv, 2023 [Paper]
  • Speculative Decoding with Big Little Decoder, NeurIPS, 2023 [Paper] [Code]
  • SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification, arXiv, 2023 [Paper] [Code]
  • Inference with Reference: Lossless Acceleration of Large Language Models, arXiv, 2023 [Paper] [Code]
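
The papers above share a draft-then-verify structure: a cheap draft model proposes several tokens and the large target model checks them. The sketch below illustrates only the greedy-verification special case with two hypothetical toy "models"; the rejection-sampling schemes in the listed papers are more involved, and a real system would verify all drafted tokens with a single batched target forward pass.

```python
# Greedy draft-then-verify sketch; `draft_next` and `target_next` are hypothetical
# stand-ins for a small draft model and a large target model.
def draft_next(prefix):
    return (sum(prefix) * 31 + 7) % 100    # toy deterministic "small model"

def target_next(prefix):
    # Agrees with the draft most of the time, disagrees when len(prefix) is a multiple of 5.
    return (sum(prefix) * 31 + 7) % 100 if len(prefix) % 5 else (sum(prefix) + 1) % 100

def speculative_decode(prefix, n_new_tokens, k=4):
    out = list(prefix)
    while len(out) - len(prefix) < n_new_tokens:
        drafted = []
        for _ in range(k):                               # draft k tokens cheaply
            drafted.append(draft_next(out + drafted))
        accepted = []
        for i, tok in enumerate(drafted):                # verify against the target model
            if target_next(out + drafted[:i]) == tok:
                accepted.append(tok)                     # accepted draft token
            else:
                accepted.append(target_next(out + drafted[:i]))  # target's correction, then redraft
                break
        out.extend(accepted)
    return out[:len(prefix) + n_new_tokens]

print(speculative_decode([1, 2, 3], n_new_tokens=10))
```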

KV-Cache Optimization

  • Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs, arXiv, 2023 [Paper]
  • SkipDecode: Autoregressive Skip Decoding with Batching and Caching for Efficient LLM Inference, arXiv, 2023 [Paper]
  • H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models, NeurIPS, 2023 [Paper]
  • Scissorhands: Exploiting the Persistence of Importance Hypothesis for LLM KV Cache Compression at Test Time, NeurIPS, 2023 [Paper]
  • Dynamic Context Pruning for Efficient and Interpretable Autoregressive Transformers, arXiv, 2023 [Paper]
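
A common thread in the KV-cache papers above is keeping only the cache entries that matter for future attention. The sketch below is a simplified eviction policy in that spirit (loosely inspired by the heavy-hitter idea in H2O and Scissorhands, but not their exact algorithms): retain the most recent tokens plus the tokens that have accumulated the most attention.

```python
import numpy as np

def evict_kv(attn_scores, keep_recent=4, keep_heavy=4):
    """Toy KV-cache eviction: keep recent tokens plus accumulated-attention 'heavy hitters'."""
    n_tokens = attn_scores.shape[-1]
    accumulated = attn_scores.sum(axis=0)                          # attention mass each cached token received
    recent = set(range(max(0, n_tokens - keep_recent), n_tokens))  # always keep a recency window
    heavy = set(np.argsort(accumulated)[::-1][:keep_heavy].tolist())
    return sorted(recent | heavy)                                  # indices of KV entries to retain

scores = np.random.rand(16, 32)    # toy [decode_steps, cached_tokens] attention weights
print(evict_kv(scores))            # a handful of heavy hitters plus the last 4 positions
```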

Efficient Architecture

Efficient Attention

Sharing-based Attention
  • GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints, EMNLP, 2023 [Paper]
  • Fast Transformer Decoding: One Write-Head is All You Need, arXiv, 2019 [Paper]
Feature Information Reduction
  • Nyströmformer: A Nyström-based Algorithm for Approximating Self-Attention, AAAI, 2021 [Paper] [Code]
  • Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing, NeurIPS, 2020 [Paper] [Code]
  • Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks, ICML, 2019 [Paper]
Kernelization or Low-Rank
  • Sumformer: Universal Approximation for Efficient Transformers, ICML Workshop, 2023 [Paper]
  • FLuRKA: Fast fused Low-Rank & Kernel Attention, arXiv, 2023 [Paper]
  • Scatterbrain: Unifying Sparse and Low-rank Attention, NeurIPS, 2021 [Paper] [Code]
  • Rethinking Attention with Performers, ICLR, 2021 [Paper] [Code]
  • Random Feature Attention, ICLR, 2021 [Paper]
  • Linformer: Self-Attention with Linear Complexity, arXiv, 2020 [Paper] [Code]
  • Lightweight and Efficient End-to-End Speech Recognition Using Low-Rank Transformer, ICASSP, 2020 [Paper]
  • Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention, ICML, 2020 [Paper] [Code]
Fixed Pattern Strategies
  • Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models, arXiv, 2024 [Paper] [Code]
  • Faster Causal Attention Over Large Sequences Through Sparse Flash Attention, ICML Workshop, 2023 [Paper]
  • Poolingformer: Long Document Modeling with Pooling Attention, ICML, 2021 [Paper]
  • Big Bird: Transformers for Longer Sequences, NeurIPS, 2020 [Paper] [Code]
  • Longformer: The Long-Document Transformer, arXiv, 2020 [Paper] [Code]
  • Blockwise Self-Attention for Long Document Understanding, EMNLP, 2020 [Paper] [Code]
  • Generating Long Sequences with Sparse Transformers, arXiv, 2019 [Paper]
Learnable Pattern Strategies
  • HyperAttention: Long-context Attention in Near-Linear Time, arXiv, 2023 [Paper] [Code]
  • ClusterFormer: Neural Clustering Attention for Efficient and Effective Transformer, ACL, 2022 [Paper]
  • Reformer: The Efficient Transformer, ICLR, 2020 [Paper] [Code]
  • Sparse Sinkhorn Attention, ICML, 2020 [Paper]
  • Fast Transformers with Clustered Attention, NeurIPS, 2020 [Paper] [Code]
  • Efficient Content-Based Sparse Attention with Routing Transformers, TACL, 2020 [Paper] [Code]
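
Several of the fixed-pattern strategies above (e.g., Longformer- and Mistral-style local attention) restrict each query to a bounded neighborhood so that attention cost grows linearly with sequence length. The toy mask below illustrates a causal sliding window; it is a sketch of the pattern, not any listed paper's implementation.

```python
import numpy as np

def sliding_window_mask(seq_len, window=4):
    """Causal sliding-window mask: token i may attend to tokens in (i - window, i]."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

print(sliding_window_mask(8, window=3).astype(int))   # each row has at most 3 ones
```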

Mixture of Experts

MoE-based LLMs
  • Mixtral of Experts, arXiv, 2024 [Paper] [Code]
  • Mistral 7B, arXiv, 2023 [Paper] [Code]
  • PanGu-Σ: Towards Trillion Parameter Language Model with Sparse Heterogeneous Computing, arXiv, 2023 [Paper]
  • Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity, JMLR, 2022 [Paper] [Code]
  • Efficient Large Scale Language Modeling with Mixtures of Experts, EMNLP, 2022 [Paper] [Code]
  • BASE Layers: Simplifying Training of Large, Sparse Models, ICML, 2021 [Paper] [Code]
  • GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding, ICLR, 2021 [Paper]
Algorithm-Level MoE Optimization
  • Lifelong Language Pretraining with Distribution-Specialized Experts, ICML, 2023 [Paper]
  • Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models, arXiv, 2023 [Paper]
  • Mixture-of-Experts with Expert Choice Routing, NeurIPS, 2022 [Paper]
  • StableMoE: Stable Routing Strategy for Mixture of Experts, ACL, 2022 [Paper] [Code]
  • On the Representation Collapse of Sparse Mixture of Experts, NeurIPS, 2022 [Paper]
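
The sparse MoE models listed above activate only a few experts per token via a learned router. Below is a toy token-level top-k routing layer in that spirit (Switch/Mixtral-style top-k gating with renormalized probabilities); the random experts and router weights are placeholders, not any model's actual parameters.

```python
import numpy as np

def moe_forward(x, experts, router_w, top_k=2):
    """Toy top-k MoE layer: route each token to its top-k experts and mix their outputs."""
    logits = x @ router_w                                    # [tokens, n_experts] router scores
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        top = np.argsort(probs[t])[::-1][:top_k]             # indices of the selected experts
        weights = probs[t, top] / probs[t, top].sum()        # renormalize over the selected experts
        for w, e in zip(weights, top):
            out[t] += w * experts[e](x[t])                   # only top_k expert FFNs run per token
    return out

d, n_experts = 16, 4
experts = [(lambda W: (lambda v: np.tanh(v @ W)))(np.random.randn(d, d) * 0.1) for _ in range(n_experts)]
router_w = np.random.randn(d, n_experts) * 0.1
print(moe_forward(np.random.randn(8, d), experts, router_w).shape)   # (8, 16)
```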

Long Context LLMs

Extrapolation and Interpolation
  • Scaling Laws of RoPE-based Extrapolation, arXiv, 2023 [Paper]
  • A Length-Extrapolatable Transformer, ACL, 2023 [Paper] [Code]
  • Extending Context Window of Large Language Models via Positional Interpolation, arXiv, 2023 [Paper]
  • NTK Interpolation, Blog, 2023 [Reddit post]
  • YaRN: Efficient Context Window Extension of Large Language Models, arXiv, 2023 [Paper] [Code]
  • CLEX: Continuous Length Extrapolation for Large Language Models, arXiv, 2023 [Paper] [Code]
  • PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training, arXiv, 2023 [Paper] [Code]
  • Functional Interpolation for Relative Positions Improves Long Context Transformers, arXiv, 2023 [Paper]
  • Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation, ICLR, 2022 [Paper] [Code]
  • Exploring Length Generalization in Large Language Models, NeurIPS, 2022 [Paper]
  • The EOS Decision and Length Extrapolation, EMNLP, 2020 [Paper] [Code]
Recurrent Structure
  • Retentive Network: A Successor to Transformer for Large Language Models, arXiv, 2023 [Paper] [Code]
  • Recurrent Memory Transformer, NeurIPS, 2022 [Paper] [Code]
  • Block-Recurrent Transformers, NeurIPS, 2022 [Paper] [Code]
  • ∞-former: Infinite Memory Transformer, ACL, 2022 [Paper] [Code]
  • Memformer: A Memory-Augmented Transformer for Sequence Modeling, AACL-Findings, 2020 [Paper] [Code]
  • Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context, ACL, 2019 [Paper] [Code]
Segmentation and Sliding Window
  • Soaring from 4K to 400K: Extending LLM’s Context with Activation Beacon, arXiv, 2024 [Paper] [Code]
  • LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning, arXiv, 2024 [Paper] [Code]
  • Extending Context Window of Large Language Models via Semantic Compression, arXiv, 2023 [Paper]
  • Efficient Streaming Language Models with Attention Sinks, arXiv, 2023 [Paper] [Code]
  • Parallel Context Windows for Large Language Models, ACL, 2023 [Paper] [Code]
  • LongNet: Scaling Transformers to 1,000,000,000 Tokens, arXiv, 2023 [Paper] [Code]
  • Efficient Long-Text Understanding with Short-Text Models, TACL, 2023 [Paper] [Code]
Memory-Retrieval Augmentation
  • Landmark Attention: Random-Access Infinite Context Length for Transformers, arXiv, 2023 [Paper] [Code]
  • Augmenting Language Models with Long-Term Memory, NeurIPS, 2023 [Paper]
  • Unlimiformer: Long-Range Transformers with Unlimited Length Input, NeurIPS, 2023 [Paper] [Code]
  • Focused Transformer: Contrastive Training for Context Scaling, NeurIPS, 2023 [Paper] [Code]
  • Retrieval meets Long Context Large Language Models, arXiv, 2023 [Paper]
  • Memorizing Transformers, ICLR, 2022 [Paper] [Code]
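
The extrapolation and interpolation entries above (e.g., Positional Interpolation, YaRN) extend the context window by remapping out-of-range positions back into the range seen during pre-training. The sketch below shows the core position-scaling step for RoPE-style rotary angles; it is a simplified illustration, not the exact formulation of any listed paper.

```python
import numpy as np

def rope_angles(positions, head_dim=128, base=10000.0, scale=1.0):
    """Rotary-embedding angles; scale < 1 compresses positions (positional interpolation)."""
    inv_freq = 1.0 / (base ** (np.arange(0, head_dim, 2) / head_dim))  # [head_dim / 2]
    return np.outer(positions * scale, inv_freq)                       # [n_positions, head_dim / 2]

train_len, target_len = 2048, 8192
positions = np.arange(target_len)
extrapolated = rope_angles(positions)                                  # positions far beyond the training range
interpolated = rope_angles(positions, scale=train_len / target_len)    # squeezed back into [0, train_len)
print(extrapolated.shape, interpolated[:, 0].max())                    # (8192, 64)  ~2047.75
```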

Transformer Alternative Architecture

State Space Models
  • Sparse Modular Activation for Efficient Sequence Modeling, NeurIPS, 2023 [Paper] [Code]
  • Mamba: Linear-Time Sequence Modeling with Selective State Spaces, arXiv, 2023 [Paper] [Code]
  • Hungry Hungry Hippos: Towards Language Modeling with State Space Models, ICLR, 2023 [Paper] [Code]
  • Long Range Language Modeling via Gated State Spaces, ICLR, 2023 [Paper]
  • Block-State Transformers, NeurIPS, 2023 [Paper]
  • Efficiently Modeling Long Sequences with Structured State Spaces, ICLR, 2022 [Paper] [Code]
  • Diagonal State Spaces are as Effective as Structured State Spaces, NeurIPS, 2022 [Paper] [Code]
Other Sequential Models
  • PanGu-π: Enhancing Language Model Architectures via Nonlinearity Compensation, arXiv, 2023 [Paper]
  • RWKV: Reinventing RNNs for the Transformer Era, EMNLP-Findings, 2023 [Paper]
  • Hyena Hierarchy: Towards Larger Convolutional Language Models, arXiv, 2023 [Paper]
  • MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers, arXiv, 2023 [Paper]

🔢 Data-Centric Methods

Data Selection

Data Selection for Efficient Pre-Training

  • Data Selection for Language Models via Importance Resampling, NeurIPS, 2023 [Paper] [Code]
  • NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework, ICML, 2022 [Paper] [Code]
  • Span Selection Pre-training for Question Answering, ACL, 2020 [Paper] [Code]

Data Selection for Efficient Fine-Tuning

  • What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning, arXiv, 2023 [Paper] [Code]
  • One Shot Learning as Instruction Data Prospector for Large Language Models, arXiv, 2023 [Paper]
  • MoDS: Model-oriented Data Selection for Instruction Tuning, arXiv, 2023 [Paper] [Code]
  • Instruction Mining: When Data Mining Meets Large Language Model Finetuning, arXiv, 2023 [Paper]
  • Data-Efficient Finetuning Using Cross-Task Nearest Neighbors, ACL, 2023 [Paper] [Code]
  • Data Selection for Fine-tuning Large Language Models Using Transferred Shapley Values, ACL SRW, 2023 [Paper] [Code]
  • Maybe Only 0.5% Data is Needed: A Preliminary Exploration of Low Training Data Instruction Tuning, arXiv, 2023 [Paper]
  • AlpaGasus: Training A Better Alpaca with Fewer Data, arXiv, 2023 [Paper] [Code]
  • LIMA: Less Is More for Alignment, arXiv, 2023 [Paper]

Prompt Engineering

Few-Shot Prompting

Demonstration Organization
Demonstration Selection
  • Unified Demonstration Retriever for In-Context Learning, ACL, 2023 [Paper] [Code]
  • Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning, NeurIPS, 2023 [Paper] [Code]
  • In-Context Learning with Iterative Demonstration Selection, arXiv, 2022 [Paper]
  • Dr.ICL: Demonstration-Retrieved In-context Learning, arXiv, 2022 [Paper]
  • Learning to Retrieve In-Context Examples for Large Language Models, arXiv, 2022 [Paper]
  • Finding Supporting Examples for In-Context Learning, arXiv, 2022 [Paper]
  • Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering, ACL, 2023 [Paper] [Code]
  • Selective Annotation Makes Language Models Better Few-Shot Learners, ICLR, 2023 [Paper] [Code]
  • What Makes Good In-Context Examples for GPT-3? DeeLIO, 2022 [Paper]
  • Learning To Retrieve Prompts for In-Context Learning, NAACL-HLT, 2022 [Paper] [Code]
  • Active Example Selection for In-Context Learning, EMNLP, 2022 [Paper] [Code]
  • Rethinking the Role of Demonstrations: What makes In-context Learning Work? EMNLP, 2022 [Paper] [Code]
Demonstration Ordering
  • Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity, ACL, 2022 [Paper]
Template Formatting
Instruction Generation
  • Large Language Models as Optimizers, arXiv, 2023 [Paper]
  • Instruction Induction: From Few Examples to Natural Language Task Descriptions, ACL, 2023 [Paper] [Code]
  • Large Language Models Are Human-Level Prompt Engineers, ICLR, 2023 [Paper] [Code]
  • TeGit: Generating High-Quality Instruction-Tuning Data with Text-Grounded Task Design, arXiv, 2023 [Paper]
  • Self-Instruct: Aligning Language Model with Self Generated Instructions, ACL, 2023 [Paper] [Code]
Multi-Step Reasoning
  • Automatic Chain of Thought Prompting in Large Language Models, ICLR, 2023 [Paper] [Code]
  • Measuring and Narrowing the Compositionality Gap in Language Models, EMNLP, 2023 [Paper] [Code]
  • ReAct: Synergizing Reasoning and Acting in Language Models, ICLR, 2023 [Paper] [Code]
  • Least-to-Most Prompting Enables Complex Reasoning in Large Language Models, ICLR, 2023 [Paper]
  • Graph of Thoughts: Solving Elaborate Problems with Large Language Models, arXiv, 2023 [Paper] [Code]
  • Tree of Thoughts: Deliberate Problem Solving with Large Language Models, NeurIPS, 2023 [Paper] [Code]
  • Self-Consistency Improves Chain of Thought Reasoning in Language Models, ICLR, 2023 [Paper]
  • Contrastive Chain-of-Thought Prompting, arXiv, 2023 [Paper] [Code]
  • Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation, arXiv, 2023 [Paper]
  • Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, NeurIPS, 2022 [Paper]
Parallel Generation
  • Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding, arXiv, 2023 [Paper] [Code]
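
Several of the multi-step reasoning entries above (most directly, self-consistency) sample multiple reasoning chains and aggregate their final answers. The snippet below sketches only that aggregation step, with hypothetical hand-written answers standing in for parsed model outputs.

```python
from collections import Counter

def self_consistency(answers):
    """Majority vote over final answers parsed from independently sampled CoT completions."""
    return Counter(answers).most_common(1)[0][0]

# Hypothetical final answers extracted from 5 sampled chain-of-thought completions.
sampled_answers = ["42", "42", "41", "42", "38"]
print(self_consistency(sampled_answers))   # "42"
```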

Prompt Compression

  • Learning to Compress Prompts with Gist Tokens, arXiv, 2023 [Paper]
  • Adapting Language Models to Compress Contexts, EMNLP, 2023 [Paper] [Code]
  • In-context Autoencoder for Context Compression in a Large Language Model, arXiv, 2023 [Paper] [Code]
  • LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression, arXiv, 2023 [Paper] [Code]
  • Discrete Prompt Compression with Reinforcement Learning, arXiv, 2023 [Paper]
  • Nugget 2D: Dynamic Contextual Compression for Scaling Decoder-only Language Models, arXiv, 2023 [Paper]

Prompt Generation

  • TempLM: Distilling Language Models into Template-Based Generators, arXiv, 2022 [Paper] [Code]
  • PromptGen: Automatically Generate Prompts using Generative Models, NAACL Findings, 2022 [Paper]
  • AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts, EMNLP, 2020 [Paper] [Code]

🧑‍💻 System-Level Efficiency Optimization and LLM Frameworks

System-Level Efficiency Optimization

System-Level Pre-Training Efficiency Optimization

  • CoLLiE: Collaborative Training of Large Language Models in an Efficient Way, EMNLP, 2023 [Paper] [Code]
  • An Efficient 2D Method for Training Super-Large Deep Learning Models, IPDPS, 2023 [Paper] [Code]
  • PyTorch FSDP: Experiences on Scaling Fully Sharded Data Parallel, VLDB, 2023 [Paper]
  • Bamboo: Making Preemptible Instances Resilient for Affordable Training, NSDI, 2023 [Paper] [Code]
  • Oobleck: Resilient Distributed Training of Large Models Using Pipeline Templates, SOSP, 2023 [Paper] [Code]
  • Varuna: Scalable, Low-cost Training of Massive Deep Learning Models, EuroSys, 2022 [Paper] [Code]
  • Unity: Accelerating DNN Training Through Joint Optimization of Algebraic Transformations and Parallelization, OSDI, 2022 [Paper] [Code]
  • Tesseract: Parallelize the Tensor Parallelism Efficiently, ICPP, 2022 [Paper]
  • Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning, OSDI, 2022 [Paper] [Code]
  • Maximizing Parallelism in Distributed Training for Huge Neural Networks, arXiv, 2021 [Paper]
  • Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism, arXiv, 2020 [Paper]
  • Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM, SC, 2021 [Paper] [Code]
  • ZeRO-Infinity: breaking the GPU memory wall for extreme scale deep learning, SC, 2021 [Paper]
  • ZeRO-Offload: Democratizing Billion-Scale Model Training, USENIX ATC, 2021 [Paper] [Code]
  • ZeRO: Memory Optimizations Toward Training Trillion Parameter Models, SC, 2020 [Paper] [Code]

System-Level Serving Efficiency Optimization

Serving System Design
  • TurboTransformers: an efficient GPU serving system for transformer models, PPoPP, 2021 [Paper]
  • Orca: A Distributed Serving System for Transformer-Based Generative Models, OSDI, 2022 [Paper]
  • FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU, ICML, 2023 [Paper] [Code]
  • Efficiently Scaling Transformer Inference, MLSys, 2023 [Paper]
  • DeepSpeed-Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale, SC, 2022 [Paper]
  • Efficient Memory Management for Large Language Model Serving with PagedAttention, SOSP, 2023 [Paper] [Code]
  • S-LoRA: Serving Thousands of Concurrent LoRA Adapters, arXiv, 2023 [Paper] [Code]
  • Petals: Collaborative Inference and Fine-tuning of Large Models, arXiv, 2023 [Paper]
  • SpotServe: Serving Generative Large Language Models on Preemptible Instances, arXiv, 2023 [Paper]
Serving Performance Optimization
  • S3: Increasing GPU Utilization during Generative Inference for Higher Throughput, arXiv, 2023 [Paper]
  • Fast Distributed Inference Serving for Large Language Models, arXiv, 2023 [Paper]
  • Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline, arXiv, 2023 [Paper]
  • SARATHI: Efficient LLM Inference by Piggybacking Decodes with Chunked Prefills, arXiv, 2023 [Paper]
  • FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance, arXiv, 2023 [Paper]
  • Prompt Cache: Modular Attention Reuse for Low-Latency Inference, arXiv, 2023 [Paper]
  • Fairness in Serving Large Language Models, arXiv, 2023 [Paper]

Hardware Co-Design

  • FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness, NeurIPS, 2022 [Paper] [Code]
  • FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning, arXiv, 2023 [Paper] [Code]
  • Flash-Decoding for Long-Context Inference, Blog, 2023 [Blog]
  • FlashDecoding++: Faster Large Language Model Inference on GPUs, arXiv, 2023 [Paper]
  • PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU, arXiv, 2023 [Paper] [Code]
  • LLM in a flash: Efficient Large Language Model Inference with Limited Memory, arXiv, 2023 [Paper]
  • Chiplet Cloud: Building AI Supercomputers for Serving Large Generative Language Models, arXiv, 2023 [Paper]
  • EdgeMoE: Fast On-Device Inference of MoE-based Large Language Models, arXiv, 2022 [Paper]

LLM Frameworks

| Framework | Efficient Training | Efficient Inference | Efficient Fine-Tuning |
| :--- | :---: | :---: | :---: |
| DeepSpeed [Code] | ✅ | ✅ | ✅ |
| Megatron [Code] | ✅ | ✅ | ✅ |
| Alpa [Code] | ✅ | ✅ | ✅ |
| ColossalAI [Code] | ✅ | ✅ | ✅ |
| FairScale [Code] | ✅ | ✅ | ✅ |
| Pax [Code] | ✅ | ✅ | ✅ |
| Composer [Code] | ✅ | ✅ | ✅ |
| vLLM [Code] | ❌ | ✅ | ❌ |
| TensorRT-LLM [Code] | ❌ | ✅ | ❌ |
| LightLLM [Code] | ❌ | ✅ | ❌ |
| OpenLLM [Code] | ❌ | ✅ | ✅ |
| Ray-LLM [Code] | ❌ | ✅ | ❌ |
| MLC-LLM [Code] | ❌ | ✅ | ❌ |
| Sax [Code] | ❌ | ✅ | ❌ |
| Mosec [Code] | ❌ | ✅ | ❌ |
| LLM-Foundry [Code] | ✅ | ✅ | ❌ |

🖌️ Citation

If you find this survey useful to your work, please consider citing:

@misc{wan2023efficient,
      title={Efficient Large Language Models: A Survey}, 
      author={Zhongwei Wan and Xin Wang and Che Liu and Samiul Alam and Yu Zheng and Jiachen Liu and Zhongnan Qu and Shen Yan and Yi Zhu and Quanlu Zhang and Mosharaf Chowdhury and Mi Zhang},
      year={2023},
      eprint={2312.03863},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

❤️ Contribution

This repository is maintained by tuidan ([email protected]), SUSTechBruce ([email protected]), samiul272 ([email protected]), and mi-zhang ([email protected]). We welcome feedback, suggestions, and contributions that can help improve this survey and repository and make them valuable resources for the entire community.

  1. If you have any suggestions regarding our taxonomy, find any missed papers, or notice that an arXiv preprint has since been accepted to a venue, feel free to send us an email or submit a pull request using the following markdown format:
Paper Title, <ins>Conference/Journal/Preprint, Year</ins>  [[pdf](link)] [[other resources](link)].
  2. If a preprint paper has multiple versions, please use the earliest submitted year.

  3. Display the papers in descending order of year (the latest first).
