SN-Net
[CVPR 2023 Highlight] This is the official implementation of "Stitchable Neural Networks".
LITv2
[NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "Fast Vision Transformers with HiLo Attention".
Mesa
This is the official PyTorch implementation for "Mesa: A Memory-saving Training Framework for Transformers".
SPViT
[TPAMI 2024] This is the official repository for our paper "Pruning Self-attentions into Convolutional Layers in Single Path".
LIT
[AAAI 2022] This is the official PyTorch implementation of "Less is More: Pay Less Attention in Vision Transformers".
PTQD
The official implementation of "PTQD: Accurate Post-Training Quantization for Diffusion Models".
QTool
Collections of model quantization algorithms. For any issues, please contact Peng Chen ([email protected]).
EcoFormer
[NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity"SPT
[ICCV 2023 oral] This is the official repository for our paper: ''Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning''.FASeg
[CVPR 2023] This is the official PyTorch implementation for "Dynamic Focus-aware Positional Queries for Semantic Segmentation".SAQ
This is the official PyTorch implementation for "Sharpness-aware Quantization for Deep Neural Networks".LongVLM
HVT
[ICCV 2021] Official implementation of "Scalable Vision Transformers with Hierarchical Pooling".
MPVSS
SN-Netv2
[ECCV 2024] This is the official implementation of "Stitched ViTs are Flexible Vision Backbones".
QLLM
[ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models".
efficient-stable-diffusion
Stitched_LLaMA
[CVPR 2024] A framework to fine-tune LLaMAs on instruction-following tasks and obtain many Stitched LLaMAs with customized numbers of parameters, e.g., Stitched LLaMA 8B, 9B, and 10B...
ZipLLM