GPTQ-for-LLaMa
4-bit quantization of LLaMA using GPTQ.
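For a rough sense of the storage format GPTQ targets, here is a minimal round-to-nearest 4-bit quantize/dequantize sketch. GPTQ itself goes further, compensating quantization error layer by layer using second-order (Hessian-based) updates; the function names below are illustrative, not this repo's API.

```python
import torch

def quantize_4bit(w: torch.Tensor):
    """Per-output-channel asymmetric 4-bit round-to-nearest quantization."""
    wmin = w.min(dim=1, keepdim=True).values
    wmax = w.max(dim=1, keepdim=True).values
    scale = (wmax - wmin).clamp(min=1e-8) / 15.0      # 16 levels -> 2**4 - 1 steps
    zero = torch.round(-wmin / scale)                 # per-channel zero-point
    q = torch.clamp(torch.round(w / scale) + zero, 0, 15).to(torch.uint8)
    return q, scale, zero

def dequantize_4bit(q, scale, zero):
    return (q.float() - zero) * scale

w = torch.randn(8, 16)
q, scale, zero = quantize_4bit(w)
err = (w - dequantize_4bit(q, scale, zero)).abs().max()
print(f"max abs error: {err:.4f}")  # roughly bounded by half a quantization step
```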
lama-with-maskdino
Automatic image inpainting using LaMa (with refinement) and MaskDINO.
GPTQ-for-KoAlpaca
SoftPool
SoftPool implementation (Refining activation downsampling with SoftPool). This is an unofficial implementation. https://arxiv.org/pdf/2101.00440v2.pdf
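For reference, SoftPool replaces the average or max within each pooling window by a softmax-weighted average of the activations. A minimal sketch, expressed as a ratio of two average-pooling passes (the function name is my own, not necessarily the repo's):

```python
import torch
import torch.nn.functional as F

def soft_pool2d(x: torch.Tensor, kernel_size: int = 2, stride: int = 2):
    # exp(x) gives each activation its unnormalized softmax weight; the ratio
    # of the two average-pools is exactly the softmax-weighted mean over each
    # window. Subtracting x.max() before exp() cancels out and could be added
    # for numerical stability.
    w = torch.exp(x)
    return F.avg_pool2d(w * x, kernel_size, stride) / F.avg_pool2d(w, kernel_size, stride)

x = torch.randn(1, 3, 8, 8)
print(soft_pool2d(x).shape)  # torch.Size([1, 3, 4, 4])
```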
MaxVIT-pytorch
MaxViT implementation (MaxViT: Multi-Axis Vision Transformer). This is an unofficial implementation. https://arxiv.org/abs/2204.01697
llama-danbooru-qlora
AutoQuant
NatIR
NatIR: Image Restoration Using the Neighborhood Attention Transformer.
Neighborhood-Attention-Transformer
NAT implementation (Neighborhood Attention Transformer). This is an unofficial implementation. https://arxiv.org/pdf/2204.07143.pdf
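As a rough illustration of the locality NAT introduces: each query attends only to its nearest neighbors rather than the whole input. The naive 1D sketch below zero-pads at the borders, whereas NAT proper keeps the window inside the feature map and runs on 2D feature maps with an efficient kernel; all names here are mine.

```python
import torch
import torch.nn.functional as F

def neighborhood_attention_1d(q, k, v, window: int = 3):
    """q, k, v: (seq_len, dim); window must be odd."""
    pad = window // 2
    kp = F.pad(k, (0, 0, pad, pad))     # zero-pad along the sequence axis
    vp = F.pad(v, (0, 0, pad, pad))
    scale = q.shape[1] ** -0.5
    out = []
    for i in range(q.shape[0]):
        kw, vw = kp[i:i + window], vp[i:i + window]   # this token's neighborhood
        attn = torch.softmax((q[i] @ kw.t()) * scale, dim=-1)
        out.append(attn @ vw)
    return torch.stack(out)

q = k = v = torch.randn(10, 16)
print(neighborhood_attention_1d(q, k, v).shape)  # torch.Size([10, 16])
```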
pale-transformer-pytorch
Pale Transformer implementation (Pale Transformer: A General Vision Transformer Backbone with Pale-Shaped Attention). This is an unofficial implementation. https://arxiv.org/abs/2112.14000
KoLIMA
Magneto-pytorch
Magneto implementation (Foundation Transformers). This is an unofficial implementation. https://arxiv.org/abs/2210.06423
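Magneto's central architectural change, as I read the paper, is Sub-LN: an extra LayerNorm inside each sublayer, just before the output projection, combined with a specific initialization gain (omitted here). A minimal sketch of the FFN sublayer under that assumption:

```python
import torch
import torch.nn as nn

class SubLNFeedForward(nn.Module):
    """Pre-norm FFN sublayer with an extra Sub-LN before the output
    projection, as in Magneto; the paper's initialization is omitted."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.norm_in = nn.LayerNorm(dim)    # standard pre-LN
        self.fc1 = nn.Linear(dim, hidden)
        self.act = nn.GELU()
        self.sub_ln = nn.LayerNorm(hidden)  # the added Sub-LN
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x):
        return x + self.fc2(self.sub_ln(self.act(self.fc1(self.norm_in(x)))))

x = torch.randn(2, 10, 64)
print(SubLNFeedForward(64, 256)(x).shape)  # torch.Size([2, 10, 64])
```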
MLP-Mixer-tf2
MLP-Mixer implementation (MLP-Mixer: An all-MLP Architecture for Vision). This is an unofficial implementation. https://arxiv.org/pdf/2105.01601v1.pdf
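For reference, one Mixer block alternates a token-mixing MLP (across patches) with a channel-mixing MLP (across features). A minimal sketch of the block, written in PyTorch for brevity even though this repo targets TF2; shapes and names are illustrative:

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, num_patches: int, dim: int, token_hidden: int, channel_hidden: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(      # mixes information across patches
            nn.Linear(num_patches, token_hidden), nn.GELU(), nn.Linear(token_hidden, num_patches))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(    # mixes information across channels
            nn.Linear(dim, channel_hidden), nn.GELU(), nn.Linear(channel_hidden, dim))

    def forward(self, x):                    # x: (batch, patches, dim)
        y = self.norm1(x).transpose(1, 2)    # (batch, dim, patches) for token mixing
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))
        return x

x = torch.randn(2, 196, 512)
print(MixerBlock(196, 512, 256, 2048)(x).shape)  # torch.Size([2, 196, 512])
```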
Swin-MLP-Mixer
This code combines the Swin Transformer with MLP-Mixer; performance may be poor because the model has not been trained or tested.
D-Adaptation-Adan
Adan with D-Adaptation automatic step-sizes.
halonet-tf2
HaloNet implementation (Scaling Local Self-Attention for Parameter Efficient Visual Backbones). This is an unofficial implementation. https://arxiv.org/pdf/2103.12731v2.pdf
psnr_ssim_ycbcr
Code for calculating PSNR and SSIM on the Y channel in YCbCr. This code is based on BasicSR (https://github.com/xinntao/BasicSR).
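A minimal sketch of the Y-channel PSNR computation, assuming the BT.601 conversion BasicSR uses (Y = 65.481R + 128.553G + 24.966B + 16 with RGB in [0, 1]); function names are mine, not necessarily this repo's interface:

```python
import numpy as np

def rgb_to_y(img: np.ndarray) -> np.ndarray:
    """img: HxWx3 float RGB in [0, 1] -> BT.601 Y channel in [16, 235]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 65.481 * r + 128.553 * g + 24.966 * b + 16.0

def psnr_y(img1: np.ndarray, img2: np.ndarray) -> float:
    """PSNR computed on the Y channel only, with peak value 255."""
    y1, y2 = rgb_to_y(img1), rgb_to_y(img2)
    mse = np.mean((y1 - y2) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

a = np.random.rand(32, 32, 3)
b = np.clip(a + np.random.normal(0, 0.01, a.shape), 0, 1)
print(psnr_y(a, b))
```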