vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch

DALLE2-pytorch
Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch

imagen-pytorch
Implementation of Imagen, Google's Text-to-Image Neural Network, in Pytorch

PaLM-rlhf-pytorch
Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM

DALLE-pytorch
Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch

deep-daze
Simple command line tool for text to image generation using OpenAI's CLIP and Siren (Implicit neural representation network). Technique was originally created by https://twitter.com/advadnoun

denoising-diffusion-pytorch
Implementation of Denoising Diffusion Probabilistic Model in Pytorch
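The DDPM training step is compact enough to sketch here: sample a timestep, noise the image, and regress the noise. A minimal sketch of the objective; `model`, the linear beta schedule, and all values below are illustrative assumptions, not this repo's API.

```python
import torch
import torch.nn.functional as F

# Illustrative linear beta schedule; the library offers richer options.
timesteps = 1000
betas = torch.linspace(1e-4, 0.02, timesteps)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(model, x0):
    # Sample a random timestep per image, noise the image accordingly,
    # and train the model to predict the added noise (epsilon objective).
    b = x0.shape[0]
    t = torch.randint(0, timesteps, (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a = alphas_cumprod.to(x0.device)[t].view(b, 1, 1, 1)
    x_noisy = a.sqrt() * x0 + (1 - a).sqrt() * noise
    return F.mse_loss(model(x_noisy, t), noise)
```
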
stylegan2-pytorch
Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement

musiclm-pytorch
Implementation of MusicLM, Google's new SOTA model for music generation using attention networks, in Pytorch

x-transformers
A simple but complete full-attention transformer with a set of promising experimental features from various papers

big-sleep
A simple command line tool for text to image generation, using OpenAI's CLIP and a BigGAN. Technique was originally created by https://twitter.com/advadnoun

audiolm-pytorch
Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in Pytorch

lion-pytorch
🦁 Lion, new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(w), in Pytorch
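The Lion update rule fits in a few lines, following the pseudocode in the paper; the hyperparameter values below are illustrative, not the repo's defaults.

```python
import torch

@torch.no_grad()
def lion_update(p, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.01):
    # Update direction is the sign of an interpolation between the
    # momentum and the current gradient, plus decoupled weight decay.
    update = (beta1 * m + (1 - beta1) * grad).sign()
    p.add_(update + wd * p, alpha=-lr)
    # Momentum is updated with a different interpolation coefficient.
    m.mul_(beta2).add_(grad, alpha=1 - beta2)
```

Note that only the sign of the update is used, which is what makes Lion more memory- and compute-frugal than Adam(w).
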
toolformer-pytorch
Implementation of Toolformer, Language Models That Can Use Tools, by MetaAI

reformer-pytorch
Reformer, the efficient Transformer, in Pytorch

make-a-video-pytorch
Implementation of Make-A-Video, new SOTA text to video generator from Meta AI, in Pytorch

gigagan-pytorch
Implementation of GigaGAN, new SOTA GAN out of Adobe. Culmination of nearly a decade of research into GANs

alphafold2
To eventually become an unofficial Pytorch implementation / replication of Alphafold2, as details of the architecture get released

lightweight-gan
Implementation of 'lightweight' GAN, proposed in ICLR 2021, in Pytorch. High resolution image generations that can be trained within a day or two

lambda-networks
Implementation of LambdaNetworks, a new approach to image recognition that reaches SOTA with less compute

byol-pytorch
Usable Implementation of "Bootstrap Your Own Latent" self-supervised learning, from Deepmind, in Pytorch

self-rewarding-lm-pytorch
Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI

naturalspeech2-pytorch
Implementation of Natural Speech 2, Zero-shot Speech and Singing Synthesizer, in Pytorch

flamingo-pytorch
Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch

video-diffusion-pytorch
Implementation of Video Diffusion Models, Jonathan Ho's new paper extending DDPMs to Video Generation - in Pytorch

soundstorm-pytorch
Implementation of SoundStorm, Efficient Parallel Audio Generation from Google Deepmind, in Pytorch

CoCa-pytorch
Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch

performer-pytorch
An implementation of Performer, a linear attention-based transformer, in Pytorch

perceiver-pytorch
Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch

RETRO-pytorch
Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch

mlp-mixer-pytorch
An All-MLP solution for Vision, from Google AI

muse-maskgit-pytorch
Implementation of Muse: Text-to-Image Generation via Masked Generative Transformers, in Pytorch

PaLM-pytorch
Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways

vector-quantize-pytorch
Vector Quantization, in Pytorch
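Vector quantization in its simplest form: snap each encoder output to its nearest codebook vector, and pass gradients through with the straight-through estimator. A minimal sketch of the core idea; the library adds EMA codebook updates, commitment losses, and many quantizer variants beyond this.

```python
import torch

def vector_quantize(x, codebook):
    # x: (batch, dim) encoder outputs; codebook: (codes, dim).
    # Find the nearest codebook entry for each input vector.
    dists = torch.cdist(x, codebook)          # (batch, codes)
    indices = dists.argmin(dim=-1)            # (batch,)
    quantized = codebook[indices]             # (batch, dim)
    # Straight-through estimator: the forward pass uses the quantized
    # vectors, the backward pass copies gradients to the encoder output.
    quantized = x + (quantized - x).detach()
    return quantized, indices
```
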
phenaki-pytorch
Implementation of Phenaki Video, which uses Mask GIT to produce text guided videos of up to 2 minutes in length, in Pytorch

x-clip
A concise but complete implementation of CLIP with various experimental improvements from recent papers

bottleneck-transformer-pytorch
Implementation of Bottleneck Transformer in Pytorch

memorizing-transformers-pytorch
Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate nearest neighbors, in Pytorch

TimeSformer-pytorch
Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification

MEGABYTE-pytorch
Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch

meshgpt-pytorch
Implementation of MeshGPT, SOTA Mesh generation using Attention, in Pytorch

nuwa-pytorch
Implementation of NÜWA, state of the art attention network for text to video synthesis, in Pytorch

voicebox-pytorch
Implementation of Voicebox, new SOTA Text-to-speech network from MetaAI, in Pytorch

point-transformer-pytorch
Implementation of the Point Transformer layer, in Pytorch

parti-pytorch
Implementation of Parti, Google's pure attention-based text-to-image neural network, in Pytorch

tab-transformer-pytorch
Implementation of TabTransformer, attention network for tabular data, in Pytorch

alphafold3-pytorch
Implementation of Alphafold 3 in Pytorch

linear-attention-transformer
Transformer based on a variant of attention that is linear in complexity with respect to sequence length
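The trick behind linear attention is reordering the matmuls: instead of the O(n²) product (QKᵀ)V, compute Q(KᵀV) under a positive feature map. A sketch of one common non-causal formulation (the elu + 1 feature map from "Transformers are RNNs"); this repo explores its own variants on top of the same idea.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    # q, k: (batch, n, dim); v: (batch, n, dim_v)
    # Positive feature map in place of softmax.
    q, k = F.elu(q) + 1, F.elu(k) + 1
    # Associativity: build the (dim x dim_v) context once, so the
    # total cost is linear in sequence length n.
    context = torch.einsum('bnd,bne->bde', k, v)
    normalizer = torch.einsum('bnd,bd->bn', q, k.sum(dim=1))
    out = torch.einsum('bnd,bde->bne', q, context)
    return out / normalizer.unsqueeze(-1).clamp(min=1e-6)
```
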
magvit2-pytorch
Implementation of MagViT2 Tokenizer in Pytorch

ema-pytorch
A simple way to keep track of an Exponential Moving Average (EMA) version of your pytorch model
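The core of weight EMA fits in a few lines: keep a shadow copy of the model and nudge it toward the live weights after each optimizer step. A bare-bones sketch of the technique, not the library's API; the library wraps this with warmup, update intervals, and buffer handling.

```python
import copy
import torch

class SimpleEMA:
    def __init__(self, model, decay=0.999):
        # Shadow copy holds the smoothed weights used for evaluation.
        self.ema_model = copy.deepcopy(model).eval()
        self.decay = decay

    @torch.no_grad()
    def update(self, model):
        for ema_p, p in zip(self.ema_model.parameters(), model.parameters()):
            # ema = decay * ema + (1 - decay) * current
            ema_p.lerp_(p, 1 - self.decay)
```

Call `update(model)` after each optimizer step and evaluate with `ema_model`.
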
egnn-pytorch
Implementation of E(n)-Equivariant Graph Neural Networks, in Pytorch

g-mlp-pytorch
Implementation of gMLP, an all-MLP replacement for Transformers, in Pytorch

recurrent-memory-transformer-pytorch
Implementation of Recurrent Memory Transformer, Neurips 2022 paper, in Pytorch

ring-attention-pytorch
Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch

siren-pytorch
Pytorch implementation of SIREN - Implicit Neural Representations with Periodic Activation Functions
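A SIREN layer is just a linear layer followed by a scaled sine. A minimal sketch of the idea; `w0 = 30` follows the paper's recommended scaling, and the careful weight initialization the paper prescribes is omitted here.

```python
import torch
from torch import nn

class SineLayer(nn.Module):
    def __init__(self, dim_in, dim_out, w0=30.0):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(dim_in, dim_out)

    def forward(self, x):
        # Periodic activation: sin(w0 * (Wx + b)) lets the network
        # represent fine high-frequency detail in implicit representations.
        return torch.sin(self.w0 * self.linear(x))
```
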
enformer-pytorch
Implementation of Enformer, Deepmind's attention network for predicting gene expression, in Pytorch

iTransformer
Unofficial implementation of iTransformer - SOTA Time Series Forecasting using Attention networks, out of Tsinghua / Ant group

robotic-transformer-pytorch
Implementation of RT1 (Robotic Transformer) in Pytorch

memory-efficient-attention-pytorch
Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory"
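The paper's key observation is that softmax attention can be accumulated over key/value chunks with a running log-sum-exp, so the full n × n score matrix never has to exist at once. A single-head sketch of that idea, chunking only over keys (the repo also chunks queries); names and the chunk size are illustrative.

```python
import torch

def chunked_attention(q, k, v, chunk_size=1024):
    # q: (n, d), k/v: (m, d). Never materializes the full (n, m) scores.
    scale = q.shape[-1] ** -0.5
    acc = torch.zeros(q.shape[0], v.shape[-1], device=q.device)
    lse = torch.full((q.shape[0],), float('-inf'), device=q.device)
    for i in range(0, k.shape[0], chunk_size):
        kc, vc = k[i:i + chunk_size], v[i:i + chunk_size]
        scores = (q @ kc.t()) * scale                    # (n, c)
        new_lse = torch.logaddexp(lse, scores.logsumexp(dim=-1))
        # Rescale what has been accumulated so far, then add this chunk.
        acc = acc * (lse - new_lse).exp().unsqueeze(-1) \
            + (scores - new_lse.unsqueeze(-1)).exp() @ vc
        lse = new_lse
    return acc
```
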
FLASH-pytorch
Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time"

bit-diffusion
Implementation of Bit Diffusion, Hinton's group's attempt at discrete denoising diffusion, in Pytorch

medical-chatgpt
Implementation of ChatGPT, but tailored towards primary care medicine, with the reward being able to collect patient histories in a thorough and efficient manner and come up with a reasonable differential diagnosis

slot-attention
Implementation of Slot Attention from GoogleAI

q-transformer
Implementation of Q-Transformer, Scalable Offline Reinforcement Learning via Autoregressive Q-Functions, out of Google Deepmind

BS-RoFormer
Implementation of Band Split Roformer, SOTA Attention network for music source separation out of ByteDance AI Labs

classifier-free-guidance-pytorch
Implementation of Classifier Free Guidance in Pytorch, with emphasis on text conditioning, and flexibility to include multiple text embedding models
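Classifier-free guidance itself is one line of arithmetic at sampling time: extrapolate from the unconditional prediction toward the conditional one. A sketch assuming a denoiser that accepts an optional conditioning signal; all names here are illustrative, not this repo's API.

```python
def guided_prediction(model, x, t, cond, guidance_scale=7.5):
    # Run the model twice: once without conditioning (cond dropped),
    # once with it, then extrapolate past the conditional prediction.
    uncond = model(x, t, cond=None)
    conded = model(x, t, cond=cond)
    return uncond + guidance_scale * (conded - uncond)
```
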
transformer-in-transformer
Implementation of Transformer in Transformer, pixel level attention paired with patch level attention for image classification, in Pytorch

axial-attention
Implementation of Axial attention - attending to multi-dimensional data efficiently

conformer
Implementation of the convolutional module from the Conformer paper, for use in Transformers

mixture-of-experts
A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models

deformable-attention
Implementation of Deformable Attention in Pytorch from the paper "Vision Transformer with Deformable Attention"

magic3d-pytorch
Implementation of Magic3D, Text to 3D content synthesis, in Pytorch

x-unet
Implementation of a U-net complete with efficient attention as well as the latest research findings

routing-transformer
Fully featured implementation of Routing Transformer

Adan-pytorch
Implementation of the Adan (ADAptive Nesterov momentum algorithm) Optimizer in Pytorch

spear-tts-pytorch
Implementation of Spear-TTS - multi-speaker text-to-speech attention network, in Pytorch

st-moe-pytorch
Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch

perfusion-pytorch
Implementation of Key-Locked Rank One Editing, from Nvidia AI

equiformer-pytorch
Implementation of the Equiformer, SE3/E3 equivariant attention network that reaches new SOTA, and adopted for use by EquiFold for protein folding

segformer-pytorch
Implementation of Segformer, Attention + MLP neural network for segmentation, in Pytorch

sinkhorn-transformer
Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention

pixel-level-contrastive-learning
Implementation of Pixel-level Contrastive Learning, proposed in the paper "Propagate Yourself", in Pytorch

lumiere-pytorch
Implementation of Lumiere, SOTA text-to-video generation from Google Deepmind, in Pytorch

local-attention
An implementation of local windowed attention for language modeling
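Local attention restricts each token to a fixed window around it; in its simplest non-overlapping form, the sequence is chopped into blocks and full attention runs inside each block. A sketch of that blocked variant; the library supports overlapping look-back windows, causality, and more.

```python
import torch
import torch.nn.functional as F

def block_local_attention(q, k, v, window=64):
    # q, k, v: (batch, seq, dim); seq assumed divisible by window.
    b, n, d = q.shape
    # Fold the sequence into independent windows of fixed size.
    q, k, v = (t.reshape(b, n // window, window, d) for t in (q, k, v))
    scores = q @ k.transpose(-2, -1) * d ** -0.5   # (b, blocks, w, w)
    out = F.softmax(scores, dim=-1) @ v
    return out.reshape(b, n, d)
```
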
CoLT5-attention
Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch

natural-speech-pytorch
Implementation of the neural network proposed in Natural Speech, a text-to-speech generator that is indistinguishable from human recordings for the first time, from Microsoft Research

soft-moe-pytorch
Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch

se3-transformer-pytorch
Implementation of SE3-Transformers for Equivariant Self-Attention, in Pytorch. This specific repository is geared towards integration with eventual Alphafold2 replication.

block-recurrent-transformer-pytorch
Implementation of Block Recurrent Transformer - Pytorch

Mega-pytorch
Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena

simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT

med-seg-diff-pytorch
Implementation of MedSegDiff in Pytorch - SOTA medical segmentation using DDPM and filtering of features in fourier space

triton-transformer
Implementation of a Transformer, but completely in Triton

jax2torch
Use Jax functions in Pytorch

flash-cosine-sim-attention
Implementation of fused cosine similarity attention in the same style as Flash Attention

halonet-pytorch
Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones

attention
This repository will house a visualization that will attempt to convey instant enlightenment of how Attention works to someone not working in artificial intelligence, with 3Blue1Brown as inspiration

recurrent-interface-network-pytorch
Implementation of Recurrent Interface Network (RIN), for highly efficient generation of images and video without cascading networks, in Pytorch

electra-pytorch
A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch

PaLM-jax
Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework)