# awesome-mlp-papers
An up-to-date list of MLP-based papers without attention! Maintained by Haofan Wang ([email protected]).
## MLP is all you Need (newest entries on top)
- UNeXt: MLP-based Rapid Medical Image Segmentation Network, Johns Hopkins University, [code], MICCAI 2022
- MotionMixer: MLP-based 3D Human Body Pose Forecasting, Mercedes-Benz AG, [code], IJCAI 2022 Oral
- MLP-3D: A MLP-like 3D Architecture with Grouped Time Mixing, JD AI Research, CVPR 2022
- MAXIM: Multi-Axis MLP for Image Processing, Google Research, UT-Austin, 2022
- ConvMLP: Hierarchical Convolutional MLPs for Vision, University of Oregon, 2021
- Axial-MLP for automatic segmentation of choroid plexus in multiple sclerosis, Paris Brain Institute - Inria, 2021
- Sparse-MLP: A Fully-MLP Architecture with Conditional Computation, NUS, 2021
- Hire-MLP: Vision MLP via Hierarchical Rearrangement, Noah's Ark Lab, Huawei Technologies, 2021
- RaftMLP: Do MLP-based Models Dream of Winning Over Computer Vision?, Rikkyo University, 2021
- S2-MLPv2: Improved Spatial-Shift MLP Architecture for Vision, Baidu Research, 2021
- CycleMLP: A MLP-like Architecture for Dense Prediction, The University of Hong Kong, 2021, [code]
- AS-MLP: An Axial Shifted MLP Architecture for Vision, ShanghaiTech University, 2021
- PC-MLP: Model-based Reinforcement Learning with Policy Cover Guided Exploration, CMU, ICML 2021
- Global Filter Networks for Image Classification, Tsinghua University, 2021
- Rethinking Token-Mixing MLP for MLP-based Vision Backbone, Baidu Research, 2021
- Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition, NUS, 2021
- S2-MLP: Spatial-Shift MLP Architecture for Vision, Baidu Research, 2021
- Graph-MLP: Node Classification without Message Passing in Graph, MegVii Inc, 2021
- Container: Context Aggregation Network, CUHK, 2021
- Less is More: Pay Less Attention in Vision Transformers, Monash University, 2021
- Can Attention Enable MLPs To Catch Up With CNNs?, Tsinghua University, CVM 2021
- Pay Attention to MLPs, Google Research, 2021, [code]
- FNet: Mixing Tokens with Fourier Transforms, Google Research, 2021, [code]
- ResMLP: Feedforward networks for image classification with data-efficient training, Facebook AI, CVPR 2021, [code]
- Are Pre-trained Convolutions Better than Pre-trained Transformers?, Google Research, ACL 2021
- Do You Even Need Attention? A Stack of Feed-Forward Layers Does Surprisingly Well on ImageNet, Oxford University, 2021, [code]
- RepMLP: Re-parameterizing Convolutions into Fully-connected Layers for Image Recognition, Tsinghua University, 2021, [code]
- Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks, Tsinghua University, 2021
- MLP-Mixer: An all-MLP Architecture for Vision, Google Research, 2021, [code]
- Synthesizer: Rethinking Self-Attention in Transformer Models, Google Research, ICML 2021
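Many of the papers above share one core idea: replace self-attention with plain MLPs that alternately mix information across tokens (patches) and across channels. Below is a minimal numpy sketch of that token-mixing / channel-mixing block in the style of MLP-Mixer; it is an illustrative toy, not any paper's official implementation, and all sizes and weight initializations are arbitrary.

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def layer_norm(x, eps=1e-6):
    # normalize over the last axis (no learned scale/shift for brevity)
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def mlp(x, w1, w2):
    # two-layer feed-forward: Linear -> GELU -> Linear
    return gelu(x @ w1) @ w2

def mixer_block(x, token_w1, token_w2, chan_w1, chan_w2):
    """One Mixer-style block on x of shape (num_patches, channels):
    a token-mixing MLP applied across patches (via transpose), then a
    channel-mixing MLP applied per patch, each with a skip connection."""
    x = x + mlp(layer_norm(x).T, token_w1, token_w2).T  # mix across tokens
    x = x + mlp(layer_norm(x), chan_w1, chan_w2)        # mix across channels
    return x

rng = np.random.default_rng(0)
P, C, H = 16, 32, 64  # patches, channels, hidden width (illustrative sizes)
x = rng.standard_normal((P, C))
out = mixer_block(
    x,
    rng.standard_normal((P, H)) * 0.02, rng.standard_normal((H, P)) * 0.02,
    rng.standard_normal((C, H)) * 0.02, rng.standard_normal((H, C)) * 0.02,
)
print(out.shape)  # (16, 32)
```

The transpose trick is what removes the need for attention: the token-mixing MLP sees one channel across all patches at a time, so every patch can influence every other patch with only fully-connected layers.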
## Contributing
Please help contribute to this list by submitting an issue or a pull request in the following format:
- Paper Name [[pdf]](link) [[code]](link)
## Other topics
More advanced resource collections can be found at Awesome-Computer-Vision.