Awesome BEV Perception from Multi-Cameras

Awesome papers about Multi-Camera 3D Object Detection and Segmentation in Bird's-Eye-View, such as DETR3D, BEVDet, BEVFormer, BEVDepth, and UniAD.

ECCV 2020

  • LSS: Lift, Splat, Shoot: Encoding Images from Arbitrary Camera Rigs by Implicitly Unprojecting to 3D [paper] [Github]
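
The core of LSS is to predict a categorical depth distribution for every pixel, lift image features into a camera frustum weighted by that distribution, and "splat" the lifted features onto a BEV grid by pooling. Below is a minimal, hypothetical sketch of the lift step only; shapes and names are illustrative and not taken from the official repo.

```python
import torch

def lift_features(img_feats, depth_logits):
    """Lift 2D image features into a depth-weighted camera frustum.

    img_feats:    (B, C, H, W) features from the image backbone
    depth_logits: (B, D, H, W) per-pixel logits over D discrete depth bins
    returns:      (B, D, C, H, W) frustum features: the outer product of the
                  softmaxed depth distribution and the image features
    """
    depth_prob = depth_logits.softmax(dim=1)                 # (B, D, H, W)
    # Each depth bin gets a copy of the pixel feature, scaled by the
    # probability mass assigned to that bin.
    return depth_prob.unsqueeze(2) * img_feats.unsqueeze(1)  # (B, D, C, H, W)

# Toy usage with made-up shapes.
feats = torch.randn(2, 64, 16, 44)
logits = torch.randn(2, 59, 16, 44)
print(lift_features(feats, logits).shape)  # torch.Size([2, 59, 64, 16, 44])
```

The splat step then maps each frustum cell into a BEV cell via the known camera intrinsics/extrinsics and sum-pools the features that land in the same cell.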

CoRL 2021

  • DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries [paper] [Github]
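
DETR3D's defining mechanism is the 3D-to-2D query: each object query carries a 3D reference point that is projected into every camera image to sample features for the transformer decoder. A rough sketch of that projection step, assuming per-camera lidar-to-image matrices; variable names are illustrative, not the repo's API.

```python
import torch

def project_reference_points(ref_points, lidar2img):
    """Project 3D query reference points into each camera image plane.

    ref_points: (Q, 3) reference points in the ego/lidar frame
    lidar2img:  (N, 4, 4) per-camera projection matrices (intrinsics @ extrinsics)
    returns:    (N, Q, 2) pixel coordinates and an (N, Q) in-front-of-camera mask
    """
    Q = ref_points.shape[0]
    homo = torch.cat([ref_points, ref_points.new_ones(Q, 1)], dim=-1)  # (Q, 4)
    cam = torch.einsum('nij,qj->nqi', lidar2img, homo)                 # (N, Q, 4)
    eps = 1e-5
    depth = cam[..., 2:3]
    uv = cam[..., :2] / depth.clamp(min=eps)  # perspective divide
    visible = depth.squeeze(-1) > eps         # points behind a camera are masked out
    return uv, visible
```

The sampled multi-view features are aggregated back into each query, and the decoder iteratively refines the 3D box around its reference point.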

CVPR 2021

  • CaDDN: Categorical Depth Distribution Network for Monocular 3D Object Detection [paper] [Github]

ICCV 2021

  • FIERY: Future Instance Prediction in Bird's-Eye View from Surround Monocular Cameras [paper] [Github]

CVPR 2022

  • CVT: Cross-view Transformers for real-time Map-view Semantic Segmentation [paper] [Github]

ACM MM 2022

  • Graph-DETR3D: Rethinking Overlapping Regions for Multi-View 3D Object Detection [paper]

ECCV 2022

  • BEVFormer: Learning Bird's-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers [paper] [Github]
  • PETR: Position Embedding Transformation for Multi-View 3D Object Detection [paper] [Github]
  • SpatialDETR: Robust Scalable Transformer-Based 3D Object Detection from Multi-View Camera Images with Global Cross-Sensor Attention [paper] [Github]
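
BEVFormer, listed above, attaches a pillar of 3D reference points to each cell of a learnable BEV query grid; the points are projected into the camera views for spatial (deformable) cross-attention, while temporal self-attention aligns the grid with the previous frame. A hypothetical sketch of building such a pillar grid; grid size, range, and pillar heights are illustrative defaults rather than the paper's exact configuration.

```python
import torch

def make_bev_pillar_points(bev_h=200, bev_w=200, num_z=4,
                           pc_range=(-51.2, -51.2, -5.0, 51.2, 51.2, 3.0)):
    """Return (bev_h * bev_w, num_z, 3) reference points, one pillar per BEV cell."""
    x0, y0, z0, x1, y1, z1 = pc_range
    xs = torch.linspace(x0, x1, bev_w)
    ys = torch.linspace(y0, y1, bev_h)
    zs = torch.linspace(z0, z1, num_z)
    # Cell centres in the ego frame; each centre is repeated at num_z heights.
    yy, xx = torch.meshgrid(ys, xs, indexing='ij')             # (bev_h, bev_w)
    centres = torch.stack([xx, yy], dim=-1).reshape(-1, 1, 2)  # (H*W, 1, 2)
    centres = centres.expand(-1, num_z, -1)                    # (H*W, num_z, 2)
    heights = zs.view(1, num_z, 1).expand(centres.shape[0], -1, -1)
    return torch.cat([centres, heights], dim=-1)               # (H*W, num_z, 3)

print(make_bev_pillar_points().shape)  # torch.Size([40000, 4, 3])
```

Each pillar's points can then be projected into the images with the same kind of lidar-to-image transform sketched under DETR3D, so only the cameras that actually see a cell contribute to its attention.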

2022

  • BEVDet: High-Performance Multi-Camera 3D Object Detection in Bird-Eye-View [paper] [Github]
  • BEVDet4D: Exploit Temporal Cues in Multi-camera 3D Object Detection [paper]
  • PETRv2: A Unified Framework for 3D Perception from Multi-Camera Images [paper] [Github]
  • M2BEV: Multi-Camera Joint 3D Detection and Segmentation with Unified Birds-Eye View Representation [paper]
  • BEVerse: Unified Perception and Prediction in Birds-Eye-View for Vision-Centric Autonomous Driving [paper] [Github]
  • PolarDETR: Polar Parametrization for Vision-based Surround-View 3D Detection [paper] [Github]
  • (CoRL 2022) LaRa: Latents and Rays for Multi-Camera Bird's-Eye-View Semantic Segmentation [paper] [Github]
  • (AAAI 2023) PolarFormer: Multi-camera 3D Object Detection with Polar Transformers [paper] [Github]
  • (ICRA 2023) CrossDTR: Cross-view and Depth-guided Transformers for 3D Object Detection [paper] [Github]
  • (AAAI 2023) BEVDepth: Acquisition of Reliable Depth for Multi-view 3D Object Detection [paper] [Github]
  • A Simple Baseline for BEV Perception Without LiDAR [paper] [Github]
  • BEVFormer v2: Adapting Modern Image Backbones to Bird's-Eye-View Recognition via Perspective Supervision [paper]
  • AeDet: Azimuth-invariant Multi-view 3D Object Detection [paper] [Github]
  • (WACV 2023) BEVSegFormer: Bird’s Eye View Semantic Segmentation From Arbitrary Camera Rigs [paper]
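
Several entries above (BEVDepth, and later BEVStereo and AeDet) argue that the lift step needs reliable depth and therefore supervise the per-pixel depth distribution with depth obtained by projecting LiDAR points into the images. A hypothetical sketch of such a depth loss; the bin settings are made up.

```python
import torch
import torch.nn.functional as F

def depth_loss(depth_logits, lidar_depth, d_min=2.0, d_max=58.0, num_bins=112):
    """Cross-entropy between predicted depth bins and LiDAR-projected depth.

    depth_logits: (B, D, H, W) per-pixel depth logits, D == num_bins
    lidar_depth:  (B, H, W) sparse metric depth from projected LiDAR (0 = no hit)
    """
    bin_size = (d_max - d_min) / num_bins
    target = ((lidar_depth - d_min) / bin_size).long().clamp(0, num_bins - 1)
    valid = lidar_depth > 0  # only supervise pixels with a LiDAR hit
    loss = F.cross_entropy(depth_logits, target, reduction='none')  # (B, H, W)
    return (loss * valid).sum() / valid.sum().clamp(min=1)
```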

Longterm BEV

  • Time Will Tell: New Outlooks and A Baseline for Temporal Multi-View 3D Object Detection [paper] [Github]
  • VideoBEV: Exploring Recurrent Long-term Temporal Fusion for Multi-view 3D Perception [paper]
  • HoP: Temporal Enhanced Training of Multi-view 3D Object Detector via Historical Object Prediction [paper]
  • StreamPETR: Exploring Object-Centric Temporal Modeling for Efficient Multi-View 3D Object Detection [paper] [Github]
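
The long-term methods above share one mechanism: BEV features (or queries) from past frames are warped into the current ego frame using the known ego-motion and then fused, either recurrently (VideoBEV) or through an object-centric memory (StreamPETR). A minimal sketch of the warping step for dense BEV features, assuming the ego-motion is available as a 2D affine transform over the BEV grid; the recurrent blend at the end is only indicative.

```python
import torch
import torch.nn.functional as F

def warp_prev_bev(prev_bev, ego_motion):
    """Warp the previous frame's BEV features into the current ego frame.

    prev_bev:   (B, C, H, W) BEV features from the previous timestep
    ego_motion: (B, 2, 3) affine transform mapping current-frame BEV grid
                coordinates into the previous frame's grid
    """
    B, C, H, W = prev_bev.shape
    grid = F.affine_grid(ego_motion, size=(B, C, H, W), align_corners=False)
    return F.grid_sample(prev_bev, grid, align_corners=False)

# A recurrent fusion step could then be as simple as a learned blend, e.g.
# cur_bev = fuse_conv(torch.cat([cur_bev, warp_prev_bev(prev_bev, T)], dim=1))
```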

BEV + Stereo

  • (AAAI 2023) BEVStereo: Enhancing Depth Estimation in Multi-view 3D Object Detection with Dynamic Temporal Stereo [paper] [Github]
  • STS: Surround-view Temporal Stereo for Multi-view 3D Detection [paper]

End to End BEV Perception

  • ST-P3: End-to-end Vision-based Autonomous Driving via Spatial-Temporal Feature Learning [paper] [Github]
  • UniAD: Planning-oriented Autonomous Driving [paper] [Github]

BEV + Distillation

  • (ICLR 2023) BEVDistill: Cross-Modal BEV Distillation for Multi-View 3D Object Detection [paper] [Github]
  • TiG-BEV: Multi-view BEV 3D Object Detection via Target Inner-Geometry Learning [paper] [Github]

Robust BEV

  • RoboBEV: Towards Robust Bird's Eye View Detection under Corruptions [paper] [Github]

Fast BEV

  • Fast-BEV: A Fast and Strong Bird’s-Eye View Perception Baseline [paper] [Github]
  • MatrixVT: Efficient Multi-Camera to BEV Transformation for 3D Perception [paper] [Github]

HD Map Construction

  • (ICRA 2022) HDMapNet: An Online HD Map Construction and Evaluation Framework [paper] [Github]
  • (ICLR 2023) MapTR: Structured Modeling and Learning for Online Vectorized HD Map Construction [paper] [Github]

Multi-sensor fusion

  • FUTR3D: A Unified Sensor Fusion Framework for 3D Detection [paper] [Github]
  • (NeurIPS 2022) BEVFusion: A Simple and Robust LiDAR-Camera Fusion Framework [paper] [Github]
  • (NeurIPS 2022) Unifying Voxel-based Representation with Transformer for 3D Object Detection [paper] [Github]
  • BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation [paper] [Github]
  • CMT: Cross Modal Transformer via Coordinates Encoding for 3D Object Detection [paper] [Github]
  • BEVFusion4D: Learning LiDAR-Camera Fusion Under Bird's-Eye-View via Cross-Modality Guidance and Temporal Aggregation [paper]
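
The BEVFusion-style methods above reduce multi-sensor fusion to a simple recipe: bring the camera branch and the LiDAR branch into the same BEV grid, concatenate, and run a small convolutional fuser before the task heads. A hypothetical minimal fuser; channel counts are illustrative.

```python
import torch
import torch.nn as nn

class SimpleBEVFuser(nn.Module):
    """Fuse camera-BEV and LiDAR-BEV feature maps of the same spatial size."""

    def __init__(self, cam_channels=80, lidar_channels=256, out_channels=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + lidar_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev, lidar_bev):
        # Both inputs are (B, C, H_bev, W_bev) on the same BEV grid.
        return self.fuse(torch.cat([cam_bev, lidar_bev], dim=1))
```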

Survey

  • Vision-Centric BEV Perception: A Survey [paper] [Github]
  • Delving into the Devils of Bird's-eye-view Perception: A Review, Evaluation and Recipe [paper] [Github]

Occupancy Network

  • TPVFormer: An academic alternative to Tesla's Occupancy Network [Github]

Pre-training

  • Occ-BEV: Multi-Camera Unified Pre-training via 3D Scene Reconstruction [paper] [Github]
  • Occupancy-MAE: Self-supervised Pre-training Large-scale LiDAR Point Clouds with Masked Occupancy Autoencoders [paper] [Github]

BEV + Dataset

  • aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception [paper] [Github]

Others

  • Focal Sparse Convolutional Networks for 3D Object Detection [paper] [Github]
  • Voxel Field Fusion for 3D Object Detection [paper] [Github]
  • Scaling up Kernels in 3D CNNs [paper] [Github]

nuScenes detection task Leaderboard