
Code for the paper Hybrid Spectrogram and Waveform Source Separation

Demucs Music Source Separation


This is the 4th release of Demucs (v4), featuring Hybrid Transformer based source separation. For the classic Hybrid Demucs (v3), go to this commit. If you are experiencing issues and want the old Demucs back, please file an issue, then you can return to v3 with git checkout v3. You can also go back to Demucs v2.

Demucs is a state-of-the-art music source separation model, currently capable of separating drums, bass, and vocals from the rest of the accompaniment. Demucs is based on a U-Net convolutional architecture inspired by Wave-U-Net. The v4 version features Hybrid Transformer Demucs, a hybrid spectrogram/waveform separation model using Transformers. It is based on Hybrid Demucs (also provided in this repo), with the innermost layers replaced by a cross-domain Transformer Encoder. This Transformer uses self-attention within each domain and cross-attention across domains. The model achieves an SDR of 9.00 dB on the MUSDB HQ test set. Moreover, when using sparse attention kernels to extend its receptive field and per-source fine-tuning, we achieve a state-of-the-art 9.20 dB of SDR.

Samples are available on our sample page. Check out our paper for more information. The model has been trained on the MUSDB HQ dataset plus an extra training dataset of 800 songs. It separates drums, bass, vocals and other stems for any song.

As Hybrid Transformer Demucs is brand new, it is not activated by default; you can activate it in the usual commands described hereafter with -n htdemucs_ft. The single, non-fine-tuned model is provided as -n htdemucs, and the retrained baseline as -n hdemucs_mmi. The Sparse Hybrid Transformer model described in our paper is not provided, as it requires custom CUDA code that is not ready for release yet. We are also releasing an experimental 6-source model that adds guitar and piano sources. Quick testing shows acceptable quality for guitar, but a lot of bleeding and artifacts for the piano source.

Figure: structure of Hybrid Transformer Demucs, a dual U-Net with one branch for the temporal domain and one for the spectral domain, joined by a cross-domain Transformer between the encoders and decoders.

Important news if you are already using Demucs

See the release notes for more details.

  • 22/02/2023: added support for the SDX 2023 Challenge, see the dedicated doc page
  • 07/12/2022: Demucs v4 now on PyPI. htdemucs model now used by default. Also releasing a 6-source model (adding guitar and piano, although the latter doesn't work well at the moment).
  • 16/11/2022: Added the new Hybrid Transformer Demucs v4 models. Adding support for the torchaudio implementation of HDemucs.
  • 30/08/2022: added reproducibility and ablation grids, along with an updated version of the paper.
  • 17/08/2022: Releasing v3.0.5: Set split segment length to reduce memory. Compatible with PyTorch 1.12.
  • 24/02/2022: Releasing v3.0.4: split into two stems (i.e. karaoke mode). Export as float32 or int24.
  • 17/12/2021: Releasing v3.0.3: bug fixes (thanks @keunwoochoi), memory drastically reduced on GPU (thanks @famzah) and new multi-core evaluation on CPU (-j flag).
  • 12/11/2021: Releasing Demucs v3 with hybrid domain separation. Strong improvements on all sources. This is the model that won the Sony MDX challenge.
  • 11/05/2021: Adding support for MusDB-HQ and arbitrary wav sets for the MDX challenge. For more information on joining the challenge with Demucs, see the Demucs MDX instructions.

Comparison with other models

We provide hereafter a summary of the different metrics presented in the paper. You can also compare Hybrid Demucs (v3), KUIELAB-MDX-Net, Spleeter, Open-Unmix, Demucs (v1), and Conv-Tasnet on one of my favorite songs on my SoundCloud playlist.

Comparison of accuracy

Overall SDR is the mean of the SDR for each of the 4 sources. MOS Quality is a rating from 1 to 5 of the naturalness and absence of artifacts given by human listeners (5 = no artifacts). MOS Contamination is a rating from 1 to 5, with 5 meaning zero contamination by other sources. We refer the reader to our paper for more details.

Model                  Domain       Extra data?         Overall SDR   MOS Quality   MOS Contamination
Wave-U-Net             waveform     no                  3.2           -             -
Open-Unmix             spectrogram  no                  5.3           -             -
D3Net                  spectrogram  no                  6.0           -             -
Conv-Tasnet            waveform     no                  5.7           -             -
Demucs (v2)            waveform     no                  6.3           2.37          2.36
ResUNetDecouple+       spectrogram  no                  6.7           -             -
KUIELAB-MDX-Net        hybrid       no                  7.5           2.86          2.55
Band-Split RNN         spectrogram  no                  8.2           -             -
Hybrid Demucs (v3)     hybrid       no                  7.7           2.83          3.04
MMDenseLSTM            spectrogram  804 songs           6.0           -             -
D3Net                  spectrogram  1.5k songs          6.7           -             -
Spleeter               spectrogram  25k songs           5.9           -             -
Band-Split RNN         spectrogram  1.7k (mixes only)   9.0           -             -
HT Demucs f.t. (v4)    hybrid       800 songs           9.0           -             -
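
For reference, Overall SDR above is the mean of per-source SDRs, where SDR compares an estimated stem against the ground-truth stem. A minimal plain-Python sketch (the function names and toy signals are illustrative, not part of Demucs or its evaluation code):

```python
import math

def sdr(reference, estimate):
    """Signal-to-Distortion Ratio in dB: 10*log10(|ref|^2 / |ref - est|^2)."""
    signal = sum(r * r for r in reference)
    noise = sum((r - e) ** 2 for r, e in zip(reference, estimate))
    return 10 * math.log10(signal / noise)

def overall_sdr(references, estimates):
    """Mean SDR over the sources (drums, bass, other, vocals)."""
    return sum(sdr(r, e) for r, e in zip(references, estimates)) / len(references)

# Toy example: an estimate with a uniform 10% amplitude error.
ref = [1.0, -1.0, 1.0, -1.0]
est = [0.9, -0.9, 0.9, -0.9]
print(round(sdr(ref, est), 2))                      # 20.0
print(round(overall_sdr([ref] * 4, [est] * 4), 2))  # 20.0
```

Higher is better: a 9.0 dB Overall SDR means the residual error energy is roughly 8x smaller than the reference energy, averaged over sources.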

Requirements

You will need at least Python 3.8. See requirements_minimal.txt for requirements for separation only, and environment-[cpu|cuda].yml (or requirements.txt) if you want to train a new model.

For Windows users

Every time you see python3, replace it with python.exe. You should always run commands from the Anaconda console.

For musicians

If you just want to use Demucs to separate tracks, you can install it with

python3 -m pip install -U demucs

For bleeding edge versions, you can install directly from this repo using

python3 -m pip install -U git+https://github.com/facebookresearch/demucs#egg=demucs

Advanced OS support is provided on the dedicated pages; you must read the page for your OS before posting an issue.

For machine learning scientists

If you have anaconda installed, you can run from the root of this repository:

conda env update -f environment-cpu.yml  # if you don't have GPUs
conda env update -f environment-cuda.yml # if you have GPUs
conda activate demucs
pip install -e .

This will create a demucs environment with all the dependencies installed.

You will also need to install soundstretch/soundtouch: on macOS you can do brew install sound-touch, and on Ubuntu sudo apt-get install soundstretch. This is used for the pitch/tempo augmentation.

Running in Docker

Thanks to @xserrat, there is now a Docker image definition ready for using Demucs. This can ensure all libraries are correctly installed without interfering with the host OS. See his repo Docker Facebook Demucs for more information.

Running from Colab

I made a Colab to easily separate tracks with Demucs. Note that transfer speeds with Colab can be slow for large media files, but it lets you use Demucs without installing anything.

Demucs on Google Colab

Web Demo

Integrated to Hugging Face Spaces with Gradio. See demo: Hugging Face Spaces

Graphical Interface

@CarlGao4 has released a GUI for Demucs: CarlGao4/Demucs-Gui. Downloads for Windows and macOS are available here. Use the FossHub mirror to speed up your download.

@Anjok07 provides a self-contained GUI in UVR (Ultimate Vocal Remover) that supports Demucs.

Other providers

Audiostrip is providing free online separation with Demucs on their website https://audiostrip.co.uk/.

MVSep also provides free online separation, select Demucs3 model B for the best quality.

Neutone provides a realtime Demucs model in their free VST/AU plugin that can be used in your favorite DAW.

Separating tracks

In order to try Demucs, you can just run from any folder (as long as it is properly installed):

demucs PATH_TO_AUDIO_FILE_1 [PATH_TO_AUDIO_FILE_2 ...]   # for Demucs
# If you used `pip install --user` you might need to replace demucs with python3 -m demucs
python3 -m demucs --mp3 --mp3-bitrate BITRATE PATH_TO_AUDIO_FILE_1  # output files saved as MP3
        # use --mp3-preset to change encoder preset, 2 for best quality, 7 for fastest
# If your filename contains spaces, don't forget to quote it!
demucs "my music/my favorite track.mp3"
# You can select different models with `-n`. mdx_q is the quantized model, smaller but maybe a bit less accurate.
demucs -n mdx_q myfile.mp3
# If you only want to separate vocals out of an audio file, use `--two-stems=vocals` (you can also set it to drums or bass)
demucs --two-stems=vocals myfile.mp3

If you have a GPU but run out of memory, please use --segment SEGMENT to reduce the length of each split. SEGMENT should be an integer; we recommend not going below 10 (the bigger the number, the more memory is required, but quality may increase). Setting the environment variable PYTORCH_NO_CUDA_MEMORY_CACHING=1 also helps. If this still does not help, please add -d cpu to the command line. See the section hereafter for more details on the memory requirements for GPU acceleration.

Separated tracks are stored in the separated/MODEL_NAME/TRACK_NAME folder. There you will find four stereo wav files sampled at 44.1 kHz: drums.wav, bass.wav, other.wav, vocals.wav (or .mp3 if you used the --mp3 option).

All audio formats supported by torchaudio can be processed (i.e. wav, mp3, flac, ogg/vorbis on Linux/macOS, etc.). On Windows, torchaudio has limited support, so we rely on ffmpeg, which should support pretty much anything. Audio is resampled on the fly if necessary. The output will be a wave file encoded as int16. You can save as float32 wav files with --float32, or 24-bit integer wav with --int24. You can pass --mp3 to save as mp3 instead, and set the bitrate with --mp3-bitrate (default is 320 kbps).
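
For intuition on the integer output encodings, here is a rough sketch of quantizing float samples in [-1, 1] to int16 or int24 (illustrative only, not the code Demucs actually uses; real int24 files additionally pack each value into 3 bytes):

```python
def float_to_int(samples, bits=16):
    """Quantize float samples in [-1, 1] to signed integers of the given bit width."""
    scale = 2 ** (bits - 1) - 1        # 32767 for int16, 8388607 for int24
    out = []
    for s in samples:
        s = max(-1.0, min(1.0, s))     # clamp out-of-range samples first
        out.append(int(round(s * scale)))
    return out

print(float_to_int([0.0, 1.0, -1.0, 0.5]))   # [0, 32767, -32767, 16384]
print(float_to_int([1.0], bits=24))          # [8388607]
```

The clamp step is why clipping matters: any sample outside [-1, 1] would be flattened to the maximum integer value.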

The output may need clipping, in particular due to some separation artifacts. Demucs will automatically rescale each output stem so as to avoid clipping. This can however break the relative volume between stems. If instead you prefer hard clipping, pass --clip-mode clamp. You can also try reducing the volume of the input mixture before feeding it to Demucs.
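
The two strategies can be contrasted with a toy sketch (simplified; function names are illustrative, the real option is --clip-mode):

```python
def rescale(stem):
    """Scale the whole stem down so the peak fits in [-1, 1] (default behaviour)."""
    peak = max(abs(s) for s in stem)
    return [s / peak for s in stem] if peak > 1.0 else stem

def clamp(stem):
    """Hard-clip every sample to [-1, 1] (--clip-mode clamp)."""
    return [max(-1.0, min(1.0, s)) for s in stem]

stem = [0.5, 1.25, -0.5]
print(rescale(stem))  # [0.4, 1.0, -0.4] -- relative volume vs other stems changes
print(clamp(stem))    # [0.5, 1.0, -0.5] -- only the over-range peak is distorted
```

Rescaling keeps the stem free of distortion but quietens it relative to the others; clamping keeps levels but distorts the peaks.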

Other pre-trained models can be selected with the -n flag. The list of pre-trained models is:

  • htdemucs: first version of Hybrid Transformer Demucs. Trained on MusDB + 800 songs. Default model.
  • htdemucs_ft: fine-tuned version of htdemucs; separation will take 4 times longer but might be a bit better. Same training set as htdemucs.
  • htdemucs_6s: 6-source version of htdemucs, with piano and guitar added as sources. Note that the piano source does not work well at the moment.
  • hdemucs_mmi: Hybrid Demucs v3, retrained on MusDB + 800 songs.
  • mdx: trained only on MusDB HQ, winning model on track A at the MDX challenge.
  • mdx_extra: trained with extra training data (including MusDB test set), ranked 2nd on the track B of the MDX challenge.
  • mdx_q, mdx_extra_q: quantized version of the previous models. Smaller download and storage but quality can be slightly worse.
  • SIG: where SIG is a single model from the model zoo.

The --two-stems=vocals option allows separating vocals from the rest (e.g. karaoke mode). vocals can be changed to any source in the selected model. Note that this mixes the files after fully separating the mix, so it won't be faster or use less memory.
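
Conceptually, the second stem is just the sum of all the other separated sources. A toy sketch (hypothetical helper names; the actual mixing happens inside Demucs after full separation):

```python
def two_stems(sources, keep="vocals"):
    """Collapse a full separation into `keep` and the sum of everything else."""
    length = len(next(iter(sources.values())))
    rest = [0.0] * length
    for name, stem in sources.items():
        if name != keep:
            rest = [a + b for a, b in zip(rest, stem)]
    return {keep: sources[keep], f"no_{keep}": rest}

sources = {"drums": [0.25, 0.5], "bass": [0.25, 0.25],
           "other": [0.0, 0.25], "vocals": [0.5, 0.5]}
print(two_stems(sources))  # {'vocals': [0.5, 0.5], 'no_vocals': [0.5, 1.0]}
```

This is why the option cannot save time or memory: all four sources must still be predicted before being recombined.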

The --shifts=SHIFTS option performs multiple predictions with random shifts (a.k.a. the shift trick) of the input and averages them. This makes prediction SHIFTS times slower; don't use it unless you have a GPU.
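
The shift trick boils down to: delay the input by a random offset, predict, undo the delay, then average the results. A toy sketch with a stand-in model (illustrative only; the real implementation shifts by up to a fraction of a second of audio):

```python
import random

def shift_trick(model, x, shifts=2, max_shift=2, seed=0):
    """Average predictions over randomly shifted copies of the input."""
    rng = random.Random(seed)
    acc = [0.0] * len(x)
    for _ in range(shifts):
        k = rng.randrange(max_shift + 1)
        padded = [0.0] * k + x           # delay the input by k samples
        y = model(padded)                # predict on the shifted input
        undone = y[k:k + len(x)]         # undo the shift on the output
        acc = [a + b for a, b in zip(acc, undone)]
    return [a / shifts for a in acc]

identity = lambda t: t                   # stand-in "model" for demonstration
print(shift_trick(identity, [1.0, 2.0, 3.0, 4.0]))  # [1.0, 2.0, 3.0, 4.0]
```

Each pass costs one full forward prediction, which is why the option is SHIFTS times slower.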

The --overlap option controls the amount of overlap between prediction windows. Default is 0.25 (i.e. 25%), which is probably fine. It can probably be reduced to 0.1 to improve speed a bit.
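
Overlapped windows amount to chunking the track with a stride of segment * (1 - overlap). A simplified sketch (Demucs additionally cross-fades the overlapping regions, which is omitted here):

```python
def chunk_starts(length, segment, overlap=0.25):
    """Start offsets of prediction windows covering `length` samples."""
    stride = max(1, int(segment * (1 - overlap)))
    # The final window is padded or truncated in practice.
    return list(range(0, length, stride))

print(chunk_starts(20, 8))        # [0, 6, 12, 18]  -- overlap 0.25, stride 6
print(chunk_starts(20, 8, 0.1))   # [0, 7, 14]      -- less overlap, fewer windows, faster
```

Lower overlap means fewer windows to predict, hence the speed-up, at the cost of less smoothing at window boundaries.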

The -j flag allows specifying a number of parallel jobs (e.g. demucs -j 2 myfile.mp3). This multiplies the RAM used by the same amount, so be careful!

Memory requirements for GPU acceleration

If you want to use GPU acceleration, you will need at least 3 GB of RAM on your GPU. However, about 7 GB will be required with the default arguments. Add --segment SEGMENT to change the size of each split. If you only have 3 GB of memory, set SEGMENT to 8 (though quality may be worse if this argument is too small). Setting the environment variable PYTORCH_NO_CUDA_MEMORY_CACHING=1 can help users with even less memory, such as 2 GB (I separated a 4-minute track using only 1.5 GB), but this makes separation slower.

If you do not have enough memory on your GPU, simply add -d cpu to the command line to use the CPU. With Demucs, processing time should be roughly equal to 1.5 times the duration of the track.

Calling from another Python program

The main function provides an opt parameter as a simple API. You can just pass the parsed command line as this parameter:

# Assume that your command is `demucs --mp3 --two-stems vocals -n mdx_extra "track with space.mp3"`
# The following code is equivalent to the command above:
import demucs.separate
demucs.separate.main(["--mp3", "--two-stems", "vocals", "-n", "mdx_extra", "track with space.mp3"])

# Or like this
import demucs.separate
import shlex
demucs.separate.main(shlex.split('--mp3 --two-stems vocals -n mdx_extra "track with space.mp3"'))
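
If you prefer building the argument list programmatically, here is a small helper sketch (a hypothetical convenience wrapper, not part of the demucs API; it only uses flags shown in this README):

```python
import shlex

def demucs_args(track, model="htdemucs", two_stems=None, mp3=False):
    """Build an argument list suitable for demucs.separate.main from keyword options."""
    args = ["-n", model]
    if mp3:
        args.append("--mp3")
    if two_stems:
        args += ["--two-stems", two_stems]
    args.append(track)
    return args

args = demucs_args("track with space.mp3", model="mdx_extra",
                   two_stems="vocals", mp3=True)
print(args)              # ['-n', 'mdx_extra', '--mp3', '--two-stems', 'vocals', 'track with space.mp3']
print(shlex.join(args))  # -n mdx_extra --mp3 --two-stems vocals 'track with space.mp3'
```

Passing a list avoids any shell-quoting issues with spaces in filenames, since no shell is involved.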

Training Demucs

If you want to train (Hybrid) Demucs, please follow the training doc.

MDX Challenge reproduction

In order to reproduce the results from the Track A and Track B submissions, check out the MDX Hybrid Demucs submission repo.

How to cite

@inproceedings{rouard2022hybrid,
  title={Hybrid Transformers for Music Source Separation},
  author={Rouard, Simon and Massa, Francisco and D{\'e}fossez, Alexandre},
  booktitle={ICASSP 23},
  year={2023}
}

@inproceedings{defossez2021hybrid,
  title={Hybrid Spectrogram and Waveform Source Separation},
  author={D{\'e}fossez, Alexandre},
  booktitle={Proceedings of the ISMIR 2021 Workshop on Music Source Separation},
  year={2021}
}

License

Demucs is released under the MIT license as found in the LICENSE file.
