  • Stars: 575
  • Rank: 75,220 (top 2%)
  • Language: Python
  • Created: about 2 years ago
  • Updated: 4 months ago

Repository Details

Audio Dataset for training CLAP and other models

What is Audio Dataset Project?

This repository was created for the Audio Dataset Project, an audio dataset collection initiative announced by LAION. These datasets, each containing an enormous number of audio-text pairs, will eventually be processed and used to train CLAP (Contrastive Language-Audio Pretraining) and other models.

Here is an introductory video explaining the project.

Who are we?

Since the Audio Dataset is an open-source project belonging to LAION, it is driven by a team of open-source contributors. Alongside LAION members, the team includes a three-person research group of Yusong Wu, Ke Chen, and Tianyu Zhang from Mila and UCSD, intern Marianna Nezhurina, previous intern Yuchen Hui, and many enthusiastic contributors from all over the world, such as @PiEquals4#1909 on the Discord server.

What have we done?

  • We keep collecting audio datasets, and here is the LIST of everything we have found so far.
  • We define the standard and method for storing and processing all audio datasets, which is essential for unifying the final dataset format and simplifying model training. The final dataset format we currently use is webdataset (see the packing sketch after this list). The concrete data processing pipeline is specified here.
  • You can also find the processing code for each processed audio dataset. Dependencies required for testing these scripts are specified in environment.txt. Please note that environment.txt may be a non-exhaustive list. There is also environment.yml, a list with redundant packages (i.e. a superset $\supset$ of the exhaustive list); you can use conda env create --name envname --file=environment.yml to create the environment and conda activate envname to use it.
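As an illustration of the target webdataset format, here is a minimal sketch of packing audio-text pairs into a tar shard with the webdataset Python library. Shard naming, sample keys, and the {"text": ...} metadata layout are assumptions made for illustration; the pipeline document linked above remains the authoritative specification.

```python
# Minimal sketch (not the official pipeline): pack (audio, caption) pairs into a
# webdataset-style tar shard. Shard naming, sample keys, and the {"text": ...}
# metadata layout are illustrative assumptions; follow the pipeline spec.
import json
from pathlib import Path

import webdataset as wds  # pip install webdataset


def pack_shard(pairs, shard_path="00000.tar"):
    """pairs: iterable of (audio_path, caption) tuples."""
    with wds.TarWriter(shard_path) as writer:
        for index, (audio_path, caption) in enumerate(pairs):
            writer.write({
                "__key__": f"{index:06d}",  # shared key groups the files of one sample
                "flac": Path(audio_path).read_bytes(),  # raw audio bytes, stored as <key>.flac
                "json": json.dumps({"text": caption}).encode("utf-8"),  # caption, stored as <key>.json
            })


if __name__ == "__main__":
    pack_shard([("example.flac", "a dog barking in the distance")])
```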

Contributing

Contact

  • You can find us in the CLAP channel of the LAION Discord server (the channel name is clap, in lower case).
  • If you have any questions about the project, feel free to talk in the CLAP channel with intern Marianna Nezhurina (marianna13#7139), Christoph Schuhmann (@spirit-from-germany#1488), Richard (@rvencu#4120), Romain (@rom1504#5008), Yuchen Hui (@Yuchen Hui#8574), Yusong Wu (@Yusong Wu#3047), Ke Chen (@Ke Chen#0709), or Tianyu Zhang (@tianyuzhang#1725). The text in parentheses is the Discord ID.
  • Moreover, if you need computational resources while contributing, please go to the compute-allocation channel of the Discord server and read the pinned messages on using the LAION pods. If you run into any problem, feel free to ask questions in the channel.
  • July 14 update: the old LAION pods are no longer accessible, so you have to ask Richard (@rvencu#4120) in the CLAP channel for access to the new LAION cluster.

Project progress

We have created a GitHub project page to keep track of the progress of data collection and data processing. Here is a description of each board of the project:

  • Todo board: contains all the datasets in the LIST that have not yet been converted to webdataset form and on which nobody is currently working.
  • Assigned/In progress/Processing board: lists datasets assigned to someone for processing, i.e. contributors are already working on them.
  • Review board: once a dataset has been converted to webdataset format, the corresponding item should be moved here, indicating that it is ready for further review by our team (e.g. checking for format errors to ensure the quality of model training; see the sanity-check sketch after this list).
  • Done board: if no problem is found at the review stage, the dataset is archived to the “Done” board, meaning it is ready for model training.
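For that review step, here is a minimal sanity-check sketch. It assumes each sample in a shard consists of <key>.flac plus <key>.json carrying a "text" field, which is an assumption based on the pipeline description above rather than the spec itself.

```python
# Minimal sanity check of a packed shard before moving it to the Review board.
# The expected fields ("flac" audio bytes, "json" with a "text" caption) are
# illustrative assumptions about the format.
import json

import webdataset as wds  # pip install webdataset

dataset = wds.WebDataset("00000.tar")  # yields one dict per sample, raw bytes keyed by extension

for sample in dataset:
    key = sample["__key__"]
    assert "flac" in sample and "json" in sample, f"sample {key} is missing a field"
    assert len(sample["flac"]) > 0, f"sample {key} has an empty audio payload"
    meta = json.loads(sample["json"])
    assert isinstance(meta.get("text"), str), f"sample {key} has no text caption"
```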

How to contribute?

There are two main ways to contribute to our audio dataset project.

  1. Collecting scattered audio sources by web scraping (and then converting them to webdataset format, i.e. the second point below).

    Example: crawling word-pronunciation pairs from the Cambridge Dictionary, or scraping videos from YouTube, extracting the audio, and associating it with the title.

    Please join us on Discord if you want to know which scattered audio sources we currently focus on, or if you have suggestions about what we should scrape next.

  2. Processing curated datasets, i.e. converting them to webdataset format according to the pipeline.

    Example: Clotho is a curated audio dataset with its own format, which we convert to webdataset format with the aid of data_preprocess/preprocess_clotho.py and utils/make_tars.py. For more processing details, please read the pipeline part.

    For this type of contribution, we suggest looking through the datasets in the Todo board of the GitHub project page and joining us on the Discord server. Please contact Marianna Nezhurina (marianna13#7139) in the CLAP channel after you have chosen a dataset from the Todo board to process, so that we can keep track of progress and avoid several people working on the same dataset at once.

  • Last but not least, if you find any interesting curated dataset (e.g. Clotho), please tell us in the LAION Discord server. We will eventually add it to the LIST.

Contribution Delivery

Ideally, in both cases mentioned above, we hope to receive the dataset from you in webdataset format. When you have packed up your dataset into webdataset format, upload it to our AWS S3 bucket, e.g. aws s3 cp --recursive your/webdataset/ s3://s-laion-audio/webdataset_tar/your_webdataset/ (copying a directory requires --recursive), and contact Marianna Nezhurina (marianna13#7139) so that she can move the dataset to the Review board. (If possible, please also upload the processed, not yet packed-up, dataset to s3://s-laion-audio/processed_dataset.)

If you run into AWS S3 accessibility problems, please see the LAION cluster note in the Contact section above: the S3 bucket is accessible when visited from the new LAION cluster.

Nevertheless, for scraped datasets, we also accept a CSV file with the following structure:

url_link_to_the_audio_allowing_us_to_download, text

i.e. each line is an audio_url-text pair, which we can easily handle with a batch script (see the sketch below).
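Here is a minimal sketch of such a batch script: it reads a url,text CSV in the structure above and stores each audio file next to its caption. Column order, output layout, and file naming are illustrative assumptions, not a fixed specification.

```python
# Minimal sketch: download the audio-text pairs listed in a url,text CSV.
# Output directory layout and naming are illustrative assumptions.
import csv
import json
from pathlib import Path

import requests  # pip install requests


def download_pairs(csv_path, out_dir="downloaded"):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for index, (url, text) in enumerate(csv.reader(f)):
            response = requests.get(url.strip(), timeout=30)
            response.raise_for_status()
            (out / f"{index:06d}.audio").write_bytes(response.content)  # raw audio payload
            (out / f"{index:06d}.json").write_text(json.dumps({"text": text.strip()}))  # paired caption


if __name__ == "__main__":
    download_pairs("scraped_pairs.csv")
```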

The End

Last updated on July 14, 0:57 EST, 2022.
Last updated on September 5, 11:00 EST, 2022 (Marianna Nezhurina takes over the intern work of Yuchen Hui).
Last updated on November 8, 18:55 EST, 2022 (release of the LAION-Audio-630K dataset).

More Repositories

  1. Open-Assistant (Python, 36,748 stars): OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
  2. CLIP_benchmark (Jupyter Notebook, 497 stars): CLIP-like model evaluation.
  3. dalle2-laion (Python, 496 stars): Pretrained Dalle2 from laion.
  4. CLAP (Python, 479 stars): Contrastive Language-Audio Pretraining.
  5. natural_voice_assistant (Python, 413 stars)
  6. laion-3d (238 stars): Collect large 3d dataset and build models.
  7. phenaki (Python, 218 stars): A phenaki reproduction using pytorch.
  8. aesthetic-predictor (Jupyter Notebook, 199 stars): A linear estimator on top of clip to predict the aesthetic quality of pictures.
  9. Open-Instruction-Generalist (Python, 195 stars): Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks.
  10. ldm-finetune (Python, 169 stars): Home of `erlich` and `ongo`. Finetune latent-diffusion/glid-3-xl text2image on your own data.
  11. CLIP-based-NSFW-Detector (Python, 135 stars)
  12. scaling-laws-openclip (Jupyter Notebook, 135 stars): Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143).
  13. laion-datasets (HTML, 131 stars): Description and pointers of laion datasets.
  14. laion-dreams (121 stars): Aim for the moon. If you miss, you may hit a star.
  15. laion.ai (HTML, 99 stars)
  16. LAION-5B-WatermarkDetection (Python, 88 stars)
  17. Open-GIA (88 stars): O-GIA is an umbrella for a research, infrastructure and projects ecosystem that should provide open source, reproducible datasets, models, applications & safety tools for Open Generalist Interactive Agents (O-GIA). O-GIA systems will act in collaboration with humans or autonomously, supporting various kinds of validated decision making and assistance.
  18. video-clip (85 stars): Let's make a video clip.
  19. General-GPT (Jupyter Notebook, 64 stars)
  20. Text-to-speech (Python, 59 stars)
  21. Big-Interleaved-Dataset (Python, 56 stars)
  22. riverbed (Python, 40 stars): Tools for content datamining and NLP at scale.
  23. Discord-Scrapers (Python, 40 stars): Implementation of a discord channel scraper to generate datasets.
  24. OCR-ensemble (Jupyter Notebook, 37 stars)
  25. Conditional-Pretraining-of-Large-Language-Models (Python, 36 stars)
  26. interesting-text-datasets (33 stars)
  27. blade2blade (Python, 32 stars): Adversarial Training and SFT for Bot Safety Models.
  28. deep-image-diffusion-prior (Jupyter Notebook, 31 stars): Inverts CLIP text embeds to image embeds and visualizes with deep-image-prior.
  29. watermark-detection (Python, 31 stars): A repository containing datasets and tools to train a watermark classifier.
  30. temporal-embedding-aggregation (Python, 30 stars): Aggregating embeddings over time.
  31. medical (Jupyter Notebook, 28 stars): This repository will be a summary and outlook on all our open, medical, AI advancements.
  32. Anh (Python, 27 stars): Anh - LAION's multilingual assistant datasets and models.
  33. laion50BU (24 stars): Un-*** 50 billions multimodality dataset.
  34. conditioned-prior (Python, 18 stars): (wip) Use LAION-AI's CLIP "conditioned prior" to generate CLIP image embeds from CLIP text embeds.
  35. LAION-SAFETY (Jupyter Notebook, 16 stars): An open toolbox for NSFW & toxicity detection.
  36. opendream (TypeScript, 14 stars): Frontend (and soon also middleware and backend) for a new, open-source image generation platform.
  37. laion5B-paper (13 stars): Building the laion5B paper.
  38. notebooks (Jupyter Notebook, 12 stars): A collection of generative and training notebooks getting mirrored to Google Colab.
  39. laionide (Python, 12 stars): This repository contains training code and checkpoints for finetuning glide.
  40. laion-dedup (Python, 12 stars)
  41. super-resolution (11 stars): This is the LAION repository for creating open super-resolution models with the help of LAION-5B subsets.
  42. dataset-spec (Python, 10 stars): Describe the format of image/text datasets.
  43. LAION-PEOPLE (10 stars): This project provides a data set with bounding boxes, body poses, 3D face meshes & captions of people from our LAION-2.2B. Additionally it provides clusters based on the poses and face meshes and pose-related captions based on these cluster assignments.
  44. image-deduplication-testset (HTML, 8 stars)
  45. project-menu (8 stars): Projects at LAION.
  46. laion-ai.github.io (Svelte, 6 stars): laion github website.
  47. dataset-usage (6 stars): This repository is a summary of all systems and scientific papers that use LAION datasets.
  48. repository-overview (5 stars): This repository will give a quick overview of all projects and repositories from LAION.
  49. KAISER (4 stars): Knowledge Acquisition and Interlinking via Semantic Embeddings and Reasoning.
  50. LionizeR (Python, 4 stars): Experiments with Summarization, Long Context and Retrieval.
  51. safety-pipeline (Python, 3 stars): A collection of safety classifiers and models to process images and texts.
  52. decentralized-learning (3 stars): A basic setup for decentralized learning that can be used for training future DALLE/CLIP/CLAP models.
  53. diffusion-prior (Python, 3 stars): DALL-E2 diffusion prior.
  54. GIF (Python, 3 stars): General / Global Inference Framework.
  55. website (HTML, 3 stars): This is the development repository of the LAION-AI website.
  56. lucidrains-projects (Jupyter Notebook, 3 stars): A summary of all lucidrains repositories and links to training / research approaches by LAION or other communities.
  57. NeoGen (3 stars)
  58. laion5b-subsets (Jupyter Notebook, 2 stars): Creating subsets from laion5b via embeddings search.
  59. human_artifacts (2 stars): A repo containing images for artifact annotation.
  60. public-relations (2 stars): All media / publicity on LAION and related stuff!
  61. dataset-inference (Python, 2 stars): The new repository for the general inference pipeline.
  62. public-domain-images (2 stars): A collection of public domain images donated for ML training.
  63. math_problems-step-by-step_solutions (Python, 2 stars): Here we provide and collect many functions to generate math problems and step-by-step solutions for LLM training.
  64. language-models (2 stars)
  65. introduction-resources (2 stars): Recommended intro resources.
  66. balanced-laion5b (2 stars): This repository shall help find a good distribution for huge datasets like LAION-5B for more efficient training.
  67. hand-inference (Jupyter Notebook, 2 stars): A model to run hand inference on a cluster.
  68. AIW (1 star): Alice in Wonderland code base for experiments and raw experiment data.
  69. laion5b-bias (1 star): This repository is a collection of found biases in the LAION-5B dataset.
  70. dataset-tasks (1 star): Datasets that should be downloaded & converted to our standard training format.