  • Stars: 178
  • Rank: 214,989 (top 5%)
  • Language: Jupyter Notebook
  • License: Other
  • Created: over 4 years ago
  • Updated: over 1 year ago


Repository Details

Data repository of Project Coswara

Coswara-Data

This work is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License.

UPDATE: The current full version of the Coswara dataset is now published with open access in Nature Scientific Data, 2023.

Project Coswara, by the Indian Institute of Science (IISc) Bangalore, is an attempt to build a diagnostic tool for COVID-19 detection using audio recordings of an individual, such as breathing, cough, and speech sounds. Currently, the project is in the data-collection stage, carried out through crowdsourcing. To contribute your audio samples, please visit Project Coswara (https://coswara.iisc.ac.in/). The exercise takes 5-7 minutes.

What am I looking at? This GitHub repository contains the raw audio data collected through https://coswara.iisc.ac.in/. Every participant contributes nine sound samples. To learn more about the dataset, you can read the paper: Coswara - A Database of Breathing, Cough, and Voice Sounds for COVID-19 Diagnosis. Note that the dataset has grown since the paper came out. We also maintain a (less frequently updated) blog here.

What is the structure of the repository? Each folder contains metadata and audio recordings corresponding to the contributors. The folders are compressed; to download and extract the data, you can run the script extract_data.py. A sketch of what that extraction step looks like is given below.
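For illustration, here is a minimal Python sketch of the extraction step. It is not the repository's extract_data.py; it assumes each date folder stores its audio as a split multi-part .tar.gz archive (chunk suffixes such as .tar.gz.aa are an assumption), one common way large folders are compressed for GitHub.

    # Minimal sketch, not the repository's extract_data.py.
    # Assumption: each date folder holds a split .tar.gz archive
    # (e.g. 20200413.tar.gz.aa, 20200413.tar.gz.ab, ...).
    import glob
    import tarfile
    from pathlib import Path

    def extract_folder(folder, out_dir="Extracted_data"):
        parts = sorted(glob.glob(f"{folder}/*.tar.gz.*"))
        if not parts:
            return
        combined = Path(folder) / "combined.tar.gz"
        with open(combined, "wb") as out:
            for part in parts:           # re-join the split chunks in order
                out.write(Path(part).read_bytes())
        with tarfile.open(combined, "r:gz") as tar:
            tar.extractall(out_dir)      # unpack the audio and metadata files

    extract_folder("2020-04-13")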

What are the different sound samples? Sound samples collected include breathing sounds (fast and slow), cough sounds (deep and shallow), phonation of sustained vowels (/a/ as in made, /i/, /o/), and counting numbers at a slow and fast pace. Metadata collected includes the participant's age, gender, location (country, state/province), current health status (healthy / exposed / positive / recovered), and the presence of comorbidities (pre-existing medical conditions).

Can I see the metadata before downloading the whole repository? Yes. The file combined_data.csv contains a summary of the metadata, and csv_labels_legend.json describes the columns present in combined_data.csv. A sketch for inspecting both files is given below.
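A quick way to inspect that summary, sketched below with pandas. The two file names come from this README; the structure of the legend (a mapping from column name to description) is an assumption, so no specific column names are hard-coded.

    # Minimal sketch: inspect the metadata summary without downloading audio.
    import json
    import pandas as pd

    meta = pd.read_csv("combined_data.csv")
    with open("csv_labels_legend.json") as f:
        legend = json.load(f)   # assumed to map each column to a description

    print(meta.shape)           # (participants, metadata columns)
    for column in meta.columns:
        print(column, "->", legend.get(column, "no description"))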

Is there any audio quality check? Yes. The audio files are manually listened to and labeled as one of three categories: 2 (excellent), 1 (good), 0 (bad). The labels are present in the annotations folder.
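For example, one might drop the low-quality recordings before any analysis. A hedged sketch follows; the annotation file name and column names are hypothetical, and only the 0/1/2 label scheme comes from this README.

    # Minimal sketch: filter out recordings annotated as bad (label 0).
    # "annotations/breathing_labels.csv" and the column names are hypothetical.
    import pandas as pd

    ann = pd.read_csv("annotations/breathing_labels.csv")
    usable = ann[ann["quality"] >= 1]    # keep 1 (good) and 2 (excellent)
    print(len(usable), "of", len(ann), "recordings are usable")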

How do I cite this dataset in my work? Great to know you found it useful. You can cite the paper: Coswara - A Database of Breathing, Cough, and Voice Sounds for COVID-19 Diagnosis (https://arxiv.org/abs/2005.10548).

Is there any web application for COVID-19 screening based on respiratory acoustics? Yes! You can record your respiratory sounds at the Coswara web application and obtain a COVID-19 probability score within a few seconds. Demo: here, paper: here.

What is the count of participants in each folder?

  • 2020-04-13 contains 76 samples.
  • 2020-04-15 contains 161 samples.
  • 2020-04-16 contains 197 samples.
  • 2020-04-17 contains 168 samples.
  • 2020-04-18 contains 46 samples.
  • 2020-04-19 contains 32 samples.
  • 2020-04-24 contains 28 samples.
  • 2020-04-30 contains 23 samples.
  • 2020-05-02 contains 155 samples.
  • 2020-05-04 contains 81 samples.
  • 2020-05-05 contains 14 samples.
  • 2020-05-25 contains 54 samples.
  • 2020-06-04 contains 20 samples.
  • 2020-07-07 contains 42 samples.
  • 2020-07-20 contains 21 samples.
  • 2020-08-03 contains 29 samples.
  • 2020-08-14 contains 83 samples.
  • 2020-08-20 contains 48 samples.
  • 2020-08-24 contains 19 samples.
  • 2020-09-01 contains 24 samples.
  • 2020-09-11 contains 16 samples.
  • 2020-09-19 contains 32 samples.
  • 2020-09-30 contains 26 samples.
  • 2020-10-12 contains 18 samples.
  • 2020-10-31 contains 29 samples.
  • 2020-11-30 contains 17 samples.
  • 2020-12-21 contains 27 samples.
  • 2021-02-06 contains 18 samples.
  • 2021-04-06 contains 66 samples.
  • 2021-04-19 contains 35 samples.
  • 2021-04-26 contains 41 samples.
  • 2021-05-07 contains 54 samples.
  • 2021-05-23 contains 31 samples.
  • 2021-06-03 contains 42 samples.
  • 2021-06-18 contains 56 samples.
  • 2021-06-30 contains 67 samples.
  • 2021-07-14 contains 52 samples.
  • 2021-08-16 contains 82 samples.
  • 2021-08-30 contains 64 samples.
  • 2021-09-14 contains 37 samples.
  • 2021-09-30 contains 103 samples.
  • 2022-01-16 contains 141 samples.
  • 2022-02-24 contains 372 samples.

Each folder also contains a CSV file with the metadata of each sample (that is, each participant); a sketch for tallying these per-folder counts follows below.
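A minimal sketch of that tally, assuming one metadata CSV per date folder with one row per participant (the folder/CSV layout is an assumption based on the description above):

    # Minimal sketch: tally participants from each date folder's metadata CSV.
    # Assumption: one CSV per date folder, one row per participant.
    import glob
    import pandas as pd

    total = 0
    for csv_path in sorted(glob.glob("20*/*.csv")):  # e.g. 2020-04-13/<metadata>.csv
        n = len(pd.read_csv(csv_path))
        print(csv_path, n)
        total += n
    print("total participants:", total)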

Can I know the individuals maintaining this project? Yes, we are a team of professors, postdocs, engineers, and research scholars affiliated with the Indian Institute of Science (IISc), Bangalore, India. Sriram Ganapathy, Assistant Professor, Dept. of Electrical Engineering, IISc, is the Principal Investigator of this project.

Current Members: Debarpan Bhattacharya, Neeraj Kumar Sharma, Prasanta Kumar Ghosh, Srikanth Raj Chetupalli, Sriram Ganapathy

Past Members: Anand Mohan, Ananya Muguli, Debottam Dutta, Prashant Krishnan, Pravin Mote, Rohit Kumar, Shreyas Ramoji

(arranged in alphabetical order)

More Repositories

1. NeuralPlda: Implementation of the Neural PLDA (NPLDA) model, a discriminative backend for speaker verification. Python, 98 stars.
2. DIHARD_2019_baseline_alltracks: Perl, 37 stars.
3. ZEST: Zero-Shot Emotion Style Transfer. Python, 30 stars.
4. NISP-Dataset: Shell, 26 stars.
5. E2E-NPLDA: End-to-end speaker verification based on x-vectors and Neural PLDA, a PyTorch implementation. Python, 23 stars.
6. multimodal_emotion_recognition: Implementation of the paper "Multimodal Transformer With Learnable Frontend and Self Attention for Emotion Recognition", submitted to ICASSP 2022. Python, 22 stars.
7. DIHARD-2019-baseline: Shell, 16 stars.
8. self_supervised_AHC: Code for Deep Self-Supervised Hierarchical Clustering for Speaker Diarization. Python, 16 stars.
9. LEAP_Diarization: LEAP diarization system for the Second DIHARD Challenge. Perl, 8 stars.
10. deep-cca-for-audio-EEG: Python, 7 stars.
11. MuDiCov: Python, 7 stars.
12. FeatureExtractionUsingFDLP: Feature extraction using FDLP. MATLAB, 4 stars.
13. CANAVER: Code for the Multimodal Cross Attention Network for Audio-Visual Emotion Recognition. Python, 4 stars.
14. Coswara-Exp: Repository containing Kaldi data preparation, etc., for Project Coswara. Python, 3 stars.
15. coswara-blog: Jupyter Notebook, 3 stars.
16. SSC: Python, 3 stars.
17. e2e-lid-hgru: End-to-end language identification with HGRU (ICASSP paper). Python, 2 stars.
18. SignalAnalysisUsingAm-FM: Signal analysis using an AM-FM model. MATLAB, 2 stars.
19. tcd_jasa2019: Data and code used in the paper. MATLAB, 2 stars.
20. EEGspeech-MatchMismatch: Code for the INTERSPEECH 2023 paper "Enhancing the EEG Speech Match Mismatch Tasks With Word Boundaries". Python, 2 stars.
21. CPC_DeepCluster: Implementation of "Self-Supervised Representation Learning with Deep Clustering for Acoustic Unit Discovery from Raw Speech", submitted to ICASSP 2022. Python, 2 stars.
22. Joint_FDLP_envelope_dereverberation_E2E_ASR: Shell, 1 star.
23. SelfSup_PLDA: Python, 1 star.
24. codes-in-tool-release: Jupyter Notebook, 1 star.
25. CVAE_FilterLearning: Modulation filter learning using CVAE. Python, 1 star.
26. lre-relevance-weighting: Code for a paper submitted to IEEE TASLP. Python, 1 star.
27. langtcd_demo: Experimental setup for testing the effects of language on talker change detection. Jupyter Notebook, 1 star.
28. V2S_Samples: Sample files for voice-to-singing conversion. 1 star.