This open source project serves two purposes.
- Collection and evaluation of a question answering dataset to improve existing QA/search methods (COVID-QA)
- Question matching: provide trustworthy answers to questions about COVID-19 via NLP (outdated)
COVID-QA
- Link to COVID-QA Dataset
- Accompanying paper on OpenReview
- Annotation guidelines as PDF or videos
- deepset/roberta-base-squad2-covid, a QA model trained on COVID-QA (see the loading sketch below)
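The checkpoint is published on the Hugging Face Hub, so it can be loaded with the `transformers` question-answering pipeline. A minimal sketch; the question and context below are made up for illustration, not taken from the dataset:

```python
from transformers import pipeline

# Load the published COVID-QA checkpoint from the Hugging Face Hub.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2-covid")

# Illustrative question/context pair.
result = qa(
    question="What is the incubation period of the coronavirus?",
    context="The incubation period is believed to range from 2 to 14 days.",
)
print(result["answer"], result["score"])
```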
Update 14th April, 2020: We are open-sourcing the first batch of SQuAD-style question answering annotations. Thanks to Tony Reina for managing the process and to the many professional annotators who spent valuable time looking through COVID-related research papers.
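The released annotations follow the SQuAD JSON layout (articles → paragraphs → question/answer spans), so they can be inspected with a few lines of standard-library Python. The file name below is an assumption; point it at your local copy of the dataset:

```python
import json

# Path is an assumption - adjust to wherever you saved the COVID-QA release.
with open("COVID-QA.json", encoding="utf-8") as f:
    covid_qa = json.load(f)

# SQuAD layout: data -> paragraphs -> qas, each QA with answer spans into the context.
for article in covid_qa["data"]:
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            answers = [a["text"] for a in qa["answers"]]
            print(qa["question"], "->", answers[:1])
```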
FAQ matching
Update 17th June, 2020: As the pandemic is thankfully slowing down and other information sources have caught up, we decided to take our hosted API and UI offline. We will keep the repository here as an inspiration for other projects and to share the COVID-QA dataset.
⚡ Problem
- People have many questions about COVID-19
- Answers are scattered on different websites
- Finding the right answers takes a lot of time
- Trustworthiness of answers is hard to judge
- Many answers become outdated quickly
💡 Idea
- Aggregate FAQs and texts from trustworthy data sources (WHO, CDC ...)
- Provide a UI where people can ask questions
- Use NLP to match incoming user questions with meaningful answers (see the matching sketch after this list)
- Users can provide feedback about answers to improve the NLP model and flag outdated or wrong answers
- Display most common queries without good answers to guide data collection and model improvements
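One way to implement the matching step is to embed both the incoming question and the stored FAQ questions and return the answer of the closest FAQ by cosine similarity. The sketch below uses sentence-transformers with a generic model; the project itself built this on Haystack (see Tech below), so treat the model name and FAQ entries as placeholders:

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder FAQ pairs - in the project these were scraped from WHO, CDC, etc.
faq = [
    ("How does COVID-19 spread?", "Mainly through respiratory droplets ..."),
    ("How long is the incubation period?", "Typically between 2 and 14 days ..."),
]

# Model choice is an assumption; any sentence embedding model works here.
model = SentenceTransformer("all-MiniLM-L6-v2")
faq_embeddings = model.encode([question for question, _ in faq], convert_to_tensor=True)

def match(user_question: str):
    """Return the (question, answer) pair closest to the user question, plus its score."""
    query_embedding = model.encode(user_question, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, faq_embeddings)[0]
    best = int(scores.argmax())
    return faq[best], float(scores[best])

print(match("What is the incubation time of the virus?"))
```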
⚙️ Tech
- Scrapers to collect data
- Elasticsearch to store texts, FAQs, embeddings
- NLP models implemented via Haystack to find answers by a) detecting similar questions in FAQs and b) detecting answers in free text (extractive QA); see the pipeline sketch after this list
- React Frontend
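A rough sketch of how the extractive-QA path (b) can be wired together with Haystack and Elasticsearch. It assumes a locally running Elasticsearch instance and the Haystack 1.x API; import paths and class names differ between Haystack versions, so adjust accordingly:

```python
from haystack.document_stores import ElasticsearchDocumentStore
from haystack.nodes import BM25Retriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

# Assumes Elasticsearch is running on localhost; the index name is arbitrary.
document_store = ElasticsearchDocumentStore(host="localhost", index="covid_docs")
document_store.write_documents(
    [{"content": "The incubation period is believed to range from 2 to 14 days."}]
)

# A sparse retriever narrows the search, the COVID-QA reader extracts answer spans.
retriever = BM25Retriever(document_store=document_store)
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2-covid")
pipeline = ExtractiveQAPipeline(reader=reader, retriever=retriever)

prediction = pipeline.run(
    query="How long is the incubation period?",
    params={"Retriever": {"top_k": 10}, "Reader": {"top_k": 3}},
)
for answer in prediction["answers"]:
    print(answer.answer, answer.score)
```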