  • Stars: 1,293
  • Rank: 36,124 (Top 0.8%)
  • Language: Python
  • License: MIT License
  • Created: about 5 years ago
  • Updated: about 1 year ago


A curated list of Multimodal Related Research.

Awesome Multimodal Research

This repo is reorganized from Paul Liang's repo, Reading List for Topics in Multimodal Machine Learning; feel free to raise pull requests!

News

[03/2023] OpenAI: ChatGPT plugins are tools designed for language models with safety as a core principle; they help ChatGPT access up-to-date information, run computations, or use third-party services. https://openai.com/blog/chatgpt-plugins

"We’re also hosting two plugins ourselves, a web browser and code interpreter. We’ve also open-sourced the code for a knowledge base retrieval plugin, to be self-hosted by any developer with information with which they’d like to augment ChatGPT."

[03/2023] Google Research: Bard is an early experiment that lets you collaborate with generative AI, powered by a research large language model (LLM), specifically a lightweight and optimized version of LaMDA. https://bard.google.com/

[03/2023] OpenAI: GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. https://openai.com/research/gpt-4

[03/2023] Google Research: PaLM-E is a new generalist robotics model that transfers knowledge from varied visual and language domains to a robotics system. https://ai.googleblog.com/2023/03/palm-e-embodied-multimodal-language.html

[03/2023] OpenAI: ChatGPT and Whisper APIs: developers can now integrate ChatGPT and Whisper models into their apps and products through the API. https://openai.com/blog/introducing-chatgpt-and-whisper-apis
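The API integration announced above can be sketched as a plain HTTPS call from the Python standard library. This is a minimal sketch, not official sample code: the `gpt-3.5-turbo` model name matches the ChatGPT API announcement, the prompt is illustrative, and a real call requires a valid `OPENAI_API_KEY` environment variable.

```python
import json
import os
import urllib.request

# Request payload for the Chat Completions endpoint; the model name
# is the one announced for the ChatGPT API, the prompt is illustrative.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "Summarize multimodal ML in one sentence."}
    ],
}

api_key = os.environ.get("OPENAI_API_KEY", "")
request = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
)

# Only send the request when a key is actually configured.
if api_key:
    with urllib.request.urlopen(request) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Whisper transcription works analogously through the `/v1/audio/transcriptions` endpoint with a `whisper-1` model and an uploaded audio file.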

[02/2023] MSR: Kosmos-1 is a multimodal large language model (MLLM) that is capable of perceiving multimodal input, following instructions, and performing in-context learning for not only language tasks but also multimodal tasks. https://github.com/microsoft/unilm#llm--mllm-multimodal-llm

[01/2023] Google Research: 2022 & beyond: Language, vision and generative models, a post in a series in which researchers across Google highlight some exciting progress in 2022 and present the vision for 2023 and beyond. https://ai.googleblog.com/2023/01/google-research-2022-beyond-language.html

[11/2022] OpenAI: ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response. https://openai.com/blog/chatgpt

[08/2022] MSR: Multimodal Pretraining: BEiT-3 is a general-purpose multimodal foundation model, which achieves state-of-the-art transfer performance on both vision and vision-language tasks. https://github.com/microsoft/unilm/tree/master/beit

[04/2022] OpenAI: DALL·E 2 is a new AI system that can create realistic images and art from a description in natural language. https://openai.com/dall-e-2/

[05/2021] Google: MUM, a new AI milestone for understanding information. https://blog.google/products/search/introducing-mum/

[03/2021] OpenAI: Multimodal Neurons in Artificial Neural Networks, which may explain CLIP’s accuracy in classifying surprising visual renditions of concepts, and is also an important step toward understanding the associations and biases that CLIP and similar models learn. https://openai.com/blog/multimodal-neurons/

[01/2021] OpenAI: CLIP maps images into categories described in text, and DALL-E creates new images from text. A step toward systems with deeper understanding of the world. https://openai.com/multimodal/

Research Papers

Recent Workshop

Social Intelligence in Humans and Robots, ICRA 2021

LANTERN 2021: The Third Workshop Beyond Vision and LANguage: inTEgrating Real-world kNowledge, EACL 2021

Multimodal workshops: Multimodal Learning and Applications, Sight and Sound, Visual Question Answering, Embodied AI, Language for 3D Scenes, CVPR 2021

Advances in Language and Vision Research (ALVR), NAACL 2021

Visually Grounded Interaction and Language (ViGIL), NAACL 2021

Wordplay: When Language Meets Games, NeurIPS 2020

NLP Beyond Text, EMNLP 2020

International Challenge on Compositional and Multimodal Perception, ECCV 2020

Multimodal Video Analysis Workshop and Moments in Time Challenge, ECCV 2020

Video Turing Test: Toward Human-Level Video Story Understanding, ECCV 2020

Grand Challenge and Workshop on Human Multimodal Language, ACL 2020

Workshop on Multimodal Learning, CVPR 2020

Language & Vision with applications to Video Understanding, CVPR 2020

International Challenge on Activity Recognition (ActivityNet), CVPR 2020

The End-of-End-to-End: A Video Understanding Pentathlon, CVPR 2020

Towards Human-Centric Image/Video Synthesis, and the 4th Look Into Person (LIP) Challenge, CVPR 2020

Visual Question Answering and Dialog, CVPR 2020

Recent Tutorial

Tutorials on Multimodal Machine Learning, CVPR 2022 & NAACL 2022

Multi-modal Information Extraction from Text, Semi-structured, and Tabular Data on the Web (Cutting-edge), ACL 2020

Achieving Common Ground in Multi-modal Dialogue (Cutting-edge), ACL 2020

Recent Advances in Vision-and-Language Research, CVPR 2020

Neuro-Symbolic Visual Reasoning and Program Synthesis, CVPR 2020

Large Scale Holistic Video Understanding, CVPR 2020

A Comprehensive Tutorial on Video Modeling, CVPR 2020

More Repositories

1. LIS-YNP: 🔮 Life is short, you need PyTorch. (Jupyter Notebook, 138 stars)
2. Mathematical_Modeling: 🎊 Mathematical Modeling Algorithms and Applications (MATLAB, 123 stars)
3. Research_Papers: Records of papers I have read and notes I have taken, including some awesome paper reading lists and academic blog posts. (TeX, 65 stars)
4. CMU11-785: 💫 11-785 Introduction to Deep Learning, Fall 2018 (Jupyter Notebook, 39 stars)
5. MulT: [Reproduce] Code for the ACL 2019 paper "Multimodal Transformer for Unaligned Multimodal Language Sequences". (Python, 21 stars)
6. Research_Trends: Collects the best papers from top conferences, including statistics and keyword visualizations of accepted papers. (Jupyter Notebook, 16 stars)
7. MNMT: PyTorch implementation of Multimodal Neural Machine Translation (MNMT). (Smalltalk, 12 stars)
8. LCED: Recoded LeetCode solutions and notes. (Python, 11 stars)
9. CHABCNet: [CHABCNet] ABCNet on the Chinese dataset, building on Detectron2 (Facebook AI Research). (Python, 11 stars)
10. SynthText_CH: [SynthText Chinese] Improved code for generating synthetic text images as described in "Synthetic Data for Text Localisation in Natural Images", Ankush Gupta, Andrea Vedaldi, Andrew Zisserman, CVPR 2016. (Python, 11 stars)
11. VAG-NMT: [Reproduce] Code for the EMNLP 2018 paper "A Visual Attention Grounding Neural Model for Multimodal Machine Translation". (Python, 11 stars)
12. Statistics-Base: Code for the book "Statistical Learning Methods" by Hang Li. (Jupyter Notebook, 9 stars)
13. Pythia-VQA: Baseline for Visual Question Answering. (Jupyter Notebook, 8 stars)
14. Tumor2Graph: A novel overall-tumor-profile-derived virtual graph deep learning method for predicting tumor typing and subtyping. (Python, 7 stars)
15. Heterogeneous_Sampling (Jupyter Notebook, 6 stars)
16. MultimodalTCGA: The Cancer Genome Atlas (TCGA), a landmark cancer genomics program, molecularly characterized over 20,000 primary cancer and matched normal samples spanning 33 cancer types. (Python, 6 stars)
17. Quantum-Simulator (Python, 4 stars)
18. Eurus-Holmes (3 stars)
19. VGCN: Code for Virtual Graph Convolutional Networks. (Python, 2 stars)
20. WaveNet-Demo (Jupyter Notebook, 2 stars)
21. Multimodal-Attack (Python, 1 star)
22. DeiT-CIFAR (Jupyter Notebook, 1 star)