MMSA

MMSA is a unified framework for Multimodal Sentiment Analysis.

Features

  • Train, test and compare multiple MSA models in a unified framework.
  • Supports 15 MSA models, including recent works.
  • Supports 3 MSA datasets: MOSI, MOSEI, and CH-SIMS.
  • Easy to use, providing Python APIs and command-line tools.
  • Experiment with fully customized multimodal features extracted by the MMSA-FET toolkit.

1. Get Started

Note: From version 2.0, we packaged the project and uploaded it to PyPI in the hope of making it easier to use. If you don't like the new structure, you can always switch back to the v_1.0 branch.

1.1 Use Python API

  • Run pip install MMSA in your Python virtual environment.

  • Import and use in any python file:

    from MMSA import MMSA_run, get_config_regression
    
    # run LMF on MOSI with default hyperparameters
    MMSA_run('lmf', 'mosi', seeds=[1111, 1112, 1113], gpu_ids=[0])
    
    # tune self_mm on MOSEI with the default hyperparameter range
    MMSA_run('self_mm', 'mosei', is_tune=True, seeds=[1111], gpu_ids=[1])
    
    # run TFN on MOSI with an altered config
    config = get_config_regression('tfn', 'mosi')
    config['post_fusion_dim'] = 32
    config['featurePath'] = '~/feature.pkl'
    MMSA_run('tfn', 'mosi', config=config, seeds=[1111])
    
    # run MTFN on SIMS with a custom config file
    MMSA_run('mtfn', 'sims', config_file='./config.json')
  • For more detailed usage, please refer to APIs.
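As an illustration of the config_file usage, the sketch below writes a minimal custom config as JSON and reads it back. The two keys shown mirror the config dict in the API example above; the full key set depends on the chosen model, so treat this file's contents as an assumption, not the definitive schema:

```python
import json

# Hypothetical custom config; real configs carry model-specific keys.
config = {
    "post_fusion_dim": 32,
    "featurePath": "~/feature.pkl",
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)

# MMSA_run('mtfn', 'sims', config_file='./config.json') would then use it.
with open("config.json") as f:
    loaded = json.load(f)
print(loaded["post_fusion_dim"])
```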

1.2 Use Commandline Tool

  • Run pip install MMSA in your Python virtual environment.

  • Use from command line:

    # show usage
    $ python -m MMSA -h
    
    # train & test LMF on MOSI with default parameters
    $ python -m MMSA -d mosi -m lmf -s 1111 -s 1112
    
    # tune TFN on MOSEI for 30 trials with custom model & result save dirs
    $ python -m MMSA -d mosei -m tfn -t -tt 30 --model-save-dir ./models --res-save-dir ./results
    
    # train & test self_mm on SIMS with custom audio features, using GPU 2
    $ python -m MMSA -d sims -m self_mm -Fa ./Features/Feature-A.pkl --gpu-ids 2
  • For more detailed usage, please refer to Commandline Arguments.

1.3 Clone & Edit the Code

  • Clone this repo and install requirements.
    $ git clone https://github.com/thuiar/MMSA
  • Edit the codes to your needs. See Code Structure for a basic review of our code structure.
  • After editing, run the following commands:
    $ cd MMSA # make sure you're in the top directory
    $ pip install .
  • Then run the code like above sections.
  • To further change the code, you need to re-install the package:
    $ pip uninstall MMSA
    $ pip install .
  • If you'd rather run the code without installation (as in v_1.0), please refer to Run Code without Installation.

2. Datasets

MMSA currently supports the MOSI, MOSEI, and CH-SIMS datasets. Use the following links to download raw videos, feature files, and label files. You don't need to download raw videos unless you plan to run end-to-end tasks.

SHA-256 for feature files:

`MOSI/Processed/unaligned_50.pkl`:  `78e0f8b5ef8ff71558e7307848fc1fa929ecb078203f565ab22b9daab2e02524`
`MOSI/Processed/aligned_50.pkl`:    `d3994fd25681f9c7ad6e9c6596a6fe9b4beb85ff7d478ba978b124139002e5f9`
`MOSEI/Processed/unaligned_50.pkl`: `ad8b23d50557045e7d47959ce6c5b955d8d983f2979c7d9b7b9226f6dd6fec1f`
`MOSEI/Processed/aligned_50.pkl`:   `45eccfb748a87c80ecab9bfac29582e7b1466bf6605ff29d3b338a75120bf791`
`SIMS/Processed/unaligned_39.pkl`:  `c9e20c13ec0454d98bb9c1e520e490c75146bfa2dfeeea78d84de047dbdd442f`
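After downloading, the digests above can be verified with Python's hashlib, streaming the file in chunks so large feature files don't need to fit in memory. The sketch below demonstrates on a throwaway file; the real check would pass a path such as MOSI/Processed/unaligned_50.pkl and compare against the listed digest:

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream a file and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Demo on a small throwaway file; substitute a real feature-file path
# and the digest from the table above to verify a download.
with open("demo.bin", "wb") as f:
    f.write(b"hello")
print(sha256_of("demo.bin"))
```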

MMSA uses feature files that are organized as follows:

{
    "train": {
        "raw_text": [],              # raw text
        "audio": [],                 # audio feature
        "vision": [],                # video feature
        "id": [],                    # [video_id$_$clip_id, ..., ...]
        "text": [],                  # bert feature
        "text_bert": [],             # word ids for bert
        "audio_lengths": [],         # audio feature length (over time) for every sample
        "vision_lengths": [],        # same as audio_lengths
        "annotations": [],           # strings
        "classification_labels": [], # Negative(0), Neutral(1), Positive(2). Deprecated in v_2.0
        "regression_labels": []      # Negative(<0), Neutral(0), Positive(>0)
    },
    "valid": {***},                  # same as "train"
    "test": {***},                   # same as "train"
}
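To sanity-check that a feature file follows this layout, it can be round-tripped with Python's pickle module. The sketch below builds a tiny stand-in dict (real files hold numpy arrays and the full key set shown above, so the values here are illustrative assumptions):

```python
import pickle

# Minimal stand-in mirroring the feature-file layout; real files hold
# numpy arrays and many more keys per split.
sample = {
    split: {
        "raw_text": ["a great movie"],
        "audio": [[0.1, 0.2]],
        "vision": [[0.3, 0.4]],
        "id": ["video_0$_$clip_0"],
        "regression_labels": [0.8],  # >0 means positive sentiment
    }
    for split in ("train", "valid", "test")
}

with open("demo_features.pkl", "wb") as f:
    pickle.dump(sample, f)

with open("demo_features.pkl", "rb") as f:
    data = pickle.load(f)

print(sorted(data))  # the three expected splits
```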

Note: For MOSI and MOSEI, the pre-extracted text features are from BERT, different from the original glove features in the CMU-Multimodal-SDK.

Note: If you wish to extract customized multimodal features, please try out our MMSA-FET toolkit.

3. Supported MSA Models

| Type        | Model Name               | From                         | Published         |
|-------------|--------------------------|------------------------------|-------------------|
| Single-Task | TFN                      | Tensor-Fusion-Network        | EMNLP 2017        |
| Single-Task | EF_LSTM                  | MultimodalDNN                | ACL 2018 Workshop |
| Single-Task | LF_DNN                   | MultimodalDNN                | ACL 2018 Workshop |
| Single-Task | LMF                      | Low-rank-Multimodal-Fusion   | ACL 2018          |
| Single-Task | MFN                      | Memory-Fusion-Network        | AAAI 2018         |
| Single-Task | Graph-MFN                | Graph-Memory-Fusion-Network  | ACL 2018          |
| Single-Task | MulT (without CTC)       | Multimodal-Transformer       | ACL 2019          |
| Single-Task | MFM                      | MFM                          | ICLR 2019         |
| Multi-Task  | MLF_DNN                  | MMSA                         | ACL 2020          |
| Multi-Task  | MTFN                     | MMSA                         | ACL 2020          |
| Multi-Task  | MLMF                     | MMSA                         | ACL 2020          |
| Single-Task | BERT-MAG                 | MAG-BERT                     | ACL 2020          |
| Single-Task | MISA                     | MISA                         | ACMMM 2020        |
| Single-Task | SELF_MM                  | Self-MM                      | AAAI 2021         |
| Single-Task | MMIM                     | MMIM                         | EMNLP 2021        |
| Single-Task | BBFN (Work in Progress)  | BBFN                         | ICMI 2021         |

4. Results

Baseline results are reported in results/result-stat.md.

5. Citation

Please cite our paper if you find our work useful for your research:

@inproceedings{yu2020ch,
  title={CH-SIMS: A Chinese Multimodal Sentiment Analysis Dataset with Fine-grained Annotation of Modality},
  author={Yu, Wenmeng and Xu, Hua and Meng, Fanyang and Zhu, Yilin and Ma, Yixiao and Wu, Jiele and Zou, Jiyun and Yang, Kaicheng},
  booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
  pages={3718--3727},
  year={2020}
}
@inproceedings{yu2021learning,
  title={Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis},
  author={Yu, Wenmeng and Xu, Hua and Yuan, Ziqi and Wu, Jiele},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={35},
  number={12},
  pages={10790--10797},
  year={2021}
}
