AWESOME-MER

πŸ“ A reading list focused on Multimodal Emotion Recognition (MER) πŸ‘‚ πŸ‘„ πŸ‘€ πŸ’¬

(:white_small_square: indicates a specific modality)


🔆 Datasets

🔆 Challenges

🔆 Projects

🔆 Related Reviews

🔆 Multimodal Emotion Recognition (MER)

Datasets

  • (2018) CMU-MOSEI [:white_small_square:Visual :white_small_square:Audio :white_small_square:Language]
  • (2018) ASCERTAIN Dataset [:white_small_square:Facial activity data :white_small_square:Physiological data]
  • (2017) EMOTIC Dataset [:white_small_square:Face :white_small_square:Context]
  • (2016) Multimodal Spontaneous Emotion Database (BP4D+) [:white_small_square:Face :white_small_square:Thermal data :white_small_square:Physiological data]
  • (2016) EmotiW Database [:white_small_square:Visual :white_small_square:Audio]
  • (2015) LIRIS-ACCEDE Database [:white_small_square:Visual :white_small_square:Audio]
  • (2014) CREMA-D [:white_small_square:Visual :white_small_square:Audio]
  • (2013) SEMAINE Database [:white_small_square:Visual :white_small_square:Audio :white_small_square:Conversation transcripts]
  • (2011) MAHNOB-HCI [:white_small_square:Visual :white_small_square:Eye gaze :white_small_square:Physiological data]
  • (2008) IEMOCAP Database [:white_small_square:Visual :white_small_square:Audio :white_small_square:Text transcripts]
  • (2005) eNTERFACE Dataset [:white_small_square:Visual :white_small_square:Audio]

Challenges

Projects

Related Reviews

  • (IEEE Journal of Selected Topics in Signal Processing 2020) Multimodal Intelligence: Representation Learning, Information Fusion, and Applications [paper]
  • (Information Fusion 2020) A snapshot research and implementation of multimodal information fusion for data-driven emotion recognition [paper]
  • (Information Fusion 2017) A review of affective computing: From unimodal analysis to multimodal fusion [paper]
  • (Image and Vision Computing 2017) A survey of multimodal sentiment analysis [paper]
  • (ACM Computing Surveys 2015) A Review and Meta-Analysis of Multimodal Affect Detection Systems [paper]

Multimodal Emotion Recognition

🔸 CVPR

  • (2020) EmotiCon: Context-Aware Multimodal Emotion Recognition using Frege’s Principle [paper]

    [:white_small_square:Faces/Gaits :white_small_square:Background :white_small_square:Social interactions]

  • (2017) Emotion Recognition in Context [paper]

    [:white_small_square:Face :white_small_square:Context]

🔸 ICCV

  • (2019) Context-Aware Emotion Recognition Networks [paper]

    [:white_small_square:Faces :white_small_square:Context]

  • (2017) A Multimodal Deep Regression Bayesian Network for Affective Video Content Analyses [paper]

    [:white_small_square:Visual :white_small_square:Audio]

🔸 AAAI

  • (2020) M3ER: Multiplicative Multimodal Emotion Recognition Using Facial, Textual, and Speech Cues [paper]

    [:white_small_square:Face :white_small_square:Speech :white_small_square:Text]

  • (2020) An End-to-End Visual-Audio Attention Network for Emotion Recognition in User-Generated Videos [paper]

    [:white_small_square:Visual :white_small_square:Audio]

  • (2019) Multi-Interactive Memory Network for Aspect Based Multimodal Sentiment Analysis [paper]

    [:white_small_square:Visual :white_small_square:Text]

  • (2019) VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis [paper]

    [:white_small_square:Visual :white_small_square:Text]

  • (2019) Cooperative Multimodal Approach to Depression Detection in Twitter [paper]

    [:white_small_square:Visual :white_small_square:Text]

  • (2014) Predicting Emotions in User-Generated Videos [paper]

    [:white_small_square:Visual :white_small_square:Audio :white_small_square:Attribute]

🔸 IJCAI

  • (2019) DeepCU: Integrating both Common and Unique Latent Information for Multimodal Sentiment Analysis [paper]

    [:white_small_square:Face :white_small_square:Audio :white_small_square:Text]

  • (2019) Adapting BERT for Target-Oriented Multimodal Sentiment Classification [paper]

    [:white_small_square:Image :white_small_square:Text]

  • (2018) Personality-Aware Personalized Emotion Recognition from Physiological Signals [paper]

    [:white_small_square:Personality :white_small_square:Physiological signals]

  • (2015) Combining Eye Movements and EEG to Enhance Emotion Recognition [paper]

    [:white_small_square:EEG :white_small_square:Eye movements]

🔸 ACM MM

  • (2019) Emotion Recognition using Multimodal Residual LSTM Network [paper]

    [:white_small_square:EEG :white_small_square:Other physiological signals]

  • (2019) Mutual Correlation Attentive Factors in Dyadic Fusion Networks for Speech Emotion Recognition [paper]

    [:white_small_square:Audio :white_small_square:Text]

  • (2019) Multimodal Deep Denoise Framework for Affective Video Content Analysis [paper]

    [:white_small_square:Face :white_small_square:Body gesture :white_small_square:Voice :white_small_square:Physiological signals]

🔸 WACV

  • (2016) Multimodal emotion recognition using deep learning architectures [paper]

    [:white_small_square:Visual :white_small_square:Audio]

🔸 FG

  • (2020) Multimodal Deep Learning Framework for Mental Disorder Recognition [paper]

    [:white_small_square:Visual :white_small_square:Audio :white_small_square:Text]

  • (2019) Multi-Attention Fusion Network for Video-based Emotion Recognition [paper]

    [:white_small_square:Visual :white_small_square:Audio]

  • (2019) Audio-Visual Emotion Forecasting: Characterizing and Predicting Future Emotion Using Deep Learning [paper]

    [:white_small_square:Face :white_small_square:Speech]

🔸 ICMI

  • (2018) Multimodal Local-Global Ranking Fusion for Emotion Recognition [paper]

    [:white_small_square:Visual :white_small_square:Audio]

  • (2017) Emotion recognition with multimodal features and temporal models [paper]

    [:white_small_square:Visual :white_small_square:Audio]

  • (2017) Modeling Multimodal Cues in a Deep Learning-Based Framework for Emotion Recognition in the Wild [paper]

    [:white_small_square:Visual :white_small_square:Audio]

🔸 IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)

  • (2020) Context Based Emotion Recognition using EMOTIC Dataset [paper]

    [:white_small_square:Face :white_small_square:Context]

🔸 IEEE Transactions on Circuits and Systems for Video Technology

  • (2018) Learning Affective Features With a Hybrid Deep Model for Audio–Visual Emotion Recognition [paper]

    [:white_small_square:Visual :white_small_square:Audio ]

🔸 IEEE Transactions on Cybernetics

  • (2020) Emotion Recognition From Multimodal Physiological Signals Using a Regularized Deep Fusion of Kernel Machine [paper]

    [:white_small_square:EEG :white_small_square:Other physiological signals]

  • (2019) EmotionMeter: A Multimodal Framework for Recognizing Human Emotions [paper]

    [:white_small_square:EEG :white_small_square:Eye movements]

  • (2015) Temporal Bayesian Fusion for Affect Sensing: Combining Video, Audio, and Lexical Modalities [paper]

    [:white_small_square:Face :white_small_square:Audio :white_small_square:Lexical features]

🔸 IEEE Transactions on Multimedia

  • (2020) Visual-Textual Emotion Analysis With Deep Coupled Video and Danmu Neural Networks [paper]

    [:white_small_square:Visual :white_small_square:Text]

  • (2020) Locally Confined Modality Fusion Network With a Global Perspective for Multimodal Human Affective Computing [paper]

    [:white_small_square:Visual :white_small_square:Audio :white_small_square:Language]

  • (2019) Metric Learning-Based Multimodal Audio-Visual Emotion Recognition [paper]

    [:white_small_square:Visual :white_small_square:Audio]

  • (2019) Knowledge-Augmented Multimodal Deep Regression Bayesian Networks for Emotion Video Tagging [paper]

    [:white_small_square:Visual :white_small_square:Audio :white_small_square:Attribute]

  • (2018) Multimodal Framework for Analyzing the Affect of a Group of People [paper]

    [:white_small_square:Face :white_small_square:Upper body :white_small_square:Scene]

  • (2012) Kernel Cross-Modal Factor Analysis for Information Fusion With Application to Bimodal Emotion Recognition [paper]

    [:white_small_square:Visual :white_small_square:Audio]

🔸 IEEE Transactions on Affective Computing

  • (2019) Audio-Visual Emotion Recognition in Video Clips [paper]

    [:white_small_square:Visual :white_small_square:Audio]

  • (2019) Recognizing Induced Emotions of Movie Audiences From Multimodal Information [paper]

    [:white_small_square:Visual :white_small_square:Audio :white_small_square:Dialogue :white_small_square:Attribute]

  • (2019) EmoBed: Strengthening Monomodal Emotion Recognition via Training with Crossmodal Emotion Embeddings [paper]

    [:white_small_square:Face :white_small_square:Audio]

  • (2018) Combining Facial Expression and Touch for Perceiving Emotional Valence [paper]

    [:white_small_square:Face :white_small_square:Touch stimuli]

  • (2018) A Combined Rule-Based & Machine Learning Audio-Visual Emotion Recognition Approach [paper]

    [:white_small_square:Visual :white_small_square:Audio]

  • (2016) Analysis of EEG Signals and Facial Expressions for Continuous Emotion Detection [paper]

    [:white_small_square:Face :white_small_square:EEG signals]

  • (2013) Exploring Cross-Modality Affective Reactions for Audiovisual Emotion Recognition [paper]

    [:white_small_square:Face :white_small_square:Audio]

  • (2012) Multimodal Emotion Recognition in Response to Videos [paper]

    [:white_small_square:Eye gaze :white_small_square:EEG signals]

  • (2012) Context-Sensitive Learning for Enhanced Audiovisual Emotion Classification [paper]

    [:white_small_square:Visual :white_small_square:Audio :white_small_square:Utterance]

  • (2011) Continuous Prediction of Spontaneous Affect from Multiple Cues and Modalities in Valence-Arousal Space [paper]

    [:white_small_square:Face :white_small_square:Shoulder gesture :white_small_square:Audio]

🔸 Neurocomputing

  • (2020) Joint low rank embedded multiple features learning for audio–visual emotion recognition [paper]

    [:white_small_square:Visual :white_small_square:Audio]

  • (2018) Multi-cue fusion for emotion recognition in the wild [paper]

    [:white_small_square:Visual :white_small_square:Audio]

  • (2018) Multi-modality weakly labeled sentiment learning based on Explicit Emotion Signal for Chinese microblog [paper]

    [:white_small_square:Visual :white_small_square:Text]

  • (2016) Fusing audio, visual and textual clues for sentiment analysis from multimodal content [paper]

    [:white_small_square:Visual :white_small_square:Audio :white_small_square:Text]

🔸 Information Fusion

  • (2019) Affective video content analysis based on multimodal data fusion in heterogeneous networks [paper]

    [:white_small_square:Visual :white_small_square:Audio]

  • (2019) Audio-visual emotion fusion (AVEF): A deep efficient weighted approach [paper]

    [:white_small_square:Visual :white_small_square:Audio]

🔸 Neural Networks

  • (2015) Towards an intelligent framework for multimodal affective data analysis [paper]

    [:white_small_square:Visual :white_small_square:Audio :white_small_square:Text]

  • (2015) Multimodal emotional state recognition using sequence-dependent deep hierarchical features [paper]

    [:white_small_square:Face :white_small_square:Upper-body]

🔸 Others

  • (Knowledge-Based Systems 2018) Multimodal sentiment analysis using hierarchical fusion with context modeling [paper]

    [:white_small_square:Visual :white_small_square:Audio :white_small_square:Text]

  • (IEEE Journal of Selected Topics in Signal Processing 2017) End-to-End Multimodal Emotion Recognition Using Deep Neural Networks [paper]

    [:white_small_square:Visual :white_small_square:Audio]

  • (Computer Vision and Image Understanding 2016) Multi-modal emotion analysis from facial expressions and electroencephalogram [paper]

    [:white_small_square:Face :white_small_square:EEG]
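
For readers new to the area, the sketch below illustrates the simple late-fusion pattern that many of the visual/audio/text papers above elaborate on: encode each modality separately, concatenate the embeddings, and classify. This is a minimal illustration only; the module, feature dimensions, and emotion count are assumptions for the sketch, not taken from any paper in this list.

```python
# Minimal late-fusion sketch for multimodal emotion recognition.
# All sizes/names here are illustrative assumptions, not from a specific paper.
import torch
import torch.nn as nn

class LateFusionMER(nn.Module):
    """Encode each modality separately, then fuse by concatenation."""

    def __init__(self, dims=(512, 128, 300), hidden=64, num_emotions=7):
        super().__init__()
        # One small encoder per modality (e.g., visual, audio, text features).
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in dims
        )
        # Classifier over the concatenated per-modality embeddings.
        self.classifier = nn.Linear(hidden * len(dims), num_emotions)

    def forward(self, visual, audio, text):
        parts = [enc(x) for enc, x in zip(self.encoders, (visual, audio, text))]
        return self.classifier(torch.cat(parts, dim=-1))

if __name__ == "__main__":
    model = LateFusionMER()
    v, a, t = torch.randn(4, 512), torch.randn(4, 128), torch.randn(4, 300)
    logits = model(v, a, t)  # shape: (batch, num_emotions)
    print(logits.shape)
```

Many of the listed works replace the concatenation step with attention, multiplicative, or Bayesian fusion, but the encode-then-fuse structure is a common starting point.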