Select-additive learning: Improving generalization in multimodal sentiment analysis

Type: Article

Publication Date: 2017-07-01

Citations: 181

DOI: https://doi.org/10.1109/icme.2017.8019301

Abstract

Multimodal sentiment analysis is drawing increasing attention, as it enables mining opinions from the video reviews now plentiful on online platforms. However, multimodal sentiment analysis has only a few high-quality datasets annotated for training machine learning algorithms. These limited resources restrict the generalizability of models: the unique characteristics of a few speakers (e.g., wearing glasses) may become confounding factors for the sentiment classification task. In this paper, we propose a Select-Additive Learning (SAL) procedure that improves the generalizability of trained neural networks for multimodal sentiment analysis. In our experiments, we show that SAL significantly improves prediction accuracy in all three modalities (verbal, acoustic, visual), as well as in their fusion. Our results show that SAL, even when trained on one dataset, generalizes well to two new test datasets.
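
The abstract describes SAL only at a high level. As an illustration of the underlying idea, here is a hypothetical numpy sketch on toy data: some feature dimensions carry a speaker-specific confound, a "select" step flags them, and an "additive" step corrupts them with Gaussian noise so a downstream classifier cannot rely on them. Note this is a simplified stand-in: the data, dimensions, and threshold are invented, and the paper learns the selection with a subnetwork rather than the correlation heuristic used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 clips with 8-dim features. Dims 0-5 carry the sentiment
# signal; dims 6-7 carry a speaker-specific confound (hypothetical setup,
# not the paper's actual features).
n, d = 200, 8
sentiment = rng.integers(0, 2, n)      # binary sentiment label
speaker = rng.integers(0, 2, n)        # confounding speaker identity
X = rng.normal(0.0, 1.0, (n, d))
X[:, :6] += 0.8 * sentiment[:, None]   # sentiment-driven dimensions
X[:, 6:] += 2.0 * speaker[:, None]     # confound-driven dimensions

# "Select" step (simplified): flag dimensions strongly correlated with the
# confounder. SAL instead learns this selection with a neural subnetwork;
# correlation is a stand-in for illustration only.
corr = np.array([abs(np.corrcoef(X[:, j], speaker)[0, 1]) for j in range(d)])
confound_dims = corr > 0.5

# "Additive" step (simplified): add Gaussian noise to the selected
# dimensions, so training on X_sal discourages reliance on the confound.
X_sal = X.copy()
noise = rng.normal(0.0, 3.0, (n, int(confound_dims.sum())))
X_sal[:, confound_dims] += noise

print(np.flatnonzero(confound_dims))   # should flag the speaker-driven dims
```

On this toy setup, only the two speaker-driven dimensions exceed the correlation threshold, so the sentiment-carrying dimensions are left untouched while the confounded ones are noised out.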

Locations

  • arXiv (Cornell University) - View - PDF
  • 2017 IEEE International Conference on Multimedia and Expo (ICME) - View

Similar Works

Action Title Year Authors
+ Select-Additive Learning: Improving Generalization in Multimodal Sentiment Analysis 2016 Haohan Wang
Aaksha Meghawat
Louis‐Philippe Morency
Eric P. Xing
+ Select-Additive Learning: Improving Cross-individual Generalization in Multimodal Sentiment Analysis. 2016 Haohan Wang
Aaksha Meghawat
Louis‐Philippe Morency
Eric P. Xing
+ Multimodal Sentiment Analysis with Missing Modality: A Knowledge-Transfer Approach 2024 Weide Liu
Huijing Zhan
Hao Chen
Fengmao Lv
+ Improving Multimodal Sentiment Analysis: Supervised Angular margin-based Contrastive Learning for Enhanced Fusion Representation 2023 Cong-Duy Nguyen
Thông Nguyen
Duc‐Ly Vu
Anh Luu
+ Improving Multimodal Sentiment Analysis: Supervised Angular Margin-based Contrastive Learning for Enhanced Fusion Representation 2023 Cong-Duy Nguyen
Thông Nguyen
Duc Anh Vu
Luu Anh Tuan
+ M-SENA: An Integrated Platform for Multimodal Sentiment Analysis 2022 Huisheng Mao
Ziqi Yuan
Hua Xu
Wenmeng Yu
Yihe Liu
Kai Gao
+ Multimodal Routing: Improving Local and Global Interpretability of Multimodal Language Analysis 2020 Yao-Hung Hubert Tsai
Martin Ma
Muqiao Yang
Ruslan Salakhutdinov
Louis‐Philippe Morency
+ Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis 2023 Haoyu Zhang
Yu Wang
Guanghao Yin
Kejun Liu
Yuanyuan Liu
Tianshu Yu
+ ConKI: Contrastive Knowledge Injection for Multimodal Sentiment Analysis 2023 Yakun Yu
Mingjun Zhao
Shi-ang Qi
Feiran Sun
Baoxun Wang
Weidong Guo
Xiaoli Wang
Lei Yang
Di Niu
+ ConKI: Contrastive Knowledge Injection for Multimodal Sentiment Analysis 2023 Yakun Yu
Mingjun Zhao
Shi-ang Qi
Feiran Sun
Baoxun Wang
Weidong Guo
Xiao Li Wang
Lei Yang
Di Niu
+ Gated Mechanism for Attention Based Multimodal Sentiment Analysis 2020 Ayush Kumar
Jithendra Vepa
+ Trustworthy Multimodal Fusion for Sentiment Analysis in Ordinal Sentiment Space 2024 Zhuyang Xie
Yan Yang
Jie Wang
Xiaorong Liu
Xiaofan Li
+ MInD: Improving Multimodal Sentiment Analysis via Multimodal Information Disentanglement 2024 Weichen Dai
Xingyu Li
Pengbo Hu
Zeyu Wang
Ji Qi
Jianlin Peng
Yi Zhou
+ Interpretable multimodal sentiment analysis based on textual modality descriptions by using large-scale language models 2023 Sixia Li
Shogo Okada
+ Multimodal Routing: Improving Local and Global Interpretability of Multimodal Language Analysis 2020 Yao-Hung Hubert Tsai
Quintín Martín Martín
Muqiao Yang
Ruslan Salakhutdinov
Louis‐Philippe Morency
+ Multimodal Routing: Improving Local and Global Interpretability of Multimodal Language Analysis 2020 Yao-Hung Hubert Tsai
Martin Q. Ma
Muqiao Yang
Ruslan Salakhutdinov
Louis‐Philippe Morency
+ Multimodal Sentiment Analysis: A Survey 2023 Songning Lai
Haoxuan Xu
Xifeng Hu
Zhaoxia Ren
Zhi Liu
+ RethinkingTMSC: An Empirical Study for Target-Oriented Multimodal Sentiment Classification 2023 Junjie Ye
Jie Zhou
Junfeng Tian
Rui Wang
Qi Zhang
Tao Gui
Xuanjing Huang

Works That Cite This (41)

Action Title Year Authors
+ FV2ES: A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition Inference 2022 Qinglan Wei
Xuling Huang
Yuan Zhang
+ Improving Multimodal Accuracy Through Modality Pre-training and Attention 2020 Aya Abdelsalam Ismail
Md. Mahmudul Hasan
Faisal Ishtiaq
+ Will Multi-modal Data Improves Few-shot Learning? 2021 Zilun Zhang
Shihao Ma
Yichun Zhang
+ Fair Deep Learning Prediction for Healthcare Applications with Confounder Filtering. 2018 Zhenglin Wu
Haohan Wang
Mingze Cao
Yin Chen
Eric P. Xing
+ Nonlinear Invariant Risk Minimization: A Causal Approach 2021 Chaochao Lu
Yuhuai Wu
José Miguel Hernández-Lobato
Bernhard Schölkopf
+ Hybrid Multimodal Fusion for Humor Detection 2022 Haojie Xu
Weifeng Liu
Jiangwei Liu
Mingzheng Li
Yu Feng
Yasi Peng
Yunwei Shi
Xiao Sun
Meng Wang
+ Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis 2021 Wei Han
Hui Chen
Soujanya Poria
+ Removing Confounding Factors Associated Weights in Deep Neural Networks Improves the Prediction Accuracy for Healthcare Applications 2018 Haohan Wang
Zhenglin Wu
Eric P. Xing
+ Correlative Channel-Aware Fusion for Multi-View Time Series Classification 2021 Yue Bai
Lichen Wang
Zhiqiang Tao
Sheng Li
Yun Fu
+ Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data 2021 Paul Pu Liang
Terrance Liu
Anna Cai
Michał Muszyński
Ryo Ishii
Nicholas B. Allen
Randy P. Auerbach
David A. Brent
Ruslan Salakhutdinov
Louis‐Philippe Morency