Development of multimodal sentiment recognition and understanding

Tao Jianhua1, Fan Cunhang2, Lian Zheng3, Lyu Zhao2, Shen Ying4, Liang Shan5 (1. Department of Automation, Tsinghua University, Beijing 100084, China; 2. Anhui Province Key Laboratory of Multimodal Cognitive Computation, Anhui University, Hefei 230601, China; 3. Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; 4. School of Software Engineering, Tongji University, Shanghai 457001, China; 5. School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou 215123, China)

Abstract
Affective computing is an important branch in the field of artificial intelligence (AI). It aims to build computational systems that can automatically perceive, recognize, understand, and provide feedback on human emotions, and it lies at the intersection of multiple disciplines, including computer science, neuroscience, psychology, and social science. Deep emotional understanding and interaction enable computers to better understand and respond to human emotional needs and to provide personalized interactions and feedback based on emotional states, which enhances the human-computer interaction experience. It has a wide range of applications in areas such as intelligent assistants, virtual reality, smart healthcare, education, security, and finance. Relying solely on single-modal information, such as speech or video, does not align with the way humans perceive emotions, and recognition accuracy drops rapidly in the presence of interference. Multimodal emotion understanding and interaction technologies aim to fully model multidimensional information from audio, video, and physiological signals to achieve more accurate emotion understanding. This technology is a fundamental and important prerequisite for achieving natural, human-like, and personalized human-computer interaction, and it holds significant value for ushering in the era of intelligence and digitalization. To fully exploit the complementary nature of different modalities, multimodal fusion for sentiment recognition is receiving increasing attention from researchers. This study introduces the current research status of multimodal sentiment computing from three dimensions: an overview of multimodal sentiment recognition, multimodal sentiment understanding, and the detection and assessment of emotional disorders such as depression. The overview of emotion recognition covers academic definitions, mainstream datasets, and international competitions.
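To make the complementary-fusion idea concrete, the following minimal sketch shows decision-level (late) fusion, one common multimodal fusion strategy: per-modality emotion posteriors are combined as a weighted average. The modality names, class order, probability values, and reliability weights are all illustrative assumptions, not figures from any surveyed system.

```python
def late_fusion(modal_probs, weights=None):
    """Decision-level fusion: weighted average of per-modality
    emotion probability vectors (illustrative sketch)."""
    names = list(modal_probs)
    if weights is None:                 # default: treat all modalities equally
        weights = {m: 1.0 for m in names}
    total = sum(weights[m] for m in names)
    n_classes = len(next(iter(modal_probs.values())))
    fused = [0.0] * n_classes
    for m in names:
        w = weights[m] / total          # normalized reliability weight
        for i, p in enumerate(modal_probs[m]):
            fused[i] += w * p           # convex combination stays a distribution
    return fused

# Hypothetical posteriors over (neutral, happy, sad); noisy audio is down-weighted.
audio = [0.2, 0.5, 0.3]
video = [0.1, 0.8, 0.1]
fused = late_fusion({"audio": audio, "video": video},
                    weights={"audio": 0.3, "video": 0.7})
# fused == [0.13, 0.71, 0.16]; both modalities end up agreeing on "happy"
```

Down-weighting the noisier modality is the simplest way such a scheme models the interference problem noted above; learned, input-dependent weights are the natural next step.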
In recent years, large language models (LLMs) have demonstrated excellent modeling capabilities and, with their outstanding language understanding and reasoning abilities, have achieved great success in natural language processing. LLMs have garnered widespread attention because they can handle various complex tasks by understanding prompts with few-shot or zero-shot learning. Through methods such as self-supervised learning or contrastive learning, LLMs can learn more expressive multimodal representations that capture the correlations between different modalities and emotional information. Multimodal sentiment recognition and understanding are discussed in terms of emotion feature extraction, multimodal fusion, and the representations and models involved in sentiment recognition against the background of pre-trained large models. With the rapid development of society, people face increasing pressure, which can lead to depression, anxiety, and other negative emotions, and those in a prolonged state of depression and anxiety are more likely to develop mental illnesses. Depression is a common and serious condition, with symptoms including low mood, poor sleep quality, loss of appetite, fatigue, and difficulty concentrating. It not only harms individuals and families but also causes significant economic losses to society. Our discussion of emotional-disorder detection starts from specific applications, taking depression, the most common emotional disorder, as its focus; we analyze the latest developments and trends from the perspectives of assessment and intervention. In addition, this study provides a detailed comparison of the research status of affective computing at home and abroad, and prospects for future development trends are offered. We believe that scalable emotion feature design and transfer-learning methods based on large models will be the future directions of development.
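As a toy illustration of the contrastive objective mentioned above, the sketch below computes an InfoNCE-style loss that pulls paired embeddings from two modalities together while treating mismatched pairs in the batch as negatives. The 2-D embeddings and the temperature value are made up for illustration; real systems operate on high-dimensional encoder outputs.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss: anchors[i] should match
    positives[i]; the other positives act as in-batch negatives."""
    loss = 0.0
    for i, a in enumerate(anchors):
        logits = [cosine(a, p) / temperature for p in positives]
        log_denom = math.log(sum(math.exp(z) for z in logits))
        loss += log_denom - logits[i]   # -log softmax of the true pair
    return loss / len(anchors)

# Made-up 2-D "audio" and "text" embeddings for two utterances.
audio = [[1.0, 0.0], [0.0, 1.0]]
text_aligned = [[0.9, 0.1], [0.1, 0.9]]   # correct pairing -> low loss
text_shuffled = [[0.1, 0.9], [0.9, 0.1]]  # wrong pairing  -> high loss
```

Minimizing such a loss drives embeddings of the same utterance from different modalities toward agreement, which is one way a shared emotion-aware representation space can emerge.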
The main challenge in multimodal emotion recognition lies in data scarcity: the data available for building and exploring complex models are insufficient, which makes it difficult to train robust models based on deep neural networks. These issues can be addressed by constructing large-scale multimodal emotion databases and by exploring transfer-learning methods based on large models; transferring knowledge learned from unsupervised or related tasks to emotion recognition alleviates the problem of limited data resources. The use of explicit discrete and dimensional labels to represent ambiguous emotional states has limitations because of the inherent fuzziness of emotions. Enhancing the interpretability of predictions to improve the reliability of recognition results is also an important research direction for the future. The role of multimodal emotion computing in addressing emotional disorders such as depression and anxiety is increasingly prominent. Future research can be conducted in the following three areas. First, the construction of multimodal emotional-disorder datasets can provide a solid foundation for the automatic recognition of emotional disorders. However, this field still needs to address challenges such as data privacy and ethics, and considerations such as designing targeted interview questions, ensuring patient safety during data collection, and sample augmentation through algorithms remain worth exploring. Second, more effective algorithms should be developed. Emotional disorders fall within the psychological domain, yet they also affect physiological features of patients, such as voice and body movements; this psychological-physiological correlation deserves comprehensive exploration. Therefore, improving the accuracy of multimodal emotional-disorder recognition algorithms is a pressing research issue.
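The transfer-learning route sketched above can be illustrated with a hypothetical frozen-backbone setup: embeddings produced by a pretrained model are kept fixed, and only a small logistic-regression head is fitted on the scarce labeled emotion data. The features, labels, and hyperparameters below are invented for illustration; the point is merely that the trainable parameter count stays tiny.

```python
import math

def train_head(features, labels, lr=0.5, steps=300):
    """Fit a binary logistic-regression head on frozen 'pretrained'
    embeddings via plain batch gradient descent (illustrative sketch)."""
    dim = len(features[0])
    w, b = [0.0] * dim, 0.0
    n = len(features)
    for _ in range(steps):
        grad_w, grad_b = [0.0] * dim, 0.0
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = 1.0 / (1.0 + math.exp(-z)) - y   # sigmoid(z) - label
            for i, xi in enumerate(x):
                grad_w[i] += err * xi
            grad_b += err
        w = [wi - lr * gi / n for wi, gi in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b

def predict(w, b, x):
    """Hard decision from the trained head."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Tiny made-up "frozen embeddings": the second dimension correlates with
# the positive-emotion class in this toy data.
feats = [[0.1, 0.9], [0.8, 0.2], [0.2, 1.0], [0.9, 0.1]]
labels = [1, 0, 1, 0]
w, b = train_head(feats, labels)
```

Because the backbone is never updated, even a handful of labeled samples suffices to fit the head, which is exactly the property that makes this route attractive under data scarcity.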
Finally, intelligent psychological intervention systems should be designed and implemented. The following issues can be further studied: effectively simulating the counseling process of a psychologist, promptly receiving user emotional feedback, and generating empathetic conversations.
Keywords
