Audio-visual adversarial contrastive learning for multi-modal self-supervised feature fusion
Sheng Zhentao1,2, Chen Yanxiang1,2, Qi Guojun3 (1. School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230601, China; 2. Intelligent Interconnection System Anhui Provincial Laboratory (Hefei University of Technology), Hefei 230601, China; 3. Laboratory for Machine Perception and Learning (University of Central Florida), Orlando 32816, USA) Abstract
Objective Vision and audition in the same video are two symbiotic modalities: they complement each other and occur simultaneously, which naturally provides a self-supervised signal. As contrastive learning has achieved strong results in the visual domain, applying this self-supervised representation learning paradigm to the audio-visual multi-modal domain has attracted great interest. This paper focuses on building an efficient audio-visual negative sample space to improve the audio-visual feature fusion ability of contrastive learning. Method We propose an audio-visual adversarial contrastive learning method for multi-modal self-supervised feature fusion: 1) visual and auditory adversarial negative sample sets are introduced to construct the audio-visual negative sample space; 2) adversarial contrastive learning is performed both across and within modalities, so that the visual and auditory adversarial negatives in this space keep tracking the hardest-to-distinguish audio-visual samples, which effectively promotes audio-visual self-supervised feature fusion. On top of these two points, the audio-visual adversarial contrastive learning framework is further simplified. Result The method is pre-trained on a subset of the Kinetics-400 dataset to obtain audio-visual features. These features are then used to guide action recognition and audio classification tasks and achieve good results. Specifically, on the action recognition datasets UCF-101 and HMDB-51 (human motion database), the video-level top-1 accuracy of our method is 0.35% and 0.83% higher than that of the Cross-AVID (cross-audio visual instance discrimination) model, respectively; on the environmental sound dataset ESC-50, the audio-level top-1 accuracy of our method is 2.88% higher than that of Cross-AVID. Conclusion The audio-visual adversarial contrastive learning method introduces visual and auditory adversarial negative sample sets, fuses visual and auditory features well, and yields audio-visual features that contain information from both modalities; these features improve the accuracy of action recognition and audio classification tasks.
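As a rough illustration of the cross-modal objective summarized above, the sketch below (not the authors' released code; the feature dimension, bank size, and temperature are illustrative assumptions) contrasts a visual feature against its paired auditory feature while drawing negatives from a learnable bank of auditory adversarial negative samples initialized from a standard normal distribution.

```python
# Minimal sketch (not the authors' released code) of the cross-modal contrastive
# objective: the visual feature is pulled toward its paired auditory feature and
# pushed away from a learnable bank of auditory adversarial negatives.
# feat_dim, n_negatives and temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

feat_dim, n_negatives, temperature = 128, 65536, 0.07

# Auditory adversarial negative bank, initialized from a standard normal distribution.
audio_neg_bank = torch.randn(n_negatives, feat_dim, requires_grad=True)

def cross_modal_nce(anchor, positive, neg_bank):
    """InfoNCE-style loss: one positive (index 0) against all bank negatives."""
    q = F.normalize(anchor, dim=-1)                  # (B, D), e.g. visual features
    k = F.normalize(positive, dim=-1)                # (B, D), e.g. paired audio features
    negs = F.normalize(neg_bank, dim=-1)             # (K, D) adversarial negatives
    pos_logit = (q * k).sum(dim=-1, keepdim=True)    # (B, 1) similarity to the positive
    neg_logits = q @ negs.t()                        # (B, K) similarities to the negatives
    logits = torch.cat([pos_logit, neg_logits], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)  # positive sits at index 0
    return F.cross_entropy(logits, labels)
```

The intra-modal term described in the Method below uses the same form of loss, with two views of the same clip as the positive pair and the visual adversarial negative bank supplying the negatives.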
Audio-visual adversarial contrastive learning-based multi-modal self-supervised feature fusion
Sheng Zhentao1,2, Chen Yanxiang1,2, Qi Guojun3(1.School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230601, China;2.Intelligent Interconnection System Anhui Provincial Laboratory (Hefei University of Technology), Hefei 230601, China;3.Laboratory for Machine Perception and Learning (University of Central Florida), Orlando 32816, USA) Abstract
Objective Vision and audition in the same video clip are two interactive and synchronized symbiotic modalities, and their co-occurrence provides a natural self-supervised signal. Human perception of dynamic events draws on both visual and auditory cues, so features extracted from audio-visual clips carry richer information. In recent years, contrastive learning has advanced the visual domain dramatically by predicting the mutual information between pairs of samples, and there is growing interest in applying this self-supervised representation learning paradigm to the audio-visual multi-modal domain. A central issue is the construction of the audio-visual negative sample space from which contrastive learning draws its negative samples. To improve the audio-visual feature fusion capability of contrastive learning, our research focuses on building an efficient audio-visual negative sample space. Method We develop an audio-visual adversarial contrastive learning method for multi-modal self-supervised feature fusion. Visual and auditory adversarial negative sample sets are initialized from a standard normal distribution and together constitute the audio-visual negative sample space. To keep this space sufficiently large, the number of visual and auditory adversarial negative samples is set to 65 536. Cross-modal adversarial contrastive learning proceeds as follows: 1) the paired visual and auditory features extracted from the same video clip form the positive pair, while the auditory adversarial negative samples form the negative sample space; during training, the visual feature is pulled toward its corresponding auditory positive sample and pushed farther away from the auditory adversarial negative samples. 2) The auditory adversarial negative samples are updated during cross-modal adversarial learning so that they move closer to the visual features. Cross-modal adversarial contrastive learning alone would let the model degenerate; in particular, the visual and auditory negative sample sets are initialized from a standard normal distribution and carry no visual or auditory information, so intra-modal adversarial contrastive learning is also required: a pair of visual features from different views of the same clip serves as the positive pair, and the negative sample space is still constructed from the visual adversarial negative samples. 3) The resulting visual and auditory features contain both intra-modality and cross-modality information and can be used to guide downstream tasks such as action recognition and audio classification. In summary, (1) visual and auditory adversarial negative samples are introduced to construct the audio-visual negative sample space, and (2) intra-modality and cross-modality adversarial contrastive learning are combined so that the adversarial negatives consistently track the indistinguishable audio and visual samples, which effectively improves audio-visual self-supervised feature fusion. On the basis of (1) and (2), the audio-visual adversarial contrastive learning framework is further simplified.
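To make the adversarial part of the training concrete, here is a hedged sketch of one training step that builds on the `cross_modal_nce` loss and `audio_neg_bank` defined in the sketch above. The stand-in encoders, learning rates, and the plain gradient-ascent update of the negative banks are assumptions on our part; the point is only that the encoders minimize the combined intra- and cross-modal losses while the adversarial negative banks are updated in the opposite direction so they keep tracking the hardest samples.

```python
# Hedged sketch of one training step; builds on cross_modal_nce and audio_neg_bank
# from the previous sketch. Encoders, learning rates, and the gradient-ascent
# update rule for the negative banks are illustrative assumptions.
import torch
import torch.nn as nn

# Stand-ins for the video and audio backbones (the real ones are CNN encoders).
video_encoder = nn.Linear(512, 128)
audio_encoder = nn.Linear(512, 128)

# Visual adversarial negative bank, also initialized from a standard normal distribution.
visual_neg_bank = torch.randn(65536, 128, requires_grad=True)

opt_enc = torch.optim.SGD(
    list(video_encoder.parameters()) + list(audio_encoder.parameters()), lr=1e-2)
neg_lr = 1e-1  # assumed step size for the adversarial negatives

def train_step(clip_view1, clip_view2, spectrogram):
    v1 = video_encoder(clip_view1)   # visual feature, view 1
    v2 = video_encoder(clip_view2)   # visual feature, view 2
    a = audio_encoder(spectrogram)   # auditory feature of the same clip

    # Cross-modal term: visual anchor vs. paired audio positive and audio negatives.
    loss_cross = cross_modal_nce(v1, a, audio_neg_bank)
    # Intra-modal term: two views of the same clip vs. visual negatives.
    loss_intra = cross_modal_nce(v1, v2, visual_neg_bank)
    loss = loss_cross + loss_intra

    opt_enc.zero_grad()
    for bank in (audio_neg_bank, visual_neg_bank):
        bank.grad = None
    loss.backward()

    opt_enc.step()                   # encoders: gradient descent (minimize the loss)
    with torch.no_grad():            # negative banks: gradient ascent (maximize the loss)
        audio_neg_bank += neg_lr * audio_neg_bank.grad
        visual_neg_bank += neg_lr * visual_neg_bank.grad
    return loss.item()

# Toy usage with random stand-in inputs (batch of 8, 512-dim pre-features).
_ = train_step(torch.randn(8, 512), torch.randn(8, 512), torch.randn(8, 512))
```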
Result A subset of the Kinetics-400 dataset is selected for pre-training to obtain audio-visual features. 1) The audio-visual features are analyzed qualitatively. The visual features are used to guide a supervised action recognition network; after fine-tuning this network, we visualize its final convolutional layer. Compared with the Cross-AVID (cross-audio visual instance discrimination) method, our visual features make the supervised network pay more attention to the various body parts of the target person, which is an effective source of information for recognizing actions. 2) The quality of the audio-visual adversarial negative samples is analyzed qualitatively by visualizing t-distributed stochastic neighbor embedding (t-SNE) plots of the audio-visual features and the adversarial negative samples. The adversarial negative samples of our method form a closed, roughly oval-shaped distribution, whereas the negative samples of the Cross-AVID method form small clusters with gaps. This demonstrates that the proposed audio-visual adversarial negative samples closely track the audio-visual features during training and build a more efficient audio-visual negative sample space. The audio-visual features are also analyzed quantitatively by applying them to action recognition and audio classification. Specifically, 1) compared with the Cross-AVID model, our method achieves video-level top-1 accuracy gains of 0.35% and 0.83% on the UCF-101 and HMDB-51 (human motion database) action recognition datasets, respectively; 2) compared with the Cross-AVID model, our method achieves an audio-level top-1 accuracy gain of 2.88% on the ESC-50 environmental sound classification dataset. Conclusion The audio-visual adversarial contrastive learning method introduces visual and auditory adversarial negative samples effectively. Qualitative and quantitative experiments show that the proposed method fuses visual and auditory features well and yields audio-visual features containing information from both modalities; these features can be applied to improve the accuracy of action recognition and audio classification tasks.
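The t-SNE comparison mentioned in the qualitative analysis can be reproduced with standard tooling. The sketch below (scikit-learn and matplotlib, with illustrative array shapes; in practice the 65 536 negatives would typically be subsampled) projects the learned features and the adversarial negative bank into 2D and plots them together.

```python
# Sketch of the qualitative t-SNE check: project learned audio-visual features and
# the adversarial negative bank into 2D and plot them together. Shapes and the
# subsampling of the negative bank are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_features_vs_negatives(features, neg_bank, out_path="tsne.png"):
    """features: (N, D) encoder outputs; neg_bank: (K, D) adversarial negatives."""
    joint = np.concatenate([features, neg_bank], axis=0)
    emb = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(joint)
    n = features.shape[0]
    plt.figure(figsize=(5, 5))
    plt.scatter(emb[:n, 0], emb[:n, 1], s=3, label="audio-visual features")
    plt.scatter(emb[n:, 0], emb[n:, 1], s=3, label="adversarial negatives")
    plt.legend()
    plt.savefig(out_path, dpi=200)

# Toy usage with random stand-ins; replace with real encoder outputs and a
# (subsampled) negative bank taken from the pre-trained model.
plot_features_vs_negatives(np.random.randn(500, 128), np.random.randn(1000, 128))
```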
Keywords
self-supervised feature fusion; adversarial contrastive learning; audio-visual cross-modality; audio-visual adversarial negative sample; pre-training