Cross-domain face forgery detection via diverse negative instance generation

Zhang Jing, Xu Pan, Liu Wenjun, Guo Xiaoxuan, Sun Fang (Liaoning Normal University)

Abstract
Deepfake detection trains complex deep neural networks to mine more discriminative representations of face images and thereby obtain more accurate detection results; it is a key technology for ensuring that facial information remains authentic, reliable, and secure. However, currently popular models rely excessively on their training data, so they deliver satisfactory detection performance only within the same domain, generalize poorly in cross-domain scenarios, and may even fail entirely. How to build an efficient cross-domain face forgery detection model from limited training data has therefore become a pressing problem. To this end, this paper proposes a cross-domain face forgery detection model based on diverse negative instance generation (Negative Instance Generation-FFD, NIG-FFD). Method First, a Siamese autoencoder network is constructed to obtain label-consistent latent multi-view fused features, and a contrastive constraint is introduced to improve the discriminability of hard-sample features. Second, while keeping training efficient, construction rules are used to generate more diverse fused negative-instance features, improving the generalization of the model. Finally, an adaptive importance weight matrix is built to prevent the class imbalance caused by negative instance generation from leading to under-learning of positive samples. Result The effectiveness of the proposed model is verified on two popular cross-domain datasets; compared with other state-of-the-art methods, the AUC (area under the receiver operating characteristic curve) improves by 10%. In in-domain detection, ACC (accuracy score) and AUC improve by nearly 10% and 5%, respectively, over other methods. Conclusion Compared with the baselines, the proposed method achieves superior performance in both cross-domain and in-domain face forgery detection. The code of the proposed model is available at https://github.com/LNNU-computer-research-526/NIG-FFD.
Keywords
Negative instance generation for cross-domain face forgery detection

Zhang Jing, Xu Pan, Liu Wenjun, Guo Xiaoxuan, Sun Fang (Liaoning Normal University)

Abstract
Objective With the rapid development of multimedia, mobile internet, and artificial intelligence technologies, face recognition has achieved tremendous success in areas such as identity verification and security monitoring. However, as it becomes more widely deployed, the risk posed by face forgery attacks (FFA) is steadily increasing. These attacks leverage deep learning models to create fraudulent digital content, including images, videos, and audio, posing a potential threat to societal stability and national security. Effective deepfake detection is therefore crucial for protecting individual and organizational interests, ensuring public safety, and promoting the sustainable development of cutting-edge technologies. According to the mode of image representation, deepfake detection methods can generally be divided into two categories. Methods based on traditional image feature description typically rely on image processing and feature extraction built on signal transformation models, whereas methods based on deep learning employ complex deep neural networks to obtain more discriminative high-dimensional nonlinear face feature descriptions, thereby improving detection accuracy. Both categories have achieved satisfactory results in deepfake detection experiments. However, their training and testing samples are mostly collected from the same data domain, which explains their excellent performance under such conditions. In practical applications it is difficult to obtain testing samples that follow the distribution of the original training samples, which limits the use of these models in free-scene forgery detection tasks and can even lead to complete model failure. Some researchers have therefore proposed data augmentation frameworks based on structural feature mining to enhance the performance of convolutional neural network detectors; however, when forged faces are seamlessly blended with backgrounds at the pixel level, their recognition accuracy drops significantly. Others have built deepfake detection frameworks on Transformer architectures. Although such models achieve satisfactory generalization by deeply modeling the manipulated regions, they lack descriptions of local tampering and their detection efficiency is rather low. Based on the above, the main challenges in constructing deepfake detection models for cross-domain scenarios can be summarized as follows: 1) how to extract more discriminative representations of forged face images, since the forgery process typically tampers with or replaces only local regions of the image, making discriminative features hard to obtain; and 2) how to improve the generalization of detection models, since over-reliance on data from the current domain reduces recognition performance on other domains and, in more challenging free forgery detection scenarios, may cause the model to fail. To address these challenges, this paper proposes a cross-domain detection model based on diverse negative instance generation. Method The model augments the features of forged negative instances and improves cross-domain recognition accuracy and generalization by constructing a Siamese autoencoder architecture with multi-view feature fusion.
It mainly consists of three parts. 1) Discriminative multi-view feature fusion under contrastive constraints. First, a Siamese autoencoder network is constructed to extract features of the different views. Second, a contrastive constraint is employed to fuse the multi-view features. Because typical face forgery involves only small-scale replacement and tampering, the global features of forged faces are very similar to those of genuine faces; the contrastive loss maximizes the similarity of intra-class features while minimizing the similarity of inter-class features, so that weakly discriminative hard samples can be better separated. Finally, to encourage more complete learning, the feature extraction network is guided to retain the important information of the original input and thus to emphasize discriminative feature representations; a reconstruction loss, computed as the difference between the decoder output and the original input, is used to constrain the feature network. 2) Diverse negative instance feature augmentation to enhance generalization, so that satisfactory recognition performance is maintained on cross-domain datasets. First, the rules for generating fused samples are defined: by visualizing and statistically analyzing the feature histograms produced by the network for constructed samples with different labels, a feature-level generation rule is obtained, namely that a fused sample is positive only when both view features come from positive (genuine) samples, and every other combination is generated as a negative sample. Second, diverse forged feature sets are constructed from the selected samples so that the network can learn more discriminative features. Finally, the original training samples and the augmented samples are concatenated to obtain the global training set. 3) A discriminator with importance sample weighting. Because the negative instance augmentation described above greatly increases the number of negative instances, an importance weighting mechanism is introduced to prevent the model from overfitting the negative samples and underfitting the positive samples. A weight matrix is initialized to assign a different weight to each class: negative samples are weighted according to their predicted probabilities while positive samples are left unchanged, so that the classes are approximately balanced in the loss computation; this weighting guides the model to pay more attention to positive sample features and keeps the decision boundary from drifting towards the negative class. Cross-entropy is then used as the classification loss to measure the distance between the predicted and true probability distributions and to supervise the classification results. The total training loss is obtained by combining these terms. Minimal illustrative sketches of the three components are given below.
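As a concrete illustration of part 1), the following is a minimal PyTorch sketch of the contrastive and reconstruction constraints, assuming simple fully connected per-view branches, 512-dimensional view inputs, and a SupCon-style contrastive term; all names (ViewAutoencoder, supervised_contrastive_loss, view_a, view_b) are hypothetical and not taken from the released NIG-FFD code.

```python
# Sketch of part 1): Siamese autoencoder branches with contrastive + reconstruction losses.
# Whether the two branches share weights is an assumption left open here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewAutoencoder(nn.Module):
    """One branch of the Siamese autoencoder: encoder + decoder for a single view."""
    def __init__(self, in_dim=512, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def supervised_contrastive_loss(z, labels, temperature=0.5):
    """Pull same-label features together, push different-label features apart."""
    z = F.normalize(z, dim=1)
    sim = z @ z.T / temperature                                  # pairwise similarities
    mask_pos = (labels[:, None] == labels[None, :]).float()
    mask_pos.fill_diagonal_(0)                                   # ignore self-pairs
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability
    exp_logits = torch.exp(logits) * (1 - torch.eye(len(z), device=z.device))
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True) + 1e-8)
    mean_pos = (mask_pos * log_prob).sum(1) / mask_pos.sum(1).clamp(min=1)
    return -mean_pos.mean()

# One training step for a two-view batch (view_a, view_b share the same labels y).
enc_a, enc_b = ViewAutoencoder(), ViewAutoencoder()
view_a, view_b = torch.randn(8, 512), torch.randn(8, 512)
y = torch.randint(0, 2, (8,))                                    # 1 = real, 0 = fake
za, rec_a = enc_a(view_a)
zb, rec_b = enc_b(view_b)
fused = torch.cat([za, zb], dim=1)                               # label-consistent fused feature
loss_con = supervised_contrastive_loss(fused, y)
loss_rec = F.mse_loss(rec_a, view_a) + F.mse_loss(rec_b, view_b)
```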
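The feature-level generation rule of part 2) (a fused sample is positive only when both view features come from genuine faces; every other combination is a negative instance) can be sketched as follows; pairing samples by random indexing is an assumption, since the abstract does not specify the exact selection strategy.

```python
# Sketch of part 2): feature-level negative-instance generation. A fused sample is
# positive only when BOTH view features come from real faces; every other combination
# is labelled negative. Random cross-sample pairing is an illustrative assumption.
import torch

def generate_fused_instances(za, zb, y, num_aug):
    """za, zb: per-view latent features (N, d); y: labels, 1 = real, 0 = fake."""
    idx_a = torch.randint(0, len(y), (num_aug,))
    idx_b = torch.randint(0, len(y), (num_aug,))
    fused = torch.cat([za[idx_a], zb[idx_b]], dim=1)        # cross-sample feature fusion
    aug_y = (y[idx_a].bool() & y[idx_b].bool()).long()      # positive only if both views are real
    return fused, aug_y

# Build the global training set: original fused features plus augmented ones.
za, zb = torch.randn(8, 128), torch.randn(8, 128)
y = torch.randint(0, 2, (8,))
orig_feats = torch.cat([za, zb], dim=1)
aug_feats, aug_y = generate_fused_instances(za, zb, y, num_aug=32)
all_feats = torch.cat([orig_feats, aug_feats], dim=0)
all_labels = torch.cat([y, aug_y], dim=0)
```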
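The importance-weighted classification loss of part 3) admits a sketch along the following lines, under the assumption that each negative sample is weighted by its predicted probability of the forged class while positive samples keep unit weight; the exact weighting formula used by the authors may differ.

```python
# Sketch of part 3): importance-weighted cross-entropy. Negative (forged) samples are
# re-weighted by their predicted probability so the enlarged negative set does not
# dominate training; positive samples keep weight 1. The formula is an assumption.
import torch
import torch.nn.functional as F

def weighted_classification_loss(logits, labels):
    """labels: 1 = real (positive), 0 = fake (negative)."""
    ce = F.cross_entropy(logits, labels, reduction='none')   # per-sample cross-entropy
    with torch.no_grad():
        p_neg = F.softmax(logits, dim=1)[:, 0]               # predicted probability of 'fake'
        weights = torch.where(labels == 0, p_neg, torch.ones_like(p_neg))
    return (weights * ce).mean()

logits = torch.randn(16, 2)
labels = torch.randint(0, 2, (16,))
loss_cls = weighted_classification_loss(logits, labels)
# Total training loss would combine this with the contrastive and reconstruction terms
# sketched earlier; the trade-off coefficients are illustrative:
# loss = loss_cls + lambda_con * loss_con + lambda_rec * loss_rec
```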
Result To verify the effectiveness of the proposed method in a cross-domain environment, experiments were conducted on three publicly available datasets, FaceForensics++ (FF++), Celeb-DFv2, and the DeepFake Detection Challenge (DFDC), and the results were compared with those of currently popular methods. The FaceForensics++ dataset provides three versions with different compression levels: c0 (original), c23 (high quality), and c40 (low quality); the c23 and c40 versions were used in the experiments. The Celeb-DFv2 dataset is widely employed to test a model's generalization capability because its forged images lack the obvious visual artifacts characteristic of deepfake manipulation, making generalization detection particularly challenging. In the experiments, 100 genuine videos and 100 forged videos were randomly selected from Celeb-DFv2, with one image extracted every 30 frames. For the DFDC dataset, 140 videos were randomly selected, with 20 frames extracted from each video for testing. According to the experimental results, the proposed model improves the AUC (area under the receiver operating characteristic curve) by 10% compared with other state-of-the-art methods. The model was also validated in the in-domain setting, where ACC (accuracy score) and AUC improve by approximately 10% and 5%, respectively, compared with other methods. Conclusion The proposed method achieves superior performance in both cross-domain and in-domain deepfake detection.
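The frame-sampling protocol described above (one frame every 30 frames for Celeb-DFv2, 20 frames per video for DFDC) could be implemented with a small OpenCV helper such as the following; the paths and the sample_frames helper are illustrative placeholders, not the authors' actual pipeline.

```python
# Illustrative frame-sampling helper for the evaluation protocol described above.
# Paths and parameters are placeholders.
import cv2

def sample_frames(video_path, every_n=30, max_frames=None):
    """Read a video and keep every `every_n`-th frame, up to `max_frames` frames."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            frames.append(frame)
            if max_frames is not None and len(frames) >= max_frames:
                break
        idx += 1
    cap.release()
    return frames

# e.g. Celeb-DFv2: one frame every 30 frames; DFDC: a fixed budget of 20 frames per video
celeb_frames = sample_frames("celeb_df_v2/real/example.mp4", every_n=30)
dfdc_frames = sample_frames("dfdc/example.mp4", every_n=30, max_frames=20)
```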
Keywords
