Published: 2024-10-29
A survey of physical adversarial attacks against visual deep learning models

Peng Zhenbang, Zhang Yu, Dang Yi, Chen Jianqi, Shi Zhenwei, Zou Zhengxia (Beihang University)

Abstract
Computer vision techniques based on deep learning models have advanced considerably over more than a decade of research. Owing to their higher accuracy and faster inference compared with traditional models, a large number of mature deep learning models are now widely deployed in critical computer vision applications. However, researchers have found that adding carefully crafted, barely perceptible perturbations to an original image can significantly disrupt a deep learning model's decisions. Such carefully designed adversarial attacks raise concerns about the robustness and trustworthiness of deep learning models. Notably, some researchers have used everyday objects or natural phenomena as carriers to design physical adversarial attacks that can be carried out in real application scenarios. These highly practical attacks can deceive human observers while strongly interfering with deep learning models, and therefore pose a more realistic threat. To fully understand the challenges that physical adversarial attacks bring to the real-world application of deep-learning-based computer vision, this survey organizes the physical adversarial attack methods proposed in 104 collected papers according to the general design pipeline of such attacks. Specifically, we first categorize existing work by how the physical adversarial attack is modeled. We then review the optimization constraints and enhancement methods used for physical adversarial attacks, and summarize the implementation and evaluation schemes adopted in existing work. Finally, we analyze the challenges faced by current physical adversarial attacks and discuss promising research directions. We hope to provide meaningful insights for designing high-quality physical adversarial example generation methods and for research on trustworthy deep learning models. The survey homepage is available at https://github.com/Arknightpzb/Survey-of-Physical-adversarial-attack.
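The abstract above describes the core mechanism behind these attacks: a small, carefully optimized perturbation added to an input image can flip a model's prediction while remaining hard for a human observer to notice. As a minimal illustrative sketch only, not a method from the surveyed papers, the PyTorch snippet below applies a single signed-gradient (FGSM-style) step against a stock ResNet-18; the victim model, the 8/255 perturbation budget, and the random stand-in image are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained classifier used only as an illustrative victim model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image: torch.Tensor, label: torch.Tensor, epsilon: float = 8 / 255) -> torch.Tensor:
    """Return a perturbed copy of `image` ([1, 3, H, W], values in [0, 1])."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One signed-gradient step away from the current prediction,
    # clipped back into the valid pixel range.
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

# Usage with a random stand-in image; a real attack would use a natural sample.
x = torch.rand(1, 3, 224, 224)
y = model(x).argmax(dim=1)                # treat the clean prediction as the label
x_adv = fgsm_perturb(x, y)
print("clean prediction:", y.item(), "| adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Digital attacks of this kind are the starting point; the physical attacks surveyed here must additionally survive printing, viewpoint changes, and lighting variation, which is what the modeling and optimization stages discussed below address.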
Keywords
Review of physical adversarial attacks against visual deep learning models

Peng Zhenbang, Zhang Yu, Dang Yi, Chen Jianqi, Shi Zhenwei, Zou Zhengxia (Beihang University)

Abstract
Deep learning has revolutionized the field of computer vision over the past two decades, bringing unprecedented advancements in both accuracy and speed. These developments are vividly reflected in fundamental tasks like image classification and object detection, where deep learning models have consistently outperformed traditional machine learning techniques. The superior performance of these models has led to their widespread adoption across various critical applications, including facial recognition, pedestrian detection, and remote sensing for earth observation. As a result, deep learning-based computer vision technologies are increasingly becoming indispensable for the continuous evolution and enhancement of intelligent vision systems. However, despite these remarkable achievements, the robustness and reliability of deep learning models have come under scrutiny due to their vulnerability to adversarial attacks. Researchers have discovered that by introducing carefully designed perturbations—subtle modifications that may be imperceptible to the human eye—it is possible to significantly disrupt the decision-making processes of these models. These adversarial attacks are not just theoretical constructs. They have practical implications that could potentially undermine the trustworthiness of deep learning systems deployed in real-world scenarios. One of the most concerning developments in this area is the emergence of physical adversarial attacks. Unlike their digital counterparts, physical adversarial attacks involve perturbations that can be applied in the real world using common objects or natural phenomena encountered in daily life. For instance, a strategically placed sticker on a road sign might cause an autonomous vehicle’s vision system to misinterpret the sign, leading to potentially dangerous consequences. These attacks are particularly worrisome because they can deceive not only deep learning models but also human observers, thus posing a more realistic and severe threat to the integrity of computer vision systems. In light of the growing significance of physical adversarial attacks, this paper aims to provide a comprehensive review of the state-of-the-art in this field. By analyzing 114 selected papers, we seek to offer a detailed summary of the methods used to design physical adversarial attacks, focusing on the general designing process that researchers follow. This process can be broadly divided into three stages: the mathematical modeling of physical adversarial attacks, the design of performance optimization processes, and the development of implementation and evaluation schemes. In the first stage, mathematical modeling, researchers aim to define the problem and establish a framework for generating adversarial examples in the physical world. This involves understanding the underlying principles that make these attacks effective and exploring how physical characteristics, such as texture, lighting, and perspective, can be manipulated to create adversarial examples. Within this stage, we categorize existing attacks into three main types based on their application forms: 2D adversarial examples, 3D adversarial examples, and adversarial light and shadow projection. 2D adversarial examples typically involve altering the surface of an object, such as applying a printed pattern or sticker, to fool a computer vision model. 
These attacks are often used in scenarios like natural image recognition and facial recognition, where the goal is to create perturbations that are inconspicuous in real-world settings but highly disruptive to machine learning algorithms. 3D adversarial examples take this concept further by considering the three-dimensional structure of objects. For example, modifying the shape or surface of a physical object can create adversarial examples that remain effective from multiple angles and under varying lighting conditions. Adversarial light and shadow projection represents another innovative approach, where the manipulation of light sources or shadows is used to create perturbations. These attacks are often more challenging to detect and defend against because they do not require any physical alteration of the object itself. Instead, they exploit the way light interacts with surfaces to generate adversarial effects. This method has shown potential in both indoor and outdoor scenarios. We also introduce their applications in five major scenarios: natural image recognition, facial image recognition, autonomous driving, pedestrian detection, and remote sensing. In the performance optimization design stage, we argue that existing physical adversarial attacks mainly face two core problems: reality bias and the high degrees of freedom of physical observation. We summarize the solutions and key techniques that existing work has proposed for these two problems. For the implementation and evaluation stage, we introduce the platforms and metrics used in existing work to evaluate the attack performance of physical adversarial examples. Finally, we discuss promising research directions for physical adversarial attacks, particularly in the context of intelligent systems based on large models and embodied intelligence. This line of exploration could reveal critical insights into how these sophisticated systems, which combine extensive data processing capabilities with interactive and adaptive behaviors, can be compromised by physical adversarial attacks. Additionally, there is significant potential in studying physical adversarial attacks on hierarchical detection systems that integrate data from multiple sources and platforms; understanding the vulnerabilities of such complex, layered systems could lead to more robust and resilient designs. Lastly, advancing defense technology against physical adversarial attacks is crucial: developing comprehensive and effective defense mechanisms will be essential for ensuring the security and reliability of intelligent systems in real-world applications. We hope to provide meaningful insights for the design of high-quality physical adversarial example generation methods and for research on reliable deep learning models. The review homepage is available at https://github.com/Arknightpzb/Survey-of-Physical-adversarial-attack.
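One widely used answer in this literature to the observation-freedom problem summarized above is Expectation over Transformation (EOT; Athalye et al., 2018): the adversarial patch or texture is optimized against a distribution of random viewpoints and lighting conditions rather than a single fixed view. The sketch below is a simplified illustration of that general idea, not any specific attack from the surveyed papers; the transform set, patch size, paste location, target class, and ResNet-18 victim model are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms

# Illustrative victim model; its weights are frozen so only the patch is optimized.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# A distribution of viewing conditions: random rotation, scale/crop, and lighting jitter.
random_view = transforms.Compose([
    transforms.RandomRotation(15),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),
])

patch = torch.rand(1, 3, 64, 64, requires_grad=True)   # adversarial patch being optimized
optimizer = torch.optim.Adam([patch], lr=0.01)
target_class = torch.tensor([859])                      # illustrative target label
scene = torch.rand(1, 3, 224, 224)                      # stand-in background image

for step in range(100):
    patched = scene.clone()
    patched[:, :, 80:144, 80:144] = patch               # paste the patch at a fixed location
    # Expectation over transformations: average the targeted loss over several random views.
    loss = sum(F.cross_entropy(model(random_view(patched)), target_class)
               for _ in range(4)) / 4
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0, 1)                               # keep the patch in the valid pixel range
    if step % 20 == 0:
        print(f"step {step:3d}  targeted loss {loss.item():.3f}")
```

In the surveyed attacks, this averaging-over-transformations objective is typically combined with additional constraints (for example, on printability and smoothness) before the optimized pattern is fabricated and deployed in the physical scene.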
Keywords
