Flame detection combining a receptive field module and a parallel RPN
Abstract
Objective Accurate and fast flame detection is of great practical value for early fire warning. To reduce the false alarm rate caused by pseudo-fire objects and the missed detection rate of small flames in the early stage of a fire, this paper designs a convolutional neural network that combines a receptive field (RF) module with a parallel region proposal network (PRPN), termed R-PRPNet (receptive field and parallel region proposal convolutional neural network), for flame detection. Method R-PRPNet consists of three parts: a feature extraction module, a parallel region proposal network, and a classifier. The feature extraction module is built on the convolutional layers of MobileNet; an RF module is embedded to enlarge the receptive field and capture richer contextual information, so that more discriminative flame features are extracted and the false alarm rate caused by pseudo-fire objects is reduced. The parallel region proposal network is connected to the multi-scale sampling layers at the back end of the feature extraction module and uses 3×3 and 5×5 full convolutions to further widen the receptive field of the multi-scale anchors, which improves the ability of the PRPN to detect flames at different scales and addresses the missed detection of small flames in the early stage of a fire. The classifier performs classification and regression with softmax and smooth L1, respectively. During training, pseudo-fire objects are used as negative samples for negative-sample fine-tuning so that R-PRPNet can better distinguish them from real flames. Result The proposed method is tested on a self-built dataset that contains flame images from indoor, building, forest, and night scenes, as well as pseudo-fire images such as lights, sunset glow, burning clouds, and sunshine. On the flame detection task, it achieves an accuracy of 98.07%, a false alarm rate of 4.2%, and a missed detection rate of 1.4%. Ablation experiments show that R-PRPNet reduces the missed detection rate and the false alarm rate by 4.9% and 21.72%, respectively, compared with the baseline network. Compared with traditional flame detection methods, R-PRPNet outperforms edge-gradient-information and clustering methods on all indexes, and it also improves on several object detection algorithms; relative to YOLOX-L, the false alarm rate and missed detection rate are reduced by 22.2% and 5.2%, respectively. In addition, the method performs stably on flames in different scenes. Conclusion The proposed method effectively reduces the false alarm rate and missed detection rate in flame detection, and meets the real-time and accuracy requirements of flame detection.
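As a rough illustration of the parallel region proposal idea described above, the following PyTorch sketch pairs a 3×3 and a 5×5 convolution over one shared feature map before the usual objectness and box-regression layers. The channel counts, the number of anchors, and the element-wise fusion of the two branches are assumptions for illustration; the paper's exact PRPN configuration is not given here.

```python
import torch
import torch.nn as nn

class ParallelRPNHead(nn.Module):
    """Hypothetical sketch of one PRPN branch: two parallel full convolutions
    (3x3 and 5x5) over a shared feature map, each feeding objectness and
    box-regression layers for k anchors per location."""

    def __init__(self, in_channels=256, mid_channels=256, num_anchors=9):
        super().__init__()
        # Parallel convolutions with different kernel sizes widen the
        # effective receptive field of the anchors (padding keeps H x W).
        self.conv3 = nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(in_channels, mid_channels, kernel_size=5, padding=2)
        self.relu = nn.ReLU(inplace=True)
        # 2 scores (flame / background) and 4 box offsets per anchor.
        self.cls_logits = nn.Conv2d(mid_channels, num_anchors * 2, kernel_size=1)
        self.bbox_pred = nn.Conv2d(mid_channels, num_anchors * 4, kernel_size=1)

    def forward(self, feat):
        # Element-wise fusion of the two parallel branches (an assumption;
        # the paper may instead concatenate them).
        fused = self.relu(self.conv3(feat)) + self.relu(self.conv5(feat))
        return self.cls_logits(fused), self.bbox_pred(fused)

if __name__ == "__main__":
    head = ParallelRPNHead()
    scores, boxes = head(torch.randn(1, 256, 38, 38))
    print(scores.shape, boxes.shape)  # (1, 18, 38, 38) (1, 36, 38, 38)
```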
Keywords
flame detection; deep learning; receptive field (RF); parallel region proposal network (PRPN); negative sample fine-tuning
Flame detection combining a receptive field module and a parallel RPN
Bao Wenxia1, Sun Qiang1, Liang Dong1, Hu Gensheng1, Yang Xianjun2 (1. School of Electronics and Information Engineering, Anhui University, Hefei 230601, China; 2. Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China)
Abstract
Objective Early flame detection is essential for a quick response to fire events, minimizing casualties and property damage. Smoke and flame alarms are commonly used indoors. However, most traditional physical sensors must be placed close to the fire source and cannot meet the requirements of outdoor flame detection. Real-time image-based detection has therefore been developing rapidly on the basis of image processing and machine learning. However, flame shape, size, and color vary widely, and the natural environment contains many pseudo-fire objects whose color is very similar to that of real flames, so detection models must distinguish real flames from pseudo-fire objects precisely. Existing methods fall into three categories: 1) traditional image processing, 2) machine learning, and 3) deep learning. Traditional image processing and machine learning methods usually rely on hand-crafted flame features, which are not quantitative and match complex backgrounds poorly. Thanks to their self-learning ability, deep learning techniques have greatly facilitated flame detection. Nevertheless, two problems remain. First, as convolution depth increases, small targets (smaller than 32×32 pixels) tend to lose information on the feature map. Second, objects whose color features resemble those of the target can cause misjudgment, so flame detection is still limited by small targets and color-similar pseudo-fire objects. To reduce the false alarm rate caused by pseudo-fire objects and the missed detection rate of early small flames, we design a convolutional neural network (CNN) for flame detection that combines a receptive field (RF) module with a parallel region proposal network (PRPN), called R-PRPNet. Method R-PRPNet is mainly composed of three parts: 1) a feature extraction module, 2) a parallel region proposal network, and 3) a classifier. The feature extraction module is built on the convolutional layers of the lightweight MobileNet, which makes the proposed algorithm run faster without loss of flame detection performance. To extract more discriminative flame features and reduce the high false alarm rate caused by pseudo-fire objects, the RF module is embedded into this module to expand the receptive field and capture richer contextual information. The parallel region proposal network combines the features of multi-scale flames during burning. To connect the PRPN, multi-scale sampling layers are established at the back end of the feature extraction module. Furthermore, 3×3 and 5×5 full convolutions are used to broaden the receptive field of the multi-scale anchors, which improves the detection of flames at multiple scales and resolves the missed detection of small flames in the early stage of a fire. The classifier is implemented with softmax for classification and smooth L1 for regression, and produces the final flame category and position in the image. During training, pseudo-fire objects are used as negative samples for negative-sample fine-tuning so that the network can better distinguish them from real flames. Result Our method is tested on a self-built dataset that contains flame data from indoor, building, forest, and night scenes, as well as pseudo-fire data such as lights, sunset glow, burning clouds, and sunshine. Faster R-CNN with MobileNet as the backbone network is used as the baseline.
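The Method paragraph above describes embedding an RF module into the MobileNet-based feature extractor to enlarge the receptive field. As a rough illustration only, an RFB-style block with parallel dilated-convolution branches could look like the sketch below; the number of branches, channel widths, dilation rates, and the residual fusion are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class RFModule(nn.Module):
    """Sketch of a receptive-field (RF) block: parallel branches whose dilated
    3x3 convolutions mimic increasingly large receptive fields, concatenated
    and fused with a residual shortcut. Channel widths, branch count, and
    dilation rates are illustrative assumptions."""

    def __init__(self, in_channels=512, out_channels=512):
        super().__init__()
        mid = out_channels // 4

        def branch(kernel, dilation):
            pad = dilation  # keeps spatial size for the dilated 3x3 conv
            return nn.Sequential(
                nn.Conv2d(in_channels, mid, kernel_size=1),
                nn.Conv2d(mid, mid, kernel_size=kernel, padding=kernel // 2),
                nn.Conv2d(mid, mid, kernel_size=3, padding=pad, dilation=dilation),
                nn.ReLU(inplace=True),
            )

        self.branch1 = branch(kernel=1, dilation=1)
        self.branch2 = branch(kernel=3, dilation=3)
        self.branch3 = branch(kernel=5, dilation=5)
        self.branch4 = branch(kernel=7, dilation=7)
        self.fuse = nn.Conv2d(mid * 4, out_channels, kernel_size=1)
        self.shortcut = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [b(x) for b in (self.branch1, self.branch2, self.branch3, self.branch4)]
        out = self.fuse(torch.cat(feats, dim=1))  # concatenate parallel branches
        return self.relu(out + self.shortcut(x))  # residual (element-wise) fusion
```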
Adding the RF module to the baseline network allows it to learn more discriminative flame features; the missed detection rate and false alarm rate are reduced by 1.1% and 0.43%, respectively. On top of the RF module, the parallel RPN (PRPN) is fused into the baseline network, which effectively improves the recognition of multi-scale flames: the recall rate increases by 1.7% and the missed detection rate decreases by 1.7%. The negative-sample fine-tuning strategy enriches the pseudo-fire features and improves the network's ability to separate real flames from pseudo-fire objects; combined with the two components above, it further reduces the false alarm rate by 21%. Comparative experiments are carried out against three kinds of detection methods. 1) Traditional flame detection methods: R-PRPNet outperforms edge-gradient-information and clustering methods on all indexes. 2) Classical object detection algorithms: R-PRPNet also improves on their performance. 3) YOLOX-L: the false alarm rate and missed detection rate are reduced by 22.2% and 5.2%, respectively. The final results reach 98.07% accuracy, a 4.2% false alarm rate, and a 1.4% missed detection rate. Conclusion We design a CNN for flame detection that combines a receptive field module with a parallel RPN. The RF module is embedded into the feature extraction module of the network to expand the receptive field and extract more discriminative, context-aware flame features, and flame features are fused through concatenation, downsampling, and element-wise addition. Compared with several classic convolutional neural networks and traditional methods, the experimental results show that the proposed network automatically extracts complex flame features in multiple scenarios and accurately filters out pseudo-fire objects.
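For concreteness, the softmax-plus-smooth-L1 classifier named in the Method could combine its two losses roughly as in the minimal sketch below (Faster R-CNN style). The loss weighting, the convention that label 0 is background, and the function name are illustrative assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def detection_head_loss(cls_logits, box_deltas, labels, box_targets, reg_weight=1.0):
    """Sketch of the classifier losses: softmax cross-entropy for
    flame/background classification and smooth L1 for box regression."""
    # Softmax classification loss over all proposals.
    cls_loss = F.cross_entropy(cls_logits, labels)
    # Smooth L1 regression loss, computed only on positive (flame) proposals.
    positive = labels > 0
    if positive.any():
        reg_loss = F.smooth_l1_loss(box_deltas[positive], box_targets[positive])
    else:
        reg_loss = box_deltas.sum() * 0.0  # keeps the graph valid with no positives
    return cls_loss + reg_weight * reg_loss

# Toy usage: 8 proposals, 2 classes (background / flame), 4 box offsets each.
logits = torch.randn(8, 2)
deltas = torch.randn(8, 4)
labels = torch.randint(0, 2, (8,))
targets = torch.randn(8, 4)
print(detection_head_loss(logits, deltas, labels, targets))
```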
Keywords
flame detection; deep learning; receptive field (RF); parallel region proposal network (PRPN); negative sample fine-tuning