RTDNet: a red tide detection network for high-resolution satellite images
Abstract
Objective Red tide is a common marine ecological disaster that seriously threatens the safety of marine ecosystems. Timely and accurate information on the occurrence and distribution of red tide provides strong support for its early warning, prevention, and control. However, owing to mixed pixels and water environment factors, the fine-scale detection of red tide distribution remains challenging. To address the difficulty of detecting red tide edges, this paper combines high-frequency feature learning of red tide edges with position semantics and proposes RTDNet (red tide detection network), a model with low computational cost and high accuracy. Method To address the inaccurate detection of red tide edges, a network based on the RIR (residual-in-residual) structure is designed to extract the high-frequency features of red tide marginal waters; a multi-receptive-field structure and a coordinate attention mechanism are used to capture the position semantics of red tide waters, enhancing the detailed information of red tide marginal waters and suppressing useless features. Result Experiments on a GF1-WFV (Gaofen-1 wide field of view) red tide dataset show that RTDNet outperforms not only general machine learning and deep learning models such as the support vector machine (SVM), U-Net, DeepLabv3+, and HRNet (high-resolution network) but also the red tide index method GF1_RI (Gaofen-1 red tide index) and the dedicated red tide detection model RDU-Net (red tide detection U-Net). False and missed extractions of red tide are markedly reduced, and the F1-scores on the two test images reach 0.905 and 0.898, more than 2% higher than those of the second-best model, DeepLabv3+. Moreover, the proposed model is lightweight, with a size of only 2.65 MB, about 13% of that of DeepLabv3+. Conclusion A deep learning red tide detection model based on the RIR structure is proposed; by integrating a multi-receptive-field structure and an attention mechanism, it improves the accuracy and stability of red tide edge detection while effectively reducing the computational load. The method shows good application performance and is applicable to red tide detection in different high-resolution satellite images.
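To make the Method description above concrete, the following is a minimal Keras sketch of one coordinate-attention residual group and a multi-receptive-field block of the kind that could be stacked inside the RIR branch. It is not the authors' released code: the channel counts, kernel sizes, dilation rates, and reduction ratio are illustrative assumptions, and the dynamic weight mechanism is omitted.

```python
from tensorflow.keras import layers


def coordinate_attention(x, reduction=8):
    """Position-aware attention via separate pooling along H and W (assumed design)."""
    h, w, c = x.shape[1], x.shape[2], x.shape[3]
    pool_h = layers.AveragePooling2D(pool_size=(1, w))(x)           # (B, H, 1, C)
    pool_w = layers.AveragePooling2D(pool_size=(h, 1))(x)           # (B, 1, W, C)
    pool_w = layers.Permute((2, 1, 3))(pool_w)                      # (B, W, 1, C)
    y = layers.Concatenate(axis=1)([pool_h, pool_w])                # (B, H+W, 1, C)
    y = layers.Conv2D(max(c // reduction, 8), 1, activation="relu")(y)
    att_h = layers.Cropping2D(((0, w), (0, 0)))(y)                  # rows belonging to H
    att_w = layers.Permute((2, 1, 3))(layers.Cropping2D(((h, 0), (0, 0)))(y))
    att_h = layers.Conv2D(c, 1, activation="sigmoid")(att_h)        # (B, H, 1, C)
    att_w = layers.Conv2D(c, 1, activation="sigmoid")(att_w)        # (B, 1, W, C)
    return layers.Multiply()([layers.Multiply()([x, att_h]), att_w])


def multi_receptive_field(x, filters=64):
    """Parallel convolutions with different receptive fields, fused by a 1x1 conv."""
    b1 = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(filters, 3, padding="same", dilation_rate=2, activation="relu")(x)
    b3 = layers.Conv2D(filters, 3, padding="same", dilation_rate=4, activation="relu")(x)
    return layers.Conv2D(filters, 1, padding="same")(layers.Concatenate()([b1, b2, b3]))


def residual_group(x, filters=64):
    """Residual block with coordinate attention; x is assumed to have `filters` channels."""
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = coordinate_attention(y)
    return layers.Add()([x, y])
```

In an RIR-style trunk, several such residual groups and multi-receptive-field blocks would alternate inside a long skip connection, so that low-frequency information bypasses the branch while the branch focuses on high-frequency edge details.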
Keywords
red tide detection; GF-1 WFV; remote sensing image; semantic segmentation; residual network; attention mechanism
RTDNet: red tide detection network for high-resolution satellite images
Cui Binge1, Fang Xi1, Lu Yan1, Huang Ling1, Liu Rongjie2
(1. School of Computer Science and Technology, Shandong University of Science and Technology, Qingdao 266590, China; 2. First Institute of Oceanography, Ministry of Natural Resources, Qingdao 266061, China)
Abstract
Objective Red tide is a harmful ecological phenomenon in the marine ecosystem that seriously threatens the safety of the marine economy. Accurate detection of the occurrence and distribution of small-scale red tide provides basic information for its prediction and early warning. Red tide events are short-lived and change rapidly, so on-site observations can hardly meet the requirements for timely and accurate detection; remote sensing has therefore become an important technology for red tide monitoring. However, traditional index-based extraction methods built on spectral features are easily influenced by ocean background noise, and their thresholds are difficult to determine because the water color at the red tide margin is not distinct. Deep-learning-based methods can extract red tide information end to end without manually set thresholds, yet they treat low- and high-frequency red tide information equally, which limits the representation ability of the convolutional neural network. To address the positioning and identification of small-scale red tide marginal waters, this paper proposes a semantic segmentation method for the remote sensing detection of small-scale red tide by combining high-frequency feature learning of red tide with position semantics. Method The residual-in-residual (RIR) structure is used to extract the high-frequency characteristics of red tide marginal waters, and its residual branch is composed of alternating residual groups and multi-receptive-field modules. The residual group uses coordinate attention and a dynamic weight mechanism to capture the position semantics of red tide water bodies, while the multi-receptive-field structure captures multi-scale information. On this basis, a small-scale red tide detection network called RTDNet is constructed to enhance the detailed information of red tide marginal waters and suppress useless features. To verify the validity of the model, experiments are conducted on the GF1-WFV red tide dataset. Owing to limited computing resources, the remote sensing images are cropped into 64 × 64 pixel patches, and data augmentation operations such as flipping, translation, and rotation are applied, yielding a total of 1 050 samples. Adam is selected as the optimizer with a learning rate of 0.0001, a batch size of 2, 100 epochs, and a binary cross-entropy loss function. The experiments are carried out under the Ubuntu 18.04 operating system with an NVIDIA GeForce RTX 2080Ti GPU, and the network is implemented in Python 3.6 with the Keras 2.4.0 framework. Precision (P), recall (R), F1-score (F1), and intersection over union (IoU) are used to quantitatively evaluate the model.
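As a concrete illustration of the training setup described above (Adam, learning rate 0.0001, batch size 2, 100 epochs, binary cross-entropy), a minimal Keras sketch follows. The names build_rtdnet, x_train, y_train, x_val, and y_val are hypothetical placeholders, and the four-band 64 × 64 input shape is an assumption based on GF1-WFV multispectral patches.

```python
from tensorflow.keras.optimizers import Adam

# Hypothetical constructor for the RTDNet model; only the patch size (64 x 64) and the
# hyperparameters below are fixed by the paper, while the band count (4) is assumed.
model = build_rtdnet(input_shape=(64, 64, 4))
model.compile(optimizer=Adam(learning_rate=1e-4),   # learning rate 0.0001
              loss="binary_crossentropy",           # binary cross-entropy loss
              metrics=["accuracy"])
model.fit(x_train, y_train,                         # 64 x 64 patches and binary red tide masks
          validation_data=(x_val, y_val),
          batch_size=2,                             # batch size 2
          epochs=100)                               # 100 epochs
```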
Result Experimental results on the GF1-WFV red tide dataset show that RTDNet is superior to general and dedicated red tide detection methods, including SVM, U-Net, DeepLabv3+, HRNet, the red tide band index method GF1_RI, and RDU-Net, in both qualitative and quantitative terms. The results of RTDNet are the most similar to the ground truth, its extraction of red tide marginal waters is better than that of the other models, and it produces far fewer false and missed extractions. Quantitatively, the F1-scores of RTDNet reach 0.905 and 0.898 and its IoU values reach 0.827 and 0.815 on the two test images, respectively. Compared with the second-best-performing model, DeepLabv3+, the F1-score of RTDNet is higher by more than 0.02 and its IoU by more than 0.05. Moreover, the model size of RTDNet is only 2.65 MB, which is about 13% of that of DeepLabv3+. An ablation experiment further verifies that each module in RTDNet helps improve red tide detection, and the visualization of feature maps at different stages of the network shows how the network gradually refines the extracted red tide. Conclusion This paper proposes a small-scale red tide remote sensing detection network called RTDNet based on the residual-in-residual structure, a multi-receptive-field structure, and an attention mechanism. The model effectively addresses the false and missed extractions caused by the inconspicuous water color at the edge of red tide, improves the accuracy and stability of red tide marginal water detection, and effectively reduces the computational load. Experimental results show that RTDNet is superior to the other methods and models in detecting small-scale red tide in remote sensing images. The method is suitable for the accurate remote sensing localization and area extraction of early-stage marine disasters (e.g., red tide, green tide, and golden tide) and has reference value and applicability for other semantic segmentation tasks with fuzzy edges.
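For reference, the pixel-level metrics reported in the Result section (precision, recall, F1-score, and IoU) can be computed for a binary red tide mask as in the short NumPy sketch below; pred and truth are hypothetical 0/1 arrays of the same shape.

```python
import numpy as np

def evaluate(pred, truth, eps=1e-8):
    """Pixel-level precision, recall, F1-score, and IoU for binary masks."""
    tp = np.sum((pred == 1) & (truth == 1))   # true positives
    fp = np.sum((pred == 1) & (truth == 0))   # false positives
    fn = np.sum((pred == 0) & (truth == 1))   # false negatives
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)
    return precision, recall, f1, iou
```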
Keywords
red tide detection; GF-1 WFV; remote sensing image; semantic segmentation; residual network; attention mechanism