Joint geometric and piecewise photometric line-scan image registration
Fang Lei1,2,3,4, Shi Zelin1,2,3,4, Liu Yunpeng2,3,4, Li Chenxi2,3,4, Zhao Enbo2,3,4, Zhang Yingdi2,3,4 (1. Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110169, China; 2. Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110016, China; 3. Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; 4. Institute for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China)
Abstract
Objective When imaging at a pose that is not parallel to the target, the geometric transformation law of images captured by a line-scan camera differs from that of an area-scan camera. Consequently, the geometric transformation models of area-scan images and their direct registration methods cannot align line-scan images. Meanwhile, the brightness constancy assumption cannot handle the image brightness attenuation caused by a large field-of-view lens. Therefore, a direct line-scan image registration method based on joint geometric and piecewise photometric transformations is proposed. Method According to the geometric transformation model of line-scan images and a piecewise gain-bias photometric model, the line-scan image registration problem is formulated as a nonlinear least squares problem. The Gauss-Newton method is used to jointly optimize the geometric and photometric transformation parameters. In addition, because a large geometric error between the images to be registered prevents the optimization from converging when the identity transformation is used as the initial value, a fast initial value search strategy is designed. Result The experimental data include a line-scan image dataset collected in this study and real train line-scan images. The registration results show that the root-mean-square error of the annotated point coordinates after registration with the proposed method is less than 1 pixel, outperforming direct registration methods based on the geometric transformation models of area-scan images. The algorithm is more robust to brightness changes and improves the success rate of line-scan image registration. Conclusion The proposed joint geometric and piecewise photometric line-scan image registration method can accurately and robustly align images captured by a line-scan camera at nonparallel poses.
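The abstract above states that registration is formulated as a nonlinear least squares problem over joint geometric and piecewise gain-bias photometric parameters. A hedged formalization is sketched below; the symbols T (template image), I (target image), W(x; p) (line-scan geometric warp with parameters p), Ω_k (the k-th image segment), and (α_k, β_k) (per-segment gain and bias) are introduced here only for illustration, and the exact form used in the paper may differ.

```latex
% Sketch of the joint objective (notation assumed, not taken verbatim from the paper):
% the warp parameters p and the per-segment gain/bias pairs (alpha_k, beta_k) are
% estimated together by nonlinear least squares and minimized with Gauss-Newton.
\min_{\,p,\;\{\alpha_k,\,\beta_k\}}\;
\sum_{k=1}^{K}\;\sum_{\mathbf{x}\in\Omega_k}
\Bigl[\,\alpha_k\,T(\mathbf{x}) + \beta_k \;-\; I\bigl(W(\mathbf{x};\,p)\bigr)\Bigr]^{2}
```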
Keywords
line-scan camera; line-scan image; direct registration method; geometric transformation; photometric transformation
Joint geometric and piecewise photometric line-scan image registration
Fang Lei1,2,3,4, Shi Zelin1,2,3,4, Liu Yunpeng2,3,4, Li Chenxi2,3,4, Zhao Enbo2,3,4, Zhang Yingdi2,3,4 (1. Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110169, China; 2. Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110016, China; 3. Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; 4. Institute for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China)
Abstract
Objective Image registration is a fundamental problem in computer vision and image processing. It aims to eliminate the geometric difference of an object in images collected by different cameras at various times and poses. Image registration has been widely used in many visual applications, such as image tracking, image fusion, image analysis, and anomaly detection. Image registration methods can be classified into feature-based and direct registration methods. The former calculates the parameters of a geometric transformation model by extracting and matching features, such as corners or edges, while the latter infers the parameters directly from image intensity. Evidently, choosing a reasonable geometric transformation model is the key to image alignment. Line-scan and area-scan cameras share the same imaging principle, and both conform to pinhole imaging. However, the imaging model of a line-scan camera differs from that of an area-scan camera because of the characteristics of its sensor. Under the same change in camera pose, the same 3D world points are mapped to different locations in the two types of images; that is, the geometric transformation law of an object in the images induced by the pose change differs between the two types of cameras. When the image plane of a line-scan camera is not parallel to the object plane, geometric transformation models commonly used for area-scan image registration, such as the rigid, affine, and projective transformation models, do not conform to the geometric transformation law of line-scan images. Consequently, direct registration methods based on the geometric transformation model of an area-scan image cannot achieve the geometric alignment of line-scan images. Moreover, most existing direct image registration methods are based on the brightness constancy assumption and consider only geometric transformation. In real-world applications, brightness variation is unavoidable, and the brightness constancy assumption cannot address the brightness attenuation that occurs when images are captured with a large field-of-view lens. Therefore, this study considers the line-scan image registration problem, which estimates the geometric and photometric transformations between two images, and proposes a direct registration method for line-scan images based on joint geometric and piecewise photometric transformations.
Method First, the optimization objective function of line-scan image registration is constructed using the sum of squared differences of image intensity. In accordance with the geometric transformation model of line-scan images and the piecewise gain-bias photometric transformation model, the registration problem of a line-scan image is expressed as a nonlinear least squares problem. Second, the Gauss-Newton method is used to optimize the geometric and photometric transformation parameters in the registration problem. The nonlinear optimization objective function is linearized by a first-order Taylor expansion, and the Jacobian of the warp and photometric transformation is derived from the geometric transformation model of a line-scan image and the gain-bias model. Finally, to obtain the optimal geometric and photometric transformation parameters, the increments of the warp and photometric transformation are repeatedly computed from the normal equations until they fall below a threshold.
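The Gauss-Newton iteration described above (linearize, form the normal equations, update the warp and photometric parameters, repeat) can be illustrated with a minimal sketch. The paper's actual line-scan geometric transformation model is not reproduced here; for illustration only, the warp is restricted to per-axis scale and translation, and the piecewise gain-bias model is approximated by splitting the image into vertical strips. All function names, the parameterization, and the strip segmentation are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def gauss_newton_register(T, I, p0, n_strips=4, iters=50, tol=1e-6):
    """Jointly estimate an illustrative warp p = [sx, tx, sy, ty] and per-strip
    gain/bias (alpha_k, beta_k) by minimizing
        sum_k sum_{x in strip k} [alpha_k*T(x) + beta_k - I(W(x;p))]^2
    with Gauss-Newton. T and I are single-channel float/uint8 arrays."""
    T = T.astype(np.float64); I = I.astype(np.float64)
    h, w = T.shape
    p = np.asarray(p0, dtype=np.float64)
    alpha, beta = np.ones(n_strips), np.zeros(n_strips)
    strip = np.arange(w) * n_strips // w            # column -> strip index
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)  # template pixel grid
    col_idx = np.tile(strip, h)                     # strip index per pixel (row-major)
    for _ in range(iters):
        # warp the target into the template frame: W(x;p) = (sy*y+ty, sx*x+tx)
        ry, rx = p[2] * ys + p[3], p[0] * xs + p[1]
        Iw = map_coordinates(I, [ry, rx], order=1, mode='nearest')
        # gradients of I at the warped positions (chain rule removes the scales)
        gy, gx = np.gradient(Iw)
        gyw, gxw = gy / p[2], gx / p[0]
        # residual r(x) = alpha_k*T(x) + beta_k - I(W(x;p))
        r = alpha[strip][None, :] * T + beta[strip][None, :] - Iw
        # Jacobian columns w.r.t. [sx, tx, sy, ty]
        J_geo = np.stack([-gxw * xs, -gxw, -gyw * ys, -gyw], axis=-1).reshape(-1, 4)
        # Jacobian columns w.r.t. [alpha_1..alpha_K, beta_1..beta_K]
        J_pho = np.zeros((h * w, 2 * n_strips))
        J_pho[np.arange(h * w), col_idx] = T.ravel()          # d r / d alpha_k
        J_pho[np.arange(h * w), n_strips + col_idx] = 1.0     # d r / d beta_k
        J = np.hstack([J_geo, J_pho])
        # normal equations (J^T J) delta = -J^T r, solved in a least-squares sense
        H, g = J.T @ J, J.T @ r.ravel()
        delta = np.linalg.lstsq(H, -g, rcond=None)[0]
        p += delta[:4]
        alpha += delta[4:4 + n_strips]
        beta += delta[4 + n_strips:]
        if np.linalg.norm(delta) < tol:
            break
    return p, alpha, beta
```

In the paper the warp would instead follow the line-scan geometric transformation model, and the Jacobian would change accordingly; the structure of the iteration, including the stopping rule on the increment, stays the same.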
When the identity warp is used as the initial value, it cannot be guaranteed to lie near the optimal solution, and the iteration may fail to converge during registration. This problem is solved by designing an initial value fast matching method that provides an initial solution closer to the optimal one. The process is as follows: fixed-size areas are selected from the four corners of the template image and matched against the target image around the corresponding positions. The minimum and maximum coordinates of the optimal matching positions in the horizontal and vertical directions are selected, the scale and translation factors in the horizontal and vertical directions are then solved, and the result is used as the initial value for the iteration. The initial value provided by this method reduces the geometric difference between the template and target images and improves the success rate of the registration method.
Result To verify the proposed line-scan image registration method, a line-scan image acquisition system was built to obtain line-scan images of a planar object under different imaging poses and illumination variations. The experimental data also included electric multiple unit (EMU) train line-scan images, which were collected by a line-scan camera in a natural environment. The images collected by the acquisition system and the EMU train line-scan images were annotated separately, and the root-mean-square error (RMSE) of the annotated point coordinates was used as the evaluation index of the geometric error. The performance of the initial value fast matching method was verified on the line-scan image dataset collected in this study. The geometric error between the template image and the target image warped with the initial value provided by the initial value fast matching method was smaller than that obtained with the identity warp, indicating that this initial value is closer to the optimal solution of the geometric transformation. In the registration experiments on the collected dataset and the EMU train line-scan images, the RMSE of the annotated point coordinates is less than 1 pixel, demonstrating excellent registration accuracy.
Conclusion Our algorithm is more robust to lighting changes and improves the success rate of line-scan image registration. The joint geometric and piecewise photometric line-scan image registration method proposed in this study can accurately align images collected in practical application scenes, which also provides a foundation for train anomaly detection based on line-scan images. Therefore, the proposed direct registration method can accurately and robustly align line-scan images collected under nonparallel poses.
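The initial value fast matching step described in the Method section can be sketched as follows. This is a minimal illustration assuming single-channel uint8 images and OpenCV normalized cross-correlation for block matching; the block size, search range, and the per-axis scale-and-translation parameterization (in the same order as the Gauss-Newton sketch above) are assumptions of this sketch rather than the paper's exact procedure.

```python
import numpy as np
import cv2  # used here for normalized cross-correlation template matching

def initial_warp_estimate(template, target, block=64, search=128):
    """Match fixed-size blocks taken from the four corners of the template
    against the corresponding regions of the target, then solve per-axis scale
    and translation from the extreme (min/max) matched coordinates.
    Assumes single-channel uint8 images larger than `block` in both axes."""
    h, w = template.shape
    corners = [(0, 0), (0, w - block), (h - block, 0), (h - block, w - block)]
    src, dst = [], []
    for (y, x) in corners:
        patch = template[y:y + block, x:x + block]
        # search region in the target around the corresponding position
        y0, x0 = max(0, y - search), max(0, x - search)
        y1 = min(target.shape[0], y + block + search)
        x1 = min(target.shape[1], x + block + search)
        region = target[y0:y1, x0:x1]
        score = cv2.matchTemplate(region, patch, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(score)   # best match, (x, y) in region
        mx, my = max_loc
        src.append((y + block / 2.0, x + block / 2.0))              # block centre in template
        dst.append((y0 + my + block / 2.0, x0 + mx + block / 2.0))  # matched centre in target
    src, dst = np.array(src), np.array(dst)
    # per-axis model: target_coord = s * template_coord + t, solved from the
    # minimum and maximum matched coordinates in each direction
    sy = (dst[:, 0].max() - dst[:, 0].min()) / (src[:, 0].max() - src[:, 0].min())
    sx = (dst[:, 1].max() - dst[:, 1].min()) / (src[:, 1].max() - src[:, 1].min())
    ty = dst[:, 0].min() - sy * src[:, 0].min()
    tx = dst[:, 1].min() - sx * src[:, 1].min()
    return np.array([sx, tx, sy, ty])
```

The returned vector can serve as p0 for the Gauss-Newton sketch above, which is the role the initial value plays in the proposed method.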
Keywords
line-scan camera; line-scan image; direct registration method; geometric transformation; photometric transformation