Neural Relighting Methods for Mixed Reality Flight Simulators

Qi Jiachen, Xie Lijun, Ruan Wenkai, Wang Xiaoqiang (Zhejiang University)

Abstract
Objective Mixed reality (MR) technology blends real and virtual scenes to provide an immersive experience for flight simulators. Because the lighting conditions of the real and virtual scenes differ, the composite often strikes the user as incongruous, reducing the sense of immersion. This paper relights real images of the cockpit interior using the lighting conditions of the virtual scene, resolving the inconsistency. Method Inspired by precomputed radiance transfer, an important rendering technique in computer graphics, we propose, for the first time, a neural relighting method based on radiance transfer function estimation. A convolutional neural network first estimates, for every rendering point in the input image, the radiance transfer function expressed as coefficients over spherical harmonic basis functions; the environment map that supplies the lighting information of the virtual scene is likewise projected onto the spherical harmonics; finally, the corresponding coefficient vectors are combined by dot product to produce the relit rendering. Results On visual inspection, the generated relit images match the target lighting conditions well, preserve the detail of the original images, and exhibit no artifacts or other rendering anomalies. Benchmarked on the relighting dataset generated for this work, our method reaches a peak signal-to-noise ratio of 28.48, 7.5% higher than comparable methods. Conclusion The method has been applied successfully to several fighter aircraft models: given the lighting conditions of a virtual flight scene, it relights real images of the cockpit interior so that the lighting inside and outside the cockpit is consistent, improving user immersion in MR flight simulators.
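The final step of the method, a per-pixel dot product between the predicted transfer coefficients and the spherical-harmonic coefficients of the target environment map, can be sketched as follows. This is a minimal illustration with hypothetical array shapes (nine coefficients for SH bands 0-2), not the paper's implementation:

```python
import numpy as np

def relight(transfer_coeffs, light_coeffs):
    """Relight an image from spherical-harmonic (SH) coefficients.

    transfer_coeffs: (H, W, 3, K) per-pixel, per-channel radiance
        transfer coefficients, as predicted by the network.
    light_coeffs: (K,) SH coefficients of the target environment map.
    Returns an (H, W, 3) relit image: the per-pixel dot product
    over the K SH basis functions.
    """
    return np.einsum("hwck,k->hwc", transfer_coeffs, light_coeffs)

# Toy example: a 2x2 image with 9 SH coefficients (bands 0-2).
rng = np.random.default_rng(0)
transfer = rng.random((2, 2, 3, 9))
light = rng.random(9)
img = relight(transfer, light)
```

Because relighting reduces to this coefficient contraction, changing the virtual scene's lighting only requires re-projecting the new environment map; the per-pixel transfer coefficients are reused, which is what makes the approach cheap enough for interactive MR use.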
Keywords
Neural Relighting Methods for Mixed Reality Flight Simulators

Qi Jiachen, Xie Lijun, Ruan Wenkai, Wang Xiaoqiang (Zhejiang University)

Abstract
Objective The application of Mixed Reality (MR) in training environments, particularly in the field of aviation, marks a significant leap from traditional simulation models. This innovative technology overlays virtual elements onto the real world, creating a seamless interactive experience that is critical in simulating high-risk scenarios for pilots. Despite these advances, the integration of real and virtual elements often suffers from inconsistencies in lighting, which can disrupt the user's sense of presence and diminish the effectiveness of training sessions. Prior attempts to reconcile these differences have involved static solutions that lack adaptability to the dynamic range of real-world lighting conditions encountered during flight. This study is informed by a comprehensive review of current methodologies, including photometric alignment techniques and the adaptation of CGI elements using standard graphics pipelines. Our analysis identified a gap in real-time dynamic relighting capabilities, which we address through a novel neural network-based approach. Method The methodological core of this research is a neural network architecture designed for the task of image relighting: a convolutional neural network (CNN) variant tailored to process high-fidelity images in a manner that retains critical details while adjusting to new lighting conditions. An integral component of our methodology was the generation of a comprehensive dataset specifically tailored for the relighting of fighter jet cockpit environments. To ensure a high degree of realism, we synthesized photorealistic renderings of the cockpit interior under a wide array of atmospheric conditions, times of day, and geolocations across different latitudes and longitudes.
This was achieved by integrating our image capture process with an advanced weather simulation system, which allowed us to replicate the intricate effects of natural and artificial lighting as experienced within the cockpit. The resultant dataset presents a rich variety of lighting scenarios, ranging from the low-angle illumination of a sunrise to the diffuse lighting of an overcast sky, providing our neural network with the nuanced training required to accurately emulate real-world lighting dynamics. The neural network is trained with this dataset to understand and dissect the complex interplay of lighting and material properties within a scene. The first step of the network involves a detailed decomposition of input images to separate and analyze the components affected by lighting—such as shadows, highlights, and color temperature. It is important to deduce the geometry of the scene, the textures, and how objects occlude or reflect light, extracting these elements into a format that can be manipulated independently of the original lighting conditions. To actualize the target lighting effect, the study leverages a concept adapted from the domain of precomputed radiance transfer—a technique traditionally used for rendering scenes with complex light interactions. By estimating radiance transfer functions at each pixel and representing these as coefficients over a series of spherical harmonic basis functions, the method facilitates a rapid and accurate recalculation of lighting across the scene. The environmental lighting conditions, captured through high dynamic range imaging techniques, are also projected onto these spherical harmonic functions. This approach allows for the real-time adjustment of lighting by simply recalculating the dot product of these coefficients, corresponding to the new lighting environment. 
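The projection of the environmental lighting onto spherical harmonics described above can be sketched as a weighted sum over the texels of an environment map. The snippet below uses the standard real SH basis for bands 0-2 (nine functions) and assumes an equirectangular parameterization; the map resolution and layout are illustrative assumptions, not the paper's:

```python
import numpy as np

def sh_basis(d):
    """Real SH basis, bands 0-2 (9 functions), for unit directions d: (..., 3)."""
    x, y, z = d[..., 0], d[..., 1], d[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),                   # l=0
        0.488603 * y, 0.488603 * z, 0.488603 * x,     # l=1
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z**2 - 1),
        1.092548 * x * z, 0.546274 * (x**2 - y**2),   # l=2
    ], axis=-1)

def project_env_map(env):
    """Project an equirectangular env map (H, W, 3) onto 9 SH coefficients per channel."""
    H, W, _ = env.shape
    theta = (np.arange(H) + 0.5) / H * np.pi           # polar angle of each row
    phi = (np.arange(W) + 0.5) / W * 2 * np.pi         # azimuth of each column
    t, p = np.meshgrid(theta, phi, indexing="ij")
    d = np.stack([np.sin(t) * np.cos(p),
                  np.sin(t) * np.sin(p),
                  np.cos(t)], axis=-1)                  # (H, W, 3) directions
    weight = np.sin(t) * (np.pi / H) * (2 * np.pi / W)  # solid angle per texel
    basis = sh_basis(d)                                 # (H, W, 9)
    # coeffs[k, c] = sum over texels of env * basis_k * dOmega
    return np.einsum("hwc,hwk,hw->kc", env, basis, weight)

# Sanity check: a constant white environment projects almost entirely
# onto the l=0 (constant) band.
coeffs = project_env_map(np.ones((64, 128, 3)))
```

Nine coefficients per color channel capture the low-frequency component of environment lighting, which is exactly the regime precomputed radiance transfer targets; once the map is reduced to this vector, relighting is the cheap dot product the abstract describes.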
This step is a computational breakthrough, as it circumvents the need for extensive ray tracing or radiosity calculations, which are computationally expensive and often impractical for real-time applications. The method stands out for its low computational overhead, enabling near real-time relighting that can adjust dynamically as the simulated conditions change. Results The empirical results achieved through this method are substantiated through a series of rigorous tests and comparative analyses. The neural network's performance was benchmarked against traditional and contemporary relighting methods across several scenarios reflecting diverse lighting conditions and complexities. The model consistently demonstrated superior performance, not only in the accuracy of light replication but also in maintaining the fidelity of the original textures and material properties. The visual quality of the relighting was assessed through objective performance metrics, including comparison of luminance distribution, color fidelity, and texture preservation against ground truth datasets. These metrics consistently indicated a significant improvement in visual coherence and a reduction in artifacts, ensuring a more immersive experience without reliance on subjective user studies. Conclusion The implemented method effectively resolves the challenge of inconsistent lighting conditions in MR flight simulators. It contributes to the field by enabling dynamic adaptation of real-world images to the lighting conditions of virtual environments. This research not only provides a valuable tool for enhancing the realism and immersion of flight simulators but also offers insights that could benefit future theoretical and practical advancements in MR technology.
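One standard objective metric for this kind of ground-truth comparison is the peak signal-to-noise ratio (the 28.48 figure reported in the abstract). For images normalized to [0, 1] it is computed as below; the toy arrays are illustrative, not data from the paper:

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images valued in [0, peak]."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a relit image deviating slightly from ground truth.
rng = np.random.default_rng(0)
gt = rng.random((32, 32, 3))
relit = np.clip(gt + rng.normal(scale=0.01, size=gt.shape), 0.0, 1.0)
value = psnr(gt, relit)  # higher is better; roughly 40 dB here
```

PSNR is a pixel-wise fidelity measure, so it complements but does not replace the texture- and color-oriented checks mentioned above.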
The study utilized spherical harmonic coefficients of environmental light maps to convey lighting condition information and pioneered the extraction of the spherical harmonic coefficients of scene radiance transfer functions from real image data. This validated the feasibility of predicting scene radiance transfer functions from real images using neural networks. The limitations and potential improvements of the current method are discussed, outlining directions for future research. For example, considering the temporal continuity present in the relit images, future efforts could exploit this characteristic to optimize the neural network architecture, integrating modules that enhance the stability of the prediction results.
Keywords
