Learning Latent Transmission and Glare Maps for Lens Veiling Glare Removal

1Zhejiang University, 2INSAIT, Sofia University “St. Kliment Ohridski”, 3Hunan University, 4University of California, Merced, 5Google DeepMind
* Equal contribution † Corresponding author
Formation of aberration and veiling glare plus restoration comparison
Compact optical systems suffer from residual aberrations and veiling glare (a), caused by design-induced blur and stray-light scattering from non-ideal surfaces and coatings. (b) Existing methods fail under this compound degradation: a Computational Aberration Correction (CAC) model retrained on aberration-only data cannot handle unseen veiling glare, while cascading a state-of-the-art dehazing model introduces inconsistent artifacts. Our DeVeiler restores a clean image by jointly correcting both degradations.

Abstract

Beyond the commonly recognized optical aberrations, the imaging performance of compact optical systems—including single-lens and metalens designs—is often further degraded by veiling glare caused by stray-light scattering from non-ideal optical surfaces and coatings, particularly in complex real-world environments. This compound degradation undermines traditional lens aberration correction yet remains underexplored. A major challenge is that conventional scattering models (e.g., for dehazing) fail to fit veiling glare due to its spatially varying and depth-independent nature. Consequently, paired high-quality data are difficult to prepare via simulation, hindering the application of data-driven veiling glare removal models. To this end, we propose VeilGen, a generative model that learns to simulate veiling glare by estimating its underlying optical transmission and glare maps in an unsupervised manner from target images, regularized by Stable Diffusion (SD)-based priors. VeilGen enables paired dataset generation with realistic compound degradation of optical aberrations and veiling glare, while also providing the estimated latent optical transmission and glare maps to guide the veiling glare removal process. We further introduce DeVeiler, a restoration network trained with a reversibility constraint, which utilizes the predicted latent maps to guide an inverse process of the learned scattering model. Extensive experiments on challenging compact optical systems demonstrate that our approach delivers superior restoration quality and physical fidelity compared with existing methods. These results suggest that VeilGen reliably synthesizes realistic veiling glare, and that its learned latent maps effectively guide the restoration process in DeVeiler. All code and datasets will be publicly released at https://github.com/XiaolongQian/DeVeiler.

Method

Our pipeline contains a generative degradation module, VeilGen, that synthesizes compound aberrations and veiling glare, and a restoration network, DeVeiler, that leverages the learned priors to invert the degradation. The figures below summarize both components.
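To make the role of the transmission and glare maps concrete, here is a minimal sketch of a per-pixel scattering model of the kind the latent maps parameterize. The exact functional form below (degraded = clean × T + G, with a spatially varying, depth-independent transmission map T and an additive glare map G) is our illustrative assumption, not the paper's precise formulation:

```python
import numpy as np

def impose_veiling_glare(clean: np.ndarray,
                         transmission: np.ndarray,
                         glare: np.ndarray) -> np.ndarray:
    """Apply a spatially varying scattering model: I_deg = I_clean * T + G.

    clean:        H x W x C image with values in [0, 1]
    transmission: H x W map with values in (0, 1]; unlike dehazing
                  transmission, it is tied to the optics, not scene depth
    glare:        H x W x C additive stray-light component
    """
    if transmission.ndim == 2:
        transmission = transmission[..., None]  # broadcast over channels
    return np.clip(clean * transmission + glare, 0.0, 1.0)
```

In VeilGen these maps are not hand-crafted: the LOTGMP estimates them from target images without supervision, and the diffusion model applies them through the VGIM.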

Architecture of VeilGen
Overall architecture of the proposed VeilGen. In Stage I, VeilGen is trained to synthesize compound degradations, using a Latent Optical Transmission and Glare Map Predictor (LOTGMP) to estimate latent maps. These maps then guide the diffusion process via the Veiling Glare Imposition Module (VGIM). ZdeT denotes the target degraded latent representation, Zt the noisy latent representation at timestep t of the forward diffusion process, and Znull an all-zero latent representation. txtS and txtT denote the text prompts for the source and target domains, respectively.
Distillation and restoration networks in DeVeiler
The distillation and restoration networks. (a) The Distilled Degradation Net (DDN), trained in Stage II, models the forward degradation, using VGIM to apply the veiling glare prior. (b) DeVeiler, trained in Stage III, reverses this process: it first removes the veiling glare using the Veiling Glare Compensation Module (VGCM), and then feeds the intermediate result into its main bottleneck to correct the aberrations.
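The reversibility constraint can be pictured by inverting the same assumed scattering model: given estimates of T and G, glare compensation subtracts the additive glare and renormalizes by the transmission. This closed-form sketch is purely illustrative (the actual VGCM is a learned module, not an analytic inverse):

```python
import numpy as np

def compensate_veiling_glare(degraded: np.ndarray,
                             transmission: np.ndarray,
                             glare: np.ndarray,
                             eps: float = 1e-3) -> np.ndarray:
    """Invert I_deg = I_clean * T + G  =>  I_clean = (I_deg - G) / T.

    eps guards against division by near-zero transmission values.
    """
    if transmission.ndim == 2:
        transmission = transmission[..., None]  # broadcast over channels
    return np.clip((degraded - glare) / np.maximum(transmission, eps),
                   0.0, 1.0)
```

Training with a reversibility constraint encourages exactly this kind of consistency: composing the forward degradation with the restoration should reproduce the clean image.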

Results

Visual results on the Realworld-Compound domain captured by the SL system.
Visual results on the Realworld-Compound domain captured by the MRL system.

BibTeX

@misc{qian2025learninglatenttransmissionglare,
      title={Learning Latent Transmission and Glare Maps for Lens Veiling Glare Removal}, 
      author={Xiaolong Qian and Qi Jiang and Lei Sun and Zongxi Yu and Kailun Yang and Peixuan Wu and Jiacheng Zhou and Yao Gao and Yaoguang Ma and Ming-Hsuan Yang and Kaiwei Wang},
      year={2025},
      eprint={2511.17353},
      archivePrefix={arXiv},
      primaryClass={eess.IV},
      url={https://arxiv.org/abs/2511.17353}, 
}