Enhanced U-Net with Multi-Module Integration for High-Exposure-Difference Image Restoration
Bo-Lin Jian, Hong-Li Chang, Chieh-Li Chen

Machine vision systems have become key sensing components of unmanned aerial vehicles (UAVs). However, under varying weather conditions, the lighting direction and the choice of exposure parameters often lead to insufficient or missing object features in images, which can cause downstream tasks to fail. Images captured in high-exposure-difference environments therefore need to be restored to recover information that would otherwise be inaccessible. Many applications also demand real-time, high-quality images, so efficient restoration matters for subsequent tasks. This study adopts supervised learning to address images degraded by lighting discrepancies, using a U-Net as the main network architecture and adding suitable modules to its encoder and decoder, including inception-like blocks, dual attention units, selective kernel feature fusion, and denoising blocks. In addition to an ablation study, we compared the quality of image light restoration against other network models on the BAID dataset and took the total number of trainable parameters into account to construct a lightweight, high-exposure-difference image restoration model. The performance of the proposed network was further demonstrated by improved image detection and recognition.
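As one illustration of the modules listed above, selective kernel feature fusion (SKFF) combines features from multiple branches using channel-wise attention weights derived from a pooled global descriptor. The following minimal NumPy sketch shows the general mechanism only; the layer sizes, weight matrices, and reduction dimension are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def skff(branches, W_reduce, W_select):
    """Selective kernel feature fusion (illustrative sketch).

    branches: array of B feature maps, each shaped (C, H, W).
    W_reduce: (C, C_r) matrix compressing the global descriptor.
    W_select: list of B matrices (C_r, C), one per branch.
    """
    fused = np.sum(branches, axis=0)                # (C, H, W) element-wise sum
    desc = fused.mean(axis=(1, 2))                  # (C,) global average pooling
    z = np.maximum(desc @ W_reduce, 0.0)            # (C_r,) reduced descriptor, ReLU
    logits = np.stack([z @ Ws for Ws in W_select])  # (B, C) per-branch channel scores
    att = np.exp(logits - logits.max(axis=0))       # softmax across the B branches
    att = att / att.sum(axis=0)
    # Re-weight each branch channel-wise and sum into one fused map.
    out = sum(a[:, None, None] * f for a, f in zip(att, branches))
    return out, att

# Toy example with random features and weights (shapes are assumptions).
rng = np.random.default_rng(0)
C, H, W, B, Cr = 8, 4, 4, 2, 4
branches = rng.standard_normal((B, C, H, W))
W_reduce = rng.standard_normal((C, Cr)) * 0.1
W_select = [rng.standard_normal((Cr, C)) * 0.1 for _ in range(B)]
out, att = skff(branches, W_reduce, W_select)
```

The fused output keeps the input feature shape, and the attention weights form a distribution over branches for every channel, which is what lets the network adaptively favor one receptive-field branch over another.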