DOI: 10.1155/2024/5575787 ISSN: 1687-7268

A Deep Learning Method for Building Extraction from Remote Sensing Images by Fuzing Local and Global Features

Yitong Wang, Shumin Wang, Jing Yuan, Aixia Dou, Ziying Gu

As important disaster-bearing bodies, buildings are a focus of attention in seismic disaster risk assessment and emergency rescue. Extracting buildings with complex textures and variable scales and shapes quickly and accurately from high-resolution remote sensing images is therefore of great practical significance. We propose MATUnet, an improved TransUnet model based on multiscale grouped convolution and attention, which retains more local detail features and enhances the representation of global features while reducing the number of network parameters. We designed a multiscale grouped convolutional feature extraction module with attention (GAM) to strengthen the representation of detailed features. A convolutional positional encoding module (PEG) was added to redetermine the number of transformer layers; it mitigates the loss of local feature information and the difficulty of network convergence. A channel attention module (CAM) in the decoder enhances the salient information in the features and reduces the information redundancy introduced by feature fusion. We evaluated MATUnet on the WHU building dataset and the Massachusetts dataset, where it achieved the best IoU results of 92.14% and 83.22%, respectively, outperforming other general-purpose and state-of-the-art networks under the same conditions. MATUnet also achieved good segmentation results on the GF2 Xichang building dataset.
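The decoder's channel attention module reweights feature channels so that salient information dominates after feature fusion. The paper does not give the exact formulation here, but a common squeeze-and-excitation-style channel attention can be sketched as follows (NumPy, with random untrained weights; the bottleneck sizes and `reduction` ratio are illustrative assumptions, not the authors' configuration):

```python
import numpy as np

def channel_attention(x, reduction=4):
    """Illustrative SE-style channel attention (not the paper's exact CAM).

    x: feature map of shape (C, H, W).
    Returns x with each channel scaled by a learned gate in (0, 1).
    """
    c, h, w = x.shape
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    s = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP (random stand-in weights; trained in practice)
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) / np.sqrt(c)
    w2 = rng.standard_normal((c, c // reduction)) / np.sqrt(c // reduction)
    z = np.maximum(w1 @ s, 0.0)            # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))  # sigmoid, per-channel weight in (0, 1)
    # Scale: reweight each channel of the input feature map
    return x * gate[:, None, None]

feat = np.random.default_rng(1).standard_normal((8, 16, 16))
out = channel_attention(feat)
print(out.shape)  # (8, 16, 16)
```

Because the gate lies in (0, 1), redundant channels are attenuated rather than removed, which is the behavior the abstract attributes to the CAM after feature fusion.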
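The reported evaluation metric, IoU (intersection over union), measures overlap between the predicted building mask and the ground truth. A minimal sketch of the standard binary-mask computation (the authors' exact evaluation code is not given here):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union for binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Convention: two empty masks agree perfectly
    return inter / union if union else 1.0

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt   = np.array([[1, 0, 0],
                 [0, 1, 1]])
print(iou(pred, gt))  # 2 overlapping / 4 in union = 0.5
```

An IoU of 92.14%, as reported on the WHU dataset, thus means predicted and ground-truth building pixels overlap in over 92% of their union.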
