  • International Database and Gallery of Structures

A Prior-Guided Dual Branch Multi-Feature Fusion Network for Building Segmentation in Remote Sensing Images

Author(s):
Type of medium: journal article
Language(s): English
Published in: Buildings, v. 14, n. 7
Page(s): 2006
DOI: 10.3390/buildings14072006
Abstract:

The domain of remote sensing image processing has witnessed remarkable advancements in recent years, with deep convolutional neural networks (CNNs) establishing themselves as a prominent approach for building segmentation. Despite the progress, traditional CNNs, which rely on convolution and pooling for feature extraction during the encoding phase, often fail to precisely delineate global pixel interactions, potentially leading to the loss of vital semantic details. Moreover, conventional CNN-based segmentation models frequently neglect the nuanced semantic differences between shallow and deep features during the decoding phase, which can result in subpar feature integration through rudimentary addition or concatenation techniques. Additionally, the unique boundary characteristics of buildings in remote sensing images, which offer a rich vein of prior information, have not been fully harnessed by traditional CNNs. This paper introduces an innovative approach to building segmentation in remote sensing images through a prior-guided dual branch multi-feature fusion network (PDBMFN). The network is composed of a prior-guided branch network (PBN) in the encoding process, a parallel dilated convolution module (PDCM) designed to incorporate prior information, and a multi-feature aggregation module (MAM) in the decoding process. The PBN leverages prior region and edge information derived from superpixels and edge maps to enhance edge detection accuracy during the encoding phase. The PDCM integrates features from both branches and applies dilated convolution across various scales to expand the receptive field and capture a more comprehensive semantic context. During the decoding phase, the MAM utilizes deep semantic information to direct the fusion of features, thereby optimizing segmentation efficacy. Through a sequence of aggregations, the MAM gradually merges deep and shallow semantic information, culminating in a more enriched and holistic feature representation. 
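The receptive-field expansion that the PDCM relies on can be illustrated with a short sketch. A convolution with kernel size k and dilation rate d covers an effective window of d·(k−1)+1 pixels per axis, so running several dilated branches in parallel samples context at multiple scales. The function name and the dilation rates below are illustrative assumptions, not taken from the paper:

```python
def effective_kernel(k: int, d: int) -> int:
    """Effective window size (per axis) of a k x k convolution with dilation d."""
    return d * (k - 1) + 1

# Hypothetical parallel branches with increasing dilation rates:
# each 3x3 kernel covers a progressively larger window at no extra parameter cost.
for d in (1, 2, 4, 8):
    w = effective_kernel(3, d)
    print(f"dilation {d}: 3x3 conv covers a {w}x{w} window")
```

Concatenating or summing the outputs of such branches, as the PDCM does after merging the two encoder branches, yields a feature map that mixes fine local detail (small dilation) with broad semantic context (large dilation).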
Extensive experiments are conducted across diverse datasets, such as WHU, Inria Aerial, and Massachusetts, revealing that PDBMFN outperforms other sophisticated methods in terms of segmentation accuracy. In the key segmentation metrics, including mIoU, precision, recall, and F1 score, PDBMFN shows a marked superiority over contemporary techniques. The ablation studies further substantiate the performance improvements conferred by the PBN’s prior information guidance and the efficacy of the PDCM and MAM modules.
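For reference, the reported metrics follow their standard definitions and can be computed from per-class confusion counts (this is the textbook formulation, not code from the paper; mIoU is the IoU averaged over classes):

```python
def segmentation_metrics(tp: int, fp: int, fn: int) -> dict:
    """Standard binary-segmentation metrics from confusion counts:
    tp = true positives, fp = false positives, fn = false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)  # intersection over union for one class
    return {"precision": precision, "recall": recall, "f1": f1, "iou": iou}

# Illustrative counts: 80 building pixels found, 10 spurious, 10 missed.
m = segmentation_metrics(tp=80, fp=10, fn=10)
for name, value in m.items():
    print(f"{name}: {value:.4f}")
```

Note that when precision equals recall, F1 equals both, while IoU is always the strictest of the four since errors of both kinds appear in its denominator.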

Copyright: © 2024 by the authors; licensee MDPI, Basel, Switzerland.
License:

This work has been published under the Creative Commons Attribution 4.0 International license (CC-BY 4.0). It may be reproduced, distributed, made publicly available, and adapted or modified under the terms of the license, provided the author or rights holder is credited and the license conditions are observed.

  • About this data page
  • Reference ID: 10795238
  • Published on: 01.09.2024
  • Last updated: 01.09.2024
Structurae cooperates with
International Association for Bridge and Structural Engineering (IABSE)
e-mosty Magazine
e-BrIM Magazine