Integrating Contextual Information and Attention Mechanisms with Sparse Convolution for the Extraction of Internal Objects within Buildings from Three-Dimensional Point Clouds
Author(s): Mingyang Yu, Zhongxu Li, Qiuxiao Xu, Fei Su, Xin Chen, Weikang Cui, Qingrui Ji
Medium: Journal article
Language(s): English
Published in: Buildings, v. 14, n. 3, 21 February 2024
Page(s): 636
DOI: 10.3390/buildings14030636
Abstract:
Deep learning-based point cloud semantic segmentation has gained popularity over time, with sparse convolution being the most prominent example. Although sparse convolution is more efficient than regular convolution, it sacrifices global context information. To address this problem, this paper proposes the OcspareNet network, which uses sparse convolution as its backbone and captures global contextual information with an offset attention module and a context aggregation module. The offset attention module improves the network's capacity to obtain global contextual information about the point cloud. The context aggregation module exploits contextual information during both the training and testing phases, strengthening the network's grasp of the overall scene structure and improving segmentation accuracy on difficult categories. Compared to state-of-the-art (SOTA) models, our model has a smaller parameter count and achieves higher accuracy on challenging segmentation categories such as 'pictures', 'counters', and 'desks' in the ScanNetV2 dataset, with IoU scores of 41.1%, 70.3%, and 72.5%, respectively. Furthermore, ablation experiments confirmed the efficacy of the designed modules.
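The abstract gives no implementation details of the offset attention module. For orientation only, below is a minimal PyTorch sketch of an offset-attention block in the spirit of the PCT formulation (Guo et al., 2021) that the term usually refers to: the residual added back to the input is the offset between the input features and the self-attended features, passed through a Linear-BatchNorm-ReLU block. The class name, single-head simplification, and dimensions are illustrative assumptions, not the paper's actual module.

```python
import torch
import torch.nn as nn


class OffsetAttention(nn.Module):
    """Single-head offset-attention sketch for point features.

    Assumed simplification of the offset-attention idea from PCT
    (Guo et al., 2021): instead of adding the self-attention output
    directly, the offset between input and attended features is fed
    through a Linear-BatchNorm-ReLU block and added as a residual.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Linear(channels, channels // 4, bias=False)
        self.k = nn.Linear(channels, channels // 4, bias=False)
        self.v = nn.Linear(channels, channels, bias=False)
        self.lbr = nn.Sequential(
            nn.Linear(channels, channels),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C) feature vectors of the N points in one scene
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = torch.softmax(q @ k.t(), dim=-1)   # (N, N) global attention map
        x_att = attn @ v                          # globally attended features
        return x + self.lbr(x - x_att)            # offset residual connection


# Hypothetical usage: 2048 points with 64-dimensional features
feats = torch.randn(2048, 64)
out = OffsetAttention(64)(feats)                  # -> (2048, 64)
```

Note that the dense (N, N) attention map scales quadratically with point count, which is why such global modules are typically paired with an efficient sparse-convolution backbone rather than applied to raw full-resolution scans.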
Copyright: © 2024 by the authors; licensee MDPI, Basel, Switzerland.
License: This work has been published under the Creative Commons Attribution 4.0 International license (CC-BY 4.0) and may be reproduced, distributed, made publicly available, adapted, and modified under the terms of that license, provided the author or rights holder is credited and the license conditions are observed.
Reference ID: 10773806
Published on: 29 April 2024
Modified on: 5 June 2024