Deep learning–based roadway crack classification with heterogeneous image data fusion
Author(s): | Shanglian Zhou; Wei Song |
Medium: | journal article |
Language(s): | English |
Published in: | Structural Health Monitoring, January 2021, v. 20, n. 3 |
Page(s): | 147592172094843 |
DOI: | 10.1177/1475921720948434 |
Abstract: |
By providing accurate and efficient crack detection and localization, image-based crack detection methodologies can facilitate decision-making and the rehabilitation of roadway infrastructure. The deep convolutional neural network, one of the most prevalent image-based methodologies for object recognition, has been extensively adopted for crack classification tasks over the past decade. Most current deep convolutional neural network–based techniques use either intensity or range image data to interpret crack presence. However, complexities in real-world data may impair the robustness of a deep convolutional neural network architecture when analyzing image data with various types of disturbances, such as low contrast in intensity images and shallow cracks in range images. Detection performance under these disturbances is important for protecting the investment in infrastructure, as it can reveal the trend of crack evolution and provide early-stage information to promote precautionary measures. This article proposes novel deep convolutional neural network–based roadway crack classification tools and investigates their performance from the perspective of heterogeneous image fusion. A vehicle-mounted laser imaging system with a depth resolution of 0.1 mm and an accuracy of 0.4 mm is adopted for data acquisition on concrete roadways. In total, four types of image data (raw intensity, raw range, filtered range, and fused raw image data) are used to train and test the deep convolutional neural network architectures proposed in this study. The experimental cases demonstrate that the proposed data fusion approach can reduce false detections, improving the F-measure by 4.5%, 1.2%, and 0.7%, respectively, compared to using the raw intensity, raw range, and filtered range image data.
Furthermore, in another experimental case, two novel deep convolutional neural network architectures proposed in this study are compared on the fused raw image data, and the one yielding better classification performance is identified. |
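The abstract does not detail the paper's fusion scheme or its evaluation metric. A common way to fuse heterogeneous intensity and range images for a convolutional network is channel-wise stacking after per-image normalization, and the F-measure is the standard harmonic mean of precision and recall. The sketch below illustrates both under those assumptions; the function names, array shapes, and normalization choices are illustrative, not taken from the paper.

```python
import numpy as np

def normalize(img):
    """Scale an image to [0, 1]; a constant image maps to zeros."""
    img = img.astype(np.float64)
    span = img.max() - img.min()
    return (img - img.min()) / span if span > 0 else np.zeros_like(img)

def fuse_intensity_range(intensity, rng):
    """Channel-wise fusion (assumed scheme): stack normalized intensity
    and range images into one (H, W, 2) tensor as CNN input."""
    assert intensity.shape == rng.shape, "images must be co-registered"
    return np.stack([normalize(intensity), normalize(rng)], axis=-1)

def f_measure(tp, fp, fn):
    """F-measure: harmonic mean of precision and recall, the metric
    used to compare the four image-data types in the study."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: fuse a synthetic 64x64 intensity/range pair
intensity = np.random.rand(64, 64)
range_img = np.random.rand(64, 64)
fused = fuse_intensity_range(intensity, range_img)
print(fused.shape)  # (64, 64, 2)
```

With this layout, a two-channel input layer lets the network weigh photometric (intensity) and geometric (range) evidence jointly, which is the intuition behind fusing the two modalities.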
- Information
about this record - Reference-ID:
10562501 - Published on:
11.02.2021 - Modified on:
03.05.2021