International Database and Gallery of Structures


A Deep Learning-Based Approach to Enable Action Recognition for Construction Equipment


Medium: journal article
Language(s): English
Published in: Advances in Civil Engineering, v. 2020
Page(s): 1-14
DOI: 10.1155/2020/8812928

In order to support smart construction, the digital twin has become a well-recognized concept for virtually representing a physical facility. It is equally important to recognize human actions and the movement of construction equipment in virtual construction scenes. Compared to the extensive research on human action recognition (HAR), which can be applied to identify construction workers, research in the field of construction equipment action recognition (CEAR) is very limited, mainly due to the lack of available datasets with videos showing the actions of construction equipment. The contributions of this research are as follows: (1) the development of a comprehensive video dataset of 2,064 clips with five action types for excavators and dump trucks; (2) a new deep learning-based CEAR approach (known as a simplified temporal convolutional network, or STCN) that combines a convolutional neural network (CNN) with long short-term memory (LSTM, an artificial recurrent neural network), where the CNN extracts image features and the LSTM extracts temporal features from video frame sequences; and (3) a comparison of the proposed approach against a similar CEAR method and two of the best-performing HAR approaches, namely three-dimensional (3D) convolutional networks (ConvNets) and two-stream ConvNets, to evaluate the performance of STCN and investigate the possibility of directly transferring HAR approaches to the field of CEAR.
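The CNN-plus-LSTM pattern described in the abstract can be illustrated with a minimal sketch: a small CNN extracts a feature vector from each frame, an LSTM consumes the resulting per-frame sequence, and a linear head classifies the clip into one of the five action types. This is a generic illustration of the architecture family, not the authors' STCN; all layer sizes, frame counts, and the class name `CNNLSTMActionRecognizer` are hypothetical choices.

```python
import torch
import torch.nn as nn

class CNNLSTMActionRecognizer(nn.Module):
    """Hypothetical sketch of a CNN+LSTM video classifier:
    a per-frame CNN feature extractor, an LSTM over the frame
    sequence, and a linear head over the final hidden state."""

    def __init__(self, num_classes=5, feat_dim=64, hidden_dim=128):
        super().__init__()
        # Tiny per-frame CNN (illustrative; real models use deeper backbones).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B*T, 32, 1, 1)
        )
        self.proj = nn.Linear(32, feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, video):
        # video: (batch, frames, channels, height, width)
        b, t, c, h, w = video.shape
        x = self.cnn(video.reshape(b * t, c, h, w)).flatten(1)  # (B*T, 32)
        x = self.proj(x).reshape(b, t, -1)                      # (B, T, feat_dim)
        out, _ = self.lstm(x)                                   # (B, T, hidden_dim)
        return self.head(out[:, -1])                            # logits per clip

model = CNNLSTMActionRecognizer(num_classes=5)
clip = torch.randn(2, 8, 3, 64, 64)  # 2 clips of 8 RGB frames, 64x64
logits = model(clip)
print(logits.shape)  # torch.Size([2, 5]) — one score per action type
```

The key design point the abstract describes is this division of labor: spatial features come from the CNN applied frame by frame, while the LSTM captures how those features evolve across the frame sequence.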

Copyright: © Jinyue Zhang et al.

This creative work has been published under the Creative Commons Attribution 4.0 International (CC-BY 4.0) license, which allows copying, redistribution, and adaptation of the original work, provided appropriate credit is given to the original author and the conditions of the license are met.
