Article
Open Access
Human–machine interaction controlling system for teleoperation of robotic arms in prefabrication assembly
1 National Center of Technology Innovation for Digital Construction, Huazhong University of Science and Technology, Wuhan, China
2 School of Civil and Hydraulic Engineering, Huazhong University of Science and Technology, Wuhan, China
3 Department of Civil and Environmental Engineering, University of Maryland, College Park, USA
4 Faculty of Informatics and Communication, University of Economics in Katowice, Katowice, Poland
5 Institute of Artificial Intelligence, Huazhong University of Science and Technology, Wuhan, China
  • Citation
    Zhou C, Chen R, Sekula P, Tang B, Qu Y. Human–machine interaction controlling system for teleoperation of robotic arms in prefabrication assembly. Smart Constr. 2024(1):0005, https://doi.org/10.55092/sc20240005. 
  • DOI
    10.55092/sc20240005
  • Copyright
    Copyright © 2024 by the authors. Published by ELSP.
Abstract

Prefabrication assembly has become a widely used method in the construction industry in recent years. This study describes a control system for the teleoperation of robotic arms in prefabrication assembly, based on hand gesture recognition with transfer learning. A deep convolutional neural network built on the Xception model was used to recognize 13 hand gesture types during a prefabrication assembly process with a robotic arm in a laboratory setting. The proposed system improves safety and convenience for operators on construction sites. Results demonstrate that the system performs satisfactorily and that the developed algorithm can be used for teleoperation of robotic arms in prefabrication assembly, providing feasible support for prefabricated construction.
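The transfer-learning setup named in the abstract can be sketched as follows. This is a minimal illustration using `tf.keras`, not the authors' implementation: the input size, frozen base, optimizer, and single-layer classification head are assumptions, and a real transfer-learning run would load pretrained weights (`weights="imagenet"`) rather than the random initialization used here to keep the sketch light.

```python
import tensorflow as tf

# Backbone: Xception without its original classification head.
# weights=None keeps this sketch self-contained; the transfer-learning
# setting described in the abstract would use weights="imagenet".
base = tf.keras.applications.Xception(
    weights=None, include_top=False, pooling="avg", input_shape=(299, 299, 3)
)
base.trainable = False  # freeze the convolutional base; train only the new head

# New classification head for the 13 gesture classes mentioned in the abstract.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(13, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

After this, `model.fit` would be called on labeled gesture images; only the dense head's weights are updated while the Xception features are reused.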

Keywords

prefabrication assembly; hand gesture recognition; teleoperation; human–machine interaction; transfer learning
