
Extraction of Workers and Heavy Equipment and Multi-Object Tracking Using Surveillance Systems at Construction Sites

한국건축시공학회지 = Journal of the Korea Institute of Building Construction, v.21 no.5, 2021, pp.397-408

조영운 (Construction Engineering and Management Institute, Sahmyook University), 강경수 (Construction Engineering and Management Institute, Sahmyook University), 손보식 (Department of Architectural Engineering, Namseoul University), 류한국 (Department of Architecture, Sahmyook University)

Abstract

The construction industry is considered the most hazardous industry, with a higher frequency of occupational accidents and more fatalities than any other sector. To reduce and prevent industrial accidents at construction sites, the Korean government announced mandatory installation of CCTV. Safety managers at construction sites monitor CCTV feeds to identify and eliminate latent hazards and prevent accidents. However, prolonged monitoring is highly fatiguing, and critical situations are often missed. This study therefore developed a multi-class multi-object tracking system by combining YOLACT, a deep-learning-based instance segmentation model, with SORT, a multi-object tracking method. The performance of the proposed approach was evaluated on footage recorded at construction sites using the MS COCO and MOT evaluation metrics. Because SORT depends heavily on YOLACT, tracking of distant objects degraded when the model was trained on a dataset containing few small objects, but performance was strong for large objects. The results suggest that deep-learning-based computer vision techniques can serve as an aid in safety monitoring and thereby help prevent occupational accidents.
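The core data-association step of SORT, as used in the pipeline above, matches detector outputs (here, boxes from an instance-segmentation model such as YOLACT) to existing tracks by intersection-over-union using the Hungarian algorithm. A minimal sketch follows; the function names, box format, and the 0.3 IoU threshold are illustrative assumptions, not the paper's exact settings, and the Kalman-filter motion prediction that full SORT applies before association is omitted.

```python
# Minimal sketch of SORT's IoU-based association step (assumptions noted above).
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(a, b):
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def associate(tracks, detections, iou_threshold=0.3):
    """Match track boxes to detection boxes.

    Returns a list of (track_index, detection_index) pairs whose IoU
    clears the threshold; unmatched tracks/detections are simply absent.
    """
    if not tracks or not detections:
        return []
    # Hungarian algorithm minimizes cost, so use (1 - IoU) as the cost.
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols)
            if 1.0 - cost[r, c] >= iou_threshold]
```

In full SORT, matched detections update each track's Kalman filter state, unmatched detections spawn new tracks, and tracks unmatched for several frames are deleted; this sketch covers only the matching itself.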



Tables/Figures (9)

References (39)

  1. Kim D. Occupational accident/injury analysis 2009. Ulsan (Korea): Korea Occupational Safety and Health Agency; 2021 Jan;15-22. Grant No.: 118006 Supported by KOSTAT. 

  2. Kim H. Construction safety innovation plan: Reinforcement of management of vulnerable construction, etc [Internet]. Sejong (Korea): Ministry of Land, Infrastructure and Transport. 2020 Apr 24 [cited 2021 Apr 7]. Available from: http://www.molit.go.kr/USR/NEWS/m_71/dtl.jsp?id=95083805

  3. Heejung. Women who "watch the monitor" [Internet]. Seoul (Korea): Ildaro. 2019 Aug 30 [cited 2021 Apr 7]. Available from: https://ildaro.com/8536 

  4. Park Y. Only one person monitors 438 CCTVs [Internet]. Seoul (Korea): Munhwa Ilbo. 2017 Nov 28 [cited 2021 Apr 7]. Available from: http://www.munhwa.com/news/view.html?no=2017112801031627109001

  5. Choi M, Choi J. CCTV integrated control center operation status and improvement plan legislative policy report. Seoul, Korea: National Assembly Research Service, NARS; 2019. p. 1-33. 

  6. LeCun Y, Boser B, Denker JS, Henderson D, Howard RE, Hubbard W, Jackel LD. Backpropagation applied to handwritten zip code recognition. Neural computation. 1989 Dec;1(4):541-51. https://doi.org/10.1162/neco.1989.1.4.541 

  7. Lee YJ, Park MW. 3D tracking of multiple onsite workers based on stereo vision. Automation in Construction. 2019 Feb;98:146-59. https://doi.org/10.1016/j.autcon.2018.11.017 

  8. Dalal N, Triggs B. Histograms of oriented gradients for human detection. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition; 2005 Jun 20-25; San Diego, CA. NJ: Institute of Electrical and Electronics Engineers; 2005. p. 886-93. https://doi.org/10.1109/CVPR.2005.177 

  9. Park MW, Brilakis I. Continuous localization of construction workers via integration of detection and tracking. Automation in Construction. 2016 Dec;72:129-42. https://doi.org/10.1016/j.autcon.2016.08.039 

  10. Zhang Z. Determining the epipolar geometry and its uncertainty: A review. International journal of computer vision. 1998 Mar;27(2):161-95. https://doi.org/10.1023/A:1007941100561 

  11. Zhao Y, Chen Q, Cao W, Yang J, Xiong J, Gui G. Deep learning for risk detection and trajectory tracking at construction sites. IEEE Access; 2019 Mar;7:30905-12. https://doi.org/10.1109/ACCESS.2019.2902658 

  12. Redmon J, Farhadi A. Yolov3: An incremental improvement. arXiv:1804.02767 [Preprint]. 2018 [cited 2021 Apr 8]. Available from: https://arxiv.org/abs/1804.02767

  13. Kalman RE. A new approach to linear filtering and prediction problems. Journal of Basic Engineering. 1960 Mar;82(1):35-45. https://doi.org/10.1115/1.3662552

  14. Kuhn HW. The Hungarian method for the assignment problem. Naval research logistics quarterly. 1955 Mar;2(1-2):83-97. https://doi.org/10.1002/nav.3800020109 

  15. Ishioka H, Weng X, Man Y, Kitani K. Single camera worker detection, tracking and action recognition in construction site. Proceedings of the 37th International Symposium on Automation and Robotics in Construction (ISARC); 2020 Oct; Kitakyushu, Japan. FL: International Association for Automation and Robotics in Construction (IAARC); 2020. p. 653-60. https://doi.org/10.22260/ISARC2020/0092 

  16. Angah O, Chen AY. Tracking multiple construction workers through deep learning and the gradient based method with rematching based on multi-object tracking accuracy. Automation in Construction. 2020 Nov;119:103308. https://doi.org/10.1016/j.autcon.2020.103308 

  17. He K, Gkioxari G, Dollar P, Girshick R. Mask R-CNN. 2017 IEEE International Conference on Computer Vision (ICCV); 2017 Oct 22-29; Venice, Italy. NJ: Institute of Electrical and Electronics Engineers; 2017. p. 2961-9. https://doi.org/10.1109/ICCV.2017.322 

  18. Nath ND, Behzadan AH, Paal SG. Deep learning for site safety: Real-time detection of personal protective equipment. Automation in Construction. 2020 Apr;112:103085. https://doi.org/10.1016/j.autcon.2020.103085 

  19. Son H, Choi H, Seong H, Kim C. Detection of construction workers under varying poses and changing background in image sequences via very deep residual networks. Automation in Construction. 2019 Mar;99:27-38. https://doi.org/10.1016/j.autcon.2018.11.033 

  20. Ren S, He K, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE transactions on pattern analysis and machine intelligence. 2017 Jun;39(6):1137-49. https://doi.org/10.1109/TPAMI.2016.2577031 

  21. Guo Y, Xu Y, Li S. Dense construction vehicle detection based on orientation-aware feature fusion convolutional neural network. Automation in Construction. 2020 Apr;112:103124. https://doi.org/10.1016/j.autcon.2020.103124 

  22. Li Z, Zhou F. FSSD: feature fusion single shot multibox detector. arXiv:1712.00960 [Preprint]. 2017 [cited 2021 Apr 12]. Available from: https://arxiv.org/abs/1712.00960 

  23. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer-Assisted Intervention. 2015 Oct 5; Munich, Germany. MN: The Medical Image Computing and Computer Assisted Intervention Society; 2015. p. 234-41. https://doi.org/10.1007/978-3-319-24574-4_28 

  24. Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2017 Apr;39(4): 640-51. https://doi.org/10.1109/TPAMI.2016.2572683 

  25. Truong T, Bhatt A, Queiroz L, Lai K, Yanushkevich S. Instance segmentation of personal protective equipment using a multi-stage transfer learning process. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). 2020 Oct 11-14; Toronto, Canada. NJ: Institute of Electrical and Electronics Engineers; 2020. p. 1181-6. https://doi.org/10.1109/SMC42975.2020.9283427

  26. Yang Z, Yuan Y, Zhang M, Zhao X, Zhang Y, Tian B. Safety distance identification for crane drivers based on mask R-CNN. Sensors. 2019 Jan;19(12):2789. https://doi.org/10.3390/s19122789 

  27. GitHub: Where the world builds software [Internet]. Image Polygonal Annotation with Python: GitHub, Inc. 2008 - [cited 2021 Apr 7]. Available from: https://github.com/wkentaro/labelme 

  28. Bolya D, Zhou C, Xiao F, Lee YJ. Yolact: Real-time instance segmentation. 2019 IEEE/CVF International Conference on Computer Vision(ICCV). 2019 Oct 27-Nov 2; Seoul, Korea. NJ: Institute of Electrical and Electronics Engineers; 2020. p.9157-66. https://doi.org/10.1109/ICCV.2019.00925 

  29. Lin TY, Dollar P, Girshick R, He K, Hariharan B, Belongie S. Feature pyramid networks for object detection. 2017 IEEE Conference on Computer Vision and Pattern Recognition(CVPR). 2017 Jul 21-26; Honolulu, HI. NJ: Institute of Electrical and Electronics Engineers; 2017. p. 2117-25. https://doi.org/10.1109/CVPR.2017.106 

  30. Voigtlaender P, Krause M, Osep A, Luiten J, Sekar BB, Geiger A, Leibe B. Mots: Multi-object tracking and segmentation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR). 2019 June 15-20; Long Beach, CA. NJ: Institute of Electrical and Electronics Engineers; 2020. p. 7942-51. https://doi.org/10.1109/CVPR.2019.00813 

  31. Luo W, Xing J, Milan A, Zhang X, Liu W, Kim TK. Multiple object tracking: A literature review. Artificial Intelligence. 2021 Apr;293:103448. https://doi.org/10.1016/j.artint.2020.103448

  32. Bewley A, Ge Z, Ott L, Ramos F, Upcroft B. Simple online and realtime tracking. 2016 IEEE international conference on image processing(ICIP). 2016 Sept 25-28; Phoenix, AZ. NJ: Institute of Electrical and Electronics Engineers; 2016. p. 3464-8. https://doi.org/10.1109/ICIP.2016.7533003 

  33. Everingham M, Van Gool L, Williams CK, Winn J, Zisserman A. The pascal visual object classes (voc) challenge. International journal of computer vision. 2010 Jun;88(2):303-38. https://doi.org/10.1007/s11263-009-0275-4 

  34. Lin TY, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Dollar P, Zitnick CL. Microsoft coco: Common objects in context. European conference on computer vision. 2014 Sep;8693:740-55. https://doi.org/10.1007/978-3-319-10602-1_48 

  35. Wojke N, Bewley A, Paulus D. Simple online and realtime tracking with a deep association metric. 2017 IEEE international conference on image processing(ICIP). 2017 Sep 17-20; Beijing, China. NJ: Institute of Electrical and Electronics Engineers; 2018. p. 3645-9. https://doi.org/10.1109/ICIP.2017.8296962 

  36. Leal-Taixe L, Milan A, Reid I, Roth S, Schindler K. Motchallenge 2015: Towards a benchmark for multi-target tracking. arXiv:1504.01942 [Preprint]. 2015 [cited 2021 Apr 8]. Available from: https://arxiv.org/abs/1504.01942 

  37. Milan A, Leal-Taixe L, Reid I, Roth S, Schindler K. MOT16: A benchmark for multi-object tracking. arXiv:1603.00831 [Preprint]. 2016 [cited 2021 Apr 8]. Available from: https://arxiv.org/abs/1603.00831 

  38. GitHub: Where the world builds software [Internet]. Deep learning-based Computer Vision Models for PyTorch: GitHub, Inc. 2008 - [cited 2021 Apr 7]. Available from: https://github.com/unerue/boda 

  39. Xuehui A, Li Z, Zuguang L, Chengzhi W, Pengfei L, Zhiwei L. Dataset and benchmark for detecting moving objects in construction sites. Automation in Construction. 2021 Feb;122:103482. https://doi.org/10.1016/j.autcon.2020.103482 
