한국산업융합학회 논문집 = Journal of the Korean Society of Industry Convergence, v.25 no.4/1, 2022, pp. 687-697
Oh Sangjin (Dept. of Naval Architecture and Ocean Engineering, Pusan National University), Yun Gwangho (Dept. of Naval Architecture and Ocean Engineering, Pusan National University), Lim Chaeok (Dept. of Naval Architecture and Ocean Engineering, Pusan National University), Shin Sungchul (Dept. of Naval Architecture and Ocean Engineering, Pusan National University)
An automated system is needed to make non-destructive testing effective. To utilize the radiographic testing data accumulated on film, welding defects were classified into nine types and the shapes of the defects were analyzed. The data were preprocessed for use with deep learning with high ...
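The preprocessing step mentioned in the abstract can be sketched roughly as follows. This is a minimal illustration only: the normalization scheme and the patch size are assumptions, not the authors' actual pipeline, which the truncated abstract does not specify.

```python
import numpy as np

def preprocess_radiograph(image: np.ndarray, patch_size: int = 64) -> np.ndarray:
    """Normalize a grayscale radiograph to [0, 1] and tile it into square patches.

    Illustrative sketch only; the paper's real preprocessing is not reproduced here.
    """
    img = image.astype(np.float32)
    lo, hi = img.min(), img.max()
    # Min-max normalize; a constant image maps to all zeros.
    img = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    # Crop to a multiple of patch_size, then split into non-overlapping patches.
    h = (img.shape[0] // patch_size) * patch_size
    w = (img.shape[1] // patch_size) * patch_size
    img = img[:h, :w]
    patches = img.reshape(h // patch_size, patch_size, w // patch_size, patch_size)
    patches = patches.transpose(0, 2, 1, 3).reshape(-1, patch_size, patch_size)
    return patches
```

Patches like these could then be fed to a classifier over the nine defect classes; tiling keeps small defects at a usable scale instead of shrinking the full film image.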