

Performance Analysis of Optical Camera Communication with Applied Convolutional Neural Network
(합성곱 신경망을 적용한 Optical Camera Communication 시스템 성능 분석)

Smart Media Journal (스마트미디어저널), v.12, no.3, 2023, pp. 49-59

김종인, 박현선, 김정현 (Micro-LED Display Research Center, Korea Photonics Technology Institute)

Abstract

Optical Camera Communication (OCC), regarded as a next-generation wireless communication technology, is under extensive research. Its performance depends on the communication environment, and various strategies are being studied to improve it. The most prominent of these is to apply deep learning, specifically a convolutional neural network (CNN), at the OCC receiver. In most studies, however, the CNN is used only to detect the transmitter. In this paper, the CNN is applied not only to transmitter detection but also to the receiver (Rx) demodulation system. We further hypothesized that, because the data images of an OCC system are comparatively simple to classify, unlike other image datasets, most CNN models would achieve high accuracy. To verify this hypothesis, we designed and implemented an OCC system, collected data, and evaluated 12 different CNN models. The results show that not only high-performance CNN models with many parameters but also lightweight CNN models achieved over 99% accuracy, confirming that the OCC system can be deployed on low-performance computing devices such as smartphones.

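The abstract treats demodulation as an image-classification task: cropped frames of the LED transmitter are fed to a CNN that predicts the transmitted symbol. The sketch below only illustrates that idea; it is not the authors' code, and the 64x64 grayscale crop size, the two symbol classes, and the tiny architecture are assumptions made for this example, whereas the paper evaluates 12 different CNN models.

import torch
import torch.nn as nn

class SmallOCCNet(nn.Module):
    """Illustrative lightweight CNN for demodulating OCC symbol images (not the paper's model)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)         # (N, 32, 16, 16) for 64x64 input
        x = torch.flatten(x, 1)
        return self.classifier(x)    # one logit per symbol class

if __name__ == "__main__":
    model = SmallOCCNet(num_classes=2)
    crops = torch.randn(8, 1, 64, 64)       # 8 hypothetical transmitter crops
    symbols = model(crops).argmax(dim=1)    # predicted symbol per crop
    print(symbols.shape)                    # torch.Size([8])

Because OCC symbol images are far simpler than natural-image datasets, a network of roughly this size is plausibly sufficient, which is consistent with the reported 99%+ accuracy of the lightweight models tested in the paper.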

