사용자 특성과 ChatGPT 신뢰의 관계 : 인구통계학적 변수와 AI 경험의 영향
User Factors and Trust in ChatGPT: Investigating the Relationship between Demographic Variables, Experience with AI Systems, and Trust in ChatGPT

디지털산업정보학회논문지 = Journal of the Korea Society of Digital Industry and Information Management, v.19 no.4, 2023, pp.53-71

박예은 (Department of Innovation, Yonsei University), 장정훈 (Department of Innovation, Yonsei University)

Abstract

This study explores the relationship between various user factors and the level of trust in ChatGPT, a sophisticated language model exhibiting human-like capabilities. Specifically, we considered demographic characteristics such as age, education, gender, and major, along with factors related to pre...


AI Summary

Proposed Method

  • Multiple linear regression analyses were performed to simultaneously examine the impact of demographic factors (age, gender, and education level) on trust in ChatGPT. Each of the three trust outcomes (Trust in System, Trust in Information, and Relative Trust) was regressed on the demographic factors, with ChatGPT user experience variables considered as additional independent variables to control for their effects. For each model, a stepwise regression approach was used to select the experience variables most strongly associated with the outcome and to keep the model parsimonious.
  • The aim of this research is to examine the relationship between user factors, including demographic information and past AI experience, and users' trust in ChatGPT.
  • To achieve this, a quantitative research design was employed. The methodology involves descriptive analysis using box plots to visualize the distributions of the variables of interest, and Spearman correlation tests to quantify the relationships between the predictors and the primary outcomes. To further investigate the research questions, inferential statistics such as t-tests and ANOVA were used to compare means across groups (see the sketch after this list).
  • The purpose of this study was to investigate the relationship between user variables, including demographic attributes and AI experience, and the corresponding levels of trust. Descriptive and inferential statistics were used to test whether differences between groups were significant.
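
The descriptive and inferential steps described above could be sketched roughly as follows. This is a minimal illustration only: the input file and the column names (gender, education, chatgpt_usage_freq, trust_system) are hypothetical placeholders, not the study's actual data or variable names.

    # Minimal sketch of the descriptive/inferential analysis: box plots,
    # Spearman correlation, t-test, and one-way ANOVA.
    # The input file and all column names are hypothetical assumptions.
    import pandas as pd
    import matplotlib.pyplot as plt
    from scipy import stats

    df = pd.read_csv("survey_responses.csv")  # hypothetical survey export

    # Descriptive step: box plot of one trust outcome across a demographic group.
    df.boxplot(column="trust_system", by="gender")
    plt.suptitle("")
    plt.title("Trust in System by gender")
    plt.savefig("trust_by_gender.png")

    # Spearman correlation between an ordinal predictor and the primary outcome.
    rho, p = stats.spearmanr(df["chatgpt_usage_freq"], df["trust_system"])
    print(f"Spearman rho={rho:.3f}, p={p:.4f}")

    # t-test for a two-level factor; one-way ANOVA for a factor with more levels.
    male = df.loc[df["gender"] == "male", "trust_system"]
    female = df.loc[df["gender"] == "female", "trust_system"]
    t, p_t = stats.ttest_ind(male, female, equal_var=False)
    print(f"t-test (gender): t={t:.3f}, p={p_t:.4f}")

    groups = [g["trust_system"].values for _, g in df.groupby("education")]
    f_stat, p_f = stats.f_oneway(*groups)
    print(f"ANOVA (education): F={f_stat:.3f}, p={p_f:.4f}")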

Theory/Model

  • For each of the three trust outcomes (Trust in System, Trust in Information, and Relative Trust), the outcome was regressed on the demographic factors, with ChatGPT user experience variables considered as additional independent variables to control for their effects. A stepwise regression approach was used in each model to select the experience variables most strongly associated with the outcome and to keep the model parsimonious; all demographic factors were forced into every model (a rough sketch of this approach follows below).
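
The forced-entry plus stepwise modeling described above could be sketched roughly as follows. The variable names, the dummy coding of gender, and the AIC-based forward-selection criterion are assumptions for illustration; the paper may use different codings and a different selection rule.

    # Minimal sketch: demographic predictors are forced into every model,
    # while ChatGPT experience variables are added by forward stepwise
    # selection (keep adding the candidate that lowers AIC, while it helps).
    # Variable names and the AIC criterion are assumptions, not the paper's.
    import pandas as pd
    import statsmodels.api as sm

    def fit_ols(df, outcome, predictors):
        X = sm.add_constant(df[predictors])
        return sm.OLS(df[outcome], X).fit()

    def forward_stepwise(df, outcome, forced, candidates):
        selected = list(forced)            # demographics stay in every model
        best_aic = fit_ols(df, outcome, selected).aic
        remaining = list(candidates)
        improved = True
        while improved and remaining:
            improved = False
            aics = {v: fit_ols(df, outcome, selected + [v]).aic for v in remaining}
            best_var = min(aics, key=aics.get)
            if aics[best_var] < best_aic:  # only add a variable if it improves fit
                selected.append(best_var)
                best_aic = aics[best_var]
                remaining.remove(best_var)
                improved = True
        return fit_ols(df, outcome, selected)

    df = pd.read_csv("survey_responses.csv")                   # hypothetical data
    demographics = ["age", "gender_male", "education_level"]   # forced in
    experience = ["usage_freq", "prior_ai_experience"]         # stepwise candidates

    for outcome in ["trust_system", "trust_information", "relative_trust"]:
        model = forward_stepwise(df, outcome, demographics, experience)
        print(outcome, "->", list(model.params.index))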


Open Access (OA) type: BRONZE — the article is available on the publisher's or society's website, either through a temporary promotion or after a set embargo period.
