
Enhancing Empathic Reasoning of Large Language Models Based on Psychotherapy Models for AI-assisted Social Support
(인공지능 기반 사회적 지지를 위한 대형언어모형의 공감적 추론 향상: 심리치료 모형을 중심으로)

Korean Journal of Cognitive Science (인지과학), v.35 no.1, 2024, pp. 23-48

Yoon Kyung Lee (Department of Psychology, Seoul National University), Inju Lee (Department of Psychology, Seoul National University), Minjung Shin (Interdisciplinary Program in Cognitive Science, Seoul National University), Seoyeon Bae (Department of Psychology, Seoul National University), Sowon Hahn (Department of Psychology, Seoul National University)

Abstract

Despite ongoing efforts to apply large language models (LLMs) in real-world settings, AI's ability to understand context and provide social support aligned with human intent remains limited. In this study, we developed the Chain of Empathy (CoE), a novel prompting method grounded in psychotherapy theory that induces LLMs to reason about a person's emotional state. CoE-based LLMs drew on distinct psychotherapy approaches, namely cognitive-behavioral therapy (CBT), dialectical behavior therapy (DBT), person-centered therapy (PCT), and reality therapy (RT), and were designed to interpret the client's mental state in line with the aims of each approach. Without CoE-based reasoning, the LLM mostly generated exploratory empathic expressions (e.g., open-ended questions) in response to clients' posts seeking social support; with induced reasoning, it generated diverse empathic expressions consistent with the mental-state reasoning characteristic of each psychotherapy model. In an empathic-expression classification task, CBT-based CoE classified emotional reactions, explorations, and interpretations most evenly, whereas DBT- and PCT-based CoE classified emotional-reaction expressions better. We additionally analyzed the generated text qualitatively for each prompt condition and evaluated alignment accuracy. Our findings carry implications for how emotional and contextual understanding affects human-AI communication. In particular, they provide evidence that the mode of reasoning matters for AI to communicate with humans safely and empathically, and suggest that psychological theory can contribute to the advancement and application of AI by enhancing such reasoning.
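The CoE conditions described above amount to prepending a therapy-specific reasoning step to the prompt before asking for an empathic response. The sketch below illustrates that structure only; the per-model step wordings and the `build_coe_prompt` helper are hypothetical illustrations, not the paper's actual prompts.

```python
# Hypothetical one-line reasoning steps, loosely matching each therapy's focus
# as summarized in the abstract (CBT: thoughts; DBT: validation; PCT: reflection;
# RT: needs). The real study's prompt text is not reproduced here.
COE_REASONING_STEPS = {
    "CBT": "First, identify the client's automatic thoughts and cognitive distortions.",
    "DBT": "First, acknowledge and validate the client's emotional distress.",
    "PCT": "First, reflect the client's feelings with unconditional positive regard.",
    "RT":  "First, identify the unmet basic needs behind the client's situation.",
}

def build_coe_prompt(client_post, model=None):
    """Compose a prompt for an LLM. If `model` names a psychotherapy approach,
    a mental-state reasoning step is inserted before the response instruction
    (the CoE condition); otherwise the baseline prompt asks for an empathic
    response directly."""
    lines = [f"Client: {client_post}"]
    if model is not None:
        lines.append(COE_REASONING_STEPS[model])
        lines.append("Then, based on this reasoning, write an empathic response.")
    else:
        lines.append("Write an empathic response.")
    return "\n".join(lines)
```

Under this framing, the baseline and CoE conditions differ only in whether the reasoning step is present, which is what lets the study attribute differences in generated empathy types to the induced reasoning.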




Open Access (OA) type: BRONZE

The article is available on the publisher's or scholarly society's website, where access is granted either through a temporary promotion or after a set period has elapsed.
