지능정보연구 = Journal of Intelligence and Information Systems, v.29 no.4, 2023, pp. 287-308
박상언 (Department of Industrial and Management Information Engineering, Kyonggi University), 강주영 (School of e-Business, Ajou University)
Since launching its service in November 2022, ChatGPT has rapidly grown its user base and is having a significant impact on all aspects of society, marking a major turning point in the history of artificial intelligence. In particular, the inference ability of large language models such a...
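The prompting techniques this line of work builds on, chain-of-thought prompting (Wei et al., 2022) and its zero-shot variant (Kojima et al., 2022), both listed in the references below, can be summarized with a short sketch. The following Python fragment is an illustration only, not code from the paper under review; the helper names and the worked example are assumptions made for demonstration.

# Illustrative sketch of chain-of-thought (CoT) prompt construction,
# following Wei et al. (2022) and Kojima et al. (2022) as cited below.
# The function names and the example problem are hypothetical.

FEW_SHOT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def few_shot_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar whose answer spells out its intermediate
    reasoning, nudging the model to reason step by step (few-shot CoT)."""
    return FEW_SHOT_EXEMPLAR + f"Q: {question}\nA:"

def zero_shot_cot_prompt(question: str) -> str:
    """Zero-shot CoT: no exemplar, only the trigger phrase from
    Kojima et al. (2022) appended after the question."""
    return f"Q: {question}\nA: Let's think step by step."

if __name__ == "__main__":
    q = ("A cafe sold 23 coffees in the morning and 41 in the afternoon. "
         "How many coffees did it sell in total?")
    print(few_shot_cot_prompt(q))
    print(zero_shot_cot_prompt(q))

The few-shot variant relies on an exemplar that demonstrates the desired reasoning format, while the zero-shot variant relies solely on the trigger phrase; both construct plain prompt strings and leave the actual model call to whatever LLM client the reader uses.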
김맹근. (2023). ChatGPT use cases and prospects (ChatGPT 활용 사례 및 전망). 디지털 비즈온. http://www.digitalbizon.com/news/articleView.html?idxno=2331610
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.
Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., ... & Fiedel, N. (2023). PaLM: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240), 1-113.
Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., ... & Schulman, J. (2021). Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Diao, S., Wang, P., Lin, Y., & Zhang, T. (2023). Active prompting with chain-of-thought for large language models. arXiv preprint arXiv:2302.12246.
Fan, A., Lewis, M., & Dauphin, Y. (2018). Hierarchical neural story generation. Annual Meeting of the Association for Computational Linguistics, 56(1), 889-898.
Ficler, J., & Goldberg, Y. (2017). Controlling linguistic style aspects in neural language generation. arXiv preprint arXiv:1707.02633.
Geva, M., Khashabi, D., Segal, E., Khot, T., Roth, D., & Berant, J. (2021). Did Aristotle use a laptop? A question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9, 346-361.
Holtzman, A., Buys, J., Du, L., Forbes, M., & Choi, Y. (2019). The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751.
Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2022). Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35, 22199-22213.
Koncel-Kedziorski, R., Hajishirzi, H., Sabharwal, A., Etzioni, O., & Ang, S. D. (2015). Parsing algebraic word problems into equations. Transactions of the Association for Computational Linguistics, 3, 585-597.
Koncel-Kedziorski, R., Roy, S., Amini, A., Kushman, N., & Hajishirzi, H. (2016, June). MAWPS: A math word problem repository. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 1152-1157).
Lampinen, A. K., Dasgupta, I., Chan, S. C., Matthewson, K., Tessler, M. H., Creswell, A., ... & Hill, F. (2022). Can language models learn from explanations in context? arXiv preprint arXiv:2204.02329.
Liu, J., Liu, A., Lu, X., Welleck, S., West, P., Bras, R. L., ... & Hajishirzi, H. (2021). Generated knowledge prompting for commonsense reasoning. arXiv preprint arXiv:2110.08387.
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig, G. (2023). Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9), 1-35.
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., ... & Lowe, R. (2022). Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35, 27730-27744.
Patel, A., Bhattamishra, S., & Goyal, N. (2021). Are NLP models really able to solve simple math word problems? arXiv preprint arXiv:2103.07191.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9.
Reynolds, L., & McDonell, K. (2021, May). Prompt programming for large language models: Beyond the few-shot paradigm. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-7).
Roy, S., & Roth, D. (2016). Solving general arithmetic word problems. arXiv preprint arXiv:1608.01413.
Srivastava, A., Rastogi, A., Rao, A., Shoeb, A. A. M., Abid, A., Fisch, A., ... & Wang, G. (2022). Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615.
Talmor, A., Herzig, J., Lourie, N., & Berant, J. (2018). CommonsenseQA: A question answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937.
Thoppilan, R., De Freitas, D., Hall, J., Shazeer, N., Kulshreshtha, A., Cheng, H. T., ... & Le, Q. (2022). LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239.
Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., ... & Zhou, D. (2022). Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., ... & Zhou, D. (2022). Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35, 24824-24837.
Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., & Narasimhan, K. (2023). Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601.
Zhang, T., Kishore, V., Wu, F., Weinberger, K. Q., & Artzi, Y. (2019). BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.
Zhang, Z., Zhang, A., Li, M., & Smola, A. (2022). Automatic chain of thought prompting in large language models. arXiv preprint arXiv:2210.03493.