Journal of the Korea Computer Graphics Society, v.28 no.3, 2022, pp. 23-29
Kirak Kim (Sogang University, Art & Technology), Heeyeon Yeon (Sogang University, Department of Artificial Intelligence), Taeyoung Eun (Sogang University, Department of Computer Engineering), Moonryul Jung (Sogang University, Art & Technology)
Virtual humans are implemented in virtual spaces (virtual reality, mixed reality, the metaverse, etc.) with dedicated modeling tools such as the Unity 3D engine. Various human modeling tools have been introduced to give a virtual human an appearance, voice, facial expressions, and behavior resembling those of a real person, ...
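The abstract mentions reproducing human-like facial expressions on a virtual character; a common mechanism for this in such modeling tools is the linear blendshape model, in which an expression is a weighted sum of offsets from a neutral face mesh. The sketch below is illustrative only and not taken from the paper; the function name, mesh data, and weights are hypothetical placeholders.

    # Minimal sketch of a linear blendshape model (assumption: NumPy arrays for meshes).
    import numpy as np

    def blend_expression(neutral, blendshapes, weights):
        """Return vertex positions for the expression defined by `weights`.

        neutral:      (V, 3) array of neutral-face vertex positions
        blendshapes:  (K, V, 3) array of target shapes (e.g. "smile", "jaw open")
        weights:      (K,) array of activation weights, typically in [0, 1]
        """
        deltas = blendshapes - neutral           # per-shape offsets from the neutral face
        return neutral + np.tensordot(weights, deltas, axes=1)

    # Hypothetical 4-vertex face with two blendshape targets.
    neutral = np.zeros((4, 3))
    targets = np.stack([np.full((4, 3), 0.1),    # "smile" target
                        np.full((4, 3), -0.2)])  # "jaw open" target
    posed = blend_expression(neutral, targets, np.array([0.7, 0.3]))
    print(posed)

In practice the weights would be driven by an animation or expression-control system rather than set by hand, but the blending step itself stays this simple.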