IEEE Signal Processing Letters, vol. 26, no. 3, 2019, pp. 505-509
Shin, Youhyun; Yoo, Kang Min; Lee, Sang-Goo (Department of Computer Science Engineering, Seoul National University, Seoul, South Korea)
Slot filling models must be trained on human-labeled data, which is expensive to obtain, so only a limited number of labeled utterances is readily available for learning. Data generation methods can increase the size of the training dataset and introduce variation into it by generating new instance...
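As a point of reference for the kind of data generation the abstract describes, a minimal sketch of one simple, widely used augmentation scheme for slot filling is slot-value substitution: swap each labeled slot value for another value of the same slot type while keeping the BIO labels aligned. This is a generic illustration, not the method proposed in the paper; the slot names, values, and utterance below are hypothetical.

```python
import random

# Hypothetical slot-type -> candidate-values table (single-token values
# for simplicity; real systems also handle multi-token spans).
SLOT_VALUES = {
    "city": ["boston", "denver", "seattle"],
    "airline": ["delta", "united"],
}

def augment(tokens, labels, rng=random):
    """Replace each slot value with another value of the same slot type.

    Labels follow the BIO scheme ("B-city", "O", ...). Because only the
    token surface form changes, the label sequence stays aligned.
    """
    new_tokens = []
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            slot = lab[2:]
            tok = rng.choice(SLOT_VALUES.get(slot, [tok]))
        new_tokens.append(tok)
    return new_tokens, labels

tokens = ["fly", "to", "boston", "on", "delta"]
labels = ["O", "O", "B-city", "O", "B-airline"]
aug_tokens, aug_labels = augment(tokens, labels, random.Random(0))
print(aug_tokens, aug_labels)
```

Each call produces a new utterance with the same label sequence, which is the property that makes such synthesized instances directly usable as extra slot-filling training data.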