韓國컴퓨터情報學會論文誌 = Journal of the Korea Society of Computer and Information, v.27 no.2, 2022, pp. 15-23
Jang, Ji-Mo (Korea Institute of Patent Information), Min, Jae-Ok (Korea Institute of Patent Information), Noh, Han-Sung (Korea Institute of Patent Information)
In the field of patents, NLP (Natural Language Processing) is a challenging task due to the linguistic specificity of patent literature, so there is an urgent need to research a language model optimized for Korean patent literature. Recently, in the field of NLP, there have been continuous attempts t...
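The linguistic specificity of patent text shows up most concretely at the tokenizer level (see the tokenizer-related references below). As a minimal illustrative sketch, not taken from this paper: assuming the Hugging Face transformers library, with the generic "bert-base-multilingual-cased" checkpoint as a stand-in for a general-purpose model, a shared multilingual subword vocabulary tends to fragment Korean patent terminology into many short pieces, which is one motivation for building a patent-specific tokenizer and language model.

```python
# Minimal sketch (not from the paper): how a general-purpose subword
# tokenizer handles a Korean patent-style sentence. Assumes the Hugging Face
# `transformers` library; "bert-base-multilingual-cased" is a generic
# stand-in checkpoint, not the model proposed in this paper.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

# A typical patent-style Korean sentence:
# "This relates to a display device comprising a semiconductor element."
sentence = "반도체 소자를 포함하는 표시 장치에 관한 것이다."

# A shared multilingual vocabulary typically splits domain terms such as
# 반도체 (semiconductor) into several short subword pieces, inflating
# sequence length and diluting term semantics.
print(tokenizer.tokenize(sentence))
```

A tokenizer trained on patent corpora (e.g., morpheme-aware pre-tokenization combined with SentencePiece, as several of the references below explore) would keep such terms as fewer, more meaningful units.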
Jeong, Su-Jeong, "Zur Analyse von mehr oder weniger festen Wortverbindungen in Patentschriften im Deutschen und Koreanischen," German Literature, Vol. 26, No. 3, pp. 360-361, 2016.
Devlin, Jacob, et al., "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding," arXiv preprint arXiv:1810.04805, 2018.
Yang, Zhilin, et al., "XLNet: Generalized Autoregressive Pretraining for Language Understanding," Advances in Neural Information Processing Systems, Vol. 32, 2019.
Radford, Alec, et al., "Improving Language Understanding by Generative Pre-Training," 2018.
Lan, Zhenzhong, et al., "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations," arXiv preprint arXiv:1909.11942, 2019.
Liu, Yinhan, et al., "RoBERTa: A Robustly Optimized BERT Pretraining Approach," arXiv preprint arXiv:1907.11692, 2019.
Clark, Kevin, et al., "ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators," arXiv preprint arXiv:2003.10555, 2020.
Wang, Alex, et al., "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding," arXiv preprint arXiv:1804.07461, 2018.
Lee, Jinhyuk, et al., "BioBERT: A Pre-trained Biomedical Language Representation Model for Biomedical Text Mining," Bioinformatics, Vol. 36, No. 4, pp. 1234-1240, 2020.
Beltagy, Iz; Lo, Kyle; Cohan, Arman, "SciBERT: A Pretrained Language Model for Scientific Text," arXiv preprint arXiv:1903.10676, 2019.
Lewis, Patrick, et al., "Pretrained Language Models for Biomedical and Clinical Tasks: Understanding and Extending the State-of-the-Art," Proceedings of the 3rd Clinical Natural Language Processing Workshop, pp. 146-157, 2020.
Raj Kanakarajan, Kamal; Kundumani, Bhuvana; Sankarasubbu, Malaikannan, "BioELECTRA: Pretrained Biomedical Text Encoder Using Discriminators," Proceedings of the 20th Workshop on Biomedical Language Processing, pp. 143-154, 2021.
Vaswani, Ashish, et al., "Attention Is All You Need," Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.
Min, Jae-Ok, et al., "Korean Machine Reading Comprehension for Patent Consultation Using BERT," KIPS Transactions on Software and Data Engineering, Vol. 9, No. 4, pp. 145-152, 2020.
Park, Joo-Yeon, et al., "Improving Recognition of Patent's Claims with Deep Neural Networks," Proceedings of the Korea Information Processing Society Conference, Vol. 27, No. 1, pp. 500-503, 2020.
Lee, Jieh-Sheng; Hsiang, Jieh, "Patent Classification by Fine-tuning BERT Language Model," World Patent Information, Vol. 61, 101965, 2020.
Rust, Phillip, et al., "How Good Is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models," arXiv preprint arXiv:2012.15613, 2020.
Sennrich, Rico; Haddow, Barry; Birch, Alexandra, "Neural Machine Translation of Rare Words with Subword Units," arXiv preprint arXiv:1508.07909, 2015.
Park, Sungjoon, et al., "KLUE: Korean Language Understanding Evaluation," arXiv preprint arXiv:2105.09680, 2021.
Park, Jinwoo, et al., "Patent Tokenizer: a research on the optimization of tokenize for the Patent sentence using the Morphemes and SentencePiece," Annual Conference on Human and Language Technology, pp. 441-445, 2020.