Korean Journal of Psychology: General (韓國心理學會誌), vol.40, no.4, 2021, pp.459-485
Kim, Hyo-eun
No abstract available.
(2021). , 5 13. Retrieved from https://www.korea.kr/common/download.do?fileId=195009613&tblKey=GMN
(2021). , 25. Retrieved from https://www.kci.go.kr/kciportal/ci/sereArticleSearch/ciSereArtiView.kci?sereArticleSearchBean.artiId=ART002555225
(2019). EU (GDPR). , 69, 277-298. doi:10.15756/dls.2019..69.277
(2019). , 38(4), 519-548. doi:10.22257/kjp.2019.12.38.4.519
(2020). , 1(129), 133-153. doi:10.15801/je.1.129.202006.133
Beaupré, M. G., & Hess, U. An ingroup advantage for confidence in emotion recognition judgments: The moderating effect of familiarity with the expressions of outgroup members. Personality and Social Psychology Bulletin, 32(1), 16-26.
Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems, 29, 4349-4357. doi:10.5555/3157382.3157584
Chiappa, S. Path-specific counterfactual fairness. Proceedings of the AAAI Conference on Artificial Intelligence, 33, 7801-7808.
Dave, P. (2018). Fearful of bias, Google blocks gender-based pronouns from new AI tool. Reuters, November, 27. Retrieved from https://www.reuters.com/article/us-alphabet-google-ai-gender-idUSKCN1NW0EF
Firth, J. R. (1957). A synopsis of linguistic theory, 1930-1955. Studies in Linguistic Analysis, Special Volume of the Philological Society, 1-32. Retrieved from http://cs.brown.edu/courses/csci2952d/readings/lecture1-firth.pdf
Joshi, N. (2019, June 19). 7 Types of Artificial Intelligence. Forbes Media LLC. Retrieved from https://www.forbes.com/sites/cognitiveworld/2019/06/19/7-types-of-artificialintelligence/#145fe100233e
Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., Hassabis, D., Clopath, C., Kumaran, D., & Hadsell, R. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences of the United States of America, 114(13), 3521-3526.
Menon, A., & Williamson, R. (2018). The cost of fairness in binary classification. In Conference on Fairness, Accountability and Transparency, 107-118. Retrieved from https://arxiv.org/abs/1705.09055
Microsoft (2021). Transparency note and use cases for Custom Neural Voice. Retrieved from https://docs.microsoft.com/en-us/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice
Narayanan, A. (2018). Translation tutorial: 21 fairness definitions and their politics. In Proc. Conf. Fairness Accountability Transp., 1170, New York, USA. Retrieved from https://fairmlbook.org/tutorial2.html
Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin UK. Retrieved from https://dl.acm.org/doi/10.5555/2029079
Prates, M. O. R., Avelar, P. H., & Lamb, L. C. Assessing gender bias in machine translation: A case study with Google Translate. Neural Computing and Applications, 32(10), 6363-6381.
Prince, A. E., & Schwarcz, D. (2019). Proxy discrimination in the age of artificial intelligence and big data. Iowa L. Rev., 105, 1257. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3347959
Prost, F., Qian, H., Chen, Q., Chi, E. H., Chen, J., & Beutel, A. (2019). Toward a better trade-off between performance and fairness with kernel-based distribution matching. arXiv preprint. Retrieved from https://arxiv.org/abs/1910.11779
Saleiro, P., Kuester, B., Hinkson, L., London, J., Stevens, A., Anisfeld, A., & Ghani, R. (2018). Aequitas: A bias and fairness audit toolkit. arXiv preprint. Retrieved from https://arxiv.org/abs/1811.05577
Sample, I. (2017, Nov. 5). Computer says no: Why making AIs fair, accountable and transparent is crucial. The Guardian. Retrieved from https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial
Skeem, J. L., & Lowenkamp, C. T. Risk, race, and recidivism: Predictive bias and disparate impact. Criminology, 54(4), 680-712.
Tan, S., Caruana, R., Hooker, G., & Lou, Y. (2017). Detecting bias in black-box models using transparent model distillation. arXiv preprint. Retrieved from https://arxiv.org/abs/1710.06169
Vries, T., Misra, I., Wang, C., & van der Maaten, L. (2019). Does object recognition work for everyone? Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 52-59. Retrieved from https://arxiv.org/abs/1906.02659
Zliobaite, I. (2015). A survey on measuring indirect discrimination in machine learning. arXiv preprint. Retrieved from https://arxiv.org/abs/1511.00148