
Paper Details

Analysis of Issues and Prospects with the Right to Explanations as a Control Measure for Algorithm-Based Automated Decision-Making in EU-GDPR

민주법학 (Democratic Legal Studies), no. 69, 2019, pp. 277-298.   http://dx.doi.org/10.15756/dls.2019..69.277

Modern information and communication technologies (ICTs) are bringing huge changes to the paradigms of human society through the emergence and development of artificial intelligence (AI). The core technologies behind AI are Big Data and algorithms. Algorithmic systems built on Big Data rely on elaborate processes that require data as input. Big Data and algorithm technologies are used most heavily by corporations and are also actively deployed for political purposes, for example by political parties. If the designer of an algorithmic system bases data entry and manipulation solely on his or her own tendencies or world views, automated decision-making and its outcomes will display bias, discrimination, and unfairness. In such a case, the algorithm is a product of ideology and necessarily has political attributes. It cannot be said that outcomes produced by algorithm-based automated decision-making are entirely neutral. Even when such outcomes lack objectivity and fairness, their influence extends to all sectors of society, including politics, the economy, and culture, and can change even the preferences, judgments, and actions of users. Today, algorithms perform important functions and roles in political, social, economic, and cultural interactions and in decision-making processes at the national and social levels, and their influence continues to grow. They are even becoming an entity of huge political power themselves. There is thus a need for social discussion and legal measures to explore ways of effectively controlling the discrimination, prejudice, exclusion, and unfairness that result from biased data and from AI algorithms trained on such data. The EU General Data Protection Regulation (GDPR) was introduced and came into effect in May 2018, creating an opportunity to expand normative issues and discussions about the transparency and accountability of algorithms.
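The abstract's claim that bias in training data propagates directly into automated decisions can be illustrated with a toy sketch (not from the paper; the group labels and figures below are hypothetical). A naive per-group "model" trained on skewed historical loan decisions simply reproduces the skew:

```python
# Toy illustration: an automated decision rule learned from biased
# historical data reproduces that bias. All data here is made up.
from collections import defaultdict

# Hypothetical historical decisions: (group, approved?)
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

# "Train": record the outcomes observed for each group.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, denied]
for group, approved in history:
    counts[group][0 if approved else 1] += 1

def decide(group):
    approved, denied = counts[group]
    # The automated decision simply mirrors the historical majority,
    # so a skew in the input data becomes a skew in the outcomes.
    return approved > denied

print(decide("A"))  # group A is systematically approved
print(decide("B"))  # group B is systematically denied
```

The "model" here is deliberately trivial, but the mechanism is the same one the abstract describes: the decision procedure is only as neutral as the data and design choices behind it.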
The EU-GDPR stipulates that data subjects have the right to refuse profiling and the right not to be subjected to automated decision-making. It also grants them the right to access the information collected about them and to be notified of the personal information being collected. Interpretation of these provisions, together with the right to object, can give rise to a right to explanations about algorithm-based automated decision-making. This right to explanations holds great significance in that it gives users an opportunity to request explanations of profiling-based algorithmic decisions and emphasizes the importance of human intervention in the algorithm design process. It also opens up the normative possibility that the data subject may compel intervention in the algorithm design process, even though that process is part of a corporation's confidential business information, and thus plays an essential role in addressing the fairness, transparency, and discrimination issues of algorithms.
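What honoring such a right to explanations might look like operationally can be sketched as follows. This is a hedged, hypothetical design (the names `Decision`, `automated_decision`, and `contest` are illustrative, not from the GDPR or the paper): every automated decision carries the factors that produced it, and a data subject's objection routes the case to human review, in the spirit of GDPR Article 22:

```python
# Hypothetical sketch of an explainable automated-decision record:
# each decision exposes its inputs and rule, and contesting it
# triggers human intervention instead of a purely automated outcome.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    factors: dict          # human-readable reasons for the outcome
    contested: bool = False

def automated_decision(income, debt):
    approved = income - debt > 0          # toy scoring rule
    return Decision(
        outcome="approved" if approved else "denied",
        factors={"income": income, "debt": debt,
                 "rule": "income - debt > 0"},
    )

def contest(decision):
    # An objection by the data subject flags the decision for
    # human review rather than leaving it fully automated.
    decision.contested = True
    return "queued for human review"

d = automated_decision(income=300, debt=500)
print(d.outcome, d.factors["rule"])
print(contest(d))
```

The point of the sketch is the record-keeping, not the scoring rule: an explanation requires that the factors behind each decision be retained in a form the affected person can inspect and challenge.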


