Ethical issues around AI systems fundamentally arise when humans delegate tasks to them. AI systems handle the delegated tasks within their own capabilities, but the outcomes affect humans. However, the interests pursued by humans and by AI systems may differ: while humans aim to maximize their own interests in an ethical manner, the goals of an AI system may not be fully compatible with this. Taking this into account, this study applied agency theory to empirically examine the relationship between humans and AI systems. According to this study, users of AI systems are unaware of ethical issues concerning AI because they lack information, knowledge, and awareness in this area. Once they become aware of these ethical issues, however, they come to doubt the ability of AI systems to solve ethical problems and expect that AI systems will avoid responsibility when such issues occur. The analysis nevertheless shows that these problems can be mitigated by providing users with specific information on the ethical capabilities and skills of an AI system before they agree to its terms and conditions of service. Users also need to monitor closely and continuously how effectively AI systems address ethical issues. In addition, this study defined “Trustworthy AI” as AI systems with which humans can be satisfied. According to agency theory, the agency problems known as adverse selection and moral hazard negatively affect the principal’s utility. In this context, the study found that a lack of information, knowledge, and awareness about ethical AI issues, doubts about ethical problem-solving skills, and moral hazard in the form of avoided responsibility all undermine trust. It thereby verified that, in this context, trust can be applied in place of utility as used in economics. The study also shows that both the general public and IT experts have a low level of awareness of matters relating to AI ethics.
However, once issues related to AI ethics occur, users’ awareness of them improves, and users begin to distrust AI systems that violate their expectations. Considering cases that have already occurred and the expectancy violation effect, this study suggests that, for AI systems to gain and maintain users’ trust, their developers should proactively research and develop tools and solutions to prevent and minimize harms. AI systems should be held accountable in order to prevent a fall in users’ trust, whether or not harms occur.