“AI for Humanity”: Fairness and the Optimization of Bias in AI

  • 인문논총 (Journal of Human Studies)
  • 2024, vol. 64, pp. 5-15
  • DOI : 10.33638/JHS.64.1
  • Publisher : Institute for Human Studies, Kyungnam University
  • Research Area : Humanities > Other Humanities
  • Received : April 30, 2024
  • Accepted : June 4, 2024
  • Published : June 30, 2024

Wonsup Jung 1

1Kyungnam University

Accredited

ABSTRACT

This article discusses two cases of intervention in AI technology, one human and one algorithmic. The first is the ‘Naver case,’ in which the Korean government accused Naver of manipulating its search algorithms. The case raises a question of computer engineers’ professional ethics: whether to ‘intervene’ in existing algorithms to obtain ‘better results.’ The second is the case of ‘Yiruda,’ a Korean chatbot that produced serious hate speech against socially disadvantaged groups, raising concerns about abuses of artificial intelligence. Finally, this paper notes that despite technical efforts in the process of utilizing artificial intelligence, bias cannot be entirely removed. To minimize bias, I argue that active feedback through continuous monitoring of AI outputs will be required in addition to technical efforts such as refining training data. Furthermore, the need to optimize bias, beyond simply reducing it, is suggested on the basis of John Rawls’s ‘overlapping consensus.’


This paper was written with support from the National Research Foundation of Korea.