
The Boundary Between AI Training Data and Personal Information Rights: Governance and Control Challenges in the Case of “A.”, an AI Assistant by SK Telecom

  • Civil Society and NGO
  • 2025, 23(1), pp.195~245
  • Publisher : The Third Sector Institute
  • Research Area : Social Science > Social Science in general > Other Social Science in general
  • Received : April 9, 2025
  • Accepted : May 10, 2025
  • Published : May 31, 2025

Sung Hee Yoo 1, Suh Hyo-Joong 2

1 Seoul Comprehensive Support Center for Elderly Care Workers
2 The Catholic University of Korea

Accredited

ABSTRACT

The rapid expansion of generative artificial intelligence (AI) has significantly increased the demand for large-scale, high-quality training data, raising critical legal and ethical concerns regarding the use of personal information. Telecommunication-based AI services, such as SK Telecom’s “A.” assistant, have structural access to sensitive data, including call logs and voice content. This study examines how such data is utilized for AI training, highlighting challenges related to user control, third-party rights, and algorithmic transparency. Through a case analysis of A. and a comparison with the Stack Overflow incident, this study demonstrates how even publicly available datasets can cause harm when proper consent, attribution, and legal compliance are absent. Existing legal frameworks, such as Korea’s Personal Information Protection Act (PIPA), are found to be inadequate in addressing AI-specific risks, particularly those concerning high-risk data types. As a normative response, this paper explores the applicability of governance principles derived from the GNU General Public License (GPL), including openness, shared responsibility, and continuity of rights. The findings indicate a need for hybrid governance models that integrate legal, technical, and ethical mechanisms to ensure transparency, accountability, and data sovereignty in the era of AI.
