
The Legal Responsibility Framework of Generative AI Hallucinations and Normative Design for the Protection of Fundamental Rights

  • Legal Theory & Practice Review
  • Abbr : LTPR
  • 2026, 14(1), pp.49~88
  • Publisher : The Korea Society for Legal Theory and Practice Inc.
  • Research Area : Social Science > Law
  • Received : January 31, 2026
  • Accepted : February 23, 2026
  • Published : February 28, 2026

Lee Hyeong Seok 1

1 Woosuk University

Accredited

ABSTRACT

This article reconceptualizes hallucinations in generative artificial intelligence as structural and institutional risks rather than mere technical errors, and examines their implications for fundamental rights and legal responsibility. Generative AI operates through probabilistic language generation without mechanisms for factual verification, which enables the production of non-existent statutes, judicial precedents, or statistical data presented as credible information. Such hallucinations undermine informational reliability and may result in infringements of constitutionally protected rights, including reputation, informational self-determination, and freedom of expression. In particular, when generative AI is utilized in the public sector and professional domains not only as a decision-support tool but as a de facto basis for decision-making, technological risks are transformed into institutional risks and ultimately crystallize into constitutional risks. Drawing on comparative case studies, including Mata v. Avianca, this article identifies gaps in responsibility attribution and deficiencies in existing verification obligations. It argues that regulatory responses should focus on ex post liability mechanisms and the clarification of human final decision-making responsibility, while respecting the prohibition of prior restraint and the principle of proportionality. The article ultimately proposes a normative framework for legal responsibility that prioritizes the protection of fundamental rights in the era of generative artificial intelligence.


This paper was written with support from the National Research Foundation of Korea.