LLM-Based Content Analysis of Sampling Methodology in Library and Information Science Research: A Cross-Model Comparison of Coding Performance by Task Type

  • Journal of Korean Library and Information Science Society
  • Abbr : JKLISS
  • 2026, 57(1), pp.413~438
  • DOI : 10.16981/kliss.57.1.202603.413
  • Publisher : Korean Library And Information Science Society
  • Research Area : Interdisciplinary Studies > Library and Information Science
  • Received : February 21, 2026
  • Accepted : March 22, 2026
  • Published : March 30, 2026

Sein Min 1 Eungi Kim 2

1Keimyung University
2Department of Library and Information Science, Keimyung University

Accredited

ABSTRACT

This study examines, across multiple dimensions, the conditions under which large language model (LLM)-based content analysis can be applied, by task type, to the analysis of research methods in library and information science. To this end, 100 survey and interview studies published between 2020 and 2024 in four major Korean library and information science journals were selected using stratified random sampling. The coding results of one human coder and four large language models (Claude-3.5-Haiku, GPT-4o-Mini, Gemini-2.0-Flash, and Grok-4-Latest) were compared across twelve dimensions of sampling methodology. The results show relatively high agreement on dimensions that could be classified from explicit criteria, but consistently lower agreement on dimensions requiring inferential or evaluative judgment. These findings suggest that the performance of LLM-based automated coding depends more on the decision structure of the task and the explicitness of the available information than on model capability itself. The scope of LLM application should therefore be assessed more carefully in terms of task type and judgment characteristics, and human-AI hybrid validation strategies should be designed systematically.
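Human-LLM coding comparisons of the kind described above are typically quantified with a chance-corrected agreement statistic. As an illustration only (the abstract does not specify which statistic the authors used, and the labels below are hypothetical), the following sketch computes Cohen's kappa between a human coder and one model on a single coding dimension:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of items where both coders assigned the same label
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if the two coders labeled independently
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels for one dimension (e.g. probability vs. non-probability sampling)
human = ["prob", "non-prob", "non-prob", "prob", "non-prob"]
llm   = ["prob", "non-prob", "prob",     "prob", "non-prob"]
print(round(cohens_kappa(human, llm), 3))  # → 0.615
```

A kappa near 1 indicates agreement well beyond chance; values drop sharply on dimensions requiring inferential judgment, which is consistent with the pattern the study reports.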
