
Fine-tuned Korean Language Models for Sociolinguistic Studies

  • The Sociolinguistic Journal of Korea
  • Abbr.: 사회언어학
  • 2024, 32(3), pp. 41-64
  • Publisher: The Sociolinguistic Society of Korea
  • Research Area : Humanities > Linguistics
  • Received : August 12, 2024
  • Accepted : September 11, 2024
  • Published : September 30, 2024

Kang San Noh 1, Soo Yeon Kim 2, Hye-Won Choi 3, Hayeun Jang 4, Sanghoun Song 1

1 Korea University
2 Sejong University
3 Ewha Womans University
4 Sungkyunkwan University


ABSTRACT

This paper tests the capacity of deep-learning-based Korean language models to learn and detect social registers embedded in speech data, specifically age, gender, and regional dialect. A comprehensive understanding of linguistic phenomena requires contextualizing speech by speakers' age, gender, and geographic background, alongside the processing of syntactic structure. To bridge the gap between human language understanding and model processing, we fine-tuned three representative Korean language models—KR-BERT, KoELECTRA-base, and KLUE-RoBERTa-base—on transcribed data from 4,000 hours of speech by middle-aged and elderly Korean speakers. The findings reveal that KoELECTRA-base outperformed the other two models across all social registers, likely owing to its larger vocabulary and parameter count. Among the dialects, the Jeju dialect was inferred with the highest accuracy, which we attribute to its distinctiveness making it easier for the models to detect. In addition to describing the fine-tuning process, we have made our fine-tuned models publicly available to support researchers interested in Korean computational sociolinguistics.


This paper was written with support from the National Research Foundation of Korea.