
Zero-shot Korean Sentiment Analysis with Large Language Models: Comparison with Pre-trained Language Models

  • Journal of The Korea Society of Computer and Information
  • Abbr : JKSCI
  • 2024, 29(2), pp.43-50
  • DOI : 10.9708/jksci.2024.29.02.043
  • Publisher : The Korean Society Of Computer And Information
  • Research Area : Engineering > Computer Science
  • Received : January 16, 2024
  • Accepted : February 6, 2024
  • Published : February 29, 2024

Soon-chan Kwon¹, Dong-Hee Lee¹, Beakcheol Jang²

¹Graduate School of Information, Yonsei University
²Yonsei University


ABSTRACT

This paper evaluates the Korean sentiment analysis performance of large language models such as GPT-3.5 and GPT-4 in a zero-shot setting via the ChatGPT API, comparing them with pre-trained Korean models such as KoBERT. Experiments on Korean sentiment analysis datasets from several domains, including movies, gaming, and shopping, validate the effectiveness of these models. The results show that the LMKor-ELECTRA model achieved the highest overall F1-score, while GPT-4 attained particularly high accuracy and F1-scores on the movie and shopping datasets. This indicates that large language models can perform Korean sentiment analysis effectively without prior training on specific datasets, suggesting their potential for zero-shot learning. However, comparatively lower performance on some datasets highlights the limitations of the zero-shot methodology. This study explores the feasibility of using large language models for Korean sentiment analysis and provides significant implications for future research in this area.
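The zero-shot setup described in the abstract can be illustrated with a short sketch. The following Python snippet shows one way to prompt a chat model for Korean sentiment classification through the OpenAI API; the prompt wording, model name, temperature, and binary label set are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch of zero-shot Korean sentiment classification via the
# OpenAI chat API. The prompt, model choice, and label set are assumptions
# for illustration, not the exact setup used in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_sentiment(review: str, model: str = "gpt-4") -> str:
    """Ask the model for a binary sentiment label without any fine-tuning."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic output for evaluation
        messages=[
            {"role": "system",
             "content": "You are a sentiment classifier for Korean text. "
                        "Answer with exactly one word: positive or negative."},
            {"role": "user", "content": review},
        ],
    )
    return response.choices[0].message.content.strip().lower()

# Example: a Korean movie review ("The acting was great and time flew by")
print(classify_sentiment("배우들 연기가 훌륭하고 시간 가는 줄 몰랐어요."))
# expected output: "positive"
```

Labels collected this way can then be scored against each dataset's gold labels with standard accuracy and F1 metrics, allowing a direct comparison with fine-tuned models such as KoBERT and LMKor-ELECTRA.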


This paper was written with support from the National Research Foundation of Korea.