
Enhancing LoRA Fine-tuning Performance Using Curriculum Learning

  • Journal of The Korea Society of Computer and Information
  • Abbr: JKSCI
  • 2024, 29(3), pp. 43-54
  • DOI: 10.9708/jksci.2024.29.03.043
  • Publisher: The Korean Society of Computer and Information
  • Research Area: Engineering > Computer Science
  • Received: February 5, 2024
  • Accepted: March 4, 2024
  • Published: March 29, 2024

Daegeon Kim¹, Namgyu Kim¹

¹Kookmin University


ABSTRACT

Research on language models has grown rapidly in recent years, and Large Language Models have achieved remarkable results across a wide range of tasks. Their practical application, however, is limited by the substantial resources and costs required to use them. This has drawn attention to methods for utilizing models effectively within a given resource budget. Curriculum Learning, a methodology that orders training data by difficulty and trains on it sequentially, is one such approach, but existing difficulty measures tend to be complex or not universally applicable. In this study, we therefore propose a data heterogeneity-based Curriculum Learning methodology that measures data difficulty using reliable prior information and can be applied easily across diverse tasks. To evaluate the proposed methodology, we conducted experiments on 5,000 specialized documents in the field of information and communication technology and 4,917 documents in the field of healthcare. The results confirm that the proposed methodology outperforms conventional fine-tuning in classification accuracy under both LoRA fine-tuning and full fine-tuning.
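A minimal sketch of the general pattern the abstract describes: order training examples from easy to hard, then fine-tune only LoRA adapters. The paper's heterogeneity-based difficulty measure is not specified here, so the per-example difficulty values, the base model name, and the LoRA hyperparameters below are illustrative assumptions; the setup uses the Hugging Face peft and transformers libraries, which may differ from the authors' implementation.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

# Toy corpus: each example carries a precomputed difficulty score.
# In the paper this score comes from a heterogeneity-based measure
# built on prior information; the values here are assumed placeholders.
train_data = [
    {"text": "easy sentence", "label": 0, "difficulty": 0.1},
    {"text": "harder sentence", "label": 1, "difficulty": 0.7},
    {"text": "medium sentence", "label": 0, "difficulty": 0.4},
]

def build_curriculum(dataset):
    """Curriculum Learning: order examples from easy to hard."""
    return sorted(dataset, key=lambda ex: ex["difficulty"])

# Wrap a base classifier with LoRA adapters so that only the low-rank
# adapter matrices are updated during fine-tuning.
base = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased",  # assumed base model, not the paper's
    num_labels=2,
)
lora = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,              # low-rank dimension (assumed hyperparameter)
    lora_alpha=16,
    lora_dropout=0.1,
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # confirms only adapters are trainable

curriculum = build_curriculum(train_data)
# `curriculum` would then be fed to a standard training loop with
# shuffling disabled, so the easy-to-hard ordering is preserved.
```

The key design choice is that the curriculum only changes the order in which examples are presented; the LoRA fine-tuning itself is unmodified, which is why the same ordering can also be applied to full fine-tuning as reported in the abstract.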


This paper was written with support from the National Research Foundation of Korea.