
DeNERT: Named Entity Recognition Model using DQN and BERT

  • Journal of The Korea Society of Computer and Information
  • Abbr : JKSCI
  • 2020, 25(4), pp.29-35
  • DOI : 10.9708/jksci.2020.25.04.029
  • Publisher : The Korean Society Of Computer And Information
  • Research Area : Engineering > Computer Science
  • Received : February 13, 2020
  • Accepted : April 9, 2020
  • Published : April 30, 2020

Sung-Min Yang 1, Ok-Ran Jeong 1

1 Gachon University


ABSTRACT

In this paper, we propose DeNERT, a new named entity recognition model. Recently, natural language processing research has made active use of language representation models pre-trained on large corpora. In particular, named entity recognition, one of the subfields of natural language processing, relies on supervised learning, which requires a large training dataset and substantial computation. Reinforcement learning, by contrast, learns through trial-and-error experience without initial data and is closer to the way humans learn than other machine learning methodologies; it has so far seen little application to natural language processing, being used mostly in simulation environments such as Atari games and AlphaGo. BERT is a general-purpose language model developed by Google, pre-trained on a large corpus at great computational cost; it currently shows high performance across natural language processing research and high accuracy on many downstream tasks. We propose DeNERT, a named entity recognition model that combines two deep learning models, DQN and BERT. The proposed model is trained by building a reinforcement learning environment on top of the language representations that are the strength of the general-purpose language model. The resulting DeNERT model achieves faster inference and higher performance with a small amount of training data. We also validate our model's named entity recognition performance through experiments.
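The paper does not publish its training code, but the idea of casting tagging as a reinforcement learning problem can be sketched in miniature. In the toy below, one-hot vectors stand in for BERT contextual embeddings, the agent picks one entity tag per token with an epsilon-greedy linear Q-function, and a per-token reward of +1/-1 drives a one-step update. All names, the tag set, and the reward scheme are illustrative assumptions, not the authors' actual environment.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, TAGS = 10, 3  # assumed toy tag set: 0=O, 1=B-PER, 2=B-LOC

def embed(tok):
    """Stand-in for a BERT embedding: a one-hot vector per token id."""
    v = np.zeros(VOCAB)
    v[tok] = 1.0
    return v

# toy labelled sentence as (token_id, gold_tag) pairs
sentence = [(1, 1), (2, 0), (3, 2), (4, 0)]

W = np.zeros((TAGS, VOCAB))  # linear Q-function: Q(s, a) = W[a] @ s
alpha, eps = 0.1, 0.2        # learning rate, exploration rate

for _ in range(500):
    for tok, gold in sentence:
        s = embed(tok)
        q = W @ s
        # epsilon-greedy action selection over the tag set
        a = int(rng.integers(TAGS)) if rng.random() < eps else int(np.argmax(q))
        r = 1.0 if a == gold else -1.0  # assumed per-token reward
        # one-step update with gamma = 0: a contextual-bandit simplification
        # of the DQN update, since the paper's environment is not published
        W[a] += alpha * (r - q[a]) * s

greedy = [int(np.argmax(W @ embed(tok))) for tok, _ in sentence]
print(greedy)  # recovers the gold tags [1, 0, 2, 0]
```

In the full model described by the abstract, the one-hot lookup would be replaced by contextual BERT representations and the linear Q-function by a deeper network, but the interaction loop (state = token representation, action = tag, reward = correctness signal) is the same shape.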
