
A Study on System Identification Using Deep Learning

  • Journal of Knowledge Information Technology and Systems
  • Abbr : JKITS
  • 2019, 14(4), pp.359-368
  • DOI : 10.34163/jkits.2019.14.4.005
  • Publisher : Korea Knowledge Information Technology Society
  • Research Area : Interdisciplinary Studies > Interdisciplinary Research
  • Received : June 10, 2019
  • Accepted : August 9, 2019
  • Published : August 31, 2019

Joung, Houng Kun 1 Wongeun Oh 2

1 Cheongju University
2 Sunchon National University

Accredited

ABSTRACT

This paper presents a study on system identification using deep learning for controller tuning of systems with time delay. Among existing identification-based tuning methods, the method proposed by Yuwana and Seborg (1982) controls systems with no or small time delay well, but because of the phase error introduced by the Padé approximation it overestimates the time delay when the delay is large and therefore cannot be applied to such systems. The trial-and-error method of Ziegler-Nichols, widely used in industry, has the disadvantage of being time-consuming. The process-reaction-curve method of Cohen-Coon tunes a controller in less time than the Ziegler-Nichols method, but it is applicable only to open-loop systems, not to closed-loop systems. To overcome these drawbacks, the method proposed by Suh has the advantage of being applicable even to closed-loop systems; it reduces the phase error that arises when the time delay is replaced by a Padé approximation by introducing control factors, yielding an optimal tuning of the controller. However, because the control factors must be set in proportion to the time-delay constant, the method is not analytical. This paper carries out a theoretical analysis of the phase error as an analytical way, using deep learning, to resolve the overestimation caused by the Padé approximation of the time delay.
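For illustration only (this sketch is not from the paper), the phase error the abstract attributes to the Padé approximation can be made concrete. A first-order Padé approximation replaces the delay e^{-θs} with (1 - θs/2)/(1 + θs/2); on the frequency axis the exact delay has phase -ωθ while the approximation has phase -2·arctan(ωθ/2), so the error is small for small ωθ but grows quickly as the delay gets large:

```python
import math

def pade_phase_error(omega, theta):
    """Phase error (rad) of the first-order Pade approximation
    e^{-theta*s} ~ (1 - theta*s/2) / (1 + theta*s/2) at frequency omega."""
    exact = -omega * theta                        # phase of e^{-j*omega*theta}
    approx = -2.0 * math.atan(omega * theta / 2.0)  # phase of the Pade rational
    return approx - exact

# negligible error for small omega*theta, large error for large omega*theta
small = pade_phase_error(0.1, 1.0)   # omega*theta = 0.1
large = pade_phase_error(3.0, 1.0)   # omega*theta = 3.0
print(small, large)
```

This growth of the phase error with ωθ is what motivates the control factors in Suh's method and, in this paper, their data-driven selection.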
Building on the earlier work of Suh (1984), a new optimal tuning method is presented that reduces the phase error to an optimal level by setting the control factors with a deep belief network, one of the deep learning algorithms. A simulation was then performed to compare the proposed method with the trial-and-error method of Ziegler-Nichols and the controller tuning method of Yuwana-Seborg, and the validity of the method suggested in this paper was verified accordingly.
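As background for the comparison baseline (not taken from the paper), the classic Ziegler-Nichols closed-loop rules compute PID settings from the ultimate gain Ku and ultimate period Pu found by the trial-and-error cycling experiment the abstract describes as time-consuming:

```python
def ziegler_nichols_pid(Ku, Pu):
    """Classic Ziegler-Nichols ultimate-cycle PID settings:
    Kp = 0.6*Ku, integral time Ti = Pu/2, derivative time Td = Pu/8."""
    Kp = 0.6 * Ku
    Ti = Pu / 2.0
    Td = Pu / 8.0
    return Kp, Ti, Td

# example: a loop that oscillates with Ku = 2.0 at period Pu = 4.0 s
Kp, Ti, Td = ziegler_nichols_pid(Ku=2.0, Pu=4.0)
```

The formulas themselves are cheap; the cost of the method lies in the experiments needed to find Ku and Pu, which is the practical drawback the proposed deep-learning tuning aims to avoid.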
