
Minimization of Performance Degrading in Lightweight and Quantized Super-Resolution Models Through Feature-based Knowledge Distillation

  • Journal of The Korea Society of Computer and Information
  • Abbr : JKSCI
  • 2025, 30(12), pp.37~49
  • Publisher : The Korean Society Of Computer And Information
  • Research Area : Engineering > Computer Science
  • Received : November 11, 2025
  • Accepted : December 10, 2025
  • Published : December 31, 2025

Ho-min Jung¹, Tae-Young Lee¹, Byung-In Choi¹

¹Hanwha Systems


ABSTRACT

This study proposes a knowledge distillation (KD) method that minimizes the performance degradation caused by model lightweighting and quantization in super-resolution (SR) tasks. The method leverages local and global feature information simultaneously to maintain detail restoration performance, and the network is optimized for edge devices for validation. At the local level, a spatial L1 loss preserves feature information such as boundaries, textures, and fine patterns. At the global level, a 2D FFT-based frequency transformation reflects spatial characteristics and emphasizes high-frequency components. Considering both semantic context and spatial structure in this way preserves fine details and structural consistency during the SR process. For verification, the network was optimized for real-time operation on edge devices based on a performance comparison across different activation functions, and the local/global feature-based KD strategy was applied during both initial training and quantization-aware training (QAT) to minimize performance loss. The optimized network improved inference speed by more than 7% on edge devices compared to the baseline. In terms of PSNR, the proposed method limited performance degradation to 0.12%, whereas conventional QAT-based quantized models exhibited approximately 1.15% degradation. Thus, high-quality SR can be achieved even with lightweight models.
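As a rough illustration of the loss design described in the abstract, the PyTorch sketch below combines a spatial L1 term over teacher and student feature maps (local level) with an L1 term over their 2D FFT magnitude spectra (global level). The function name `kd_feature_loss`, the weights `alpha` and `beta`, and the use of magnitude spectra are assumptions for illustration; the abstract does not specify the exact formulation.

```python
import torch
import torch.nn.functional as F

def kd_feature_loss(student_feat: torch.Tensor,
                    teacher_feat: torch.Tensor,
                    alpha: float = 1.0,
                    beta: float = 1.0) -> torch.Tensor:
    """Minimal sketch of a local/global feature-based KD loss.

    student_feat, teacher_feat: feature maps of shape (N, C, H, W).
    alpha, beta: hypothetical weights; the paper's actual weighting
    is not given in the abstract.
    """
    # Local level: spatial L1 loss on feature maps, intended to
    # preserve boundaries, textures, and fine patterns.
    local_loss = F.l1_loss(student_feat, teacher_feat)

    # Global level: 2D FFT over the spatial dimensions, comparing
    # magnitude spectra to emphasize high-frequency components.
    student_spec = torch.fft.fft2(student_feat, norm="ortho").abs()
    teacher_spec = torch.fft.fft2(teacher_feat, norm="ortho").abs()
    global_loss = F.l1_loss(student_spec, teacher_spec)

    return alpha * local_loss + beta * global_loss
```

Under the described training scheme, such a loss would be applied between matched teacher and student feature maps during both initial training and QAT, alongside the usual SR reconstruction loss.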
