A Study on the Efficacy of Edge-Based Adversarial Example Detection Model: Across Various Adversarial Algorithms

  • Journal of The Korea Society of Computer and Information
  • Abbr : JKSCI
  • 2024, 29(2), pp.31-41
  • DOI : 10.9708/jksci.2024.29.02.031
  • Publisher : The Korean Society Of Computer And Information
  • Research Area : Engineering > Computer Science
  • Received : January 11, 2024
  • Accepted : February 7, 2024
  • Published : February 29, 2024

Jaesung Shim 1, Kyuri Jo 1

1 Chungbuk National University


ABSTRACT

Deep learning models show excellent performance in computer vision tasks such as image classification and object detection, and are widely used in real industrial settings. Recently, it has been pointed out that these deep learning models are vulnerable to adversarial examples, and research on improving their robustness has been actively conducted. An adversarial example is an image to which small noise has been added to induce misclassification, and it can pose a significant threat when a deep learning model is deployed in a real environment. In this paper, we evaluate the robustness of classification models trained on edge images (edge-learning models) and the performance of an adversarial example detection model built on them, against adversarial examples generated by various algorithms. In the robustness experiments, the baseline classification model showed about 17% accuracy against the FGSM algorithm, while the edge-learning models maintained accuracy in the 60-70% range; against the PGD, DeepFool, and CW algorithms, the baseline model showed accuracy in the 0-1% range, while the edge-learning models maintained 80-90%. In the adversarial example detection experiments, a high detection rate of 91-95% was confirmed for all four algorithms (FGSM, PGD, DeepFool, and CW). By presenting the possibility of defending against various adversarial algorithms, this study is expected to improve the safety and reliability of deep learning models in the many industries that use computer vision.
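The abstract describes adversarial examples as images with small added noise that induces misclassification; FGSM is the simplest of the evaluated attacks. The following is a minimal, hypothetical PyTorch sketch of FGSM for illustration only (it is not the paper's code; the model, input range, and epsilon value are assumptions):

```python
# Hypothetical FGSM sketch in PyTorch. Assumes `model` is a trained
# classifier and inputs are scaled to [0, 1]; epsilon is illustrative.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Generate FGSM adversarial examples: x_adv = x + eps * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Perturb each pixel one step in the direction that increases the loss.
    x_adv = images + epsilon * images.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

The "edge-learning" models are classifiers trained on edge-extracted versions of the input images. The abstract does not specify the edge-extraction operator; the sketch below assumes OpenCV's Canny detector purely as an example, with illustrative thresholds:

```python
# Hedged sketch of edge-based preprocessing. The Canny operator and its
# thresholds are assumptions; the paper's actual operator is not stated
# in the abstract.
import cv2
import numpy as np

def to_edge_map(image_rgb: np.ndarray) -> np.ndarray:
    """Map an RGB uint8 image (H, W, 3) to a binary Canny edge map (H, W)."""
    gray = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2GRAY)
    return cv2.Canny(gray, threshold1=100, threshold2=200)
```

The apparent intuition, consistent with the reported results, is that small adversarial perturbations alter pixel intensities far more than object contours, so a model trained on edge maps sees a representation that is comparatively invariant to the attack.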


This paper was written with support from the National Research Foundation of Korea.