
A Development Architecture Research of Intelligent Object Recognition and Voice Service for the Visually Impaired

  • Journal of Knowledge Information Technology and Systems
  • Abbr : JKITS
  • 2018, 13(4), pp.441-450
  • DOI : 10.34163/jkits.2018.13.4.004
  • Publisher : Korea Knowledge Information Technology Society
  • Research Area : Interdisciplinary Studies > Interdisciplinary Research
  • Published : August 31, 2018

Kim Chul Jin 1, Myung Soo Park 1, Min Hwan Kim 1

1 Inha Technical College

Accredited

ABSTRACT

Advances in information technology have made daily life more convenient and delivered cultural benefits through a wide range of IT services. However, because of limited accessibility, people with disabilities often cannot use these services. In this paper, we therefore propose an object recognition and voice service architecture that improves the accessibility of information technology services for the disabled, in particular enabling the visually impaired to receive recognition and voice services for objects. A visually impaired person captures an object of interest with a mobile device, and information about the recognized object is delivered as speech. For recognition in the mobile environment, the architecture uses MobileNet, a model pretrained with deep learning. The object recognition service works with Amazon's Alexa service, which provides speech recognition and speech synthesis, to deliver voice output for the recognized objects. On this basis, we propose a base architecture that provides services in three aspects: first, a voice service for the recognized object; second, a personalized voice service for the recognized object; and finally, a knowledge service for the recognized object using the Alexa service.
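The three service aspects named in the abstract can be sketched as a simple pipeline: an on-device recognition step (MobileNet in the paper) produces a label, which then feeds a basic voice response, a personalized response, and a knowledge query handed off to Alexa. The sketch below is illustrative only; all names are hypothetical, the recognizer is a stub standing in for real MobileNet inference, and the Alexa hand-off is reduced to phrasing a query string.

```python
# Hypothetical sketch of the paper's three-aspect service architecture.
# None of these names come from the paper; the recognizer stub stands in
# for on-device MobileNet inference, and the knowledge service merely
# phrases the query that would be forwarded to Alexa.

from dataclasses import dataclass


@dataclass
class Recognition:
    label: str         # e.g. top-1 class name from a MobileNet inference
    confidence: float  # softmax probability of that class


def recognize(image_id: str) -> Recognition:
    # Stand-in for running a pretrained MobileNet model on a camera frame.
    fake_results = {"img-001": Recognition("coffee cup", 0.91)}
    return fake_results.get(image_id, Recognition("unknown", 0.0))


def basic_voice_service(rec: Recognition) -> str:
    # Aspect 1: speak the recognized object's name to the user.
    return f"This is a {rec.label}."


def personalized_voice_service(rec: Recognition, notes: dict) -> str:
    # Aspect 2: append the user's own stored note for this object, if any.
    base = basic_voice_service(rec)
    extra = notes.get(rec.label)
    return f"{base} {extra}" if extra else base


def knowledge_voice_service(rec: Recognition) -> str:
    # Aspect 3: phrase a knowledge query to be answered via Alexa.
    return f"Alexa, what is a {rec.label}?"


if __name__ == "__main__":
    rec = recognize("img-001")
    print(basic_voice_service(rec))
    print(personalized_voice_service(rec, {"coffee cup": "It is your blue mug."}))
    print(knowledge_voice_service(rec))
```

In a real deployment the recognizer would run a TensorFlow Lite MobileNet model on the device, and the returned strings would be synthesized to speech through the Alexa voice service rather than printed.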
