
An advertisement method using inaudible sound of speaker

  • Journal of The Korea Society of Computer and Information
  • Abbr : JKSCI
  • 2015, 20(8), pp.7-13
  • Publisher : The Korean Society Of Computer And Information
  • Research Area : Engineering > Computer Science

Myoungbeom Chung 1

1 Sungkyul University

Accredited

ABSTRACT

Recently, various types of user-customized advertisement services have been offered through smart devices. Representative examples transmit advertisement information using the light of a smart TV screen or the audible sound of a smart TV. However, these services either require a specific action from the smart device user to receive the advertisement information or depend on the audible audio of the TV content. To overcome these weaknesses, we propose an advertisement method based on inaudible sound from a speaker and a smart device. The method delivers advertising content to the smart device user without any additional user action and without relying on the audible TV audio signal. The proposed method uses two high frequencies in the 18 kHz ~ 22 kHz range, which a smart TV speaker can emit but which is effectively inaudible to listeners. These frequencies are synthesized with the audio of the TV content to form a trigger signal that directs advertisements to the smart device. The smart device then analyzes the trigger signal, requests the advertisement content associated with that signal from a server, and displays the downloaded content to the user. Because the proposed method carries the trigger as high-frequency sound through the speaker, its main advantage is that it does not affect the audio signal of the TV content. To evaluate the efficacy of the proposed method, we developed an application that implements it and carried out an advertisement transmission experiment. The success rate of the transmission experiment was approximately 97%. Based on this result, we believe the proposed method will be a useful technique for introducing user-customized advertising services.
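The abstract outlines a receiver-side pipeline: detect a pair of near-ultrasonic tones mixed into the TV audio, map the tone pair to an advertisement, and fetch that advertisement from a server. Below is a minimal Python sketch of the detection step only, assuming the trigger is two simultaneous tones drawn from the 18 kHz ~ 22 kHz band; the function name, sampling rate, frame length, tone amplitudes, threshold, and the 19 kHz / 21 kHz tone pair are illustrative assumptions, not the implementation described in the paper.

# A minimal sketch of the receiver side, assuming the trigger is encoded as two
# simultaneous tones in the 18 kHz - 22 kHz band. All constants below are
# illustrative assumptions, not the authors' specification.
import numpy as np

SAMPLE_RATE = 44100      # typical smart-device microphone sampling rate
FRAME_LEN = 4096         # analysis window, about 93 ms at 44.1 kHz
BAND = (18000, 22000)    # near-ultrasonic band carrying the trigger

def detect_trigger_tones(frame, n_tones=2, threshold_ratio=10.0):
    """Return the strongest in-band frequencies if they clearly exceed the noise floor."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    in_band = (freqs >= BAND[0]) & (freqs <= BAND[1])
    band_mag, band_freqs = spectrum[in_band], freqs[in_band]
    noise_floor = np.median(spectrum) + 1e-12
    peaks = np.argsort(band_mag)[-n_tones:]          # strongest in-band FFT bins
    if np.all(band_mag[peaks] > threshold_ratio * noise_floor):
        # snap each detected peak to the nearest 100 Hz to absorb FFT bin error
        return sorted(int(round(float(f) / 100.0)) * 100 for f in band_freqs[peaks])
    return None

# Demo: one frame of "TV audio" with a 19 kHz + 21 kHz trigger mixed in at low level.
t = np.arange(FRAME_LEN) / SAMPLE_RATE
tv_audio = 0.3 * np.sin(2 * np.pi * 440 * t)         # stand-in for programme audio
trigger = 0.05 * (np.sin(2 * np.pi * 19000 * t) + np.sin(2 * np.pi * 21000 * t))
print(detect_trigger_tones(tv_audio + trigger))      # [19000, 21000]

On the transmitter side, the corresponding step would be mixing the chosen tone pair at low amplitude into the TV audio; the detected pair then acts as a key the smart device can send to the advertisement server to retrieve the matching content. How the tone pairs are actually assigned to advertisements is not specified in the abstract.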
