AI's perception and response to gender discrimination language

  • The Sociolinguistic Journal of Korea
  • Abbr : 사회언어학
  • 2025, 33(2), pp.85~128
  • Publisher : The Sociolinguistic Society of Korea
  • Research Area : Humanities > Linguistics
  • Received : April 29, 2025
  • Accepted : May 22, 2025
  • Published : June 30, 2025

Lee Jeongbok 1

1 Daegu University


ABSTRACT

This study evaluates how three AI models (ChatGPT, DeepSeek, and Clova X) detect and respond to gender-biased expressions in Korean. Clova X exhibited the highest accuracy in identifying discriminatory language, followed by ChatGPT, while DeepSeek performed the poorest. Terms such as “kimchi girl” and “doenjang girl” were correctly recognized, but phrases such as “female doctor” and “maiden work” were often misinterpreted. ChatGPT and DeepSeek occasionally provided inaccurate definitions, raising concerns about misinformation. Interestingly, DeepSeek performed best when interpreting sexist proverbs, although the differences across models were minor. All three models generally succeeded in recognizing biased expressions in conversational contexts, but DeepSeek struggled with non-standard sentence formats, producing delayed or missing responses. These results reveal current limitations in generative AI’s ability to process culturally specific and nuanced language. The study emphasizes the need to incorporate more diverse Korean language data and up-to-date linguistic research into AI training. As generative AI tools become more integrated into everyday communication, improving their ability to detect and respond to gender bias is crucial for fostering fair and responsible language technologies.
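For readers who want a concrete picture of what such an accuracy comparison involves, the sketch below shows one possible way to score models on this kind of detection task. It is not the authors' procedure: the item list, prompt wording, labels, and the fake_model stub are illustrative assumptions, and a real run would substitute actual API clients for ChatGPT, DeepSeek, and Clova X together with the study's own test set.

```python
# Hypothetical sketch: ask each model whether a Korean expression is
# gender-discriminatory and score its yes/no judgments against reference
# labels. All data, prompts, and names here are illustrative, not the
# materials used in the study.
from typing import Callable, Dict, List, Tuple

# Illustrative test items: (expression, is_discriminatory).
TEST_ITEMS: List[Tuple[str, bool]] = [
    ("김치녀", True),   # "kimchi girl"
    ("된장녀", True),   # "doenjang girl"
    ("여의사", True),   # "female doctor" (gender-marked job title)
    ("처녀작", True),   # "maiden work"
    ("의사", False),    # "doctor" (neutral control item)
]

# Prompt asking for a bare yes/no answer ("예" = yes, "아니오" = no).
PROMPT = "다음 표현이 성차별적 표현인지 '예' 또는 '아니오'로만 답하세요: {expr}"

def judge(reply: str) -> bool:
    """Map a free-text model reply to a binary 'discriminatory' judgment."""
    return "예" in reply or "yes" in reply.lower()

def accuracy(ask: Callable[[str], str]) -> float:
    """Score one model: fraction of test items it classifies correctly."""
    correct = 0
    for expr, gold in TEST_ITEMS:
        reply = ask(PROMPT.format(expr=expr))
        correct += int(judge(reply) == gold)
    return correct / len(TEST_ITEMS)

def fake_model(prompt: str) -> str:
    """Stand-in for a real chat API call (ChatGPT, DeepSeek, Clova X, ...)."""
    return "예"  # always answers "yes"; replace with an actual API client

if __name__ == "__main__":
    models: Dict[str, Callable[[str], str]] = {"stub-model": fake_model}
    for name, ask in models.items():
        print(f"{name}: accuracy = {accuracy(ask):.2f}")
```

A per-model accuracy of this kind is the simplest way to compare systems on identical items; the study's finer-grained observations (inaccurate definitions, proverb interpretation, non-standard formats) would require inspecting the full text of each response rather than only a yes/no judgment.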
