Korean Ironic Expression Detector 


Vol. 13, No. 3, pp. 148-155, Mar. 2024
https://doi.org/10.3745/TKIPS.2024.13.3.148


  Abstract

Despite the increasing importance of irony and sarcasm detection in natural language processing, research on Korean is relatively scarce compared to other languages. This study experiments with various models for irony detection in Korean text. Irony detection experiments were conducted using KoBERT, a BERT-based model, and ChatGPT. For KoBERT, two methods of additional training on sentiment data were applied: Transfer Learning and MultiTask Learning. For ChatGPT, the Few-Shot Learning technique was applied by increasing the number of example sentences provided in the prompt. The experimental results showed that the Transfer Learning and MultiTask Learning models, which were trained with additional sentiment data, outperformed the baseline model trained without it. In contrast, ChatGPT exhibited significantly lower performance than KoBERT, and increasing the number of example sentences did not lead to a noticeable improvement. In conclusion, this study suggests that a KoBERT-based model is more suitable for irony detection than ChatGPT.
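The abstract describes fine-tuning a BERT-family Korean encoder as a binary irony classifier; the listing below is a minimal sketch of such a setup with the Hugging Face transformers library, not the authors' code. The checkpoint name (klue/bert-base stands in for KoBERT, whose tokenizer usually requires an extra package), the toy sentences, and all hyperparameters are illustrative assumptions; the transfer-learning and multi-task variants with sentiment data are only indicated in comments.

# Minimal sketch of a BERT-based binary irony classifier (NOT the authors' code).
# klue/bert-base is an assumed stand-in for KoBERT; data and hyperparameters
# are illustrative only.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "klue/bert-base"  # assumed stand-in for the paper's KoBERT

class IronyDataset(Dataset):
    """(sentence, label) pairs; label 1 = ironic, 0 = literal."""
    def __init__(self, sentences, labels, tokenizer, max_len=128):
        self.enc = tokenizer(sentences, truncation=True,
                             padding="max_length", max_length=max_len)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy data only. A transfer-learning variant would first fine-tune the encoder
# on a sentiment corpus and then on the irony corpus; a multi-task variant
# would train both objectives jointly with a second classification head.
train_ds = IronyDataset(
    ["참 잘했다, 또 지각이네.",    # sarcastic
     "오늘은 날씨가 정말 좋다."],  # literal
    [1, 0],
    tokenizer,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="kobert-irony",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
)
trainer.train()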



  Cite this article

[IEEE Style]

S. J. Bang, Y. Park, J. E. Kim, K. J. Lee, "Korean Ironic Expression Detector," The Transactions of the Korea Information Processing Society, vol. 13, no. 3, pp. 148-155, 2024. DOI: https://doi.org/10.3745/TKIPS.2024.13.3.148.

[ACM Style]

Seung Ju Bang, Yo-Han Park, Jee Eun Kim, and Kong Joo Lee. 2024. Korean Ironic Expression Detector. The Transactions of the Korea Information Processing Society, 13, 3, (2024), 148-155. DOI: https://doi.org/10.3745/TKIPS.2024.13.3.148.