LLM Reasoning Relevance: 7/10

Small Wins Big: Comparing Large Language Models and Domain Fine-Tuned Models for Sarcasm Detection in Code-Mixed Hinglish Text

Bitan Majumder, Anirban Sen
arXiv: 2602.21933v1 · Published: 2026-02-25 · Updated: 2026-02-25

AI Summary

For Hinglish text, a fine-tuned DistilBERT model outperforms large language models at sarcasm detection.

Main Contributions

  • Demonstrates the effectiveness of fine-tuning small models in low-resource scenarios
  • Compares the performance of LLMs and fine-tuned models on the sarcasm detection task
  • Achieves the highest overall accuracy (84%) for sarcasm detection in Hinglish text

Methodology

Compares four LLMs (Llama 3.1, Mistral, Gemma 3, and Phi-4), in zero- and few-shot settings, against a fine-tuned DistilBERT model on sarcasm detection accuracy for Hinglish text.
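
A minimal sketch of the fine-tuned baseline using the Hugging Face transformers Trainer. The checkpoint (distilbert-base-multilingual-cased), the hyperparameters, and the toy Hinglish examples are assumptions for illustration; the paper's actual dataset and its sequential fine-tuning schedule are not reproduced here.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Toy Hinglish examples for illustration only -- the paper's dataset
# (and its LLM-generated code-mixed augmentation) is not shown here.
train_examples = {
    "text": [
        "Wah, kya traffic hai aaj, bilkul mazaa aa gaya!",  # sarcastic
        "Movie sach mein achhi thi, I really enjoyed it.",  # not sarcastic
    ],
    "label": [1, 0],  # 1 = sarcastic, 0 = not sarcastic
}

# Assumed checkpoint: a multilingual DistilBERT, since Hinglish mixes
# romanized Hindi with English; the paper's exact variant may differ.
checkpoint = "distilbert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def tokenize(batch):
    # Pad to a fixed length so the default data collator can batch examples.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_ds = Dataset.from_dict(train_examples).map(tokenize, batched=True)

# Illustrative hyperparameters, not the paper's.
args = TrainingArguments(
    output_dir="distilbert-hinglish-sarcasm",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

Trainer(model=model, args=args, train_dataset=train_ds).train()
```

A multilingual checkpoint is assumed because Hinglish mixes romanized Hindi with English tokens; swapping in a different DistilBERT variant only changes the `checkpoint` string.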

Original Abstract

Sarcasm detection in multilingual and code-mixed environments remains a challenging task for natural language processing models due to structural variations, informal expressions, and low-resource linguistic availability. This study compares four large language models, Llama 3.1, Mistral, Gemma 3, and Phi-4, with a fine-tuned DistilBERT model for sarcasm detection in code-mixed Hinglish text. The results indicate that the smaller, sequentially fine-tuned DistilBERT model achieved the highest overall accuracy of 84%, outperforming all of the LLMs in zero- and few-shot setups, using minimal LLM-generated code-mixed data for fine-tuning. These findings indicate that domain-adaptive fine-tuning of smaller transformer-based models may significantly improve sarcasm detection over general LLM inference in low-resource and data-scarce settings.
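
For contrast with the fine-tuning sketch above, here is a hedged sketch of how the zero- and few-shot LLM baselines could be prompted. The template, label wording, and demonstration examples are illustrative assumptions, not the paper's actual prompts.

```python
# Illustrative few-shot demonstrations; the paper's actual prompts and
# demonstrations are not specified in this summary.
FEW_SHOT_EXAMPLES = [
    ("Wah, kya service hai, do ghante se wait kar raha hoon!", "sarcastic"),
    ("Food was tasty aur staff bhi friendly tha.", "not sarcastic"),
]

def build_prompt(text: str, shots: int = 0) -> str:
    """Build a zero-shot (shots=0) or few-shot classification prompt."""
    parts = [
        "Classify the following code-mixed Hinglish sentence as "
        "'sarcastic' or 'not sarcastic'."
    ]
    for example, label in FEW_SHOT_EXAMPLES[:shots]:
        parts.append(f"Sentence: {example}\nLabel: {label}")
    parts.append(f"Sentence: {text}\nLabel:")
    return "\n\n".join(parts)

# Zero-shot prompt; pass shots=2 for a two-shot variant.
print(build_prompt("Great, phone phir se hang ho gaya!"))
```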

Tags

Sarcasm Detection · Hinglish Text · DistilBERT · Large Language Models · Fine-Tuning

arXiv Category

cs.CL