LLM Reasoning relevance: 9/10

xList-Hate: A Checklist-Based Framework for Interpretable and Generalizable Hate Speech Detection

Adrián Girón, Pablo Miralles, Javier Huertas-Tato, Sergio D'Antonio, David Camacho
arXiv: 2602.05874v1 Published: 2026-02-05 Updated: 2026-02-05

AI Summary

xList-Hate improves the robustness and interpretability of hate speech detection models by decomposing the detection task into multiple concept-level questions.

Key Contributions

  • Proposes the xList-Hate framework, which decomposes hate speech detection into diagnostic questions
  • Uses an LLM to answer the diagnostic questions, producing a binary diagnostic representation
  • Aggregates the diagnostic signals with an interpretable decision tree to make the final prediction
  • Experiments show the method achieves better cross-dataset robustness

Methodology

An LLM answers a series of concept-level questions related to hate speech, and a decision tree then aggregates the answers to produce the final classification.
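The two-stage pipeline above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the checklist wording, the keyword-based stand-in for the LLM, and the hand-written decision tree are all hypothetical placeholders.

```python
# Hypothetical sketch of a checklist-based pipeline in the spirit of
# xList-Hate: each concept-level question is answered independently
# (here by a mocked "LLM"), yielding a binary diagnostic vector that a
# small, fully interpretable decision tree aggregates into a label.

CHECKLIST = [
    "Does the text target a protected group?",
    "Does the text use dehumanizing language?",
    "Does the text call for harm against the target?",
]

def mock_llm_answer(question: str, text: str) -> int:
    """Stand-in for an LLM call: returns 1 (yes) / 0 (no) via keyword cues.
    A real system would prompt an LLM with the question and the text."""
    keyword_map = {
        "protected group": ("immigrants", "women", "refugees"),
        "dehumanizing": ("vermin", "parasites"),
        "harm": ("deserve violence", "should be expelled"),
    }
    t = text.lower()
    for cue, keywords in keyword_map.items():
        if cue in question.lower():
            return int(any(k in t for k in keywords))
    return 0

def diagnose(text: str) -> list[int]:
    # Binary diagnostic representation: one answer per checklist question.
    return [mock_llm_answer(q, text) for q in CHECKLIST]

def decision_tree(diag: list[int]) -> tuple[str, str]:
    """Toy interpretable aggregator: returns (label, decision path)."""
    targets_group, dehumanizes, calls_for_harm = diag
    if not targets_group:
        return "not hate", "no protected-group target"
    if dehumanizes or calls_for_harm:
        return "hate", "targets group + dehumanizing/harm signal"
    return "not hate", "targets group but no hateful signal"

label, path = decision_tree(diagnose("Immigrants are vermin"))
print(label, "|", path)  # → hate | targets group + dehumanizing/harm signal
```

Because the final decision depends only on the binary answers and an explicit tree, every prediction comes with an auditable decision path, which is the interpretability property the summary highlights.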

Original Abstract

Hate speech detection is commonly framed as a direct binary classification problem despite being a composite concept defined through multiple interacting factors that vary across legal frameworks, platform policies, and annotation guidelines. As a result, supervised models often overfit dataset-specific definitions and exhibit limited robustness under domain shift and annotation noise. We introduce xList-Hate, a diagnostic framework that decomposes hate speech detection into a checklist of explicit, concept-level questions grounded in widely shared normative criteria. Each question is independently answered by a large language model (LLM), producing a binary diagnostic representation that captures hateful content features without directly predicting the final label. These diagnostic signals are then aggregated by a lightweight, fully interpretable decision tree, yielding transparent and auditable predictions. We evaluate xList-Hate across multiple hate speech benchmarks and model families, comparing it against zero-shot LLM classification and in-domain supervised fine-tuning. While supervised methods typically maximize in-domain performance, our approach consistently improves cross-dataset robustness and relative performance under domain shift. In addition, qualitative analysis of disagreement cases provides evidence that the framework can be less sensitive to certain forms of annotation inconsistency and contextual ambiguity. Crucially, the approach enables fine-grained interpretability through explicit decision paths and factor-level analysis. Our results suggest that reframing hate speech detection as a diagnostic reasoning task, rather than a monolithic classification problem, provides a robust, explainable, and extensible alternative for content moderation.

Tags

Hate speech detection Interpretability Robustness Large language models

arXiv Categories

cs.CL cs.AI