LLM Reasoning relevance: 8/10

FINEST: Improving LLM Responses to Sensitive Topics Through Fine-Grained Evaluation

Juhyun Oh, Nayeon Lee, Chani Jung, Jiho Jin, Junho Myung, Jongwon Lee, Taeui Song, Alice Oh
arXiv: 2603.04123v1 Published: 2026-03-04 Updated: 2026-03-04

AI Summary

FINEST improves both the safety and the helpfulness of LLM responses on sensitive topics through fine-grained evaluation.

Key Contributions

  • Proposed FINEST, a fine-grained response evaluation taxonomy for sensitive topics
  • Significantly improved LLM response quality through a FINEST-guided improvement pipeline
  • Showed that score-based improvement is the most effective refinement method

Methodology

Constructs the FINEST taxonomy, which decomposes helpfulness and harmlessness into errors across three categories (Content, Logic, and Appropriateness) and uses them to guide the improvement of LLM responses.
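A minimal sketch of how such a score-based refinement step might be assembled: per-category scores and justifications are collected and fed back to the model as a revision prompt. The helper names, the 1-5 scale, and the prompt wording are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass

# The three FINEST error categories named in the paper; everything else
# below (scale, prompt text, helper names) is a hypothetical illustration.
CATEGORIES = ("Content", "Logic", "Appropriateness")

@dataclass
class CategoryFeedback:
    category: str
    score: int          # assumed scale: 1 (many errors) .. 5 (no errors)
    justification: str

def build_refinement_prompt(question: str, response: str,
                            feedback: list[CategoryFeedback]) -> str:
    """Assemble a score-based refinement prompt: the draft response plus
    category-specific scores and justifications for the model to act on."""
    lines = [
        f"Question: {question}",
        f"Draft response: {response}",
        "Category-specific evaluation:",
    ]
    for fb in feedback:
        lines.append(f"- {fb.category}: {fb.score}/5 ({fb.justification})")
    lines.append("Revise the response to address the issues above while "
                 "remaining both helpful and harmless.")
    return "\n".join(lines)

# Example usage with made-up evaluation feedback:
feedback = [
    CategoryFeedback("Content", 3, "Omits key factual context."),
    CategoryFeedback("Logic", 4, "One unsupported inference."),
    CategoryFeedback("Appropriateness", 2, "Tone is dismissive."),
]
prompt = build_refinement_prompt(
    "How should I discuss a sensitive policy topic?",
    "Just avoid the topic entirely.",
    feedback,
)
print(prompt)
```

In the paper's comparison, providing these category-specific scores and justifications (score-based improvement) outperformed refinement without guidance, so the prompt above would be sent back to the model to produce the revised response.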

Original Abstract

Large Language Models (LLMs) often generate overly cautious and vague responses on sensitive topics, sacrificing helpfulness for safety. Existing evaluation frameworks lack systematic methods to identify and address specific weaknesses in responses to sensitive topics, making it difficult to improve both safety and helpfulness simultaneously. To address this, we introduce FINEST, a FINE-grained response evaluation taxonomy for Sensitive Topics, which breaks down helpfulness and harmlessness into errors across three main categories: Content, Logic, and Appropriateness. Experiments on a Korean-sensitive question dataset demonstrate that our score- and error-based improvement pipeline, guided by FINEST, significantly improves the model responses across all three categories, outperforming refinement without guidance. Notably, score-based improvement -- providing category-specific scores and justifications -- yields the most significant gains, reducing the error sentence ratio for Appropriateness by up to 33.09%. This work lays the foundation for a more explainable and comprehensive evaluation and improvement of LLM responses to sensitive questions.

Tags

LLM Evaluation · Sensitive Topics · Fine-grained Evaluation

arXiv Category

cs.CL