LLM Reasoning relevance: 8/10

Evaluating LLM Safety Under Repeated Inference via Accelerated Prompt Stress Testing

Keita Broadwater
arXiv: 2602.11786v1 Published: 2026-02-12 Updated: 2026-02-12

AI Summary

Proposes the APST framework, which evaluates the safety and reliability of LLMs under sustained use via repeated-inference testing.

Key Contributions

  • Introduces the Accelerated Prompt Stress Testing (APST) framework
  • Quantifies safety failure rates using Bernoulli and binomial models
  • Finds that single-sample evaluation can obscure reliability differences that emerge under sustained use

Methodology

Repeatedly samples the same prompt under controlled conditions (e.g., decoding temperature), modeling each inference as a Bernoulli trial and using binomial models to estimate per-inference failure probabilities.
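The Bernoulli/binomial estimation step can be sketched as follows. This is an illustrative reconstruction, not the paper's code: `failure_rate_estimate` and the Wilson score interval are assumptions standing in for whatever estimator the authors actually use.

```python
import math

def failure_rate_estimate(outcomes, z=1.96):
    """Estimate the per-inference failure probability from repeated samples.

    outcomes: list of 0/1 flags (1 = safety failure) from n independent
    inferences of the same prompt, modeled as Bernoulli trials.
    Returns the binomial MLE p_hat = k/n and a Wilson score interval.
    """
    n = len(outcomes)
    k = sum(outcomes)
    p_hat = k / n  # binomial maximum-likelihood estimate
    # Wilson score interval: better coverage than the normal approximation
    # when failures are rare, which is typical for safety failures.
    denom = 1 + z ** 2 / n
    center = (p_hat + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2))
    return p_hat, (max(0.0, center - half), min(1.0, center + half))

# Example: 3 observed failures across 100 repeated inferences of one prompt
p_hat, (lo, hi) = failure_rate_estimate([1] * 3 + [0] * 97)
```

Comparing these intervals across models or decoding temperatures gives the quantitative reliability comparison the framework describes.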

Original Abstract

Traditional benchmarks for large language models (LLMs) primarily assess safety risk through breadth-oriented evaluation across diverse tasks. However, real-world deployment exposes a different class of risk: operational failures arising from repeated inference on identical or near-identical prompts rather than broad task generalization. In high-stakes settings, response consistency and safety under sustained use are critical. We introduce Accelerated Prompt Stress Testing (APST), a depth-oriented evaluation framework inspired by reliability engineering. APST repeatedly samples identical prompts under controlled operational conditions (e.g., decoding temperature) to surface latent failure modes including hallucinations, refusal inconsistency, and unsafe completions. Rather than treating failures as isolated events, APST models them as stochastic outcomes of independent inference events. We formalize safety failures using Bernoulli and binomial models to estimate per-inference failure probabilities, enabling quantitative comparison of reliability across models and decoding configurations. Applying APST to multiple instruction-tuned LLMs evaluated on AIR-BENCH-derived safety prompts, we find that models with similar benchmark-aligned scores can exhibit substantially different empirical failure rates under repeated sampling, particularly as temperature increases. These results demonstrate that shallow, single-sample evaluation can obscure meaningful reliability differences under sustained use. APST complements existing benchmarks by providing a practical framework for evaluating LLM safety and reliability under repeated inference, bridging benchmark alignment and deployment-oriented risk assessment.
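Why single-sample evaluation obscures sustained-use risk follows directly from the independence assumption in the abstract: under the Bernoulli model, the chance of at least one failure over m uses is 1 - (1 - p)^m. A minimal sketch (the model names and failure rates below are purely illustrative, not results from the paper):

```python
def sustained_failure_prob(p, m):
    """Probability of at least one failure in m independent inferences,
    assuming each inference fails with probability p (Bernoulli model)."""
    return 1 - (1 - p) ** m

# Two hypothetical models that look similar in a single-sample benchmark
# but diverge sharply over 100 repeated inferences (illustrative numbers).
for name, p in [("model_a", 0.01), ("model_b", 0.03)]:
    print(name, round(sustained_failure_prob(p, 100), 3))
```

Even a 1% vs. 3% per-inference gap, invisible in one draw, compounds to a large difference in the probability of observing a failure under sustained use.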

Tags

LLM safety · reliability · stress testing · repeated inference · risk assessment

arXiv Categories

cs.LG cs.AI