LLM Memory & RAG relevance: 8/10

A Guide to Large Language Models in Modeling and Simulation: From Core Techniques to Critical Challenges

Philippe J. Giabbanelli
arXiv: 2602.05883v1 · Published: 2026-02-05 · Updated: 2026-02-05

AI Summary

The paper provides practical guidance on common pitfalls when applying LLMs in Modeling & Simulation, with an emphasis on design choices and evaluation.

Key Contributions

  • A best-practice guide for using LLMs in M&S applications
  • Analysis of common pitfalls and diagnostic strategies
  • Recommendations for applying knowledge-augmentation methods (RAG, LoRA)

Methodology

The paper combines empirical analysis and case studies with theoretical analysis to guide the application of LLMs in M&S.

Original Abstract

Large language models (LLMs) have rapidly become familiar tools to researchers and practitioners. Concepts such as prompting, temperature, or few-shot examples are now widely recognized, and LLMs are increasingly used in Modeling & Simulation (M&S) workflows. However, practices that appear straightforward may introduce subtle issues, unnecessary complexity, or may even lead to inferior results. Adding more data can backfire (e.g., deteriorating performance through model collapse or inadvertently wiping out existing guardrails), spending time on fine-tuning a model can be unnecessary without a prior assessment of what it already knows, setting the temperature to 0 is not sufficient to make LLMs deterministic, providing a large volume of M&S data as input can be excessive (LLMs cannot attend to everything) but naive simplifications can lose information. We aim to provide comprehensive and practical guidance on how to use LLMs, with an emphasis on M&S applications. We discuss common sources of confusion, including non-determinism, knowledge augmentation (including RAG and LoRA), decomposition of M&S data, and hyper-parameter settings. We emphasize principled design choices, diagnostic strategies, and empirical evaluation, with the goal of helping modelers make informed decisions about when, how, and whether to rely on LLMs.
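The abstract's mention of knowledge augmentation via RAG can be illustrated with a minimal sketch: retrieve the most relevant snippet from a corpus, then prepend it to the prompt before querying an LLM. This is a toy illustration of the general technique, not the paper's method; the bag-of-words retriever, corpus, and prompt template are all invented for the example (real systems use dense embeddings and a vector store).

```python
# Toy RAG sketch: bag-of-words cosine retrieval + prompt assembly.
# All documents and the prompt template are hypothetical examples.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the corpus document most similar to the query."""
    q = Counter(query.lower().split())
    return max(corpus, key=lambda d: cosine(q, Counter(d.lower().split())))

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a retrieval-augmented prompt for an LLM call."""
    context = retrieve(query, corpus)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

corpus = [
    "Agent-based models simulate individual agents and their interactions.",
    "System dynamics models use stocks and flows over continuous time.",
]
print(build_prompt("What do agent-based models simulate?", corpus))
```

In a real M&S workflow, the retrieved context would be drawn from model documentation or simulation outputs, and `build_prompt`'s result would be sent to an LLM API; the paper's point is that what you retrieve and how you decompose it matters as much as the model itself.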

Tags

LLM · Modeling & Simulation · RAG · LoRA

arXiv Category

cs.AI