LLM Memory & RAG relevance: 9/10

Type-Aware Retrieval-Augmented Generation with Dependency Closure for Solver-Executable Industrial Optimization Modeling

Y. Zhong, R. Huang, M. Wang, Z. Guo, YC. Li, M. Yu, Z. Jin
arXiv: 2603.03180v1  Published: 2026-03-03  Updated: 2026-03-03

AI Summary

Proposes a type-aware RAG method that uses dependency closure to generate solver-executable industrial optimization models, improving compilation success rates.

Key Contributions

  • Proposes a type-aware RAG framework that addresses LLMs generating non-executable code in industrial optimization modeling
  • Builds a domain-specific typed knowledge base and encodes mathematical dependencies in a knowledge graph
  • Computes a minimal dependency closure via dependency propagation to guarantee that generated code is executable

Methodology

Builds a typed knowledge base and knowledge graph, uses hybrid retrieval and dependency propagation to compute a minimal dependency closure, and then performs RAG-based generation over that closed context.
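The dependency-propagation step can be sketched as a transitive closure over the knowledge graph. The sketch below is a minimal illustration, not the paper's implementation: the graph representation (a dict from each typed symbol to the symbols it depends on) and the symbol names are assumptions.

```python
from collections import deque

def dependency_closure(graph: dict[str, set[str]], seeds: set[str]) -> set[str]:
    """Return the smallest dependency-closed set containing the seed symbols."""
    closure = set(seeds)
    queue = deque(seeds)
    while queue:
        symbol = queue.popleft()
        for dep in graph.get(symbol, set()):
            if dep not in closure:  # propagate only to symbols not yet included
                closure.add(dep)
                queue.append(dep)
    return closure

# Hypothetical example: a load-reduction constraint depends on a decision
# variable and an incentive parameter; the variable depends on a capacity
# parameter. All four symbols must be declared for the model to compile.
graph = {
    "c_load_reduction": {"x_power", "p_incentive"},
    "x_power": {"p_capacity"},
}
print(sorted(dependency_closure(graph, {"c_load_reduction"})))
# → ['c_load_reduction', 'p_capacity', 'p_incentive', 'x_power']
```

Because the closure is grown only from reachable dependencies of the retrieved seeds, it is minimal by construction: dropping any member would leave some included symbol undeclared.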

Original Abstract

Automated industrial optimization modeling requires reliable translation of natural-language requirements into solver-executable code. However, large language models often generate non-compilable models due to missing declarations, type inconsistencies, and incomplete dependency contexts. We propose a type-aware retrieval-augmented generation (RAG) method that enforces modeling entity types and minimal dependency closure to ensure executability. Unlike existing RAG approaches that index unstructured text, our method constructs a domain-specific typed knowledge base by parsing heterogeneous sources, such as academic papers and solver code, into typed units and encoding their mathematical dependencies in a knowledge graph. Given a natural-language instruction, it performs hybrid retrieval and computes a minimal dependency-closed context, the smallest set of typed symbols required for solver-executable code, via dependency propagation over the graph. We validate the method on two constraint-intensive industrial cases: demand response optimization in battery production and flexible job shop scheduling. In the first case, our method generates an executable model incorporating demand-response incentives and load-reduction constraints, achieving peak shaving while preserving profitability; conventional RAG baselines fail. In the second case, it consistently produces compilable models that reach known optimal solutions, demonstrating robust cross-domain generalization; baselines fail entirely. Ablation studies confirm that enforcing type-aware dependency closure is essential for avoiding structural hallucinations and ensuring executability, addressing a critical barrier to deploying large language models in complex engineering optimization tasks.
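The abstract's notion of "typed units" with enforced closure can be illustrated with a small sketch. The type names, fields, and the closure check below are assumptions for illustration, not the paper's actual schema: each retrieved unit carries a modeling-entity kind and its declared dependencies, so a retrieved context can be rejected if any symbol would be left undeclared.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TypedUnit:
    """A typed knowledge-base unit (hypothetical schema)."""
    name: str
    kind: str                                 # e.g. "parameter", "variable", "constraint"
    depends_on: frozenset[str] = frozenset()  # symbols this unit references

def is_dependency_closed(units: list[TypedUnit]) -> bool:
    """True if every dependency of every unit is declared within the context."""
    declared = {u.name for u in units}
    return all(u.depends_on <= declared for u in units)

context = [
    TypedUnit("p_capacity", "parameter"),
    TypedUnit("x_power", "variable", frozenset({"p_capacity"})),
    TypedUnit("c_limit", "constraint", frozenset({"x_power", "p_capacity"})),
]
print(is_dependency_closed(context))      # closed context → True
print(is_dependency_closed(context[1:]))  # p_capacity missing → False
```

A context that fails this check is exactly the failure mode the abstract describes for conventional RAG baselines: a constraint is retrieved without the declarations it references, so the generated model cannot compile.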

Tags

RAG · Knowledge Graph · Industrial Optimization · Large Language Models · Dependency Closure

arXiv Categories

cs.SE cs.AI cs.CL