Agent Tuning & Optimization (Relevance: 8/10)

Schema on the Inside: A Two-Phase Fine-Tuning Method for High-Efficiency Text-to-SQL at Scale

Chinmay Soni, Shivam Chourasia, Gaurav Kumar, Hitesh Kapoor
arXiv: 2603.24023v1 Published: 2026-03-25 Updated: 2026-03-25

AI Summary

Proposes a two-phase fine-tuning method that enables a small model to achieve high accuracy and low latency on text-to-SQL tasks.

Key Contributions

  • Proposes a two-phase fine-tuning method for optimizing text-to-SQL models.
  • Reduces input tokens by over 99% (from a 17k-token baseline to under 100), cutting API costs.
  • Achieves high-performance text-to-SQL in a real large-scale production environment.

Methodology

Two-phase supervised fine-tuning: the first phase teaches the model schema knowledge, and the second trains SQL generation, so the model internalizes the schema and no longer needs it in the prompt.
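The two-phase data layout described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the table names, example questions, and helper functions (`phase1_examples`, `phase2_examples`) are all assumptions. The point is that phase-1 pairs quiz the model on the schema itself, while phase-2 prompts carry no schema text, forcing the model to rely on what it internalized.

```python
# Hypothetical sketch of two-phase SFT training data for schema
# internalization. All names below are illustrative assumptions.

SCHEMA = {
    "players": ["player_id", "name", "country"],
    "matches": ["match_id", "date", "venue"],
    "batting_stats": ["match_id", "player_id", "runs", "balls_faced"],
}

def phase1_examples(schema):
    """Phase 1: schema-internalization pairs.

    Each example asks about the database structure and answers directly
    from the schema, so the model memorizes tables and columns.
    """
    examples = []
    for table, columns in schema.items():
        examples.append({
            "prompt": f"List the columns of the {table} table.",
            "completion": ", ".join(columns),
        })
    return examples

def phase2_examples():
    """Phase 2: text-to-SQL pairs with NO schema text in the prompt.

    The prompt stays tiny (well under 100 tokens); the model must recall
    table and column names learned in phase 1 to write correct SQL.
    """
    return [
        {
            "prompt": "How many runs did Virat Kohli score in total?",
            "completion": (
                "SELECT SUM(b.runs) FROM batting_stats b "
                "JOIN players p ON p.player_id = b.player_id "
                "WHERE p.name = 'Virat Kohli';"
            ),
        },
    ]

if __name__ == "__main__":
    p1 = phase1_examples(SCHEMA)
    p2 = phase2_examples()
    # Phase-2 prompts contain no schema dump, unlike a 17k-token baseline prompt.
    assert all("CREATE TABLE" not in ex["prompt"] for ex in p2)
    print(len(p1), "phase-1 pairs,", len(p2), "phase-2 pair")
```

Sequencing the two datasets (phase 1 first, then phase 2) is what lets the second phase drop the schema from the prompt, which is where the reported 99%+ input-token reduction comes from.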

Original Abstract

Applying large, proprietary API-based language models to text-to-SQL tasks poses a significant industry challenge: reliance on massive, schema-heavy prompts results in prohibitive per-token API costs and high latency, hindering scalable production deployment. We present a specialized, self-hosted 8B-parameter model designed for a conversational bot in CriQ, a sister app to Dream11, India's largest fantasy sports platform with over 250 million users, that answers user queries about cricket statistics. Our novel two-phase supervised fine-tuning approach enables the model to internalize the entire database schema, eliminating the need for long-context prompts. This reduces input tokens by over 99%, from a 17k-token baseline to fewer than 100, and replaces costly external API calls with efficient local inference. The resulting system achieves 98.4% execution success and 92.5% semantic accuracy, substantially outperforming a prompt-engineered baseline using Google's Gemini Flash 2.0 (95.6% execution, 89.4% semantic accuracy). These results demonstrate a practical path toward high-precision, low-latency text-to-SQL applications using domain-specialized, self-hosted language models in large-scale production environments.

Tags

Text-to-SQL · Fine-tuning · Schema Internalization · Low-latency · Large-scale

arXiv Categories

cs.CL cs.AI