LLM Reasoning relevance: 7/10

TabICLv2: A better, faster, scalable, and open tabular foundation model

Jingang Qu, David Holzmüller, Gaël Varoquaux, Marine Le Morvan
arXiv: 2602.11139v1 Published: 2026-02-11 Updated: 2026-02-11

AI Summary

TabICLv2 surpasses existing models on tabular prediction tasks through a novel synthetic data generation pipeline and architectural optimizations.

Key Contributions

  • A new synthetic data generation engine that increases pretraining diversity
  • A scalable softmax in attention that improves generalization to larger datasets
  • Replacing AdamW with the Muon optimizer to improve pretraining
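The "scalable softmax" above plausibly follows the SSMax idea of multiplying attention logits by a factor proportional to log n (the context length), so that attention does not flatten out as the number of rows grows. The summary does not give TabICLv2's exact formulation, so the sketch below is a generic illustration and the scale parameter `s` is an assumed placeholder, not a value from the paper:

```python
import numpy as np

def scalable_softmax(logits: np.ndarray, s: float = 0.43) -> np.ndarray:
    """Scalable-softmax sketch: scale logits by s * log(n), where n is
    the context length, before the usual softmax. In practice `s` would
    be a learned parameter; the default here is illustrative only."""
    n = logits.shape[-1]
    scaled = s * np.log(n) * logits
    scaled -= scaled.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum(axis=-1, keepdims=True)
```

Because the log(n) factor grows with context length, the distribution stays peaked on long sequences where a plain softmax would spread probability mass nearly uniformly, which is the claimed route to better generalization on larger datasets without long-sequence pretraining.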

Methodology

The model is pretrained on synthetic data; the new attention mechanism and optimizer further improve predictive performance on tabular data.
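Muon, the optimizer swapped in for AdamW, orthogonalizes the momentum of each weight matrix with a Newton-Schulz iteration and uses the result as the update direction. A minimal sketch of that iteration, using the quintic coefficients from the public Muon reference implementation (TabICLv2's specific settings are not stated in this summary):

```python
import numpy as np

def newton_schulz_orthogonalize(G: np.ndarray, steps: int = 5) -> np.ndarray:
    """Approximately orthogonalize a matrix (e.g. a momentum buffer)
    via Newton-Schulz iteration, as in the Muon optimizer. The iteration
    pushes all singular values toward 1 without an explicit SVD."""
    a, b, c = 3.4445, -4.7750, 2.0315  # coefficients from the Muon reference impl
    X = G / (np.linalg.norm(G) + 1e-7)  # normalize so singular values <= 1
    transposed = G.shape[0] > G.shape[1]
    if transposed:                      # iterate on the wide orientation
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * A @ A
        X = a * X + B @ X               # quintic polynomial in the singular values
    return X.T if transposed else X
```

The appeal is that the whole update is matrix multiplies, so it runs efficiently on GPU; why it helps pretraining of this particular model would be covered by the paper's ablations.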

Original Abstract

Tabular foundation models, such as TabPFNv2 and TabICL, have recently dethroned gradient-boosted trees at the top of predictive benchmarks, demonstrating the value of in-context learning for tabular data. We introduce TabICLv2, a new state-of-the-art foundation model for regression and classification built on three pillars: (1) a novel synthetic data generation engine designed for high pretraining diversity; (2) various architectural innovations, including a new scalable softmax in attention improving generalization to larger datasets without prohibitive long-sequence pretraining; and (3) optimized pretraining protocols, notably replacing AdamW with the Muon optimizer. On the TabArena and TALENT benchmarks, TabICLv2 without any tuning surpasses the performance of the current state of the art, RealTabPFN-2.5 (hyperparameter-tuned, ensembled, and fine-tuned on real data). With only moderate pretraining compute, TabICLv2 generalizes effectively to million-scale datasets under 50GB GPU memory while being markedly faster than RealTabPFN-2.5. We provide extensive ablation studies to quantify these contributions and commit to open research by first releasing inference code and model weights at https://github.com/soda-inria/tabicl, with synthetic data engine and pretraining code to follow.
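"In-context learning" in this setting means the pretrained transformer receives the labeled training table as context at inference time and predicts test rows with no gradient updates. A deliberately toy stand-in (1-nearest-neighbour, assuming nothing about the real TabICL API) illustrates that interface:

```python
import numpy as np

def icl_predict(train_X: np.ndarray, train_y: np.ndarray,
                test_X: np.ndarray) -> np.ndarray:
    """Toy stand-in for tabular in-context learning: the 'model' is
    parameterized only by the labeled context set, not by fitted weights.
    Real tabular foundation models (TabPFNv2, TabICL) do this with a
    transformer pretrained on synthetic tasks; 1-NN here is purely an
    interface sketch, not the actual method."""
    preds = []
    for x in np.atleast_2d(test_X):
        dists = np.linalg.norm(train_X - x, axis=1)
        preds.append(train_y[dists.argmin()])
    return np.array(preds)
```

The point of the analogy: like the toy function, the foundation model needs no per-dataset training loop, which is why the abstract can compare an untuned TabICLv2 against a tuned, ensembled, fine-tuned baseline.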

Tags

tabular data, foundation model, in-context learning

arXiv Categories

cs.LG