FEAT: A Linear-Complexity Foundation Model for Extremely Large Structured Data
AI Summary
FEAT is a linear-complexity foundation model for structured data that replaces quadratic attention with hybrid linear encoding, improving zero-shot performance and accelerating inference.
Key Contributions
- Proposes a multi-layer dual-axis architecture that replaces quadratic self-attention with hybrid linear encoding
- Designs AFBM and Conv-GLA to capture local sample dependencies and global memory, respectively
- Adopts a hybrid structural causal model pipeline and a stable reconstruction objective to improve robustness
Methodology
FEAT uses adaptive-fusion bi-Mamba-2 (AFBM) to capture local sample dependencies and convolutional gated linear attention (Conv-GLA) to maintain global memory, and combines them with a hybrid structural causal model pre-training pipeline to improve performance.
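The Conv-GLA component builds on gated linear attention, whose core recurrence runs in time linear in the number of samples. A minimal NumPy sketch of that recurrence (omitting the convolutional part and all learned parameters, which the summary does not detail; `gated_linear_attention` is an illustrative name, not the paper's API):

```python
import numpy as np

def gated_linear_attention(q, k, v, g):
    """Gated linear attention recurrence:
        S_t = g_t * S_{t-1} + k_t v_t^T,   o_t = S_t^T q_t
    Each step updates a fixed-size (d_k x d_v) state, so the total cost
    is O(N * d_k * d_v) -- linear in the sequence length N, unlike the
    O(N^2) cost of softmax self-attention."""
    N, d_k = q.shape
    d_v = v.shape[1]
    S = np.zeros((d_k, d_v))
    out = np.zeros((N, d_v))
    for t in range(N):
        S = g[t] * S + np.outer(k[t], v[t])  # decay old state, add new pair
        out[t] = S.T @ q[t]                  # read state with the query
    return out
```

With the gate fixed at 1, this reduces to unnormalized causal linear attention, i.e. `tril(Q K^T) V` computed without ever materializing the N-by-N matrix.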
Original Abstract
Structured data is foundational to healthcare, finance, e-commerce, and scientific data management. Large structured-data models (LDMs) extend the foundation model paradigm to unify heterogeneous datasets for tasks such as classification, regression, and decision support. However, existing LDMs face major limitations. First, most rely on sample-wise self-attention, whose O(N^2) complexity limits the sample count. Second, linear sequence models often degrade representations due to hidden-state compression and artificial causal bias. Third, synthetic-only pre-training often fails to match real-world distributions. We propose FEAT, a linear-complexity foundation model for extremely large structured data. FEAT introduces a multi-layer dual-axis architecture that replaces quadratic attention with hybrid linear encoding. The architecture combines adaptive-fusion bi-Mamba-2 (AFBM) for local sample dependencies and convolutional gated linear attention (Conv-GLA) for global memory. This design enables linear-complexity cross-sample modeling while preserving expressive representations. To improve robustness, FEAT adopts a hybrid structural causal model pipeline and a stable reconstruction objective. Experiments on 11 real-world datasets show that FEAT consistently outperforms baselines in zero-shot performance, while scaling linearly and achieving up to 40x faster inference.
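The abstract notes that linear sequence models introduce an artificial causal bias, which AFBM counters by scanning samples in both directions and fusing the results. The paper's fusion mechanism is not specified here; the sketch below uses a simple first-order recurrence as a stand-in for the Mamba-2 scan and a hypothetical sigmoid gate for the adaptive fusion:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cumulative_scan(x, decay):
    """First-order linear recurrence h_t = decay * h_{t-1} + x_t.
    A stand-in for a Mamba-2-style state-space scan (the real block
    has input-dependent parameters; this is only illustrative)."""
    h = np.zeros_like(x[0])
    out = np.zeros_like(x)
    for t in range(len(x)):
        h = decay * h + x[t]
        out[t] = h
    return out

def adaptive_fusion_bidirectional(x, decay=0.9, w_gate=None):
    """Hypothetical adaptive fusion of a forward and a backward scan:
    a per-feature sigmoid gate mixes the two directions, removing the
    one-directional causal bias of a single scan."""
    fwd = cumulative_scan(x, decay)               # left-to-right context
    bwd = cumulative_scan(x[::-1], decay)[::-1]   # right-to-left context
    if w_gate is None:
        w_gate = np.zeros(x.shape[-1])            # gate = 0.5 everywhere
    gate = sigmoid(w_gate)
    return gate * fwd + (1.0 - gate) * bwd
```

With `decay = 0` both scans pass the input through unchanged, so the fused output equals the input; larger decay values blend context from both directions at every position.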