AI Agents relevance: 6/10

Online Learning for Multi-Layer Hierarchical Inference under Partial and Policy-Dependent Feedback

Haoran Zhang, Seohyeon Cha, Hasan Burhan Beytur, Kevin S Chan, Gustavo de Veciana, Haris Vikalo
arXiv: 2603.04247v1 Published: 2026-03-04 Updated: 2026-03-04

AI Summary

Studies online learning of routing policies in multi-layer hierarchical inference systems, addressing sparse and policy-dependent feedback.

Key Contributions

  • Proposes a variance-reduced EXP4 algorithm
  • Integrates Lyapunov optimization to achieve unbiased loss estimation and stable learning
  • Proves regret guarantees and near-optimality under stochastic arrivals and resource constraints

Methodology

Combines a variance-reduced EXP4 algorithm with Lyapunov optimization to solve the online learning problem under partial and policy-dependent feedback.
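The instability that motivates the variance reduction can be seen in a small Monte-Carlo sketch. All numbers below are illustrative (a fixed per-round loss of 0.5 and a few hand-picked feedback probabilities, none taken from the paper): the inverse-propensity-weighted (IPW) loss estimate remains unbiased as the feedback probability q decays with depth, but its variance grows like 1/q.

```python
import random

random.seed(1)

def ipw_estimates(q, loss=0.5, n=200_000):
    """Monte-Carlo check of the IPW estimator loss/q * 1{observed}.

    The estimate is unbiased for any q in (0, 1], but its variance,
    loss^2 * (1/q - 1), blows up as the feedback probability q shrinks
    (i.e., as tasks are routed deeper before reaching the oracle)."""
    samples = [(loss / q) if random.random() < q else 0.0 for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var

for q in (0.5, 0.1, 0.02):  # feedback probability decaying with depth
    m, v = ipw_estimates(q)
    print(f"q={q}: mean ~ {m:.3f}, var ~ {v:.2f}")
```

Each printed mean hovers near the true loss of 0.5, while the variance grows by roughly an order of magnitude per row, which is why naive importance-weighted updates destabilize in deep hierarchies.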

Original Abstract

Hierarchical inference systems route tasks across multiple computational layers, where each node may either finalize a prediction locally or offload the task to a node in the next layer for further processing. Learning optimal routing policies in such systems is challenging: inference loss is defined recursively across layers, while feedback on prediction error is revealed only at a terminal oracle layer. This induces a partial, policy-dependent feedback structure in which observability probabilities decay with depth, causing importance-weighted estimators to suffer from amplified variance. We study online routing for multi-layer hierarchical inference under long-term resource constraints and terminal-only feedback. We formalize the recursive loss structure and show that naive importance-weighted contextual bandit methods become unstable as feedback probability decays along the hierarchy. To address this, we develop a variance-reduced EXP4-based algorithm integrated with Lyapunov optimization, yielding unbiased loss estimation and stable learning under sparse and policy-dependent feedback. We provide regret guarantees relative to the best fixed routing policy in hindsight and establish near-optimality under stochastic arrivals and resource constraints. Experiments on large-scale multi-task workloads demonstrate improved stability and performance compared to standard importance-weighted approaches.
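The basic EXP4 machinery the abstract builds on can be sketched in a toy form. This is not the paper's algorithm: the Lyapunov resource terms and the variance-reduction correction are omitted, and the arms, losses, and expert set below are all made up for illustration. It shows plain EXP4 with inverse-propensity loss estimates under feedback that is observed only with some probability `obs_prob`, standing in for terminal-only feedback.

```python
import math
import random

random.seed(0)

# Toy setup (all names and numbers are illustrative, not from the paper):
# K arms with fixed expected losses, N experts, each deterministically
# recommending one arm.
K, N = 3, 4
true_loss = [0.2, 0.5, 0.8]
experts = [[1.0 if a == e % K else 0.0 for a in range(K)] for e in range(N)]

def exp4_round(w, eta, obs_prob):
    """One EXP4 round with an inverse-propensity loss estimate.

    The loss is revealed only with probability obs_prob (terminal-only
    feedback); dividing by obs_prob keeps the estimate unbiased, at the
    cost of variance growing like 1/obs_prob."""
    # Mixture policy over arms induced by the (normalized) expert weights.
    p = [sum(w[e] * experts[e][a] for e in range(N)) for a in range(K)]
    # Sample an arm from the mixture.
    r, a, acc = random.random(), K - 1, 0.0
    for i in range(K):
        acc += p[i]
        if r < acc:
            a = i
            break
    est = [0.0] * K
    if random.random() < obs_prob:  # was feedback actually observed?
        est[a] = true_loss[a] / (p[a] * obs_prob)
    # Exponential-weights update, then renormalize for numerical stability.
    w = [w[e] * math.exp(-eta * sum(experts[e][i] * est[i] for i in range(K)))
         for e in range(N)]
    total = sum(w)
    return [x / total for x in w]

w = [1.0 / N] * N
for _ in range(2000):
    w = exp4_round(w, eta=0.05, obs_prob=0.3)
best = max(range(N), key=lambda e: w[e])
```

After 2000 rounds the weight concentrates on the experts recommending the lowest-loss arm; shrinking `obs_prob` further makes the estimates spikier and the weights noisier, which is the failure mode the paper's variance-reduced estimator is designed to suppress.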

Tags

Online Learning, Hierarchical Inference, Contextual Bandits, Lyapunov Optimization

arXiv Categories

cs.LG cs.AI