Agent Tuning & Optimization (Relevance: 6/10)

Reinforcement Learning for Parameterized Quantum State Preparation: A Comparative Study

Gerhard Stenzel, Isabella Debelic, Michael Kölle, Tobias Rohe, Leo Sünkel, Julian Hager, Claudia Linnhoff-Popien
arXiv: 2602.16523v1 Published: 2026-02-18 Updated: 2026-02-18

AI Summary

The paper studies the application of reinforcement learning to parameterized quantum state preparation, comparing the performance of different training strategies and algorithms.

Main Contributions

  • Extends DQCS to parameterized quantum state preparation
  • Compares one-stage and two-stage training regimes
  • Evaluates PPO and A2C on quantum state preparation tasks

Methodology

Using Gymnasium and PennyLane, agents are trained via reinforcement learning (PPO and A2C) to generate quantum circuits, with the rotation angles optimized using parameter-shift gradients.
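As a concrete illustration of the angle-refinement step, the sketch below implements the two-term parameter-shift rule on a toy two-qubit circuit in plain NumPy and drives a fixed gate skeleton toward a Bell state. The RY/CNOT skeleton, the plain gradient descent (the paper uses Adam), and all step sizes are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, qubit 1 as target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def prepare(angles):
    """Fixed skeleton (as if proposed by the discrete stage):
    RY(angles[0]) on qubit 0, then CNOT."""
    state = np.zeros(4)
    state[0] = 1.0                                     # start in |00>
    state = np.kron(ry(angles[0]), np.eye(2)) @ state
    return CNOT @ state

# Target: the Bell state (|00> + |11>) / sqrt(2).
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)

def infidelity(angles):
    return 1.0 - np.abs(bell @ prepare(angles)) ** 2

def parameter_shift_grad(f, angles):
    """Exact gradient of f via the two-term parameter-shift rule,
    valid for gates generated by Pauli operators (RX, RY, RZ)."""
    grad = np.zeros_like(angles)
    for i in range(len(angles)):
        plus, minus = angles.copy(), angles.copy()
        plus[i] += np.pi / 2.0
        minus[i] -= np.pi / 2.0
        grad[i] = (f(plus) - f(minus)) / 2.0
    return grad

# Plain gradient descent on the rotation angle (the paper uses Adam);
# the angle converges toward pi/2, which reproduces the Bell state.
angles = np.array([0.1])
for _ in range(100):
    angles = angles - 0.5 * parameter_shift_grad(infidelity, angles)
```

Because the parameter-shift rule evaluates the same circuit at shifted angles, it yields exact gradients from quantities measurable on hardware, unlike finite differences.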

Original Abstract

We extend directed quantum circuit synthesis (DQCS) with reinforcement learning from purely discrete gate selection to parameterized quantum state preparation with continuous single-qubit rotations \(R_x\), \(R_y\), and \(R_z\). We compare two training regimes: a one-stage agent that jointly selects the gate type, the affected qubit(s), and the rotation angle; and a two-stage variant that first proposes a discrete circuit and subsequently optimizes the rotation angles with Adam using parameter-shift gradients. Using Gymnasium and PennyLane, we evaluate Proximal Policy Optimization (PPO) and Advantage Actor-Critic (A2C) on systems comprising two to ten qubits and on targets of increasing complexity with \(\lambda\) ranging from one to five. Whereas A2C does not learn effective policies in this setting, PPO succeeds under stable hyperparameters (one-stage: learning rate approximately \(5\times10^{-4}\) with a self-fidelity-error threshold of 0.01; two-stage: learning rate approximately \(10^{-4}\)). Both approaches reliably reconstruct computational basis states (between 83% and 99% success) and Bell states (between 61% and 77% success). However, scalability saturates for \(\lambda\) of approximately three to four and does not extend to ten-qubit targets even at \(\lambda=2\). The two-stage method offers only marginal accuracy gains while requiring around three times the runtime. For practicality under a fixed compute budget, we therefore recommend the one-stage PPO policy, provide explicit synthesized circuits, and contrast with a classical variational baseline to outline avenues for improved scalability.
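The one-stage regime's composite action (gate type, affected qubit, rotation angle) can be pictured with a small decoding sketch. The flat-vector encoding, the gate list, and the tanh angle squashing below are assumptions made for illustration, not the paper's actual interface.

```python
import numpy as np

# Hypothetical gate vocabulary and system size for illustration.
GATES = ["RX", "RY", "RZ", "CNOT"]
N_QUBITS = 3

def decode_action(action):
    """Decode a flat policy output into (gate, qubit, angle).

    Assumed layout: len(GATES) gate logits, then N_QUBITS qubit logits,
    then one raw angle value squashed into [-pi, pi].
    """
    gate_logits = action[: len(GATES)]
    qubit_logits = action[len(GATES) : len(GATES) + N_QUBITS]
    gate = GATES[int(np.argmax(gate_logits))]
    qubit = int(np.argmax(qubit_logits))
    angle = float(np.tanh(action[-1]) * np.pi)  # continuous rotation angle
    return gate, qubit, angle

# Example: 4 gate logits + 3 qubit logits + 1 raw angle.
gate, qubit, angle = decode_action(
    np.array([0.2, 1.5, -0.3, 0.0,   # gate logits -> "RY"
              0.1, 0.9, -1.0,        # qubit logits -> qubit 1
              0.5])                  # raw angle
)
```

In the two-stage variant, only the discrete part of this action would be learned by the policy, with the angles left to the subsequent gradient-based refinement.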

Tags

强化学习 量子计算 量子态制备 Proximal Policy Optimization Advantage Actor-Critic

arXiv Categories

cs.LG quant-ph