Reinforcement Learning for Speculative Trading under Exploratory Framework
AI Summary
This paper studies a speculative trading problem under the exploratory reinforcement learning framework, designs an RL algorithm, and applies it to pairs trading.
Main Contributions
- Derived a system of exploratory HJB equations with closed-form Gibbs distributions as the optimal policies
- Proved convergence of the RL objective function to the value function of the original problem
- Designed an RL algorithm and applied it to pairs trading
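The closed-form Gibbs distribution arises from entropy regularization: among densities over the admissible intensities, the entropy-regularized objective is maximized by a softmax of the (state-dependent) reward term. A minimal sketch on a discretized intensity grid, with a toy quadratic "advantage" standing in for the quantity the exploratory HJB system would supply (names and parameters are illustrative, not from the paper):

```python
import numpy as np

def gibbs_policy(advantage, temperature):
    """Closed-form Gibbs (softmax) density over a discretized intensity grid.

    `advantage` plays the role of the state-dependent term inside the
    exponent; `temperature` is the entropy-regularization weight.
    """
    logits = advantage / temperature
    logits -= logits.max()              # shift for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

u_grid = np.linspace(0.0, 5.0, 51)      # bounded, non-randomized intensity controls
adv = -(u_grid - 2.0) ** 2              # toy advantage, peaked at u = 2

for temp in (0.1, 1.0, 10.0):
    p = gibbs_policy(adv, temp)
    entropy = -np.sum(p * np.log(p + 1e-12))
    print(f"temperature={temp}: mode at u={u_grid[np.argmax(p)]:.1f}, entropy={entropy:.2f}")
```

A higher temperature spreads the density out (more exploration, higher entropy); as the temperature goes to zero the policy concentrates on the greedy intensity.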
Methodology
Within the exploratory reinforcement learning framework, the problem is formulated as a sequential optimal stopping problem; the stopping times are relaxed to jump times of Cox processes, and the objective is regularized with Shannon's differential entropy.
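To make the Cox-process relaxation concrete: a stopping time becomes the first jump of a counting process whose intensity is the (bounded) control. A standard way to simulate such a jump time is thinning against a dominating Poisson process; the sketch below uses a toy deterministic intensity in place of the paper's state-dependent control:

```python
import numpy as np

def first_jump_time(intensity, lam_max, horizon, rng):
    """First jump of a Cox/inhomogeneous Poisson process via thinning.

    Requires intensity(t) <= lam_max for all t. Returns the jump time,
    or None if no jump occurs before `horizon`.
    """
    t = 0.0
    while True:
        # Candidate arrival from the dominating Poisson(lam_max) process.
        t += rng.exponential(1.0 / lam_max)
        if t >= horizon:
            return None
        # Accept the candidate with probability intensity(t) / lam_max.
        if rng.uniform() < intensity(t) / lam_max:
            return t

rng = np.random.default_rng(0)
# Toy bounded intensity standing in for the randomized entry/exit control.
tau = first_jump_time(lambda t: 1.0 + np.sin(t) ** 2, lam_max=2.0, horizon=10.0, rng=rng)
```

Randomizing the intensity (drawing it from the Gibbs policy) then yields the exploratory, randomized stopping rule.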
Original Abstract
We study a speculative trading problem within the exploratory reinforcement learning (RL) framework of Wang et al. [2020]. The problem is formulated as a sequential optimal stopping problem over entry and exit times under a general utility function and price process. We first consider a relaxed version of the problem in which the stopping times are modeled by the jump times of Cox processes driven by bounded, non-randomized intensity controls. Under the exploratory formulation, the agent's randomized control is characterized via the probability measure over the jump intensities, and its objective function is regularized by Shannon's differential entropy. This yields a system of exploratory HJB equations with closed-form Gibbs distributions as the optimal policies. Error estimates and convergence of the RL objective to the value function of the original problem are established. Finally, an RL algorithm is designed, and its implementation is showcased in a pairs-trading application.
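In the pairs-trading application, the traded object is the spread between two cointegrated assets, commonly modeled as a mean-reverting Ornstein-Uhlenbeck process. A minimal sketch of the setting, with a simple threshold entry/exit rule as the kind of benchmark a learned entry/exit policy would be compared against (all parameters, thresholds, and the rule itself are illustrative, not the paper's method):

```python
import numpy as np

def simulate_ou(x0, kappa, mu, sigma, dt, n, rng):
    """Euler-Maruyama path of an Ornstein-Uhlenbeck spread:
    dX_t = kappa * (mu - X_t) dt + sigma dW_t."""
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + kappa * (mu - x[i]) * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

rng = np.random.default_rng(42)
spread = simulate_ou(x0=0.0, kappa=2.0, mu=0.0, sigma=0.5, dt=0.01, n=1000, rng=rng)

# Naive benchmark rule: enter long when the spread falls below -c,
# exit when it reverts above its mean.
c = 0.3
entry = next((i for i, s in enumerate(spread) if s < -c), None)
exit_ = (next((i for i in range(entry + 1, len(spread)) if spread[i] > 0.0), None)
         if entry is not None else None)
```

The paper's contribution is to replace such fixed thresholds with randomized entry/exit times learned via the entropy-regularized RL algorithm.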