Wide-Sense Stationary Policy Optimization with Bellman Residual on Video Games
Gong C (龚晨)1,2; He Q (何强)1,2; Bai YP (白云鹏)1,2; Hou XW (侯新文)2; Fan GL (范国梁)2; Liu Y (刘禹)2
2021-06
Conference Date | 05-09 July 2021
Conference Venue | Shenzhen, China
Keywords | Video Game; Reinforcement Learning; Quantile Regression; Bellman Residual; Wasserstein Distance
DOI | 10.1109/ICME51207.2021.9428293
Abstract | Deep Reinforcement Learning (DRL) is increasingly applied to video games, but it often suffers from unstable training and low sampling efficiency. Under the assumption that the Bellman residual follows a stationary random process once the training process converges, we propose the Wide-sense Stationary Policy Optimization (WSPO) framework, which leverages the Wasserstein distance between the Bellman Residual Distributions (BRDs) at two adjacent time steps to stabilize the training stage and improve sampling efficiency. We minimize the Wasserstein distance with quantile regression, so the specific form of the BRD is not needed. Finally, we combine WSPO with the Advantage Actor-Critic (A2C) algorithm and the Deep Deterministic Policy Gradient (DDPG) algorithm. We evaluate WSPO on Atari 2600 video games and continuous control tasks, showing that it matches or outperforms the state-of-the-art algorithms we tested.
Proceedings Publisher | IEEE
Language | English
Content Type | Conference Paper
Source URL | http://ir.ia.ac.cn/handle/173211/48892
Collection | Research Center for Integrated Information Systems, Brain-Machine Fusion and Cognitive Assessment
Corresponding Author | Hou XW (侯新文)
Author Affiliations | 1. University of Chinese Academy of Sciences; 2. Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714) | Gong C, He Q, Bai YP, et al. Wide-Sense Stationary Policy Optimization with Bellman Residual on Video Games[C]. Shenzhen, China, 05-09 July 2021.
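The abstract's key computational step, minimizing a 1-Wasserstein distance via quantile regression without assuming a parametric form for the distribution, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the function name, the Huber smoothing parameter `kappa`, and the fixed quantile fractions are illustrative assumptions in the style of quantile-regression value learning.

```python
import numpy as np

def quantile_regression_loss(theta, samples, kappa=1.0):
    """Quantile (Huber) regression loss.

    Fitting N quantile locations `theta` to empirical `samples`
    minimizes the 1-Wasserstein distance to the sample distribution
    without assuming any parametric form for it.
    """
    N = len(theta)
    # Midpoint quantile fractions tau_i = (2i + 1) / (2N)
    taus = (np.arange(N) + 0.5) / N
    # Pairwise errors: each sample against each quantile estimate, shape (M, N)
    u = samples[:, None] - theta[None, :]
    # Huber smoothing of the absolute error around zero
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    # Asymmetric quantile weights |tau - 1{u < 0}|
    weight = np.abs(taus[None, :] - (u < 0).astype(float))
    return (weight * huber / kappa).mean()
```

Quantile estimates that track the empirical distribution of the residual samples yield a lower loss than estimates far from it, which is what lets this objective stand in for the intractable Wasserstein distance during training.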
Unless otherwise stated, all content in this system is protected by copyright and all rights are reserved.