基于SAC的多智能体深度强化学习算法
Deep Reinforcement Learning Algorithm of Multi-agent Based on SAC
Abstract
Because the multi-agent environment changes dynamically and each agent's decisions also affect the other agents, it is difficult for single-agent deep reinforcement learning algorithms to remain stable in multi-agent environments. To adapt to such environments, this paper improves the single-agent deep reinforcement learning algorithm Soft Actor-Critic (SAC) under the Centralized Training with Decentralized Execution (CTDE) framework and introduces an inter-agent communication mechanism, yielding the Multi-Agent Soft Actor-Critic (MASAC) algorithm. In MASAC, agents share observations and historical experience, which effectively reduces the impact of environmental non-stationarity on the algorithm. Finally, the performance of MASAC is analyzed experimentally on cooperative and mixed cooperative-competitive tasks; the results show that MASAC is more stable than SAC in multi-agent environments.
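The abstract describes MASAC as SAC lifted into the CTDE setting: during training the agents share observations and experience, while at execution time each agent acts only on its own local observation. For reference, the maximum-entropy objective that SAC (reference 12) optimizes is

J(π) = Σ_t E_{(s_t, a_t) ~ ρ_π} [ r(s_t, a_t) + α · H(π(·|s_t)) ]

where H is the policy entropy and α the temperature. The sketch below is not the authors' implementation of MASAC; it is a minimal structural illustration of the centralized-critic / decentralized-actor split that CTDE implies, and all network sizes, dimensions, and names are illustrative assumptions (the entropy term and the optimization loop are omitted).

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Decentralized policy: maps one agent's local observation to its action."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),
        )

    def forward(self, obs):
        return self.net(obs)

class CentralizedCritic(nn.Module):
    """Centralized Q-function used only during training: it scores the joint
    observations and actions of all agents, which is how CTDE sidesteps the
    non-stationarity seen by a purely local critic."""
    def __init__(self, n_agents, obs_dim, act_dim, hidden=64):
        super().__init__()
        joint_dim = n_agents * (obs_dim + act_dim)
        self.net = nn.Sequential(
            nn.Linear(joint_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

if __name__ == "__main__":
    # Toy sizes, chosen only for illustration.
    n_agents, obs_dim, act_dim, batch = 3, 8, 2, 32
    actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
    critic = CentralizedCritic(n_agents, obs_dim, act_dim)

    # Decentralized execution: each agent acts from its own observation only.
    obs = torch.randn(batch, n_agents, obs_dim)
    acts = torch.stack([actors[i](obs[:, i]) for i in range(n_agents)], dim=1)

    # Centralized training: the critic sees the shared (concatenated)
    # observations and actions of all agents.
    q = critic(obs.reshape(batch, -1), acts.reshape(batch, -1))
    print(q.shape)  # torch.Size([32, 1])
```

In a full MASAC-style update, each agent would maintain such a centralized critic and add the entropy bonus α · H(π_i(·|o_i)) to its actor objective, as in single-agent SAC.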
Source: Acta Electronica Sinica (电子学报), 2021, 49(9): 1675-1681 [CSCD core collection]
DOI: 10.12263/DZXB.20200243
Keywords: multi-agent environment; centralized training; decentralized execution; multi-agent deep reinforcement learning
Affiliations:
1. Engineering Research Center of Mine Digitization, Ministry of Education, Xuzhou, Jiangsu 221000
2. School of Computer Science and Technology, China University of Mining and Technology (Xuzhou), Xuzhou, Jiangsu 221000
3. Ningbo Rail Transit Group Co., Ltd., Ningbo, Zhejiang 315000
Language: Chinese
Document type: Research article
ISSN: 0372-2112
Subject: Automation technology; computer technology
Funding: National Natural Science Foundation of China; Science and Technology Program of Xuzhou City, Jiangsu Province
Accession number: CSCD:7077341
References (16 total)
1. Silver D. Mastering the game of Go with deep neural networks and tree search. Nature, 2016, 529(7587): 484-489. (Cited in CSCD: 823)
2. Silver D. Mastering the game of Go without human knowledge. Nature, 2017, 550(7676): 354-359. (Cited in CSCD: 458)
3. Zhou P. A survey of cross-modality medical image prediction. Acta Electronica Sinica (电子学报), 2019, 47(1): 220-226. (Cited in CSCD: 7)
4. Lowe R. On the pitfalls of measuring emergent communication. Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, 2019: 693-701. (Cited in CSCD: 2)
5. Wang X. Video captioning via hierarchical reinforcement learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 4213-4222. (Cited in CSCD: 3)
6. Zheng X H. Action recognition based on deep learning and intelligent planning. Acta Electronica Sinica (电子学报), 2019, 47(8): 1661-1668. (Cited in CSCD: 6)
7. Schulman J. Trust region policy optimization. International Conference on Machine Learning, 2015: 1889-1897. (Cited in CSCD: 53)
8. Wen J. Abnormal event detection based on deep learning. Acta Electronica Sinica (电子学报), 2020, 48(2): 308-313. (Cited in CSCD: 8)
9. Abdallah S. Addressing environment non-stationarity by repeating Q-learning updates. The Journal of Machine Learning Research, 2016, 17(1): 1582-1612. (Cited in CSCD: 4)
10. Foerster J N. Counterfactual multi-agent policy gradients. Thirty-Second AAAI Conference on Artificial Intelligence, 2018: 2974-2982. (Cited in CSCD: 1)
11. Lowe R. Multi-agent actor-critic for mixed cooperative-competitive environments. Advances in Neural Information Processing Systems, 2017: 6379-6390. (Cited in CSCD: 26)
12. Haarnoja T. Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor. International Conference on Machine Learning, 2018: 1856-1865. (Cited in CSCD: 3)
13. Haarnoja T. Reinforcement learning with deep energy-based policies. Proceedings of the 34th International Conference on Machine Learning, 2017: 1352-1361. (Cited in CSCD: 12)
14. Das A. Learning cooperative visual dialog agents with deep reinforcement learning. Proceedings of the IEEE International Conference on Computer Vision, 2017: 2951-2960. (Cited in CSCD: 2)
15. Cao Y. Application of formal methods in train operation control systems. Journal of Traffic and Transportation Engineering (交通运输工程学报), 2010, 10(1): 112-126. (Cited in CSCD: 13)
16. Wu S Q. Right-of-way configuration and signal system selection for trams. China Railway (中国铁路), 2014(8): 97-99. (Cited in CSCD: 1)