Abstract
|
Because a multi-agent environment changes dynamically and each agent's decisions also affect the other agents, single-agent deep reinforcement learning algorithms struggle to remain stable in multi-agent settings. To adapt to multi-agent environments, this paper improves the single-agent deep reinforcement learning algorithm Soft Actor-Critic (SAC) under the Centralized Training with Decentralized Execution (CTDE) framework and introduces an inter-agent communication mechanism, yielding the Multi-Agent Soft Actor-Critic (MASAC) algorithm. In MASAC, agents share observations and historical experience, which effectively reduces the impact of environmental non-stationarity on the algorithm. Finally, the performance of MASAC is evaluated experimentally on cooperative and mixed cooperative-competitive tasks; the results show that MASAC is more stable than SAC in multi-agent environments. |
Abstract (English)
|
Because a multi-agent environment changes dynamically, and each agent's decisions affect the other agents, it is difficult for a single-agent deep reinforcement learning algorithm to maintain stability in a multi-agent environment. To adapt to multi-agent environments, this paper uses the Centralized Training with Decentralized Execution (CTDE) framework to improve the single-agent deep reinforcement learning algorithm Soft Actor-Critic (SAC), introducing an agent communication mechanism to build Multi-Agent Soft Actor-Critic (MASAC). In MASAC, agents share observation information and historical experience, which effectively reduces the impact of environmental non-stationarity on the algorithm. Finally, the performance of the MASAC algorithm is analyzed experimentally on cooperative and mixed cooperative-competitive tasks. The results show that MASAC has better stability than SAC in multi-agent environments. |
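The CTDE structure the abstract describes can be sketched minimally: each agent's actor acts on its own local observation only, while a critic used during training scores the concatenated observations and actions of all agents drawn from a shared experience buffer. This is an illustrative numpy sketch of that idea; all class and variable names are hypothetical and this is not the paper's MASAC implementation.

```python
import random
import numpy as np

class SharedReplayBuffer:
    """Joint experience buffer: each entry holds all agents' observations and
    actions at one step, so a centralized critic can train on the full joint
    state (the 'centralized training' half of CTDE)."""
    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.data = []

    def add(self, joint_obs, joint_act, rewards, next_joint_obs):
        if len(self.data) >= self.capacity:
            self.data.pop(0)
        self.data.append((joint_obs, joint_act, rewards, next_joint_obs))

    def sample(self, batch_size):
        return random.sample(self.data, min(batch_size, len(self.data)))

class DecentralizedActor:
    """Execution-time policy: conditions only on the agent's own local
    observation (the 'decentralized execution' half of CTDE)."""
    def __init__(self, obs_dim, act_dim, rng):
        self.w = rng.standard_normal((act_dim, obs_dim)) * 0.1

    def act(self, local_obs):
        # tanh keeps actions bounded, as in SAC's squashed policy
        return np.tanh(self.w @ local_obs)

class CentralizedCritic:
    """Training-time Q-function: scores the concatenation of every agent's
    observation and action, so each value estimate accounts for the other
    agents' behavior and the environment appears stationary again."""
    def __init__(self, joint_obs_dim, joint_act_dim, rng):
        self.w = rng.standard_normal(joint_obs_dim + joint_act_dim) * 0.1

    def q_value(self, joint_obs, joint_act):
        return float(self.w @ np.concatenate([joint_obs, joint_act]))

# Usage: two agents, 3-dim local observations, 2-dim actions.
rng = np.random.default_rng(0)
actors = [DecentralizedActor(3, 2, rng) for _ in range(2)]
critic = CentralizedCritic(joint_obs_dim=6, joint_act_dim=4, rng=rng)
buffer = SharedReplayBuffer()

obs = [rng.standard_normal(3) for _ in range(2)]   # per-agent local views
acts = [a.act(o) for a, o in zip(actors, obs)]     # decentralized execution
joint_obs, joint_act = np.concatenate(obs), np.concatenate(acts)
buffer.add(joint_obs, joint_act, rewards=[0.0, 0.0], next_joint_obs=joint_obs)
q = critic.q_value(joint_obs, joint_act)           # centralized training signal
```

The key design point is the asymmetry: the critic's input dimension grows with the number of agents, but each actor's input does not, so at execution time no inter-agent information is required.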
Source
|
电子学报 (Acta Electronica Sinica)
, 2021, 49(9): 1675-1681 [Core Library]
|
DOI
|
10.12263/DZXB.20200243
|
Keywords
|
Multi-agent environment
;
Centralized training
;
Decentralized execution
;
Multi-agent deep reinforcement learning
|
Address
|
1.
Engineering Research Center of Mine Digitization, Ministry of Education, Xuzhou 221000, Jiangsu, China
2.
School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221000, Jiangsu, China
3.
Ningbo Rail Transit Group Co., Ltd., Ningbo 315000, Zhejiang, China
|
Language
|
Chinese |
Document Type
|
Research article |
ISSN
|
0372-2112 |
Subject
|
Automation and Computer Technology |
Funding
|
National Natural Science Foundation of China
;
Xuzhou Science and Technology Program, Jiangsu Province
|
Record Number
|
CSCD:7077341
|