
Energy-Efficient Federated Edge Learning with Joint Communication and Computation Design


Mo Xiaopeng 1, Xu Jie 2*
Abstract: This paper studies a federated edge learning system, in which an edge server coordinates a set of edge devices to train a shared machine learning (ML) model based on their locally distributed data samples. During the distributed training, we exploit the joint communication and computation design to improve the system energy efficiency, in which both the communication resource allocation for global ML-parameter aggregation and the computation resource allocation for locally updating ML-parameters are jointly optimized. In particular, we consider two transmission protocols for edge devices to upload ML-parameters to the edge server, based on non-orthogonal multiple access (NOMA) and time division multiple access (TDMA), respectively. Under both protocols, we minimize the total energy consumption at all edge devices over a particular finite training duration subject to a given training accuracy, by jointly optimizing the transmission power and rates at the edge devices for uploading ML-parameters and their central processing unit (CPU) frequencies for local updates. We propose efficient algorithms to solve the formulated energy minimization problems by using techniques from convex optimization. Numerical results show that, compared to other benchmark schemes, our proposed joint communication and computation design can significantly improve the energy efficiency of the federated edge learning system by properly balancing the energy tradeoff between communication and computation.
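For concreteness, the following is a minimal sketch of the TDMA-case energy minimization described in the abstract, written under the communication and computation energy models commonly adopted in the federated edge learning literature; the notation (p_k, tau_k, f_k, kappa, C_k, D_k, h_k, B, N_0, s, T, f_k^max) is illustrative and not necessarily the paper's own.

\begin{align*}
\min_{\{p_k,\,\tau_k,\,f_k\}} \quad & \sum_{k=1}^{K}\Big(\underbrace{p_k\,\tau_k}_{\text{ML-parameter upload}} \;+\; \underbrace{\kappa\,C_k D_k f_k^{2}}_{\text{local update (CPU)}}\Big) \\
\text{s.t.} \quad & \tau_k\,B\log_2\!\Big(1+\frac{p_k h_k}{N_0 B}\Big) \;\ge\; s, & \forall k \quad \text{(upload $s$ bits of ML-parameters per round)} \\
& \frac{C_k D_k}{f_k} \;+\; \sum_{j=1}^{K}\tau_j \;\le\; T, & \forall k \quad \text{(finite per-round training duration)} \\
& p_k \ge 0,\quad \tau_k \ge 0,\quad 0 < f_k \le f_k^{\max}, & \forall k
\end{align*}

Here p_k and tau_k are device k's transmit power and TDMA slot length, f_k its CPU frequency, kappa the effective switched capacitance, C_k the CPU cycles needed per data sample, D_k its local data size, h_k its channel gain, B the bandwidth, and N_0 the noise power spectral density. After a change of variables (e.g., optimizing the upload energy E_k = p_k tau_k jointly with tau_k), the rate constraint becomes jointly convex, which is consistent with the abstract's statement that the problems are solved with convex optimization techniques; in the NOMA case, the per-device rate constraints would be replaced by the corresponding successive-interference-cancellation sum-rate constraints.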
Source: Journal of Communications and Information Networks, 2021, 6(2): 110-124 [Core Collection]
DOI: 10.23919/JCIN.2021.9475121
Keywords: federated edge learning; energy efficiency; joint communication and computation design; resource allocation; non-orthogonal multiple access (NOMA); optimization
Addresses:

1. School of Information Engineering, Guangdong University of Technology, Guangzhou, 510006  

2. School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, 518172

Language: English
Document type: Research article
ISSN: 2096-1081
Subject: Electronic technology; communication technology
Funding: National Natural Science Foundation of China; National Key R&D Program of China; Guangdong Province Key Area R&D Program
Accession number: CSCD:6990516

Citing papers: 2

1. Zhang Xueqing. A survey of federated learning for edge intelligence. Journal of Computer Research and Development, 2023, 60(6): 1276-1295
2. Ma Qianpiao. Node grouping and time-division scheduling strategy for asynchronous federated learning in heterogeneous edge computing environments. Journal on Communications, 2023, 44(11): 2023196
