Energy-Efficient Federated Edge Learning with Joint Communication and Computation Design
Abstract
This paper studies a federated edge learning system, in which an edge server coordinates a set of edge devices to train a shared machine learning (ML) model based on their locally distributed data samples. During the distributed training, we exploit joint communication and computation design to improve the system energy efficiency, in which both the communication resource allocation for global ML-parameters aggregation and the computation resource allocation for locally updating ML-parameters are jointly optimized. In particular, we consider two transmission protocols for edge devices to upload ML-parameters to the edge server, based on non-orthogonal multiple access (NOMA) and time division multiple access (TDMA), respectively. Under both protocols, we minimize the total energy consumption at all edge devices over a particular finite training duration subject to a given training accuracy, by jointly optimizing the transmission power and rates at edge devices for uploading ML-parameters and their central processing unit (CPU) frequencies for local update. We propose efficient algorithms to solve the formulated energy minimization problems by using techniques from convex optimization. Numerical results show that, compared to other benchmark schemes, our proposed joint communication and computation design can significantly improve the energy efficiency of the federated edge learning system by properly balancing the energy tradeoff between communication and computation.
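The tradeoff the abstract describes can be illustrated numerically: stretching local computation over more time lets the CPU run at a lower frequency (dynamic CPU energy scales with frequency squared per cycle), while stretching the upload over more time lets the device transmit at lower power (transmit power grows exponentially with the required rate, by the Shannon capacity formula). The sketch below is a minimal single-device, dedicated-slot (TDMA-style) illustration of that tradeoff; all parameter values and the grid-search approach are illustrative assumptions, not the paper's actual system model or algorithm.

```python
# Illustrative sketch of the communication/computation energy tradeoff.
# All parameter values are hypothetical assumptions, not from the paper.
KAPPA = 1e-28   # effective switched-capacitance coefficient of the CPU
CYCLES = 1e9    # CPU cycles required for one local ML-parameter update
BITS = 1e6      # size of the uploaded ML-parameters (bits)
BW = 1e6        # channel bandwidth (Hz)
GAIN = 1e-7     # channel power gain between device and edge server
N0 = 1e-20      # noise power spectral density (W/Hz)
T = 1.0         # per-round deadline (s)

def round_energy(t_com):
    """Total device energy for one training round when t_com seconds
    are used for uploading and T - t_com for local computation."""
    t_cmp = T - t_com
    f = CYCLES / t_cmp               # lowest CPU frequency meeting the deadline
    e_cmp = KAPPA * CYCLES * f ** 2  # dynamic CPU energy: kappa * C * f^2
    rate = BITS / t_com              # required upload rate (bit/s)
    # Transmit power obtained by inverting the Shannon capacity formula.
    power = (N0 * BW / GAIN) * (2 ** (rate / BW) - 1)
    return e_cmp + power * t_com

# One-dimensional search over the time split: slower computation and
# slower transmission are both cheaper, so an interior optimum exists.
splits = [t / 1000 for t in range(1, 1000)]
best_t = min(splits, key=round_energy)
```

Because both energy terms are convex in the time split, the paper's actual formulations admit efficient convex-optimization algorithms rather than the brute-force search used here for illustration.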
Source
Journal of Communications and Information Networks, 2021, 6(2): 110-124 [CSCD Core Library]

DOI
10.23919/JCIN.2021.9475121
Keywords
federated edge learning; energy efficiency; joint communication and computation design; resource allocation; non-orthogonal multiple access (NOMA); optimization
Affiliations
1. School of Information Engineering, Guangdong University of Technology, Guangzhou, 510006
2. School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, 518172
Language
English

Document Type
Research article

ISSN
2096-1081

Subject
Electronic Technology, Communication Technology

Funding
National Natural Science Foundation of China; supported in part by the National Key R&D Program of China; Guangdong Province Key Area R&D Program

Accession Number
CSCD:6990516
References (50 total; first 20 listed)

1. LeCun Y. Deep learning. Nature, 2015, 521(7553): 436-444. (Cited 2941 times)
2. Rodrigues T K. Machine learning meets computation and communication control in evolving edge and cloud: challenges and future perspective. IEEE Communications Surveys and Tutorials, 2019, 22(1): 38-67. (Cited 2 times)
3. Mao Y. A survey on mobile edge computing: the communication perspective. IEEE Communications Surveys and Tutorials, 2017, 19(4): 2322-2358. (Cited 88 times)
4. Li E. Edge AI: on-demand accelerating deep neural network inference via edge computing. IEEE Transactions on Wireless Communications, 2019, 19(1): 447-457. (Cited 3 times)
5. Wang X. In-edge AI: intelligentizing mobile edge computing, caching and communication by federated learning. IEEE Network, 2019, 33(5): 156-165. (Cited 17 times)
6. Zhu G. Toward an intelligent edge: wireless communication meets machine learning. IEEE Communications Magazine, 2020, 58(1): 19-25. (Cited 7 times)
7. Wen D. An overview of data-importance aware radio resource management for edge machine learning. Journal of Communications and Information Networks, 2019, 4(4): 1-14. (Cited 2 times)
8. Letaief K B. The roadmap to 6G: AI empowered wireless networks. IEEE Communications Magazine, 2019, 57(8): 84-90. (Cited 48 times)
9. Konecny J. Federated learning: strategies for improving communication efficiency. arXiv:1610.05492, 2016. (Cited 20 times)
10. Yang Q. Federated machine learning: concept and applications. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): 1-19. (Cited 135 times)
11. McMahan B. Communication-efficient learning of deep networks from decentralized data. The 20th International Conference on Artificial Intelligence and Statistics, 2017: 1273-1282. (Cited 7 times)
12. Hard A. Federated learning for mobile keyboard prediction. arXiv:1811.03604, 2018. (Cited 5 times)
13. Li Q. A survey on federated learning systems: vision, hype and reality for data privacy and protection. arXiv:1907.09693, 2019. (Cited 3 times)
14. Lin Y. Deep gradient compression: reducing the communication bandwidth for distributed training. arXiv:1712.01887, 2017. (Cited 8 times)
15. Sattler F. Robust and communication-efficient federated learning from non-IID data. IEEE Transactions on Neural Networks and Learning Systems, 2019, 31(9): 3400-3413. (Cited 44 times)
16. Wang S. Adaptive federated learning in resource constrained edge computing systems. IEEE Journal on Selected Areas in Communications, 2019, 37(6): 1205-1221. (Cited 24 times)
17. Nishio T. Client selection for federated learning with heterogeneous resources in mobile edge. 2019 IEEE International Conference on Communications, 2019. (Cited 1 time)
18. Zhu G. Broadband analog aggregation for low-latency federated edge learning. IEEE Transactions on Wireless Communications, 2019, 19(1): 491-506. (Cited 3 times)
19. Yang K. Federated learning via over-the-air computation. IEEE Transactions on Wireless Communications, 2020, 19(3): 2022-2035. (Cited 26 times)
20. Amiri M M. Machine learning at the wireless edge: distributed stochastic gradient descent over-the-air. IEEE Transactions on Signal Processing, 2020, 68: 2155-2169. (Cited 13 times)