Machine Learning and data-driven approaches have recently received considerable attention as key enablers for next-generation intelligent networks. Currently, most existing learning solutions for wireless networks centralize the training and inference processes by uploading data generated at edge devices to data centers. However, such a centralized paradigm may lead to privacy leakage, violate the latency constraints of mobile applications, or be infeasible given the limited bandwidth or power of edge devices. To address these issues, distributing Machine Learning at the network edge provides a promising solution, where edge devices collaboratively train a shared model using mobile data generated in real time. Avoiding data uploads to a central server not only helps preserve privacy but also reduces network traffic congestion and communication cost. Federated Learning (FL) is one of the most important distributed learning algorithms. In particular, FL enables devices to train a shared Machine Learning model while keeping their data local. However, training FL models requires communication between wireless devices and edge servers over wireless links. Therefore, wireless impairments such as noise, interference, and uncertainty in wireless channel states significantly affect the training process and performance of FL. For example, transmission delay can substantially prolong the convergence time of FL algorithms. Consequently, it is necessary to optimize wireless network performance for the implementation of FL algorithms.
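The collaborative training process described above can be illustrated with a minimal FedAvg-style sketch: each client runs a few gradient steps on its private data, and a server aggregates only the resulting model weights, never the raw data. The linear-regression model, client sizes, and hyperparameters below are illustrative assumptions, not a prescription from the text.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps on its
    private data (linear regression with squared loss, for illustration)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One communication round: each client trains locally, then the server
    averages the returned weights, weighted by local dataset size."""
    sizes = [len(y) for _, y in clients]
    local_models = [local_update(global_w, X, y) for X, y in clients]
    return np.average(local_models, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three clients, each holding private data that is never sent to the server.
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    clients.append((X, y))

# The server only ever sees model weights, one round at a time.
w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches true_w as rounds accumulate
```

In a real wireless deployment, each `federated_round` corresponds to an uplink/downlink exchange over the wireless channel, which is exactly where the noise, interference, and delay effects discussed above enter the training loop.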
On the other hand, FL can also be used to solve wireless communication problems and optimize network performance. For example, FL endows edge devices with capabilities such as user behavior prediction, user identification, and wireless environment analysis. Moreover, federated reinforcement learning leverages distributed computation power and data to solve complex convex and nonconvex optimization problems that arise in various use cases, such as network control, user clustering, resource management, and interference alignment. In addition, FL traditionally assumes that edge devices will unconditionally participate in training tasks when invited, which is impractical in reality given the resource costs incurred by model training and devices' varying willingness to participate. Therefore, building incentive mechanisms is indispensable for FL networks.
Communication Efficient Federated Learning for Wireless Networks