Edge computing is increasingly used to run machine learning algorithms for IoT (Internet of Things) tasks such as video analytics, compression, anomaly detection, and privacy preservation. At the same time, edge platforms are often resource constrained, which challenges both the training and inference phases of machine learning algorithms.
In this talk, we will discuss Federated Learning (FL), a machine learning paradigm that enables a cluster of decentralized edge devices to collaboratively train a shared machine learning model without exposing users' raw data, thereby preserving client privacy. However, the intensive model-training computation is energy-demanding and poses severe challenges to end devices' battery life. We will discuss a few approaches: first, a training pace controller deployed on the edge devices that actuates the hardware operating frequencies across multiple configurations to achieve energy-efficient federated learning; and second, tackling the straggler problem in FL via the decentralized selection of coresets (representative subsets of a dataset), where our approach creates coresets directly on the edge devices and optimizes the coreset clusters to reduce FL training time and, in turn, energy and other resource usage.
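To make the coreset idea concrete, the following is a minimal sketch (not the talk's actual method) of how a client could build a weighted coreset locally: cluster its data with a lightweight k-means, then keep the real sample nearest each centroid, weighted by cluster size. The function name `select_coreset` and all parameters are illustrative assumptions.

```python
import numpy as np

def select_coreset(X, k, iters=20, seed=0):
    """Illustrative coreset selection: k-means over local data X,
    then keep the real sample nearest each centroid, weighted by
    how many points that sample stands in for."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids (keep the old one if a cluster empties).
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    # Final assignment against the converged centroids.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    idx = dists.argmin(axis=0)                   # nearest real sample per cluster
    weights = np.bincount(labels, minlength=k)   # points each sample represents
    return X[idx], weights
```

Training on the small weighted coreset instead of the full local dataset is what shortens each round on slow clients; the weights let the gradient updates still approximate the full-data objective.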
We will conclude by discussing further challenges of machine learning on edge devices.