Federated and Tiny Machine Learning for Edge Computing in Industrial IoT

1. Background
By incorporating multiple emerging technologies, including the Internet of Things (IoT), 5G, cloud computing and Artificial Intelligence (AI), Industry 4.0 enhances the efficiency of the entire manufacturing and operational process [1]. By leveraging deep learning-based technologies, industrial artificial intelligence (IAI) has been applied to solve various challenging industrial problems in Industry 4.0. Traditional cloud-based Machine Learning (ML) approaches require data to be centralized in a cloud server or data center, which leads to critical issues of unacceptable latency and communication inefficiency. To address this, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. AI is showing its strengths in almost every industry and is evolving from centralized AI to distributed AI. However, running ML at the edge of the embedded IoT world remains difficult due to limited computing and storage resources. For instance, the ABB Ability™ Smart Sensor [3] converts traditional motors, pumps and mounted bearings into smart, wirelessly connected devices. Integrating AI with such smart sensors can help make fast local decisions that reduce maintenance costs and improve quality of service.

2. Problem and motivation
Today’s AI still faces two major challenges. The first is that in most industries data is limited and exists in the form of isolated islands. The current public interest in AI is partly driven by the availability of Big Data: AlphaGo in 2016 used roughly 300,000 games as training data to achieve its results. In most industries, however, data remains siloed: due to industry competition, privacy and security concerns, and complicated administrative procedures, even data integration between different departments of the same company faces heavy resistance. The second is the strengthening of data privacy and security requirements. For privacy reasons, traditional centralized training may be unsuitable for sensitive data-driven industrial scenarios such as the smart grid, healthcare and autonomous driving. The recent Facebook data breach caused widespread protests [4], and states across the world are strengthening laws that protect data security and privacy. One example is the General Data Protection Regulation (GDPR) [5], enforced by the European Union since May 25, 2018.

Recently, federated learning (FL) [8] has received widespread attention because it enables participants to collaboratively learn a shared model without revealing their local data. The concept was proposed by Google in 2016 [6]: the main idea is to build machine learning models from data sets distributed across multiple devices while preventing data leakage. Most federated learning techniques, however, are used in consumer products rather than in the Industrial IoT. Tiny machine learning (TinyML) [7] is broadly defined as a fast-growing field of machine learning technologies and applications, including hardware (dedicated integrated circuits), algorithms and software, capable of performing on-device analytics of sensor data (vision, audio, IMU, biomedical, etc.) at extremely low power, typically in the mW range and below, thereby enabling a variety of always-on use cases and targeting battery-operated devices. TinyML has become popular in the past two years as a way to deploy machine learning models on embedded IoT devices. Ericsson has proposed TinyML as-a-Service [9], and several industrial companies are also engaged in TinyML, such as SensiML [10] and OctoML [11]. However, TinyML is still in its infancy.
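
To make the federated learning idea above concrete, the following is a minimal sketch of a FedAvg-style training round [6, 8] in plain NumPy: each simulated client fits a linear model on its private data shard, and only the resulting weights are sent for aggregation. The toy model, the function names and the hyperparameters are illustrative assumptions, not part of any specific FL framework.

```python
import numpy as np

def local_update(w_global, X, y, lr=0.1, epochs=5):
    """Client-side step: refine the shared linear model on local data
    (mean-squared-error loss) and return only the updated weights."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of the MSE loss
        w = w - lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side step: aggregate client models, weighted by local data size."""
    total = sum(client_sizes)
    return sum((n / total) * w for n, w in zip(client_sizes, client_weights))

# Toy setup: three simulated clients, each holding a private data shard.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for n in (40, 60, 100):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):                        # communication rounds
    local_weights = [local_update(w_global, X, y) for X, y in clients]
    w_global = federated_average(local_weights, [len(y) for _, y in clients])
print(w_global)                            # close to w_true; no raw data was shared
```

In a real Industrial IoT deployment the same pattern would run over the network with an actual deep model on each device, secure communication and, ideally, secure aggregation on the server side.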

This thesis proposal puts forward Federated Tiny Machine Learning (FTML), which integrates federated learning and TinyML to address the following issues:
1) Deploying federated machine learning models on embedded IoT devices with limited computing and storage resources.
2) Training machine learning models in a distributed way while keeping the data on the devices to protect user privacy.
3) Fusing the information extracted from local data at a common site by transporting this information across edge devices, departments and organizations.

3. Federated Tiny Machine Learning in Industrial IoT
This thesis proposal aims to answer the questions listed in Section 2 for the Industrial IoT by exploring theories and techniques from federated learning, TinyML and edge/fog computing.

Figure 1. Federated Tiny Machine Learning in Industrial IoT

Figure 1 shows the architecture combining federated learning and TinyML in Industrial IoT. The architecture consists of three layers: the end-device layer, the edge layer and the cloud layer. Federated learning models are deployed across all three layers. However, due to the limited resources of end devices, such as ABB smart sensors [3], TinyML models are deployed in the end-device layer.

This research proposal uses a systems approach to address the research questions, combining theoretical and experimental investigations with the following steps:

1. Establish a test bed consisting of the three layers, using sensor devices and platforms, as shown in Figure 1.
2. Explore federated learning in Industrial IoT. As shown in Figure 1, we consider a federated learning framework spanning end devices, edge devices and the cloud. In FL, end devices use their local data to train the ML model requested by the server, and then send model updates, rather than raw data, to the server for aggregation. FL can serve as an enabling technology in mobile edge networks, since it enables both collaborative training of an ML model and the use of deep learning for mobile edge network optimization. However, a large-scale and complex mobile edge network involves heterogeneous devices with varying constraints. This research proposal will therefore also address the challenges of communication cost, resource allocation, and privacy and security when implementing FL at scale; a sketch of one common technique for reducing communication cost follows after this list.
3. Deploy machine learning capability in smart sensors with TinyML. There are several methods to reduce machine learning model size for TinyML, such as parameter quantization and pruning [12], filter compression and matrix factorization [13], and neural architecture search [14]. Which techniques to apply depends on the sensor devices and the application needs; a minimal quantization example is given below.
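
As a concrete illustration of the communication-cost challenge in step 2, the sketch below shows top-k sparsification of a model update, a compression idea commonly discussed in the FL literature. It is an illustrative assumption here, not a method prescribed by this proposal: each client transmits only the largest-magnitude entries of its update, and the server reconstructs a (sparse) dense update.

```python
import numpy as np

def sparsify_update(delta, k_fraction=0.01):
    """Keep only the top-k largest-magnitude entries of a model update and
    transmit (indices, values, shape) instead of the dense array."""
    flat = delta.ravel()
    k = max(1, int(k_fraction * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the k largest |values|
    return idx, flat[idx], delta.shape

def densify_update(idx, values, shape):
    """Server side: rebuild a dense update from the compressed form."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)

# Example: a 10,000-parameter update compressed to 1% of its entries.
delta = np.random.default_rng(1).normal(size=(100, 100))
idx, values, shape = sparsify_update(delta)
recovered = densify_update(idx, values, shape)
print(idx.size, "of", delta.size, "entries transmitted")
```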
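
For step 3, the following is a minimal sketch of post-training quantization with TensorFlow Lite, the toolchain described in [7]. The tiny Keras model, its input size and the file name are placeholders; a real sensor model would be trained on the device's actual data and then executed with TensorFlow Lite for Microcontrollers on the smart sensor.

```python
import tensorflow as tf

# Placeholder model standing in for a sensor-side classifier
# (e.g. vibration features -> machine-health classes).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(4, activation="softmax"),
])
# ... training on local sensor data would happen here ...

# Post-training quantization: 8-bit weights shrink the model so it can
# fit the flash/RAM budget of a microcontroller-class smart sensor.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("sensor_model.tflite", "wb") as f:
    f.write(tflite_model)
print(len(tflite_model), "bytes after quantization")
```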

4. Required student background
Interest in machine learning;
Experience with tools such as Linux and Raspberry Pi;
Good coding skills, e.g. in Python.

References:
[1] Zhou, K., Liu, T. and Zhou, L., 2015, August. Industry 4.0: Towards future industrial opportunities and challenges. In 2015 12th International conference on fuzzy systems and knowledge discovery (FSKD) (pp. 2147-2152). IEEE.
[2]https://new.abb.com/grid/events/cigre-2016/microgrids#:~:text=Microgrid,and%20research%20and%20industrial%20campuses.
[3] https://new.abb.com/motors-generators/service/advanced-services/smart-sensor
[4] Wikipedia. 2018. https://en.wikipedia.org/wiki/Facebook-Cambridge_Analytica_data_scandal.
[5] EU. 2016. Regulation (EU) 2016/679 of the European Parliament and of the Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT (2016).
[6] Jakub Konečný, H. Brendan McMahan, Daniel Ramage, and Peter Richtárik. 2016. Federated Optimization: Distributed Machine Learning for On-Device Intelligence. CoRR abs/1610.02527 (2016).
[7] Warden, P. and Situnayake, D., 2019. TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers. O'Reilly Media.
[8] Yang, Q., Liu, Y., Chen, T. and Tong, Y., 2019. Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 10(2), pp.1-19.
[9] https://www.ericsson.com/en/blog/2019/12/tinyml-as-a-service-iot-edge
[10] https://sensiml.com/resources/#resources-case-studies
[11] https://octoml.ai/
[12] Han, Song, Huizi Mao, and William J. Dally. "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding." arXiv preprint arXiv:1510.00149 (2015).
[13] Wu, Kailun, Yiwen Guo, and Changshui Zhang. "Compressing Deep Neural Networks With Sparse Matrix Factorization." IEEE Transactions on Neural Networks and Learning Systems (2019).
[14] Liu, Chenxi, et al. "Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2019.

 

Published Oct. 24, 2021, 11:07 AM - Last modified Oct. 24, 2021, 11:07 AM

Supervisor(s)

Scope (credits)

60