Transparent Learning: An Incremental Machine Learning Framework Based on Transparent Computing



Bibliographic Details
Published in: IEEE Network, 2018-01, Vol. 32 (1), pp. 146-151
Authors: Guo, Kehua; Liang, Zhonghe; Shi, Ronghua; Hu, Chao; Li, Zuoyong
Format: Article
Language: English
Abstract: In the Internet of Things environment, the capabilities of various clients are developing in the direction of networking and intellectualization. How to advance clients' capability from merely collecting and displaying data to possessing intelligence has been a critical issue. In recent years, machine learning has become a representative technology for client intellectualization and is attracting growing interest. In machine learning, massive computation, including data preprocessing and training, requires substantial computing resources; however, lightweight clients usually do not have strong computing capability. To solve this problem, we introduce the advantages of transparent computing (TC) into the client intellectualization framework and propose an incremental machine learning framework named transparent learning (TL), in which training tasks are moved from lightweight clients to servers and edge devices. After training, test models are transmitted to clients and updated through incremental training. In this study, a cache strategy is designed to divide the training set in order to optimize performance. We choose deep learning as the performance evaluation case and conduct several TensorFlow-based experiments to demonstrate the efficiency of the framework.
ISSN: 0890-8044; 1558-156X
DOI: 10.1109/MNET.2018.1700154