IEEE Access
Low Latency Deep Learning Inference Model for Distributed Intelligent IoT Edge Clusters
Soumyalatha Naveen1, Manjunath R. Kounte2, Mohammed Riyaz Ahmed3
[1] School of Computer Science and Engineering, REVA University, Bengaluru, Karnataka, India
[2] School of Electronics and Communication Engineering, REVA University, Bengaluru, Karnataka, India
[3] School of Multidisciplinary Studies, REVA University, Bengaluru, Karnataka, India
Keywords: convolutional neural network; deep learning; distributed intelligence; edge computing; fog computing; heterogeneous devices
DOI: 10.1109/ACCESS.2021.3131396
Source: DOAJ
【Abstract】
Edge computing is a new paradigm that brings intelligent applications to the Internet of Things (IoT) by embedding data analytics in mobile, low-cost IoT devices. Because IoT devices are resource-limited, these resources must be used optimally; intelligence therefore needs to be delivered through an efficient deep learning model that economizes memory, power, and computation. In addition, intelligent edge computing is essential for real-time applications that require an end-to-end delay or response time within a few seconds. We propose decentralized heterogeneous edge clusters deployed with an optimized, pre-trained YOLOv2 model. In our model, the weights are pruned, split into fused layers, and distributed to edge devices for processing; the gateway device then merges the partial results from each edge device to obtain the final output. We deploy a convolutional neural network (CNN) on resource-constrained IoT devices to make them intelligent and practical. Evaluation was performed by deploying the proposed model on five IoT edge devices and a gateway device equipped with a hardware accelerator, and it shows significant improvement in communication size and inference latency. Compared to DeepThings for
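The split-and-merge pipeline in the abstract (partition a CNN layer's work across edge devices, then merge partial results at the gateway) can be illustrated with a minimal simulation. The sketch below is an assumption of one common scheme, DeepThings-style spatial partitioning: the input feature map is cut into row strips with a one-row halo so each device can compute a valid 3x3 convolution independently; the function and variable names are hypothetical, and a real deployment would fuse several layers per strip (growing the halo accordingly) rather than a single convolution.

```python
import numpy as np

def conv3x3_valid(x, k):
    """Naive 'valid' 3x3 convolution (no padding), for illustration only."""
    h, w = x.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return out

def split_with_halo(x, n_devices, halo=1):
    """Cut rows into n_devices strips; inner edges carry a halo so each
    strip's valid convolution covers its share of the full output."""
    h = x.shape[0]
    bounds = np.linspace(0, h, n_devices + 1, dtype=int)
    return [x[max(lo - halo, 0):min(hi + halo, h)]
            for lo, hi in zip(bounds[:-1], bounds[1:])]

def gateway_merge(partials):
    """Gateway stitches the per-device partial outputs back together."""
    return np.vstack(partials)

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))   # input feature map
k = rng.standard_normal((3, 3))     # shared (pruned) kernel weights

# Each "edge device" convolves its strip; the gateway merges the partials.
partials = [conv3x3_valid(s, k) for s in split_with_halo(x, n_devices=4)]
merged = gateway_merge(partials)

# The distributed result matches computing the layer on one device.
assert merged.shape == (30, 30)
assert np.allclose(merged, conv3x3_valid(x, k))
```

The halo is what each device must receive beyond its own tile; keeping it small (here, one row per 3x3 layer) is what keeps the communication size low, which is the metric the abstract reports improving.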
【License】
Unknown