Journal Article Details
IEEE Access, Volume 7
Power- and Time-Aware Deep Learning Inference for Mobile Embedded Devices
Jaeyong Chung1  Woochul Kang2 
[1] Department of Electronic Engineering, Incheon National University, Incheon, South Korea;
[2] Department of Embedded Systems Engineering, Incheon National University, Incheon, South Korea;
Keywords: Deep learning; DVFS; feedback control; embedded systems; low power; power-awareness
DOI: 10.1109/ACCESS.2018.2887099
Source: DOAJ
【 Abstract 】

Deep learning is a state-of-the-art approach that provides highly accurate inference for many cyber-physical systems (CPS) such as autonomous cars and robots. Deep learning inference often needs to be performed locally on mobile and embedded devices, rather than in the cloud, to address concerns such as latency, power consumption, and limited bandwidth. However, existing approaches have focused on delivering “best-effort” performance on resource-constrained mobile embedded devices, resulting in unpredictable performance under the highly variable environments of CPS. In this paper, we propose a novel deep learning inference runtime, called DeepRT, that supports multiple QoS objectives simultaneously under unpredictable workloads. In DeepRT, a multiple-inputs/multiple-outputs (MIMO) modeling and control methodology is proposed as the primary tool to support multiple QoS goals, including inference latency and power consumption. DeepRT’s MIMO controller coordinates multiple computing resources, such as CPUs and GPUs, by capturing their close interactions and their effects on the QoS objectives. We demonstrate the viability of DeepRT’s QoS management architecture by implementing a prototype of DeepRT. The evaluation results show that, compared with baseline approaches, DeepRT supports the desired inference latency as well as power consumption for various deep learning models in a highly robust manner.
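
The abstract describes a MIMO feedback controller that jointly adjusts CPU and GPU settings to track both a latency and a power target. The following is a minimal illustrative sketch of that idea only; the gain matrix, DVFS level ranges, and measurement hooks are hypothetical placeholders and do not come from the paper.

```python
# Sketch of a MIMO feedback controller: two actuators (CPU and GPU DVFS levels)
# are corrected jointly from two tracking errors (inference latency and power).
import numpy as np

class MimoQosController:
    def __init__(self, latency_target_ms, power_target_mw, gain):
        # Desired QoS set points and a 2x2 controller gain matrix (assumed values).
        self.targets = np.array([latency_target_ms, power_target_mw], dtype=float)
        self.gain = np.asarray(gain, dtype=float)

    def update(self, cpu_level, gpu_level, measured_latency_ms, measured_power_mw):
        # Tracking error between desired and measured QoS (latency, power).
        error = self.targets - np.array([measured_latency_ms, measured_power_mw])
        # MIMO step: both errors influence both actuators through the gain matrix.
        delta = self.gain @ error
        new_cpu = int(np.clip(round(cpu_level + delta[0]), 0, 7))  # hypothetical 8 CPU levels
        new_gpu = int(np.clip(round(gpu_level + delta[1]), 0, 5))  # hypothetical 6 GPU levels
        return new_cpu, new_gpu

# Example control loop (measurement and DVFS hooks are placeholders):
# controller = MimoQosController(latency_target_ms=50.0, power_target_mw=2500.0,
#                                gain=[[-0.02, 0.0005], [-0.01, 0.0008]])
# cpu, gpu = 4, 3
# while running:
#     latency_ms, power_mw = run_inference_and_measure()   # placeholder
#     cpu, gpu = controller.update(cpu, gpu, latency_ms, power_mw)
#     apply_dvfs(cpu, gpu)                                  # placeholder
```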

【 License 】

Unknown
