Journal Article Details
Applied Sciences
Communication Optimization Schemes for Accelerating Distributed Deep Learning Systems
JiSun Shin [1], Hyeonwoo Jeong [2], Hyeonseong Choi [2], Jaehwan Lee [2], Baekhyeon Noh [2]
[1] Department of Computer and Information Security, Sejong University, Seoul 05006, Korea
[2] School of Electronics and Information Engineering, Korea Aerospace University, Goyang-si 10540, Korea
Keywords: distributed deep learning; multi-GPU; data parallelism; communication optimization
DOI: 10.3390/app10248846
Source: DOAJ
【 Abstract 】

In a distributed deep learning system, the parameter server and workers must communicate to exchange gradients and parameters, and the communication cost increases as the number of workers grows. This paper presents a communication data optimization scheme to mitigate the drop in throughput caused by communication bottlenecks in distributed deep learning. We propose two methods to optimize communication. The first is a layer dropping scheme that reduces communication data by comparing a representative value of each hidden layer with a threshold. To preserve training accuracy, gradients that are not transmitted to the parameter server are stored in the worker's local cache; once the accumulated value in the cache exceeds the threshold, the cached gradients are transmitted to the parameter server. The second is an efficient threshold selection method that computes the threshold from the L1 norm of each hidden layer instead of the raw gradients. Our data optimization scheme reduces the communication time by about 81% and the total training time by about 70% in a 56 Gbit network environment.
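The sketch below is a minimal, illustrative interpretation of the scheme described in the abstract: each hidden layer's gradient is represented by its L1 norm, layers whose accumulated gradient falls below a threshold are dropped from the current transmission and kept in a local cache, and the threshold itself is derived from the per-layer L1 norms. It assumes PyTorch tensors; the function names (`select_threshold`, `drop_and_send`), the cache layout, and the exact threshold formula are hypothetical and not taken from the paper.

```python
# Minimal sketch of a layer-dropping scheme with a local gradient cache.
# Assumptions: PyTorch tensors, per-layer gradients keyed by parameter name.
import torch


def layer_l1_norms(gradients):
    """Representative value per hidden layer: the L1 norm of its gradient."""
    return {name: g.abs().sum().item() for name, g in gradients.items()}


def select_threshold(gradients, scale=0.5):
    """Hypothetical threshold choice: a fraction of the mean per-layer L1 norm."""
    norms = layer_l1_norms(gradients)
    return scale * sum(norms.values()) / len(norms)


def drop_and_send(gradients, local_cache, threshold):
    """Per layer, either transmit the accumulated gradient to the parameter
    server or keep it in the worker's local cache for a later step."""
    to_send = {}
    for name, grad in gradients.items():
        # Accumulate the new gradient on top of anything cached earlier.
        cached = local_cache.get(name, torch.zeros_like(grad)) + grad
        if cached.abs().sum().item() >= threshold:
            to_send[name] = cached                    # transmit this layer
            local_cache[name] = torch.zeros_like(grad)
        else:
            local_cache[name] = cached                # drop (defer) this layer
    return to_send


# Example usage with dummy per-layer gradients.
grads = {"fc1.weight": torch.randn(128, 64) * 1e-3,
         "fc2.weight": torch.randn(10, 128) * 1e-1}
cache = {}
thr = select_threshold(grads)
payload = drop_and_send(grads, cache, thr)
print("layers transmitted this step:", list(payload.keys()))
```

In this reading, the cache guarantees that dropped gradients are only deferred, never discarded, which is how the abstract's accuracy guarantee is understood here.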

【 License 】

Unknown   
