Journal Article Details
Electronics
Distributed Deep Learning: From Single-Node to Multi-Node Architecture
Sidi Ahmed Mahmoudi [1]; Saïd Mahmoudi [1]; Jean-Sébastien Lerat [2]
[1] Computer Science and Management Department, University of Mons, 7000 Mons, Belgium; [2] Science and Technology Department, Haute École en Hainaut, 7000 Mons, Belgium
Keywords: deep learning; frameworks; CPU; GPU; distributed computing
DOI: 10.3390/electronics11101525
Source: DOAJ
【 Abstract 】

In recent years, deep learning (DL) has been applied to increasingly large datasets and complex models. Such applications require methods to train models faster, such as distributed deep learning (DDL). This paper proposes an empirical approach for measuring the speedup that DDL achieves under different parallelism strategies on the nodes. Local parallelism is especially important in the design of a time-efficient multi-node architecture, because the overall DDL training time depends on the time required by every node. The impact of computational resources (CPU and GPU) is also discussed, since GPUs are known to accelerate such computations. Experimental results show that local parallelism affects the global speedup of DDL, depending on the complexity of the neural model and the size of the dataset. Moreover, our approach achieves a better speedup than Horovod.
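To make the data-parallel setting concrete, the following is a minimal sketch of the kind of DDL training loop the paper benchmarks against, using Horovod with a PyTorch backend. The model, synthetic batches, and hyperparameters below are placeholders for illustration only, not the configuration evaluated in the paper.

import torch
import torch.nn as nn
import horovod.torch as hvd

hvd.init()                                   # one worker process per CPU core / GPU
if torch.cuda.is_available():
    torch.cuda.set_device(hvd.local_rank())  # pin each worker to one local GPU

model = nn.Linear(784, 10)                   # placeholder model
if torch.cuda.is_available():
    model.cuda()

# Scale the learning rate with the number of workers (common Horovod practice).
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap the optimizer so gradients are averaged across workers via allreduce.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters())

# Start all workers from the same initial state.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

loss_fn = nn.CrossEntropyLoss()
for step in range(100):                      # placeholder training loop
    x = torch.randn(32, 784)                 # synthetic batch; a real run would
    y = torch.randint(0, 10, (32,))          # shard the dataset across workers
    if torch.cuda.is_available():
        x, y = x.cuda(), y.cuda()
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                          # gradient allreduce happens here
    optimizer.step()

Launched with one process per node (or per GPU), this sketch reflects the synchronous data-parallel pattern whose speedup the paper measures: each worker computes gradients locally, and communication cost across all the nodes bounds the global training time.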

【 License 】

Unknown   
