Journal Article Details
IEEE Access
A Comparison of Loss Weighting Strategies for Multi task Learning in Deep Neural Networks
Oguz H. Elibol [1]; Cory Stephenson [1]; Ting Gong [1]; Gokce Keskin [1]; Suchismita Padhy [1]; Tyler Lee [1]; Venkata Renduchintala [1]; Anthony Ndirango [1]
[1] Intel AI Lab, Santa Clara, CA, USA;
Keywords: Dynamic weighting average; multi-MNIST; multi-objective optimization; multi-task learning; uncertainty weighting
DOI: 10.1109/ACCESS.2019.2943604
Source: DOAJ
【 Abstract 】

With the success of deep learning in a wide variety of areas, many deep multi-task learning (MTL) models have been proposed claiming improvements in performance obtained by sharing the learned structure across several related tasks. However, the dynamics of multi-task learning in deep neural networks are still not well understood at either the theoretical or experimental level. In particular, the usefulness of different task pairs is not known a priori. Practically, this means that properly combining the losses of different tasks becomes a critical issue in multi-task learning, as different methods may yield different results. In this paper, we benchmarked different multi-task learning approaches using a shared-trunk architecture with task-specific branches across three different MTL datasets. For the first dataset, Multi-MNIST (Modified National Institute of Standards and Technology database), we thoroughly tested several weighting strategies, including simply adding task-specific cost functions together, dynamic weight average (DWA), and uncertainty weighting, each with various amounts of training data per task. We find that multi-task learning typically does not improve performance for a user-defined combination of tasks. Further experiments evaluated on diverse tasks and network architectures across various datasets suggest that multi-task learning requires careful selection of both task pairs and weighting strategies to equal or exceed the performance of single-task learning.
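The abstract names three loss-combination strategies: simple summation of task losses, dynamic weight average (DWA), and uncertainty weighting. The sketch below is a minimal illustration of the standard form of each strategy as commonly described in the MTL literature (DWA following Liu et al., uncertainty weighting following Kendall et al.); it assumes PyTorch, uses illustrative helper names not taken from the paper, and is not the authors' implementation.

import torch
import torch.nn as nn

# 1) Equal weighting: simply add the task-specific losses together.
def sum_losses(losses):
    return sum(losses)

# 2) Dynamic weight average (DWA): each task's weight follows the ratio of its
#    average loss over the two preceding epochs, softened by a temperature T,
#    so tasks whose losses are falling slowly receive larger weights.
def dwa_weights(prev_epoch_losses, prev_prev_epoch_losses, T=2.0):
    ratios = torch.tensor([a / b for a, b in zip(prev_epoch_losses,
                                                 prev_prev_epoch_losses)])
    return len(ratios) * torch.softmax(ratios / T, dim=0)

# 3) Uncertainty weighting: a learnable log-variance per task scales each loss
#    by exp(-s_i) and adds a 0.5 * s_i penalty so the variances stay bounded.
class UncertaintyWeighting(nn.Module):
    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        total = torch.zeros(())
        for loss, s in zip(losses, self.log_vars):
            total = total + torch.exp(-s) * loss + 0.5 * s
        return total

In a training loop, dwa_weights would be recomputed once per epoch from the stored average task losses of the two preceding epochs, while the log_vars of UncertaintyWeighting are optimized jointly with the network parameters.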

【 License 】

Unknown   
