Frontiers in Big Data
On Robustness of Neural Architecture Search Under Label Noise
Xi Liu1, Yi-Wei Chen1, Xia Hu2, P. S. Sastry3, Qingquan Song3
[1] DATALab, Department of Computer Science and Engineering, Texas A&M University, College Station, TX, United States; Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, United States;
Keywords: deep learning; automated machine learning; neural architecture search; label noise; robust loss function
DOI: 10.3389/fdata.2020.00002
Source: DOAJ
【 Abstract 】
Neural architecture search (NAS), which aims to automatically find suitable neural architectures for a given task, has recently attracted extensive attention in supervised learning applications. In most real-world situations, the class labels provided in the training data are noisy for many reasons, such as subjective judgments, inadequate information, and random human errors. Existing work has demonstrated the adverse effects of label noise on the learning of neural network weights. These effects can become more critical in NAS, since the candidate architectures are not only trained with noisy labels but also compared based on their performance on noisy validation sets. In this paper, we systematically explore the robustness of NAS under label noise. We show that label noise in the training and/or validation data can lead to varying degrees of performance degradation. Through empirical experiments, we show that using robust loss functions can mitigate this degradation under symmetric label noise as well as under a simple model of class-conditional label noise, and we provide a theoretical justification for this. Both the empirical and theoretical results provide a strong argument for employing robust loss functions in NAS under high levels of noise.
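To make the two key notions in the abstract concrete, the sketch below implements symmetric label noise (each label is flipped, with a fixed probability, to a uniformly chosen other class) and a classic symmetric-noise-robust loss, the mean absolute error on class probabilities. The function names and the choice of MAE as the robust loss are illustrative assumptions, not the paper's exact implementation.

```python
import random

def flip_labels_symmetric(labels, num_classes, noise_rate, seed=0):
    """Symmetric label noise: with probability `noise_rate`, replace a
    label by one of the other classes chosen uniformly at random."""
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        if rng.random() < noise_rate:
            y = rng.choice([c for c in range(num_classes) if c != y])
        noisy.append(y)
    return noisy

def mae_loss(probs, label):
    """Mean absolute error between the predicted class-probability
    vector and the one-hot target. MAE is a symmetric loss: summed over
    all possible labels it is constant (here, 2 * (num_classes - 1)),
    which is the property that yields robustness to symmetric noise."""
    return sum(abs(p - (1.0 if c == label else 0.0))
               for c, p in enumerate(probs))

# Example: corrupt 40% of labels, then score one prediction.
clean = [0, 1, 2, 0, 1, 2]
noisy = flip_labels_symmetric(clean, num_classes=3, noise_rate=0.4, seed=1)
print(mae_loss([0.7, 0.2, 0.1], 0))  # 0.3 + 0.2 + 0.1 = 0.6
```

Note that with a confident prediction on a flipped label the MAE penalty is capped at 2.0, whereas cross-entropy grows unboundedly; this bounded per-sample loss is the intuition behind its robustness.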
【 License 】
Unknown