IEEE Access
Model Capacity Vulnerability in Hyper-Parameters Estimation
Xiao Liu¹, Qiang Liu¹, Wentao Zhao¹, Pan Li², Jiuren Chen³
¹College of Computer Science and Technology, National University of Defense Technology, Changsha, China
²School of Electronic Engineering and Computer Science, Queen Mary University of London, London, U.K.
³Science and Technology on Test Physics and Numerical Mathematic Laboratory, Beijing, China
Keywords: Adversarial vulnerability; model capacity; hyper-parameter poisoning; gradient-based optimization
DOI: 10.1109/ACCESS.2020.2969276
Source: DOAJ
【 Abstract 】
Machine learning models are vulnerable to a variety of data perturbations. Recent research focuses mainly on the vulnerability of model training and proposes various model-oriented defense methods to achieve robust machine learning. However, most existing research overlooks the vulnerability of model capacity, which is more fundamental to model performance. In this paper, we study an adversarial vulnerability of model capacity caused by poisoning the estimation of model hyper-parameters. We further instantiate this vulnerability on the polynomial regression model, for which evading model-oriented detection is challenging, to illustrate its effectiveness. Extensive experiments on one synthetic and three real-world data sets demonstrate that the attack can effectively mislead the hyper-parameter estimation of the polynomial regression model by poisoning only a small number of camouflage samples that cannot be detected by model-oriented defense methods.
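The attack idea described in the abstract can be illustrated with a toy example. Below is a minimal sketch, assuming NumPy and scikit-learn, of how a few injected samples can shift a cross-validated hyper-parameter, here the polynomial degree. The synthetic data, the hand-crafted poisoning points, and the `select_degree` routine are illustrative assumptions, not details from the paper, which crafts camouflage samples via gradient-based optimization so that they also evade model-oriented detection.

```python
# A toy sketch of hyper-parameter poisoning (not the paper's method):
# shift the polynomial degree chosen by cross-validation by injecting
# a few adversarial samples. The poisoning points are hand-crafted
# here; the paper constructs them via gradient-based optimization.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Clean training data drawn from a quadratic ground truth.
X = rng.uniform(-2, 2, size=(60, 1))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 0] ** 2 + rng.normal(0, 0.2, 60)

def select_degree(X, y, degrees=range(1, 8)):
    """Return the degree with the best 5-fold cross-validated R^2."""
    scores = [
        cross_val_score(
            make_pipeline(PolynomialFeatures(d), LinearRegression()),
            X, y, cv=5,
        ).mean()
        for d in degrees
    ]
    return list(degrees)[int(np.argmax(scores))]

print("selected degree, clean data:", select_degree(X, y))

# Inject a handful of points near the edge of the input range that
# follow a cubic rather than quadratic trend, nudging the estimator
# toward a higher-capacity model.
X_adv = np.array([[1.8], [1.9], [2.0], [-1.9], [-2.0]])
y_adv = (1.0 + 2.0 * X_adv[:, 0] - 1.5 * X_adv[:, 0] ** 2
         + 0.8 * X_adv[:, 0] ** 3)

print("selected degree, poisoned data:",
      select_degree(np.vstack([X, X_adv]),
                    np.concatenate([y, y_adv])))
```

Because the selected degree controls model capacity, corrupting this estimation step degrades the model regardless of how robustly the subsequent training is performed, which is the vulnerability the paper targets.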
【 License 】
Unknown