Journal Article Details
EURASIP Journal on Information Security
Feature partitioning for robust tree ensembles and their certification in adversarial scenarios
Federico Marcuzzi1  Stefano Calzavara1  Claudio Lucchese1  Salvatore Orlando1 
[1] Department of Environmental Sciences, Informatics and Statistics, Ca’ Foscari University of Venice, Venice, Italy;
Keywords: Adversarial machine learning;    Evasion attack;    Forests of decision trees;
DOI  :  10.1186/s13635-021-00127-0
Source: Springer
【 Abstract 】

Machine learning algorithms, however effective, are known to be vulnerable in adversarial scenarios where a malicious user may inject manipulated instances. In this work, we focus on evasion attacks, where a model is trained in a safe environment and exposed to attacks at inference time. The attacker aims at finding a perturbation of an instance that changes the model outcome. We propose a model-agnostic strategy that builds a robust ensemble by training its base models on feature-based partitions of the given dataset. Our algorithm guarantees that the majority of the models in the ensemble cannot be affected by the attacker. We apply the proposed strategy to decision tree ensembles, and we also propose an approximate certification method for tree ensembles that efficiently provides a lower bound on the accuracy of a forest under attack on a given dataset, avoiding the costly computation of evasion attacks. Experimental evaluation on publicly available datasets shows that the proposed feature partitioning strategy provides a significant accuracy improvement over competitor algorithms, and that the proposed certification method allows one to accurately estimate the effectiveness of a classifier where a brute-force approach would be infeasible.
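The core idea of the abstract can be illustrated with a minimal sketch: split the feature set into k disjoint groups, train one decision tree per group, and predict by majority vote. An attacker who can perturb at most b features can then influence at most b of the trees, so the majority remains unaffected whenever b is below the voting threshold. All function names and parameters below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a feature-partitioned tree ensemble with majority voting.
# Hypothetical names; scikit-learn decision trees stand in for the base models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier


def train_partitioned_ensemble(X, y, k, rng):
    """Train k trees, each on a disjoint random subset of the features."""
    perm = rng.permutation(X.shape[1])
    partitions = np.array_split(perm, k)  # disjoint feature groups
    return [(part, DecisionTreeClassifier(random_state=0).fit(X[:, part], y))
            for part in partitions]


def predict_majority(trees, X):
    """Majority vote over the per-partition trees (binary labels)."""
    votes = np.stack([tree.predict(X[:, part]) for part, tree in trees])
    return (votes.mean(axis=0) >= 0.5).astype(int)


rng = np.random.default_rng(42)
X, y = make_classification(n_samples=200, n_features=12, random_state=0)
ensemble = train_partitioned_ensemble(X, y, k=5, rng=rng)
pred = predict_majority(ensemble, X)
print("training accuracy:", (pred == y).mean())
```

With k = 5 trees, an attacker restricted to perturbing features from at most 2 partitions cannot flip the majority vote, which is the robustness guarantee the abstract refers to.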

【 License 】

CC BY   

【 Preview 】
Attachments
Files Size Format View
RO202203042405526ZK.pdf 2823KB PDF download