Thesis Details
Multiple-implementation testing of supervised learning software
Alebiosu, Oreoluwa; Xie, Tao
Keywords: Machine learning; Multiple-implementation testing; Differential testing; Supervised learning; Multiple-implementation monitoring; Software monitoring; k-Nearest neighbor (kNN); Naive Bayes; Software testing; Pseudo oracle; Algorithm configurations; Percentage threshold; Black box; Test oracle; Multiple implementation; NaiveBayes
Others  :  https://www.ideals.illinois.edu/bitstream/handle/2142/97479/ALEBIOSU-THESIS-2017.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
PDF
【 Abstract 】

Machine Learning (ML) software, which implements ML algorithms, is widely used in many application domains such as finance, business, and engineering. Faults in ML software can cause substantial losses in these domains, so it is critical to test ML software effectively to detect and eliminate its faults. However, testing ML software is difficult, especially producing test oracles used for checking behavioral correctness (such as expected properties or expected test outputs). To tackle the test-oracle issue, this thesis presents a novel black-box approach of multiple-implementation testing for supervised learning software. The insight underlying the approach is that there can be multiple independently written implementations of a supervised learning algorithm, and a majority of them may produce the expected output for a test input (even if none of these implementations is fault-free). In particular, the proposed approach derives a pseudo oracle for a test input by running the test input on n implementations of the supervised learning algorithm and then using the common test output produced by a majority (determined by a percentage threshold) of these n implementations. The proposed approach includes techniques to address challenges in multiple-implementation testing (or testing in general) of supervised learning software: the definition of test cases for testing supervised learning software, along with the resolution of inconsistent algorithm configurations across implementations. In addition, to improve the dependability of supervised learning software during in-field usage while incurring low runtime overhead, the approach includes a multiple-implementation monitoring technique.
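The pseudo-oracle derivation described above can be sketched as follows. This is a minimal illustration, not code from the thesis: the `pseudo_oracle` function, the toy classifier implementations, and the default threshold value are all hypothetical assumptions.

```python
# Minimal sketch of a majority-vote pseudo oracle for
# multiple-implementation testing (hypothetical, for illustration).
from collections import Counter

def pseudo_oracle(implementations, test_input, threshold=0.5):
    """Run one test input through n implementations and return the
    majority output if it is produced by more than `threshold` of
    them; return None when no majority exists, i.e. no pseudo oracle
    can be derived for this input."""
    outputs = [impl(test_input) for impl in implementations]
    label, count = Counter(outputs).most_common(1)[0]
    if count / len(outputs) > threshold:
        return label
    return None

# Three toy "implementations" of the same classifier; the third
# one disagrees on this input (e.g. it is faulty).
impls = [lambda x: "spam", lambda x: "spam", lambda x: "ham"]
print(pseudo_oracle(impls, "some-email"))  # spam (2/3 > 0.5)
```

An implementation whose output differs from such a derived pseudo oracle on some test input is flagged as deviating and inspected for a fault.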
Evaluations of the proposed approach show that multiple-implementation testing is effective in detecting real faults in real-world ML software (even popularly used software), including 5 faults in 10 NaiveBayes implementations and 4 faults in 20 k-nearest neighbor implementations. They also show that the proposed multiple-implementation monitoring technique substantially reduces the need to run multiple implementations while maintaining high prediction accuracy.

【 Preview 】
Attachment list
Files Size Format View
Multiple-implementation testing of supervised learning software 441KB PDF download
Document metrics
Downloads: 12; Views: 40