Software testing is today the most widely used approach for assessing and improving software quality. Despite its popularity, however, software testing has a number of inherent limitations. First, due to resource constraints, in-house tests necessarily exercise only a tiny fraction of all the possible behaviors of a software system. Second, testers typically select this fraction of behaviors based either on some (more or less rigorous) selection criteria or on their assumptions, intuition, and experience. As a result, in-house tests are typically not representative of the software behavior exercised by real users, which ultimately results in the software behaving incorrectly and failing in the field, after it has been released. The overarching goal of my dissertation is to address this problem and improve the effectiveness of in-house testing. To this end, I propose a set of techniques for measuring and bridging the gap between in-house tests and field executions. My first technique quantifies and analyzes the differences between behaviors exercised in-house and in the field. My second technique leverages the differences identified by the first to generate, through guided symbolic analysis, test inputs that mimic field behaviors and can be added to existing in-house test suites. Finally, my third technique leverages the executions observed in the field to improve symbolic input generation and thereby make test generation more effective. My evaluation shows that these techniques can effectively generate test inputs from field execution data and make in-house testing more representative of field executions.