To fulfill the future aviation needs of the public and the military, industry and government are working to integrate aircraft with enabling technologies in pursuit of aggressive performance and capability goals. However, many enabling technologies are immature, and system integrators incur the associated risk when they adopt them. This risk can be reduced through technology development programs, but these programs often require more than ten years and significant resources before a technology can be transitioned to the vehicle. Ideally, the process could be accelerated and the required resources reduced by designing the development activities, such as physical experiments and tests, to maximize performance improvement, maturation, and risk reduction during the development program. The motivating question is \textit{How should technology development activities be designed?} The research in this dissertation comprises contributions toward a solution to this problem.

A review of the literature on the design of technology development activities revealed that current practice is driven by a qualitative criterion, Technology Readiness Level, that does not provide a clear picture of the state of knowledge about technology impacts. The immediate consequence of using this criterion for decision making is that it does not capture all of the critical dimensions of the consequence space for evaluating alternative activity designs and may therefore lead to misinformed decisions. Existing technology development activity design methodologies improve upon current practices, but they fall short of providing a complete path to designing a portfolio of technology development activities. To address these gaps, a novel framework was proposed that comprises three phases: (1) thought experimentation, (2) detailed definition of the activities, and (3) statistical design of experiments.
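As a concrete illustration of the third phase, a classical statistical design of experiments enumerates test runs over chosen factor levels. The sketch below generates a full-factorial design; the factor names and levels are hypothetical examples, not taken from the dissertation.

```python
from itertools import product

# Hypothetical factors for a technology development test campaign;
# the names and levels are illustrative only.
factors = {
    "temperature_C": [20, 60, 100],
    "pressure_kPa": [101, 500],
    "material": ["baseline", "candidate"],
}

def full_factorial(factors):
    """Enumerate every combination of factor levels (a full-factorial design)."""
    names = list(factors)
    return [dict(zip(names, levels)) for levels in product(*factors.values())]

design = full_factorial(factors)
print(len(design))  # 3 * 2 * 2 = 12 runs
```

In practice a fractional or optimal design would often be preferred to reduce the number of runs, but the full factorial makes the combinatorial structure explicit.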
Although the proposed framework can be implemented as is for a given technology development program, opportunities were identified to enhance it by adding rigor to the decision-making processes.

Three enhancements to the proposed solution framework are presented in this dissertation. Each enhancement improves upon methods from the literature by addressing a research gap. First, existing methodologies for planning and managing technology development rely on sensitivity analyses to inform decisions about which classes of development activities to pursue. It was argued that this approach does not explicitly evaluate alternatives but instead provides measures of the potential of \textit{any} development activity to affect system-level uncertainty and performance. Thus, a need was identified for an appropriate way for decision makers to evaluate the alternatives for downselection. Second, existing quantitative methodologies assume that the combined epistemic and aleatory uncertainty surrounding technology integration impacts can be quantified from a combination of data and expert elicitation. Bayesian inference has been proposed for sequentially updating initial probability distributions with data from technology development activities, but misleading inferences can arise when the data sources are heterogeneous. To overcome this issue, there is a need for an appropriate way to quantify technology integration impact uncertainty in light of data from multiple, heterogeneous experiments. Finally, any decision process for the detailed design of development activities must weigh multiple criteria when evaluating the alternatives. One of the most prominent criteria in the literature is uncertainty reduction.
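The sequential Bayesian updating mentioned above can be illustrated with a conjugate beta-binomial model, the simplest case for success/failure data. The sketch below is a generic illustration, not the dissertation's method; note that naively pooling the two data sets treats them as exchangeable, which is exactly the heterogeneity that can produce misleading inferences.

```python
def update_beta(alpha, beta, successes, failures):
    """Conjugate Bayesian update of a Beta(alpha, beta) prior with binomial data."""
    return alpha + successes, beta + failures

# Weakly informative prior, e.g. elicited from experts.
alpha, beta = 1.0, 1.0

# Data from two hypothetical development activities (successes, failures).
# Pooling a bench test and a flight-like test as if they were exchangeable
# is the kind of heterogeneity that can mislead the inference.
for successes, failures in [(9, 1), (4, 6)]:
    alpha, beta = update_beta(alpha, beta, successes, failures)

posterior_mean = alpha / (alpha + beta)
print(alpha, beta, posterior_mean)  # 14.0 8.0 0.636...
```

The posterior mean averages over both sources even though they suggest quite different success probabilities (0.9 versus 0.4), which motivates methods that account for the provenance of each data set.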
To enable the evaluation of alternatives, a need was identified for an appropriate way to quantitatively estimate the expected uncertainty reduction of planned technology development activities.

The first research gap was addressed with a normative decision support methodology that incorporates techniques from multiattribute utility theory. The methodology entails establishing objectives and attributes, constructing a utility model to represent decision makers' values, modeling the impacts of the alternatives, and evaluating the alternatives with expected utility. The product of the methodology is not simply a single expected utility for each alternative but a capability that enables quantitative tradeoffs and sensitivity analyses, providing insights and stimulating deeper thinking about the problem on the part of the decision makers. Compared with the state of the art, the proposed methodology is an improvement because it was shown to enable explicit evaluation of alternatives rather than only providing measures of potential for each technology.

The second and third research gaps were addressed for two types of technology development activities: computer experiments and physical experiments. Although there are many types of technology development activities, these two were the focus because they are crucial to development; technologies cannot be matured without them. The ingredients for a solution were identified in the statistics and machine learning literature, then synthesized and adapted for the technology development context to formulate a methodology that addresses the research gaps. The first three steps of the methodology were borrowed from the data analysis literature and comprise the traditional pipeline of cleaning a data set, identifying a set of predictive models, and evaluating and selecting from among those models.
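The expected-utility evaluation described above can be sketched with an additive multiattribute utility model, a common form in multiattribute utility theory. Everything below is a hypothetical example under simplifying assumptions (risk-neutral single-attribute utilities, independent uniform outcome uncertainties); the attribute names, weights, and ranges are not from the dissertation.

```python
import random

# Additive form: u(x) = sum_i k_i * u_i(x_i), with weights summing to 1.
weights = {"performance": 0.5, "cost": 0.3, "risk_reduction": 0.2}

def linear_utility(x, worst, best):
    """Single-attribute utility scaled to [0, 1] (risk-neutral assumption)."""
    return (x - worst) / (best - worst)

def utility(outcome):
    return (weights["performance"] * linear_utility(outcome["performance"], 0, 10)
            + weights["cost"] * linear_utility(outcome["cost"], 5e6, 1e6)  # lower cost is better
            + weights["risk_reduction"] * linear_utility(outcome["risk_reduction"], 0, 1))

def expected_utility(sample_outcome, n=10_000, seed=0):
    """Monte Carlo estimate of expected utility over uncertain outcomes."""
    rng = random.Random(seed)
    return sum(utility(sample_outcome(rng)) for _ in range(n)) / n

# A hypothetical alternative whose integration impacts are uncertain.
def alternative_a(rng):
    return {"performance": rng.uniform(4, 8),
            "cost": rng.uniform(2e6, 4e6),
            "risk_reduction": rng.uniform(0.3, 0.7)}

print(expected_utility(alternative_a))
```

Ranking several such alternatives by expected utility, and sweeping the weights, is what enables the tradeoff and sensitivity analyses described above.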
The fourth step is a novel contribution because it provides an approach for incorporating epistemic technology maturity uncertainty into Gaussian process model predictions. The fifth step is also a novel contribution because it fuses a rigorous information-theoretic framework for quantifying uncertainty reduction with predictive models that incorporate the additional layer of epistemic uncertainty associated with technology maturity.

The second gap was also investigated for success/failure reliability tests. An adaptation of the traditional Bayesian beta-binomial probability model was formulated to address the research gap. The novel Bayesian reliability analysis methodology begins with traditional Bayesian data analysis steps. Then, a maturity weight is introduced into the posterior beta distribution to enable discounting of the reliability data at a given point in the development process. The flexibility provided by the maturity weight was shown to enable an analyst to inject additional subjective uncertainty into the inference process, thereby producing estimates of failure probabilities that reflect this maturity uncertainty.

The objective of this research was to establish a framework for designing technology development activities that improves the state of decision support capabilities. Although the framework has been established so that it can be populated with additional improvements in the future, the research objective was achieved because all of the contributions presented in this dissertation have been shown to improve upon existing methods and current practices.
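The maturity-weighted beta-binomial idea can be sketched as a power-prior-style discounting of the test data; the exact formulation in the dissertation may differ, and the numbers below are a hypothetical example.

```python
def weighted_beta_posterior(alpha0, beta0, successes, failures, w):
    """Posterior Beta parameters with a maturity weight w in [0, 1].

    w = 1 recovers the standard conjugate update; w = 0 ignores the data.
    Intermediate w discounts immature test data, widening the posterior.
    """
    return alpha0 + w * successes, beta0 + w * failures

def beta_mean_var(a, b):
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# Early in development: discount 18 successes / 2 failures by w = 0.5.
a, b = weighted_beta_posterior(1.0, 1.0, 18, 2, w=0.5)
mean_early, var_early = beta_mean_var(a, b)

# At full maturity (w = 1) the same data are fully credited.
a1, b1 = weighted_beta_posterior(1.0, 1.0, 18, 2, w=1.0)
mean_late, var_late = beta_mean_var(a1, b1)

print(mean_early, mean_late)
assert var_early > var_late  # discounting injects additional uncertainty
```

The inflated posterior variance at low maturity is exactly the "additional subjective uncertainty" the methodology lets an analyst inject, yielding failure-probability estimates that reflect how much the test articles still differ from the final system.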