Systems in which there is a human in the loop are hard to design, analyze, and verify. When the correct behavior of these systems is safety-critical or mission-critical, considerable effort must be invested in the design of the human-machine interface and in the analysis of the integrated human-machine behavior. Although accidents are often attributed to "operator error", these errors can often be predicted and avoided by accounting more thoroughly for the human in the loop [6]. Decisions concerning the human-machine interface are more than after-thought cosmetic decisions of layout, color, and font; they concern the essence of the functionality of the system, encompassing decisions about which tasks to automate and which tasks to leave for the human operator and, for those tasks that are not fully automated, decisions about the support role that the system can play (see, e.g., [3]). These collaborative decisions are made taking into account the capabilities and limitations of the human agent, including the speed of making complex decisions, the capacity to recall information, the accuracy with which such information is recalled as a function of its recency, the effect of mental overload, and the effect of other human behavioral modifiers such as fatigue, stress, fear, and other environmental and emotional conditions [1, 2, 7-10].