MANAGING LARGE DISTRIBUTED DATA SETS FOR TESTING IN A JOINT ENVIRONMENT
As the Department of Defense (DOD) matures its methodologies for testing the contributions of a particular system or system of systems (SoS) to joint mission effectiveness (JMe) throughout the acquisition process, the test community has begun examining methods and processes for managing large, distributed data sets in a joint environment. Even assuming a realistic joint mission data set can be constructed and relevant quantifiable data obtained, the analytic question remains: Can this data repository, operating within this SoS, contribute to one or more desired mission effects within a given scenario and under specific conditions? This issue becomes even more complex when we consider the fluid environment of modern military operations. Specifically, the Secretary of Defense (SecDef) tasked the Director, Operational Test and Evaluation (DOT&E) to determine the actions necessary to create new joint testing capabilities and to institutionalize the evaluation of JMe. In response to the Strategic Planning Guidance (SPG) tasking, DOT&E's Testing in a Joint Environment Roadmap identifies changes to policy, procedures, and test infrastructure to ensure the Services can conduct test and evaluation (T&E) in joint mission environments (JME). Regarding methods and processes, the roadmap states, "T&E must adapt test methodologies to be prepared to test systems and SoS in assigned joint mission environments and accommodate evolving acquisition processes."