Advanced technology development program review – a US Department of the Navy case study
The science and technology (S&T) programs sponsored by the US Department of the Navy (DoN) are divided into three major budget categories:
1. Basic research (6.1)
2. Applied research (6.2)
3. Advanced technology development (6.3)
In 1999, the DoN commissioned an internal review of the 6.3 program. A 31-member review panel met for one week to rate and comment on six evaluation criteria (military goal; military impact; technical approach/payoff; program executability; transitionability to more advanced development/engineering budget categories or to acquisition; and overall item evaluation) for each of the 55 presentation topics into which the mid-$500 million per year 6.3 program was divided. This paper describes the review process, documents insights gained from the review, summarizes key principles for a high-quality S&T evaluation process, and presents a network-centric protocol for future large-scale S&T reviews.
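The structure of the review suggests a simple data model: a 55 x 6 x 31 array of scores (topics x criteria x panelists). The following sketch shows how per-topic figures of merit could be derived from such an array; the 1–5 rating scale, the use of NumPy, and a plain mean over criteria are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Assumed data model: 55 topics x 6 criteria x 31 panelists, with
# scores on an assumed 1-5 scale (the paper does not specify one).
rng = np.random.default_rng(0)                 # stand-in for real panel data
scores = rng.integers(1, 6, size=(55, 6, 31)).astype(float)

# Average over panelists for each topic/criterion pair, then over
# criteria, to obtain one figure of merit per presentation topic.
per_criterion = scores.mean(axis=2)            # shape (55, 6)
overall = per_criterion.mean(axis=1)           # shape (55,)

# Rank topics from strongest to weakest overall rating.
ranking = np.argsort(overall)[::-1]
```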
Insights gained from both the planning and the conduct of the review should be of considerable value for future large-scale 6.3-type reviews; they include the following:
1. Providing detailed programmatic descriptive material to the panelists and audience before the review is very useful. Its value could be enhanced further by e-mail exchanges between the presenter or facilitator and the panelists ahead of the presentations, to clarify outstanding issues and allow more effective use of actual meeting time.
2. Appropriate use of groupware could allow:
• Streamlining of the review process through real-time data analysis and aggregation (a minimal sketch of such aggregation follows this list);
• Remote reviewer participation, thereby minimizing travel and logistics problems;
• More reviewers to participate in the process, producing a more representative sample of the technical community;
• Reviewers to be selected for expertise in specific evaluation criteria only, thereby enhancing the credibility of each rating;
• Sufficient expertise on the panel such that the jury function (fully independent decision-making) can be separated from the expert-witness function (potentially conflicted technical judgement and testimony).
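To make the real-time aggregation point concrete, here is a minimal sketch of how scores submitted by remote reviewers could be summarized incrementally as they arrive, using Welford's online algorithm. The identifiers, rating values, and in-memory design are assumptions for illustration; the paper does not describe any specific groupware implementation.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class RunningStat:
    """Incremental mean/variance (Welford's algorithm), so panel-wide
    summaries update as each score arrives rather than after the fact."""
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0

    def add(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

# Aggregates keyed by (topic, criterion); a networked front end
# would call submit() once per incoming reviewer rating.
stats: dict[tuple[str, str], RunningStat] = defaultdict(RunningStat)

def submit(topic: str, criterion: str, score: float) -> None:
    stats[(topic, criterion)].add(score)

# Example: two remote reviewers score the same topic/criterion.
submit("topic_17", "military_impact", 4.0)
submit("topic_17", "military_impact", 3.0)
print(stats[("topic_17", "military_impact")].mean)  # 3.5
```

Because each update is constant-time, summaries stay current throughout the session, which is what makes both real-time analysis and remote participation practical.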
3. When assessing and comparing the quality of programs that span multiple disciplines, it is necessary to normalize across them; evaluating all programs in one setting is an excellent way to accomplish this. Because of the realistic time constraints of a single-setting review, depth must be traded off for breadth. This trade-off is acceptable as long as depth is evaluated by some other means during the S&T operational management cycle. (A statistical analogue of this cross-discipline normalization is sketched below.)
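As a hedged illustration of the normalization problem, the sketch below standardizes raw topic scores within each discipline (a z-score) so that topics from different fields land on a common scale. The disciplines, data, and grouping are hypothetical, and the paper's recommended mechanism remains a single-setting panel rather than a statistical adjustment.

```python
import statistics

def zscore_within_group(scores: dict[str, list[float]]) -> dict[str, list[float]]:
    """Standardize scores within each discipline so topics from different
    fields can be compared on one scale. A statistical analogue of the
    single-panel normalization discussed above; all data is illustrative."""
    out = {}
    for discipline, xs in scores.items():
        mu = statistics.mean(xs)
        sigma = statistics.stdev(xs) if len(xs) > 1 else 1.0
        out[discipline] = [(x - mu) / (sigma or 1.0) for x in xs]
    return out

# Example: hypothetical raw panel means for topics grouped by discipline.
raw = {"sensors": [4.2, 3.8, 4.5], "materials": [3.1, 3.6, 2.9]}
normalized = zscore_within_group(raw)
```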