Data Driven Campaign Management at the National Ignition Facility
The Campaign Management Tool (CMT) Suite provides tools for establishing experimental goals, conducting reviews and approvals, and ensuring readiness for a National Ignition Facility (NIF) experiment. Over the last two years, CMT has significantly increased the number of diagnostics that it supports to approximately 40. Meeting this ever-increasing demand for new functionality has resulted in a design whereby more and more of the functionality can be specified in data rather than coded directly in Java. To do this, support tools have been written that manage various aspects of the data and that handle potential inconsistencies which can arise from a data-driven paradigm. For example: drop-down menu selections for our experiment editor are specified in the Parts and Lists Manager; the Shot Setup Reports that list the configurations of diagnostics are specified in the database; the review tool Approval Manager has many aspects of its workflows configured through metadata that can be changed without a software deployment; and the Target Diagnostic Template Manager provides predefined entry of hundreds of setup parameters. The trade-offs, benefits, and issues of adapting and implementing this data-driven philosophy will be presented.

BACKGROUND AND SYSTEM COMPONENTS

The suite of applications discussed here is used to set up and approve experiments on the NIF. In the context of this paper, an "experiment" is an XML document that stores all of the settings that experimenters are able to configure for an individual laser shot event on the NIF. These settings are functionally associated into "data groups" that define the granularity at which review and approval occurs for the experiment. For example, all of the laser energy and timing settings form a data group, the beam pointing settings are a data group, the target setup is another, and each target chamber diagnostic device setup is its own data group. (A simplified sketch of this document structure appears later in this section.) A "campaign" in the CMT suite is a collection of experiments associated under a given campaign name. The applications in the suite divide workflow into three broad, nominally consecutive phases: setup, approval, and readiness. Within the overall lifecycle of an experiment from conception through post-shot analysis, these phases occur in the interval from several weeks to several hours before a shot is taken.

Tools in the CMT Suite

The Campaign Management Tool, CMT, is the experiment setup editor. A Java Swing application, CMT provides a spreadsheet-like interface to the experiments in a single campaign, with each experiment represented in a single column. As data group setups in an experiment are completed in CMT, they are submitted for review and approval. This process is managed by the Approval Manager ("AppMan"), a Java web app that sequences the approval workflow, provides approval status information, and provides links to the reports needed for review and approval. Experiment readiness is the evaluation of the state of the NIF facility with respect to the requested configuration for an experiment. Readiness is monitored via another web app, ConfigChecker, which depends on applications outside of the CMT suite (LoCoS and Glovia) that provide up-to-date facility configuration information. Other applications provide specialized functionality to facilitate key aspects of the suite workflow. The Parts and Lists Manager (PLM) is a database front end through which the project manages most of the setup data option values exposed in CMT selection menus.
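Before turning to the remaining tools, the listing below gives a greatly simplified illustration of the experiment document and data-group structure described above. The element and attribute names are hypothetical, chosen only for illustration, and are not the actual NIF experiment schema.

    // Illustrative only: a hypothetical, greatly simplified experiment document,
    // not the actual NIF schema. Each <dataGroup> is the unit at which review and
    // approval occur; the whole document corresponds to one laser shot event.
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;
    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;

    public final class ExperimentSketch {
        static final String EXPERIMENT_XML = """
            <experiment campaign="ExampleCampaign" name="ExampleShot-01">
              <dataGroup name="laserEnergyAndTiming" status="SUBMITTED"/>
              <dataGroup name="beamPointing" status="DRAFT"/>
              <dataGroup name="targetSetup" status="DRAFT"/>
              <dataGroup name="diagnostic.ExampleDiagnostic" status="DRAFT"/>
            </experiment>
            """;

        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(EXPERIMENT_XML.getBytes(StandardCharsets.UTF_8)));
            NodeList groups = doc.getElementsByTagName("dataGroup");
            for (int i = 0; i < groups.getLength(); i++) {
                Element g = (Element) groups.item(i);
                // Each data group carries its own review/approval status.
                System.out.println(g.getAttribute("name") + " -> " + g.getAttribute("status"));
            }
        }
    }

Under this assumed structure, review and approval would operate per data group rather than on the experiment as a whole, mirroring the granularity described above.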
A close cousin of PLM, the Target Selection Manager (TSM), manages the particular subset of setup menu data having to do with target system configurations. ShotSetupReports is the report generator for the suite, called from within AppMan to access experiment XML and expose setup selections via electronic reports. The Target Diagnostic Template Manager (TDTM) simplifies and shortens the experiment setup process by permitting CMT users to populate reusable setup templates for most of the target diagnostics in use at the NIF. The Pulse Shape Editor (PSE) provides a similar reuse capability for laser pulse shapes, as does the Beam Pointing Assistant (BPA) for pointing setups.

Motivation for a Data-driven Architecture

CMT is the oldest and largest application in the suite, developed around a core architecture that was laid down a decade ago. That architecture has been robust and extensible enough to accommodate tremendous growth both in the number of experiments configured and in the number of target diagnostics deployed at the NIF. Nevertheless, over the course of its evolution, both logic and data that were initially defined in the CMT code base have been migrated into other applications and data sources, respectively. The value in moving logic into other applications is that it keeps CMT focused as much as possible on being an experiment editor, which pays off in a relative reduction in complexity and in the manifold benefits that accrue from that. Thus were born each of the CMT satellite applications: PLM, TSM, PSE, BPA, and AppMan. The benefit associated with moving data out of the CMT code base and into external sources is not in simplifying CMT, since in fact this generally adds code and complexity to CMT, but in making the data accessible via well-defined interfaces where it can be updated without the expense of updating CMT. The expense of updating CMT is incurred in the resources required to develop, test, and deploy each CMT release, whereas modifying data stored in a database requires only well-defined update operations and comparatively simple verification testing. Subsequent sections of this paper will examine the benefits and trade-offs of key thrusts of our data-driven evolution.

*This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-632634, LLNL-CONF-644521.

PARTS AND LISTS MANAGER

Over time, we have experienced two major drivers of change in the Campaign Management software project: the development of new target chamber diagnostic systems, and the steady stream of resource changes within individual diagnostic systems. New diagnostics generally necessitate the creation of matching new elements of supporting software. Resource changes within individual diagnostic systems, however, require configuration changes rather than design changes (for example, new detector filters to accommodate more energetic implosions). Changes of this sort are readily realized in data-only updates, so long as the software is architected to enable this. PLM and the interfaces it provides represent a key realization of our efforts to implement such an architecture.
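To make that idea concrete, the listing below is a minimal sketch of how a data-driven option list might be read from a relational table. The table and column names (setup_list_entries, entry_value, display_order) are hypothetical and are not the actual PLM schema.

    // Hypothetical sketch of a database-backed setup option list.
    // Adding a new detector filter becomes a row insert, not a Java change.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public final class SetupOptionDao {
        private final Connection db;

        public SetupOptionDao(Connection db) { this.db = db; }

        /** Returns the menu entries for one named list, e.g. "ExampleDiagnostic.detectorFilter". */
        public List<String> entriesFor(String listName) throws SQLException {
            String sql = "SELECT entry_value FROM setup_list_entries "
                       + "WHERE list_name = ? ORDER BY display_order";
            try (PreparedStatement ps = db.prepareStatement(sql)) {
                ps.setString(1, listName);
                try (ResultSet rs = ps.executeQuery()) {
                    List<String> entries = new ArrayList<>();
                    while (rs.next()) {
                        entries.add(rs.getString("entry_value"));
                    }
                    return entries;
                }
            }
        }
    }

Under an arrangement like this, a filter added to accommodate more energetic implosions would appear in the CMT pulldown after the next list refresh, with no CMT release required.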
The Data that Drives the Setup

Setup selections in CMT generally fall into one of two types: numbers within a range (Fig. 1), in which the desired value is typed directly into the field, and selections from a list of discrete entries in a pulldown menu (Fig. 2). Both types are managed internally as lists. For the former, the list typically contains three entries: a minimum, a maximum, and a step size. For the latter, the list contains the set of discrete selections. Note that each list may have a set of additional attributes on each of its entries particular to that list, depending on required functionality.

Figure 1: Numeric setup data entry.

Figure 2: Discrete setup data entry.

These lists are maintained in relational database tables. PLM provides a web interface through which the list data are interactively managed, and it provides a web service interface through which CMT queries the lists during its startup processing. Once CMT has completed startup, it has a copy of each list in memory and no longer requires a connection to PLM. This approach supports a design goal of CMT, which is to permit users to launch CMT, then disconnect from the network and continue editing experiments in a campaign. Conversely, it also means that users must restart CMT to see changes in setup data that were submitted after their currently running instance of CMT was launched.
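The listing below is a minimal sketch, under assumed names and types, of the two list flavors and of the startup-time caching behavior just described. It is not the actual CMT or PLM code; the web-service fetch that would populate the cache is elided and represented only by the loadAll method.

    // A minimal sketch of the list model described above; class, field, and method
    // names are hypothetical, not the actual CMT/PLM interfaces.
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public final class SetupListCache {

        /** Numeric setup field: the backing list holds exactly a minimum, a maximum, and a step size. */
        public record RangeList(double min, double max, double step) {
            public boolean accepts(double v) { return v >= min && v <= max; }
        }

        /** Pulldown setup field: the backing list holds the discrete entries (plus any per-entry attributes). */
        public record DiscreteList(List<String> entries) {
            public boolean accepts(String v) { return entries.contains(v); }
        }

        // Populated once during CMT startup from PLM's web service interface, then
        // used purely from memory so the user can disconnect and keep editing.
        private final Map<String, Object> listsByName = new ConcurrentHashMap<>();

        /** Called at startup with the lists fetched from PLM; the fetch itself is elided here. */
        public void loadAll(Map<String, Object> listsFetchedFromPlm) {
            listsByName.putAll(listsFetchedFromPlm);
        }

        /** All later lookups are answered from the in-memory snapshot, even when offline. */
        public Object get(String listName) {
            return listsByName.get(listName);
        }
    }

The snapshot-at-startup design sketched here trades freshness for independence: editing can continue with no network connection, at the cost of a restart to pick up list changes submitted after launch.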