Challenges

The need for efficient between-shot analysis and visualization is driven by the high cost of operating the experimental facility. ("Shots" are the basic units of fusion experiments. Today, a typical large facility might take shots at a rate of 2-4 per hour and accumulate about 2,000 shots per year.) The average cost per shot for ITER, defined here as the integrated project cost divided by the total number of shots estimated over the project lifetime, will approach one million US dollars. The number of shots required to optimize performance and to carry out experimental programs must therefore be minimized, which translates into a need for extensive analysis and assessment immediately after each shot.

ITER shots will also be much longer than on most current machines and will generate much more data, perhaps a terabyte per shot. The data volume itself, perhaps 2 PB per year, will likely not be a technical challenge by the time ITER is operating, about a decade from now. However, long-pulse operation will require concurrent writing, reading, visualization, and analysis of experimental data. More challenging is the integration across time scales: the data set will span more than a factor of 10^9 in significant time scales, leading to requirements for efficient browsing of very long data records and for the ability to describe and locate specific events accurately within very long time series. Illustrative sketches of a read-while-write access pattern and of multi-resolution event browsing follow at the end of this section.

ITER will not only be an expensive device; as a licensed nuclear facility and the first reactor-scale fusion experiment, security of the plant will be a paramount concern. The data systems must balance these requirements with the need to keep data access as open to the participating scientists as possible. Mechanisms and modalities for remote control must also fit into a robust security model.

Further, the 10-year construction and 15+ year operating life of ITER will encompass evolutionary and revolutionary changes in hardware, software, and protocols. The system must therefore be based on a conceptual design that is extensible, flexible, and robust enough to meet new requirements, and capable of adapting and migrating to new technologies and new computing platforms as they arise. Backward compatibility, the ability to read old data and perform old analyses, must be maintained over the life of the experiment.
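The first sketch below illustrates, in Python, the read-while-write requirement of long-pulse operation: a signal stored as an append-only sequence of segments that analysis or visualization clients can poll while acquisition is still in progress. This is a minimal sketch under assumed names (SegmentedSignal, append_segment, read_since are hypothetical), not the actual ITER or MDSplus interface.

```python
# Minimal sketch (hypothetical API, not ITER's or MDSplus's): a signal is stored
# as an append-only list of fixed-length segments so that readers can consume
# data while the acquisition system is still writing it during a long pulse.
import threading
import time
import random


class SegmentedSignal:
    """Append-only, thread-safe store of (start_time, samples) segments."""

    def __init__(self):
        self._segments = []          # list of (start_time, [samples])
        self._lock = threading.Lock()

    def append_segment(self, start_time, samples):
        with self._lock:
            self._segments.append((start_time, list(samples)))

    def read_since(self, index):
        """Return segments written at or after `index`, plus the next index to poll."""
        with self._lock:
            new = self._segments[index:]
            return new, len(self._segments)


def acquire(signal, n_segments=5, samples_per_segment=4):
    """Simulated acquisition: append one segment per 'second' of pulse time."""
    for i in range(n_segments):
        samples = [random.gauss(0.0, 1.0) for _ in range(samples_per_segment)]
        signal.append_segment(start_time=float(i), samples=samples)
        time.sleep(0.1)              # stand-in for the real acquisition cadence


def monitor(signal, duration=1.0):
    """Simulated client: poll for new segments while the shot is still running."""
    cursor, deadline = 0, time.time() + duration
    while time.time() < deadline:
        new, cursor = signal.read_since(cursor)
        for t0, samples in new:
            print(f"t={t0:4.1f}s  {len(samples)} new samples (e.g. {samples[0]:+.2f})")
        time.sleep(0.05)


if __name__ == "__main__":
    sig = SegmentedSignal()
    writer = threading.Thread(target=acquire, args=(sig,))
    reader = threading.Thread(target=monitor, args=(sig,))
    writer.start()
    reader.start()
    writer.join()
    reader.join()
```

The point of the design is that readers never block writers for more than a segment append, so visualization can track an hours-long pulse instead of waiting for it to end.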
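The second sketch addresses browsing across a factor of 10^9 in time scales: a min/max summary "pyramid" over a long record supports fast overview plots and coarse-to-fine location of events such as threshold crossings. Again this is an illustrative sketch, not the ITER data system; the functions build_pyramid and find_event are assumptions introduced here.

```python
# Minimal sketch (hypothetical, not the ITER data system): a min/max pyramid
# over a long sampled signal supports fast overview plots and coarse-to-fine
# location of events such as threshold crossings.
import math
import random


def build_pyramid(samples, factor=10):
    """Level 0 is the raw data; each higher level stores (min, max) over
    `factor` consecutive bins of the level below."""
    levels = [samples]
    current = [(s, s) for s in samples]
    while len(current) > factor:
        coarser = []
        for i in range(0, len(current), factor):
            chunk = current[i:i + factor]
            coarser.append((min(lo for lo, _ in chunk), max(hi for _, hi in chunk)))
        levels.append(coarser)
        current = coarser
    return levels


def find_event(levels, threshold, factor=10):
    """Locate a sample exceeding `threshold` by scanning the coarsest level and
    refining only the first candidate bin at each finer level. (A production
    browser would backtrack to later candidate bins if refinement fails.)"""
    start, end = 0, len(levels[0])      # candidate range in raw-sample indices
    for level in range(len(levels) - 1, 0, -1):
        bin_width = factor ** level
        lo_bin, hi_bin = start // bin_width, math.ceil(end / bin_width)
        hit = next((b for b in range(lo_bin, hi_bin)
                    if levels[level][b][1] > threshold), None)
        if hit is None:
            return None
        start, end = hit * bin_width, min((hit + 1) * bin_width, len(levels[0]))
    return next((i for i in range(start, end) if levels[0][i] > threshold), None)


if __name__ == "__main__":
    # A long, mostly quiet record with one spike stands in for a signal
    # spanning many decades in time scale.
    data = [random.gauss(0.0, 0.1) for _ in range(1_000_000)]
    data[712_345] = 5.0
    pyramid = build_pyramid(data)
    print("level sizes:", [len(lvl) for lvl in pyramid])
    print("first sample above 3.0:", find_event(pyramid, 3.0))
```

Because each refinement step touches only one coarse bin's worth of data, the cost of locating an event grows with the logarithm of the record length rather than with the record length itself.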