Distributed computing and data analysis in the CMS Experiment
The CMS Experiment at the Large Hadron Collider (LHC) at CERN, Geneva, is expected to start taking data during summer 2008. The CMS Computing, Software and Analysis projects will need to meet the expected performance in terms of data archiving, calibration and reconstruction at the host laboratory, as well as data transfer to many computing centres located around the world, where further archiving and re-processing will take place. Hundreds of physicists will then expect to find the necessary infrastructure in place to easily access and start analysing the long-awaited LHC data. In recent years, CMS has conducted a series of Computing, Software and Analysis challenges to demonstrate the functionality, scalability and usability of the relevant components and infrastructure. These challenges have been designed to validate the CMS distributed computing model [1] and to exercise operations under quasi-real data-taking conditions. We will present the CMS readiness in terms of data archiving, offline processing, data transfer and data analysis, focusing in particular on the metrics achieved during 2008 and, potentially, on first data-taking experience.
[1] Marcelino B. Santos et al., "CMS Physics Technical Design Report, Volume II: Physics Performance," 2007.
[2] Daniele Spiga et al., "The CMS Remote Analysis Builder (CRAB)," HiPC, 2007.