Computing for the ATLAS experiment at the LHC proton-proton collider at
CERN will pose a number of new, as yet unsolved, technological and
organisational challenges:
-
The CPU power required would currently not be affordable. For the offline
processing, it is estimated that 2.5x10^5 SPECint95 (10^7 MIPS) will be
needed. Assembling this computing power today would take about 50,000 of
the most powerful PCs (see the back-of-envelope sketch after this list).
We count on the continuing improvement of the price-performance ratio of
CPU power and of the corresponding disks and memory.
-
The data volume produced by the experiment, about 1 Pbyte (10^15 bytes)
per year, requires new methods for data reduction, data selection, and data
access for physics analysis. The basic consideration is that every
physicist in ATLAS must have the best possible access to the data necessary
for the analysis, irrespective of his/her location. The proposed scheme
consists of archiving the raw data (1 Pbyte/year) selected by the
Level-3 trigger system. A first event reconstruction will be performed
at CERN on all data, a few hours after data taking. For this processing,
basic calibration and alignment have to be available. The purpose of this
processing is to determine physics quantities for use in analysis and to
allow event classification according to physics channels. The data produced
in this processing have to be accessible at the event level and even below
that at the physics-object level (a sketch of this access pattern follows
the list). We are considering an object-oriented database system for this
purpose. One copy of the data will be held at CERN. We are also considering
replicating some or all of the data at a small number of regional centres.
-
The world-wide collaboration will require high-performance wide-area
networks for data access and for physics analysis. Big improvements in the
price-performance evolution of networks are hoped for in view of the
de-regulation of the PTTs, the evolution of the Internet and the wide-spread
use of networks for multi-media applications such as video on demand.
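As a rough illustration of the CPU estimate in the first item above, the short
Python sketch below reproduces the quoted figures. The per-PC rating of about
5 SPECint95 is only an assumption implied by the 50,000-PC statement, not a
number taken from the text.

  # Back-of-envelope check of the offline CPU estimate quoted above.
  # The per-PC rating is an assumption implied by the quoted figures
  # (2.5x10^5 SPECint95 spread over ~50,000 PCs), not a measured value.
  total_cpu_specint95 = 2.5e5   # estimated offline processing need
  specint95_per_pc = 5.0        # assumed rating of a top-end PC of the day

  pcs_needed = total_cpu_specint95 / specint95_per_pc
  print(f"PCs needed: {pcs_needed:,.0f}")   # -> PCs needed: 50,000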
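The second item describes access to the reconstruction output at the event
level and, below that, at the physics-object level. The minimal Python sketch
below illustrates that access pattern with plain in-memory objects standing in
for the object-oriented database under consideration; all class, attribute and
function names are illustrative assumptions, not ATLAS definitions.

  # Minimal sketch of event-level and physics-object-level access.  Plain
  # Python objects stand in for the object-oriented database mentioned in the
  # text; every name here is illustrative, not an ATLAS definition.
  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class PhysicsObject:
      """A reconstructed quantity below the event level, e.g. a lepton or jet."""
      kind: str      # e.g. "electron", "muon", "jet"
      pt: float      # transverse momentum
      eta: float
      phi: float

  @dataclass
  class Event:
      """One reconstructed event, classified according to physics channels."""
      run: int
      event: int
      channels: List[str] = field(default_factory=list)     # e.g. ["dilepton"]
      objects: List[PhysicsObject] = field(default_factory=list)

  def select(events, channel, min_pt):
      """Event-level selection followed by object-level navigation."""
      for ev in events:
          if channel in ev.channels:                         # event level
              leptons = [o for o in ev.objects               # object level
                         if o.kind in ("electron", "muon") and o.pt > min_pt]
              if len(leptons) >= 2:
                  yield ev

Whether the store is held only at CERN or replicated at regional centres, the
same kind of selection would run against a local copy of the event data.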
The effort required to develop and maintain the ATLAS software will be
enormous. Because the success of the whole experiment depends on the
software, and because of the long project lifetime of about 20 years, the
software quality requirements will have to be very high. For the whole ATLAS
software development, up to 1000 person-years will be required. It appears
that the overall manpower is available within the collaboration. A
complication is that the workforce is spread very widely geographically and
that many developers will be students who can spend only a few years in the
project.