Abstract
The ATLAS experiment successfully commissioned a software and computing
infrastructure to support the physics program during LHC Run 2. The next
phases of the accelerator upgrade will present new challenges in the
offline area. In particular, at the High Luminosity LHC (also known as Run 4)
the data-taking conditions will be very demanding in terms of computing
resources: an event rate of between 5 and 10 kHz from the HLT to be
reconstructed (and possibly further reprocessed) with an average pile-up
of up to 200 events per collision, and an equivalent number of simulated
samples to be produced. The corresponding parameters for the current run
are lower by up to an order of magnitude.
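As a rough illustration of the scale (not part of the study itself), the rates quoted above can be turned into a back-of-envelope estimate of annual event counts. The sketch below uses the HLT output rates and the "equivalent number of simulated samples" from this abstract; the live time per year is an assumed round figure for illustration, not an ATLAS-quoted value.

    # Back-of-envelope estimate of HL-LHC event volumes (illustrative only).
    # HLT output rates come from the abstract; live_seconds_per_year is an
    # assumed round number, not an official ATLAS or LHC figure.
    hlt_rates_khz = (5, 10)          # HLT output rate range, kHz
    live_seconds_per_year = 7e6      # assumed LHC live time per year (s)

    for rate_khz in hlt_rates_khz:
        data_events = rate_khz * 1e3 * live_seconds_per_year
        sim_events = data_events     # "an equivalent number of simulated samples"
        print(f"{rate_khz} kHz -> ~{data_events:.1e} data events/year, "
              f"~{sim_events:.1e} simulated events/year")

At 5-10 kHz this corresponds to several times 10^10 reconstructed events per year, plus a comparable simulation load, which is the scale the flat-budget discussion below has to confront.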
While processing and storage resources would need to scale accordingly,
the funding situation allows, at best, a flat budget for offline computing
needs over the next few years. In this seminar I present a study
quantifying the computing-resource challenge at the HL-LHC, together with
ideas on the possible evolution of the ATLAS computing model, the
distributed computing tools, and the offline software to cope with such a
challenge.