Abstract
The world's largest and most powerful scientific machine - the Large Hadron Collider (LHC) - is in the middle of a multi-year physics programme of increasing energy and luminosity. In order to exploit the scientific potential of the machine, the experiments at the LHC face computational challenges with enormous data volumes that need to be analysed by thousands of physics users and compared to simulated data. Given diverse funding constraints, the computational resources for the LHC have been deployed in a worldwide mesh of data centres, connected to each other through Grid technologies.
This talk will cover the ATLAS Computing Model designed to exploit these distributed computing resources during the Run-2 (2015-2018) data-taking period. We will give an insight into the main components, in particular the data and workload management systems, and we will explain the operational model behind ATLAS Computing.