Open access

Benchmarking the ATLAS software through the Kit Validation engine

Alessandro De Salvo and Franco Brasolin

Published under licence by IOP Publishing Ltd

Citation: Alessandro De Salvo and Franco Brasolin 2010 J. Phys.: Conf. Ser. 219 042037. DOI: 10.1088/1742-6596/219/4/042037


Abstract

The measurement of experiment software performance is a key metric for choosing the most effective resources and for discovering bottlenecks in the code implementation. In this work we present the benchmark techniques used to measure ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. The performance measurements, the data collection, and the online analysis and display of the results will be presented. The results of the measurements on different platforms and architectures will be shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of multi-core computing on ATLAS software performance will also be presented, comparing the behavior of different architectures as the number of concurrent processes increases. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help define performance metrics for High Energy Physics applications, based on the real experiment software.
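The multi-core comparison described above amounts to running the same workload under an increasing number of concurrent processes and recording the aggregate throughput. The following is a minimal sketch of that measurement pattern, not the Kit Validation implementation itself: the `synthetic_event` kernel is a hypothetical stand-in for one simulated event, whereas the real benchmark runs the ATLAS generation/simulation/digitization/reconstruction chain.

```python
import time
from multiprocessing import Pool


def synthetic_event(_):
    # Hypothetical CPU-bound kernel standing in for one Monte Carlo event;
    # the real benchmark would invoke the experiment software instead.
    total = 0.0
    for i in range(1, 50_000):
        total += i ** 0.5
    return total


def throughput(n_procs, n_events=16):
    """Return events processed per second with n_procs concurrent workers."""
    start = time.perf_counter()
    with Pool(n_procs) as pool:
        pool.map(synthetic_event, range(n_events))
    return n_events / (time.perf_counter() - start)


if __name__ == "__main__":
    # Sweep the process count, as in the multi-core scaling study:
    # ideal scaling doubles throughput with the core count, and the
    # deviation from that exposes memory/architecture bottlenecks.
    for n in (1, 2, 4):
        print(f"{n} processes: {throughput(n):6.1f} events/s")
```

Plotting throughput against the process count for each platform yields the architecture comparison discussed in the paper; flattening of the curve beyond the physical core count is the typical signature of resource contention.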

