The following article is Open access

Lessons learned from the ATLAS performance studies of the Iberian Cloud for the first LHC running period


Published 11 June 2014. Published under licence by IOP Publishing Ltd.

Journal of Physics: Conference Series 513 032082

Abstract

In this contribution we describe the performance of the Iberian (Spain and Portugal) ATLAS cloud during the first LHC running period (March 2010 to January 2013) in the context of the Grid computing and data distribution model. The evolution of the CPU, disk and tape resources in the Iberian Tier-1 and Tier-2s is summarized. The data distribution over all ATLAS destinations is shown, focusing on the number of files transferred and the size of the data. The status and distribution of simulation and analysis jobs within the cloud are discussed, and the distributed analysis tools used to perform physics analyses are also described. Cloud performance in terms of the availability and reliability of its sites is discussed. The effect of the changes in the ATLAS Computing Model on the cloud is analyzed. Finally, the readiness of the Iberian cloud for the first Long Shutdown (LS1) is evaluated and the actions foreseen for the coming years are outlined. The shutdown will be a good opportunity to improve and evolve the ATLAS Distributed Computing system in preparation for the future challenges of LHC operation.


Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
