
Table of contents

Volume 523

2014


15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013) 16–21 May 2013, Beijing, China

Accepted papers received: 12 May 2014
Published online: 06 June 2014

Preface

Preface

011001
The following article is Open access

This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013) which took place on 16–21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China.

The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques.

This year's edition of the workshop brought together over 120 participants from all over the world. Eighteen invited speakers presented key topics ranging from the universe in the computer and computing in the Earth sciences to multivariate data analysis and automated computation in quantum field theory, as well as computing and data-analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round-table discussions on open source, knowledge sharing and scientific collaboration prompted us to reflect further on these issues in our respective areas.

ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory (BNL) in the USA, Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon.

We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all of the workshop's activities. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn.

Professor Jianxiong Wang, Institute of High Energy Physics, Chinese Academy of Sciences

Details of committees and sponsors are available in the PDF

011002
The following article is Open access

All papers published in this volume of Journal of Physics: Conference Series have been peer reviewed through processes administered by the proceedings Editors. Reviews were conducted by expert referees to the professional and scientific standards expected of a proceedings journal published by IOP Publishing.

Track 1

Computing Technology for Physics Research

012001
The following article is Open access

The history of supercomputers in Japan and the U.S. is briefly summarized and the differences between the two are discussed. The development of the K Computer project in Japan is described and compared to other petaflops projects. The difficulties to be solved in the exascale computing projects now under development are discussed.

012002
The following article is Open access

This work discusses the significant changes in the computing landscape related to the progression of Moore's Law, and their implications for scientific computing. Particular attention is devoted to the High Energy Physics (HEP) domain, which has always made good use of threading, but where levels of parallelism closer to the hardware were often left underutilized. Findings of the CERN openlab Platform Competence Center are reported in the context of expanding "performance dimensions", and especially the resurgence of vectors. These suggest that data-oriented designs are feasible in HEP and have considerable potential for performance improvements on multiple levels, but will rarely trump algorithmic enhancements. Finally, an analysis of upcoming hardware and software technologies identifies heterogeneity as a major challenge for software, which will require more emphasis on scalable, efficient design.
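As an illustration of the contrast between object-oriented and data-oriented layouts alluded to above, the following sketch (in Python, with all names hypothetical and unrelated to the paper's code) compares an array-of-structs layout with a struct-of-arrays layout; the latter keeps each attribute contiguous in memory, which is what lets array libraries and vectorising compilers engage the hardware's vector units:

```python
import math

# Array-of-structs: one record per particle (object-oriented style).
particles_aos = [{"px": float(i), "py": float(2 * i)} for i in range(4)]

# Struct-of-arrays: one contiguous column per attribute; this layout
# maps naturally onto hardware vector units in array libraries.
particles_soa = {
    "px": [float(i) for i in range(4)],
    "py": [float(2 * i) for i in range(4)],
}

def pt_aos(parts):
    """Transverse momentum, iterating record by record."""
    return [math.hypot(p["px"], p["py"]) for p in parts]

def pt_soa(soa):
    """Transverse momentum computed column-wise over whole attributes."""
    return [math.hypot(x, y) for x, y in zip(soa["px"], soa["py"])]

# Both layouts give identical physics results; only memory access differs.
assert pt_aos(particles_aos) == pt_soa(particles_soa)
```

In pure Python both versions execute scalar code; the point is the memory layout. With NumPy or a vectorising compiler, the struct-of-arrays form reduces to a handful of vector instructions over contiguous buffers.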

012003
The following article is Open access


The Daya Bay Reactor Neutrino Experiment started running on September 23, 2011. The Performance Quality Monitoring (PQM) system has been developed to monitor the detector performance and data quality. Its main feature is the ability to efficiently process multiple data streams from the three experimental halls. The PQM processes raw data files from the Daya Bay data acquisition system, generates and publishes histograms via a graphical web interface by executing user-defined algorithm modules, and saves the histograms for permanent storage. The whole process takes only around 40 minutes, which makes it valuable for the shift crew in monitoring the running status of all the sub-detectors and the data quality.

012004
The following article is Open access


High Energy Physics has traditionally been a technology-limited science that has pushed the boundaries of both the detectors collecting the information about the particles and the computing infrastructure processing this information. In recent years, however, the increase in computing power has come in the form of increased parallelism at all levels, and High Energy Physics now has to optimise its code to take advantage of the new architectures, including GPUs and hybrid systems. One of the primary targets for optimisation is the particle transport code used to simulate the detector response, as it is largely experiment-independent and one of the most demanding applications in terms of CPU resources. The Geant Vector Prototype project explores innovative designs in particle transport with the goal of obtaining maximal performance on the new architectures. This paper describes the current status of the project and its future plans and perspectives. In particular, we describe how the present design tries to expose the parallelism of the problem at all possible levels, aiming to minimise contention and maximise concurrency, both at the coarse granularity level (threads) and at the micro granularity one (vectorisation, instruction pipelining, multiple instructions per cycle).

012005
The following article is Open access


The ALICE experiment at CERN employs a number of human operators (shifters), who have to make sure that the experiment is always in a state compatible with taking Physics data. Given the complexity of the system and the myriad of errors that can arise, this is not always a trivial task. The aim of this paper is to describe an expert system that is capable of assisting human shifters in the ALICE control room. The system diagnoses potential issues and attempts to make smart recommendations for troubleshooting. At its core, a Prolog engine infers whether a Physics or a technical run can be started based on the current state of the underlying sub-systems. A separate C++ component queries certain SMI objects and stores their state as facts in a Prolog knowledge base. By mining the data stored in different system logs, the expert system can also diagnose errors arising during a run. Currently the system is used by the on-call experts for faster response times, but we expect it to be adopted as a standard tool by regular shifters during the next data taking period.

012006
The following article is Open access


The quantity of information produced in Nuclear and Particle Physics (NPP) experiments necessitates the transmission and storage of data across diverse collections of computing resources. Robust solutions such as XRootD have been used in NPP, but as the usage of cloud resources grows, the difficulties in the dynamic configuration of these systems become a concern. The Hadoop File System (HDFS) exists as a possible cloud storage solution with a proven track record in dynamic environments. Though currently not extensively used in NPP, HDFS is an attractive solution offering both elastic storage and rapid deployment. We present the performance of HDFS in both canonical I/O tests and for a typical data analysis pattern within the RHIC/STAR experimental framework. These tests explore the scaling with different levels of redundancy and numbers of clients. Additionally, the performance of FUSE and NFS interfaces to HDFS was evaluated as a way to allow existing software to function without modification. Unfortunately, the complicated data structures in NPP are non-trivial to integrate with Hadoop, and so many of the benefits of the MapReduce paradigm could not be directly realized. Despite this, our results indicate that using HDFS as a distributed filesystem offers reasonable performance and scalability, and that it excels in its ease of configuration and deployment in a cloud environment.

012007
The following article is Open access


We describe a pilot project (GAP – GPU Application Project) for the use of GPUs (Graphics Processing Units) for online triggering applications in High Energy Physics experiments. Two major trends can be identified in the development of trigger and DAQ systems for particle physics experiments: the massive use of general-purpose commodity systems such as commercial multicore PC farms for data acquisition, and the reduction of trigger levels implemented in hardware, towards a fully software data selection system ("trigger-less"). The innovative approach presented here aims at exploiting the parallel computing power of commercial GPUs to perform fast computations in software, not only at high trigger levels but also in early trigger stages. General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughputs, the use of such devices for real-time applications in high energy physics data acquisition and trigger systems is becoming relevant. We discuss in detail the use of online parallel computing on GPUs for synchronous low-level triggers with fixed latency. In particular we show preliminary results from a first test in the CERN NA62 experiment. The use of GPUs in high-level triggers is also considered, with the CERN ATLAS experiment taken as a case study of possible applications.

012008
The following article is Open access


The BESIII experiment operates in the τ-charm threshold energy region. It has collected the world's largest data samples of J/ψ, ψ(3686), ψ(3770) and ψ(4040) decays. These data are being used for a variety of interesting and unique studies of light-hadron spectroscopy, precision charmonium physics and high-statistics measurements of D meson decays. As one of the experiments at the high-luminosity frontier, data processing at BESIII is computationally very expensive for large data sets. In this paper, we report on two recent advances in the use of high-performance computing: a tag-based preselection for data reduction, and GPUPWA, a partial wave analysis (PWA) framework harnessing GPU parallel computing.

012009
The following article is Open access


Power efficiency is becoming an ever more important metric for both high performance and high throughput computing. Over the course of the next decade it is expected that flops/watt will be a major driver for the evolution of computer architecture. Servers with large numbers of ARM processors, already ubiquitous in mobile computing, are a promising alternative to traditional x86-64 computing. We present the results of our initial investigations into the use of ARM processors for scientific computing applications. In particular we report the results from our work with a current-generation ARMv7 development board, exploring ARM-specific issues regarding the software development environment, operating system, performance benchmarks and issues for porting High Energy Physics software.

012010
The following article is Open access


Since the ALICE experiment began data taking in early 2010, the number of end-user jobs on the AliEn Grid has increased significantly. Presently, one third of the 40K CPU cores available to ALICE are occupied by jobs submitted by about 400 distinct users, individually or in organized analysis trains. The overall stability of the AliEn middleware has been excellent throughout the three years of running, but the massive amount of end-user analysis, with its specific requirements and load, has revealed a few components that can be improved. One of them is the interface between users and the central AliEn services (catalogue, job submission system), which we are currently re-implementing in Java. The interface provides a persistent connection with enhanced data and job submission authenticity. In this paper we describe the architecture of the new interface, the ROOT binding which enables the use of a single interface alongside the standard UNIX-like access shell, and the new security-related features.

012011
The following article is Open access

The second part of this two-part account of the enabling technologies behind the successful completion of the Troitsk-nu-mass experiment summarizes almost 20 years' experience of using Oberon/Component Pascal in complex applications, ranging from cutting-edge computer algebra (b-quark decays etc.) to experimental data processing (neutrino mass etc.), exploratory algorithm design (the optimal jet finder etc.) and systematic computer-science education (the international project Informatika-21).

012012
The following article is Open access


In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.

012013
The following article is Open access


APEnet+ is an INFN (Italian Institute for Nuclear Physics) project aiming to develop a custom 3-dimensional torus interconnect network optimized for hybrid CPU-GPU clusters dedicated to high-performance scientific computing. The APEnet+ interconnect fabric is built on an FPGA-based PCI-Express board with 6 bi-directional off-board links offering 34 Gbps of raw bandwidth per direction, and leverages the peer-to-peer capabilities of Fermi- and Kepler-class NVIDIA GPUs to obtain true zero-copy, low-latency GPU-to-GPU transfers. The minimization of APEnet+ transfer latency is achieved through the adoption of an RDMA protocol implemented in the FPGA, with specialized hardware blocks tightly coupled to an embedded microprocessor. This architecture provides a high-performance, low-latency offload engine for both the transmit and receive sides of data transactions; preliminary results are encouraging, showing a 50% bandwidth increase for large packet-size transfers. In this paper we describe the APEnet+ architecture, detail the hardware implementation and discuss the impact of such specialized RDMA hardware on host-interface latency and bandwidth.

012014
The following article is Open access


Performance is a critical issue in a production system accommodating hundreds of analysis users. Compared to a local session, distributed analysis is exposed to services and network latencies, remote data access and heterogeneous computing infrastructure, creating a more complex performance and efficiency optimization matrix. During the last two years, ALICE analysis shifted from a fast development phase to more mature and stable code. At the same time, the frameworks and tools for deployment, monitoring and management of large productions have evolved considerably too. The ALICE Grid production system is currently used by a fair share of organized and individual user analysis, consuming up to 30% of the available resources and ranging from fully I/O-bound analysis code to CPU-intensive correlation or resonance studies. While the intrinsic analysis performance is unlikely to improve by a large factor during the LHC long shutdown (LS1), the overall efficiency of the system still has to be improved significantly to satisfy the analysis needs. We have instrumented all analysis jobs with "sensors" collecting comprehensive monitoring information on the job running conditions and performance in order to identify bottlenecks in the data processing flow. These data are collected by the MonALISA-based ALICE Grid monitoring system and are used to steer and improve the job submission and management policy, to identify operational problems in real time and to perform automatic corrective actions. In parallel with an upgrade of our production system, we are aiming for low-level improvements related to data format, data management and merging of results to allow for a better-performing ALICE analysis.

012015
The following article is Open access


Process checkpoint-restart is a technology with great potential for use in HEP workflows. Use cases include debugging; reducing the startup time of applications, both in offline batch jobs and in the High Level Trigger; permitting job preemption in environments where spare CPU cycles are used opportunistically; and efficient scheduling of a mix of multicore and single-threaded jobs. We report on tests of checkpoint-restart technology using CMS software, Geant4-MT (multi-threaded Geant4), and the DMTCP (Distributed Multithreaded Checkpointing) package. We analyze both single- and multi-threaded applications and test on both standard Intel x86 architectures and on Intel MIC. The tests with multi-threaded applications on Intel MIC are used to assess scalability and performance, and are considered an indicator of what the future may hold for many-core computing.

012016
The following article is Open access


Processing data in distributed environments has found application in many fields of science (Nuclear and Particle Physics (NPP), astronomy and biology, to name only a few). Efficiently transferring data between sites is an essential part of such processing. The implementation of caching strategies in data-transfer software and tools, such as the Reasoner for Intelligent File Transfer (RIFT) being developed in the STAR collaboration, can significantly decrease network load and waiting time by reusing knowledge of data provenance as well as data placed in the transfer cache, further expanding the availability of sources for files and data-sets. Although a great variety of caching algorithms is known, a study is needed to evaluate which one delivers the best data-access performance under realistic demand patterns. Records of access to the complete data-sets of NPP experiments were analyzed and used as input for computer simulations. A series of simulations was performed in order to estimate the possible cache hits and cache hits per byte for known caching algorithms. The simulations were done for caches of different sizes, within the interval 0.001–90% of the complete data-set, and a low watermark within 0–90%. Records of data access were taken from several experiments and different time intervals in order to validate the results. In this paper, we discuss different data-caching strategies, from canonical algorithms to hybrid cache strategies, present the results of our simulations for the various algorithms, and identify the best algorithm in the context of physics data analysis in NPP. While the results of these studies have been implemented in RIFT, they can also be used when setting up a cache in any other computational work-flow (Cloud processing, for example) or when managing data storage with partial replicas of the entire data-set.
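As a toy illustration of the kind of simulation described above (a hypothetical sketch, not RIFT's actual code), the following replays a synthetic access trace through an LRU cache, one of the canonical algorithms, and reports the cache-hit fraction:

```python
from collections import OrderedDict

def simulate_lru(trace, capacity):
    """Replay an access trace through an LRU cache of fixed capacity
    and return the fraction of accesses served from cache."""
    cache = OrderedDict()  # keys kept in recency order (oldest first)
    hits = 0
    for item in trace:
        if item in cache:
            hits += 1
            cache.move_to_end(item)        # refresh recency on a hit
        else:
            cache[item] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(trace)

# A toy trace with strong temporal locality: popular files recur often.
trace = ["a", "b", "a", "c", "a", "b", "d", "a", "b", "c"]
hit_rate = simulate_lru(trace, capacity=3)  # → 0.5 for this trace
```

A study like the one above would sweep the cache size (and a low-watermark eviction threshold) while replaying recorded access logs from real experiments instead of a toy trace, and would compare several algorithms on the same input.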

012017
The following article is Open access


In the simulation of High Energy Physics experiments, very high precision in the description of the detector geometry is essential to achieve the required performance. The physicists in charge of the Monte Carlo simulation of the detector need to collaborate efficiently with the engineers working on its mechanical design. Often, this collaboration is made hard by the use of different and incompatible software: ROOT is an object-oriented C++ framework used by physicists for storing, analyzing and simulating data produced by high-energy physics experiments, while CAD (Computer-Aided Design) software is used for mechanical design in the engineering field. The need to improve communication between physicists and engineers led to the implementation of an interface between the ROOT geometrical modeler, used by the Virtual Monte Carlo simulation software, and CAD systems. In this paper we describe the design and implementation of the TGeoCad interface, which has been developed to enable the use of ROOT geometrical models in several CAD systems. To achieve this goal, the ROOT geometry description is converted into the STEP file format (ISO 10303), which can be imported and used by many CAD systems.

012018
The following article is Open access


The ATLAS experiment at CERN's Large Hadron Collider (LHC) deploys a three-level processing scheme for the trigger system. The development of fast trigger algorithms and the design of topological selections are the main challenges in enabling a large program of physics analyses. In the following, two of the ATLAS trigger signatures are described: the muon and tau triggers. The structure of the three levels of these two trigger signatures is explained in detail, as well as their performance during the first three years of operation.

012019
The following article is Open access


The ATLAS experiment, aimed at recording the results of LHC proton-proton collisions, is upgrading its Trigger and Data Acquisition (TDAQ) system during the current LHC first long shutdown. The purpose of the upgrade is to add robustness and flexibility to the selection and conveyance of the physics data, simplify the maintenance of the infrastructure, exploit new technologies and, overall, make ATLAS data-taking capable of dealing with increasing event rates. The TDAQ system used to date is organised in a three-level selection scheme, including a hardware-based first-level trigger and second- and third-level triggers implemented as separate software systems distributed on separate, commodity hardware nodes. While this architecture was successfully operated well beyond the original design goals, the accumulated experience stimulated interest in exploring possible evolutions. We will also be upgrading the hardware of the TDAQ system by introducing new elements to it. For the high-level trigger, the current plan is to deploy a single homogeneous system, which merges the execution of the second and third trigger levels, while keeping them logically separate, on a single hardware node. Prototyping efforts have already demonstrated many benefits of the simplified design. In this paper we report on the design and development status of this new system.

012020
The following article is Open access


The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb−1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, energies and event complexities. An essential requirement will be the efficient utilisation of current and future processor technologies as well as a broad range of computing platforms, including supercomputing and cloud resources. We will report on experience gained thus far and our progress in preparing ATLAS computing for the future.

012021
The following article is Open access


Visual Physics Analysis (VISPA) is a web-based development environment addressing high energy and astroparticle physics. It covers the entire analysis spectrum from the design and validation phase to the execution of analyses and the visualization of results. VISPA provides a graphical steering of the analysis flow, which consists of self-written, re-usable Python and C++ modules for more demanding tasks. All common operating systems are supported since a standard internet browser is the only software requirement for users. Even access via mobile and touch-compatible devices is possible. In this contribution, we present the most recent developments of our web application concerning technical, state-of-the-art approaches as well as practical experiences. One of the key features is the use of workspaces, i.e. user-configurable connections to remote machines supplying resources and local file access. Thereby, workspaces enable the management of data, computing resources (e.g. remote clusters or computing grids), and additional software either centralized or individually. We further report on the results of an application with more than 100 third-year students using VISPA for their regular particle physics exercises during the winter term 2012/13. Besides the ambition to support and simplify the development cycle of physics analyses, new use cases such as fast, location-independent status queries, the validation of results, and the ability to share analyses within worldwide collaborations with a single click become conceivable.

Track 2

Data Analysis - Algorithms and Tools

012022
The following article is Open access

Increasing beam intensities and input data rates require rethinking traditional approaches to trigger design. At the same time, advanced many-core computer architectures, which provide new dimensions in programming, require reworking the standard methods of track reconstruction, or developing new ones, in order to use the parallelism of the computer hardware efficiently. As a result, there is a new tendency to replace the standard hardware triggers (usually implemented in FPGAs) with clusters of computers running software reconstruction and selection algorithms. In addition, this makes it possible to unify the offline and online data processing and analysis in one software package running on a heterogeneous computer farm.

012023
The following article is Open access


A novel technique using a set of artificial neural networks to identify and split merged measurements created by multiple charged particles in the ATLAS pixel detector is presented. Such merged measurements are a common feature of boosted physics objects such as tau leptons or strongly energetic jets where particles are highly collimated. The neural networks are trained using Monte Carlo samples produced with a detailed detector simulation. The performance of the splitting technique is quantified using LHC data collected by the ATLAS detector and Monte Carlo simulation. The number of shared hits per track is significantly reduced, particularly in boosted systems, which increases the reconstruction efficiency and quality. The improved position and error estimates of the measurements lead to a sizable improvement of the track and vertex resolution.

012024
The following article is Open access


The CMS all-silicon tracker consists of 16 588 modules, embedded in a solenoidal magnet providing a field of B = 3.8 T. The targeted performance requires that the software alignment tools determine the module positions with a precision of a few micrometers. Ultimate local precision is reached by the determination of sensor curvatures, challenging the algorithms to determine about 200 000 parameters simultaneously. The main remaining challenge for alignment is the global distortions that systematically bias the track parameters and thus physics measurements. They are controlled by adding further information into the alignment work-flow, e.g. the mass of decaying resonances or track data taken with B = 0 T. To make use of the latter, and also to integrate the determination of the Lorentz angle into the alignment procedure, the alignment framework has been extended to treat position-sensitive calibration parameters. This is relevant since, due to the increased LHC luminosity in 2012, the Lorentz angle exhibits a time dependence. Cooling failures and ramping of the magnet can induce movements of large detector sub-structures. These movements are now detected in the CMS prompt calibration loop to make the corrections available for the reconstruction of the data for physics analysis. The geometries are finally carefully validated. The monitored quantities include the basic track quantities for tracks from both collisions and cosmic-ray muons, as well as physics observables.

012025
The following article is Open access


The Daya Bay reactor neutrino experiment is designed to precisely determine the neutrino mixing angle θ13. In this paper, we present an algorithm using the maximum likelihood (ML) method to reconstruct the vertex and energy of events in the anti-neutrino detector, based on a simplified optical model describing light propagation. The key parameters of the optical model are calibrated with a 60Co source, by comparing the predicted charges of the PMTs with the observed charges. With the optimized parameters, the ML reconstruction provides uniform energy reconstruction and a vertex reconstruction with small bias along the radial direction.
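A highly simplified, hypothetical sketch of the ML idea (a 1D toy with an inverse-square "optical model", not the Daya Bay detector model; all names and numbers are illustrative): predict the PMT charges for a trial vertex, score them with a Poisson likelihood, and pick the vertex that minimizes the negative log-likelihood.

```python
import math

def predicted_charge(vertex, pmt_pos, intensity=100.0):
    """Toy optical model: expected charge falls off as 1/r^2, with a
    small floor to avoid the singularity. Not the Daya Bay model."""
    r2 = (vertex - pmt_pos) ** 2 + 0.1
    return intensity / r2

def neg_log_likelihood(vertex, pmt_positions, observed):
    """Poisson negative log-likelihood of the observed PMT charges
    (dropping the vertex-independent log-factorial term)."""
    nll = 0.0
    for pos, q in zip(pmt_positions, observed):
        mu = predicted_charge(vertex, pos)
        nll += mu - q * math.log(mu)
    return nll

def reconstruct_vertex(pmt_positions, observed, lo=-1.0, hi=1.0, steps=2001):
    """Grid-search ML estimate of the 1D vertex position."""
    grid = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return min(grid, key=lambda v: neg_log_likelihood(v, pmt_positions, observed))

pmts = [-1.0, -0.5, 0.5, 1.0]
true_vertex = 0.3
observed = [predicted_charge(true_vertex, p) for p in pmts]  # noise-free toy
assert abs(reconstruct_vertex(pmts, observed) - true_vertex) < 1e-2
```

The real reconstruction calibrates the optical-model parameters (here, the arbitrary intensity and smoothing floor) against a 60Co source and fits vertex and energy in three dimensions.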

012026
The following article is Open access


In this paper, we describe some of the computational challenges that need to be addressed when developing active space radiation monitors and dosimeters using the Timepix detectors developed by the Medipix2 Collaboration at CERN. Measurements of the Linear Energy Transfer (LET) and of the source and velocity of incident ionizing radiation are of initial interest when developing such operational devices, because they provide the capability to calculate the dose equivalent and to characterize the radiation field for the design of radiation-protective devices. In order to facilitate the LET measurement, we first propose a new method for calculating the azimuth direction and polar angle of individual tracks of penetrating charged particles based on the pixel clusters they produce. We then describe an energy compensation method for heavy-ion tracks suffering from saturation and plasma effects. Finally, we identify interactions within the detector that need to be excluded from the total effective dose-equivalent assessment. For evaluation purposes, we make use of data taken at the HIMAC (Heavy Ion Medical Accelerator Center) facility in Chiba, Japan and at NSRL (NASA Space Radiation Laboratory) at Brookhaven National Laboratory in New York, USA.

012027
The following article is Open access

Use of the Standard Tessellation Language (STL) for automatic transport of CAD geometry into Geant is presented. A hybrid approach combining Geant native and STL objects is preferred. The trade-offs between the CPU cost of the simulation and the accuracy of the tessellation are discussed.

012028
The following article is Open access

Given a sample of experimental events and a set of theoretical models, the matrix element method (MEM) is a procedure for selecting the most plausible model governing the production of these events. From a theoretical point of view, it is probably the most powerful multivariate analysis technique, since it maximally uses the information contained in the Feynman amplitudes. The technique is now widely known, having been used, for example, for the precision top-mass measurement at the Tevatron.

The MadWeight software is presented. MadWeight is a phase-space generator designed for the automated numerical estimation of matrix elements based on MadGraph amplitudes. With modern computing resources, it allows the large-scale deployment of the MEM technique on high-statistics data and simulated samples.

Several applications of the method at the LHC are discussed, including the measurement of the spin and parity of the recently discovered boson, signal-to-background discrimination, full differential spectrum estimation and other promising uses.
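The core of the MEM can be sketched in one dimension: the weight of an event under a model is the phase-space integral of the squared matrix element convolved with a detector transfer function, estimated here by plain Monte Carlo. The "matrix elements", transfer function width and sample size below are toy assumptions, not MadWeight internals.

```python
import math, random

random.seed(7)

def transfer(x, y, sigma=0.1):
    """Detector transfer function W(x|y): Gaussian smearing of the parton-level y."""
    return math.exp(-0.5 * ((x - y) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Toy "matrix elements": normalized densities of a parton-level observable y in [0, 1].
def me_signal(y):      # model whose |M|^2 grows with y
    return 2.0 * y

def me_background(y):  # flat model
    return 1.0

def mem_weight(x, me, n_mc=20000):
    """P(x | model): Monte Carlo integral of |M|^2(y) * W(x|y) over phase space."""
    total = 0.0
    for _ in range(n_mc):
        y = random.random()          # flat phase-space point
        total += me(y) * transfer(x, y)
    return total / n_mc

def classify(event_x):
    """Select the model with the larger MEM weight for this event."""
    ws = mem_weight(event_x, me_signal)
    wb = mem_weight(event_x, me_background)
    return "signal" if ws > wb else "background"
```

An event observed at large x is assigned to the rising "signal" density, one at small x to the flat "background"; in a real analysis the same construction runs over the full multidimensional phase space with MadGraph amplitudes.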

012029
The following article is Open access

and

This article describes a multivariate polynomial regression method in which the uncertainties of the input parameters are approximated with Gaussian distributions, derived via the central limit theorem for large weighted sums directly from the training sample. The estimated uncertainties can be propagated into the optimal fit function, as an alternative to the statistical bootstrap method. This uncertainty can be propagated further into a loss-function-like quantity, making it possible to calculate the expected loss function and to select the optimal polynomial degree with statistical significance. Combined with simple phase-space splitting methods, it is possible to model most features of the training data even with low-degree polynomials or constants.
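The degree-selection step can be illustrated with a least-squares polynomial fit whose expected loss is the training loss plus a variance-based complexity penalty (a Mallows' Cp-flavoured stand-in for the propagated expected loss of the paper). All numerical choices below, including the penalty form and the toy data, are assumptions for illustration.

```python
import random

random.seed(1)

def design_row(x, degree):
    return [x ** k for k in range(degree + 1)]

def solve(a, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations."""
    p = degree + 1
    ata = [[sum(design_row(x, degree)[i] * design_row(x, degree)[j] for x in xs)
            for j in range(p)] for i in range(p)]
    atb = [sum(design_row(x, degree)[i] * y for x, y in zip(xs, ys)) for i in range(p)]
    return solve(ata, atb)

def expected_loss(xs, ys, degree):
    """Training MSE plus a complexity penalty, standing in for the propagated
    expected loss used to pick the polynomial degree."""
    coef = fit_poly(xs, ys, degree)
    pred = [sum(c * x ** k for k, c in enumerate(coef)) for x in xs]
    mse = sum((p - y) ** 2 for p, y in zip(ys, pred)) / len(xs)
    sigma2 = mse * len(xs) / max(1, len(xs) - degree - 1)
    return mse + 2.0 * (degree + 1) * sigma2 / len(xs)

# Quadratic truth with Gaussian noise: degree selection should settle near 2.
xs = [i / 20.0 for i in range(-20, 21)]
ys = [1.0 - 2.0 * x + 3.0 * x * x + random.gauss(0, 0.05) for x in xs]
best_degree = min(range(6), key=lambda d: expected_loss(xs, ys, d))
```

On this sample the penalized loss rules out underfitting degrees, and the fitted quadratic coefficients land close to the generating values.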

012030
The following article is Open access

In this first part of a two-part account of the enabling technologies behind the successful completion of the Troitsk-nu-mass experiment, the method of quasi-optimal weights for parameter estimation is reviewed. In terms of statistical quality it is on a par with the maximum likelihood method, but it exceeds the latter in analytical transparency, flexibility, scope of applicability and numerical robustness. It also couples perfectly with the optimal jet definition and thus provides a comprehensive framework for data processing in particle physics and beyond.

012031
The following article is Open access

, and

Several scenarios, both present and future, require re-simulation of the trigger response in the ATLAS experiment at the LHC. While the software for detector response simulation and event reconstruction is allowed to change and improve, the trigger response simulation has to reflect the conditions under which the data were taken. This poses a maintenance and data-preservation problem. Several strategies have been considered, and a proof-of-concept model using virtualization has been developed. While virtualization with CernVM elegantly solves several aspects of the data-preservation problem, the limitations of current methods for contextualization of the virtual machine, as well as incompatibilities in the currently used data format, introduce new challenges. In these proceedings these challenges, their current solutions and the proof-of-concept model for precise trigger simulation are discussed.

012032
The following article is Open access

and

MadAnalysis 5 is a new Python/C++ package facilitating the phenomenological analyses that can be performed in the framework of Monte Carlo simulations of collisions produced in high-energy physics experiments. By means of a user-friendly interpreter, it allows one to perform professional physics analyses in a very simple way. Starting from event samples generated by any Monte Carlo event generator, large classes of selections can be implemented through intuitive commands, many standard kinematical distributions can be automatically represented as histograms, and all results are eventually gathered into detailed HTML and LaTeX reports. In this work, we briefly report on the latest developments of the code, focusing on the interface to the FastJet program dedicated to jet reconstruction.

012033
The following article is Open access

The new version of the DELPHES C++ fast-simulation framework is presented. The tool is written in C++ and is interfaced with the most common Monte Carlo file formats (LHEF, HepMC, STDHEP). Its purpose is the simulation of a multipurpose detector response, which includes a track propagation system embedded in a magnetic field, electromagnetic and hadronic calorimeters, and a muon system. The new modular version allows one to easily produce the collections needed for later analysis, from low-level objects such as tracks and calorimeter deposits up to high-level collections such as isolated electrons, jets, taus and missing energy.

012034
The following article is Open access

, , , , , and

The R3B experiment (Reaction studies with Relativistic Radioactive Beams) will be built at the future FAIR (Facility for Antiproton and Ion Research) at GSI in Darmstadt, Germany. The international R3B collaboration has a scientific program devoted to the physics of stable and radioactive beams at energies between 150 MeV and 1.5 GeV per nucleon. In preparation for the experiment, the R3BRoot software framework is under development; it delivers detector simulation, reconstruction and data analysis. The basic functionality of the framework is provided by the FairRoot framework, which is also used by the other FAIR experiments (CBM, PANDA, ASYEOS, etc.), while the R3B detector specifics and reconstruction code are implemented inside R3BRoot. In this contribution, first results of the data analysis from the detector prototype test in November 2012 are reported; moreover, a comparison of the tracker performance with experimental data is presented.

012035
The following article is Open access

and

The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production currently consumes the largest share of the computing resources used by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large-scale Monte Carlo production in the ATLAS experiment for Run 2 and beyond. A number of fast detector simulation, digitization and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To benefit optimally from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

012036
The following article is Open access

, , and

After the current maintenance period, the LHC will provide higher-energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, the ATLAS code base comprises approximately 6M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort, different profiling tools and techniques are being used. These include well-known tools, such as the Valgrind suite and Intel Amplifier; less common tools like Pin, PAPI and GOoDA; as well as techniques such as library interposing. In this paper we mainly focus on Pin tools and GOoDA. Pin is a dynamic binary instrumentation tool which can obtain statistics such as call counts and instruction counts and interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations, which has provided the insight necessary to achieve significant performance improvements. Complementing this, GOoDA, an in-house performance tool built in collaboration with Google and based on hardware performance monitoring unit events, is used to identify hot-spots in the code for different types of hardware limitations, such as CPU resources, caches or memory bandwidth. GOoDA has been used to improve the performance of the new magnetic-field code and to identify potential vectorization targets in several places, such as the Runge-Kutta propagation code.

Track 3

Computations in Theoretical Physics: Techniques and Methods

012037
The following article is Open access

A large class of Feynman integrals, e.g. two-point parameter integrals with at most one mass and containing local operator insertions, can be transformed into multi-sums over hypergeometric expressions. In this survey article we present a difference-field approach to symbolic summation that enables one to simplify such definite nested sums to indefinite nested sums. In particular, the simplification is given, where possible, in terms of harmonic sums, generalized harmonic sums, cyclotomic harmonic sums or binomial sums. Special emphasis is put on the Mathematica packages Sigma, EvaluateMultiSums and SumProduction, which assist in performing these simplifications completely automatically for huge input expressions.
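The flavour of such simplifications can be checked numerically: a definite nested sum collapses to a closed expression in the harmonic sum S_1 alone. The sketch below verifies one classic identity of this kind in exact rational arithmetic; it is a toy check, not output of the Sigma package.

```python
from fractions import Fraction

def S1(n):
    """Harmonic sum S_1(n) = sum_{i=1}^n 1/i, in exact arithmetic."""
    return sum(Fraction(1, i) for i in range(1, n + 1))

def nested(n):
    """Definite nested sum sum_{i=1}^n sum_{j=1}^i 1/j."""
    return sum(S1(i) for i in range(1, n + 1))

# The kind of identity symbolic summation produces: the double sum
# collapses to an expression in S_1 alone,
#   sum_{i=1}^N S_1(i) = (N + 1) S_1(N) - N.
for n in range(1, 30):
    assert nested(n) == (n + 1) * S1(n) - n
```

The packages above derive such identities symbolically for far more general weighted and cyclotomic sums, where a numerical check like this only serves as a sanity test.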

012038
The following article is Open access

, , and

This report is devoted to the automated analytical calculation of heavy-quarkonium production processes in modern experiments such as the LHC, B-factories and super-B factories. The theoretical description of heavy quarkonia is based on the factorization theorem. This theorem leads to a special structure of the production amplitudes, which can be exploited to develop a computer algorithm that calculates these amplitudes automatically. This report describes that algorithm. As examples of its application, we present the results of the calculation of double charmonium production in bottomonium decays and of inclusive χcJ meson production in pp collisions.

012039
The following article is Open access

, , and

In this report, FDC, an automated calculation package based on REDUCE and RLISP, is introduced, with emphasis on its one-loop calculation part and its special treatment of quarkonium physics. With FDC, many works have been completed, most of them very important for solving or clarifying current puzzles in quarkonium physics.

012040
The following article is Open access

We review some of the recent advances in the computation of one-loop scattering amplitudes which led to the construction of efficient and automated computational tools for NLO predictions. Particular attention is devoted to unitarity-based methods and integrand-level reduction techniques. Extensions of one-loop integrand-level techniques to higher orders are also briefly illustrated.

012041
The following article is Open access

In recent decades, it has been realized that next-to-leading-order corrections may become very important, and sometimes requisite, for some processes involving quarkonium production or decay, e.g. e+e− → J/ψ + ηc and J/ψ → 3γ. In this article, we review the basic steps to perform automated one-loop computations for quarkonium processes within the non-relativistic Quantum Chromodynamics (NRQCD) factorization framework, and we give an introduction to some related public tools and packages and their usage in each step. We start from generating Feynman diagrams and amplitudes with FEYNARTS for the quarkonium process, performing Dirac and color algebra simplifications using FEYNCALC and FEYNCALCFORMLINK, then carrying out partial fractioning of the linearly dependent propagators with APART, and finally reducing the tensor integrals (TI) into scalar integrals (SI) or master integrals (MI) with the integration-by-parts (IBP) method with the help of FIRE. We use a simple concrete example to demonstrate the basic usage of the corresponding packages and tools in each step.

012042
The following article is Open access

Doubly heavy mesons and baryons provide a good platform for testing pQCD. Two highly efficient generators, BCVEGPY and GENXICC, for simulating the hadronic production of doubly heavy mesons and baryons have been developed in recent years. In this talk, we present their main ideas and their recent progress. The dominant gluon-gluon fusion mechanism programmed in these two generators is written based on the improved helicity amplitude approach, in which the hard-scattering amplitudes are dealt with directly at the amplitude level and the numerical efficiency is greatly improved by properly decomposing the Feynman diagrams and by fully exploiting the symmetries among them. Moreover, in comparison to the previous versions, we have updated the programs in order to generate unweighted meson or baryon events much more effectively within various simulation environments. The generators can be conveniently imported into PYTHIA for further hadronization and decay simulation, and they have already been adopted by several collaborations, such as LHCb and CMS, for Bc and Ξcc simulations.

012043
The following article is Open access

, , , , , , , and

The SANC computer system is aimed at supporting analytic and numeric calculations for experiments at colliders. The system is reviewed briefly. Recent results on the high-precision description of the Drell-Yan processes at the LHC are presented. Special attention is paid to the evaluation of higher-order final-state QED corrections to single W and Z boson production. A new Monte Carlo integrator, mcsanc, suited for the description of a series of high-energy physics processes at the one-loop precision level, is presented.

012044
The following article is Open access

, , , and

The program FEYNRULES is a MATHEMATICA package developed to facilitate the implementation of new physics theories into high-energy physics tools. Starting from a minimal set of information, such as the model gauge symmetries, its particle content, parameters and Lagrangian, FEYNRULES provides all the necessary routines to extract the associated Feynman rules automatically from the Lagrangian (which can also be computed semi-automatically for supersymmetric theories). These can be further exported to several Monte Carlo event generators through dedicated interfaces, as well as translated into a PYTHON library in the so-called UFO model format, which is agnostic of the model complexity, in particular in terms of the Lorentz and/or color structures appearing in the vertices or the number of external legs. In this work, we briefly report on the most recent features added to FEYNRULES, including full support for spin-3/2 fermions, a new module allowing for the automated diagonalization of the particle spectrum, and a new set of routines dedicated to decay width calculations.

012045
The following article is Open access

, and

In this talk, the methods and computer tools used in our recent calculation of the three-loop Standard Model renormalization group coefficients are discussed. A brief review of the techniques based on special features of dimensional regularization and minimal subtraction schemes is given. Our treatment of γ5 is presented in some detail. In addition, numerical estimates of the obtained three-loop contributions are presented for a reasonable set of initial parameters.

012046
The following article is Open access

, , , , , , , , and

We are developing a new lattice QCD code set, "Bridge++", aiming at an extensible, readable and portable workbench for QCD simulations while maintaining high performance. Bridge++ covers conventional lattice actions and numerical algorithms. The code set is written in C++ in an object-oriented style. In this paper we describe the fundamental ingredients of the code and the current status of its development.

012047
The following article is Open access

and

We report on recent developments in the open-source computer algebra system Form. In particular, we focus on the code optimization implemented after the release of Form version 4.0 in March 2012.

012048
The following article is Open access

and

The new version 2.1 of the program SECDEC is described, which can be used for the factorisation of poles and subsequent numerical evaluation of multi-loop integrals, in particular massive two-loop integrals. The program is not restricted to scalar master integrals; more general parametric integrals can also be treated in an automated way.

012049
The following article is Open access

and

We present a new approach to the calculation of anomalous dimensions in the framework of the ε-expansion and the renormalization group method. This approach allows one to skip the calculation of renormalization constants and to express anomalous dimensions in terms of renormalized diagrams, which are presented in a form suitable for numerical calculations. The approach can be easily automated and extended to a wide range of models. Its power is illustrated by five-loop calculations of the beta function and anomalous dimensions in the φ4 model.

012050
The following article is Open access

, , and

We present Version 8 of the Feynman-diagram calculator FormCalc. New features include significantly improved algebraic simplification and vectorization of the generated code. The Cuba library, used in FormCalc, features checkpointing to disk for all integration algorithms.

012051
The following article is Open access

, , , , , , and

We present recent next-to-leading order (NLO) results in perturbative QCD obtained using the BLACKHAT software library. We discuss the use of n-tuples to separate the lengthy matrix-element computations from the analysis process. The use of n-tuples allows many analyses to be carried out on the same phase-space samples, and also allows experimenters to conduct their own analyses using the original NLO computation.
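The n-tuple workflow described above separates the expensive matrix-element stage from cheap, repeatable analyses: kinematics and weights are written once, then any number of analyses with their own cuts re-read the same sample. The sketch below uses a CSV n-tuple, toy random "events" and hypothetical column names; it illustrates the workflow, not the BLACKHAT n-tuple format.

```python
import csv
import io
import random

random.seed(3)

# Step 1 (expensive, done once): the "matrix-element" stage writes an n-tuple
# of per-event kinematics and the event weight.  Here the events are toy
# random numbers standing in for an NLO phase-space sample.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["pt_jet1", "pt_jet2", "weight"])
for _ in range(1000):
    writer.writerow([random.uniform(0, 200), random.uniform(0, 200),
                     random.gauss(1.0, 0.1)])
ntuple = buf.getvalue()

# Step 2 (cheap, repeatable): independent analyses re-read the same n-tuple
# with their own cuts, never touching the matrix-element code.
def analysis(ntuple_text, pt_cut):
    """Weighted event yield after a symmetric jet-pT cut."""
    total = 0.0
    for row in csv.DictReader(io.StringIO(ntuple_text)):
        if float(row["pt_jet1"]) > pt_cut and float(row["pt_jet2"]) > pt_cut:
            total += float(row["weight"])
    return total

loose = analysis(ntuple, 30.0)   # one analysis choice
tight = analysis(ntuple, 60.0)   # another, on the very same sample
```

Both analyses share one phase-space sample, so tightening the cuts only ever reduces the weighted yield; no matrix elements are recomputed.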

012052
The following article is Open access

and

For an automatic computation of Feynman loop integrals in the physical region we rely on an extrapolation technique where the integrals of the sequence are obtained with iterated/repeated adaptive methods from the QUADPACK 1D quadrature package. The integration rule evaluations in the outer level, corresponding to independent inner integral approximations, are assigned to threads dynamically via the OpenMP runtime in the parallel implementation. Furthermore, multi-level (nested) parallelism enables an efficient utilization of hyperthreading or larger numbers of cores. For a class of loop integrals in the unphysical region, which do not suffer from singularities in the interior of the integration domain, we find that the distributed adaptive integration methods in the multivariate PARINT package are highly efficient and accurate. We apply these techniques without resorting to integral transformations and report on the capabilities of the algorithms and the parallel performance for a test set including various types of two-loop integrals.
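The two-level parallel structure described above can be sketched with threads in place of OpenMP: the outer rule's evaluations, each an independent inner 1D integral handled by an adaptive routine, are dispatched to a worker pool. The adaptive Simpson routine stands in for a QUADPACK integrator, and the outer midpoint rule and thread counts are illustrative assumptions.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def adaptive_simpson(f, a, b, tol=1e-8):
    """1D adaptive Simpson rule, standing in for a QUADPACK routine."""
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        if abs(left + right - whole) < 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, tol / 2.0) +
                recurse(m, b, fm, frm, fb, right, tol / 2.0))

    fa, fb, fm = f(a), f(b), f(0.5 * (a + b))
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

def iterated_integral(f2, ax, bx, ay, by, n_outer=64, workers=4):
    """Outer rule evaluations (independent inner 1D integrals) dispatched to a
    thread pool, mirroring the dynamic work assignment described above."""
    xs = [ax + (bx - ax) * (i + 0.5) / n_outer for i in range(n_outer)]
    inner = lambda x: adaptive_simpson(lambda y: f2(x, y), ay, by)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        vals = list(pool.map(inner, xs))   # order-preserving map
    return (bx - ax) / n_outer * sum(vals)
```

For a smooth test integrand such as exp(x + y) over the unit square, the result agrees with the analytic value (e − 1)^2 to well below the outer-rule discretization error.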

012053
The following article is Open access

Next-to-leading-order electroweak and QCD radiative corrections to the Drell-Yan process at high dimuon masses have been studied in fully differential form for the CMS experiment at the CERN LHC. The FORTRAN code READY for the numerical analysis of Drell-Yan observables is presented. The radiative corrections are found to be significant for the CMS experimental setup.

012054
The following article is Open access

, and

In the framework of the littlest Higgs model with T-parity, we study WH/ZH + q and WH-pair production at the CERN Large Hadron Collider up to next-to-leading order (NLO) in QCD. The kinematic distributions of the final decay products and the dependence of the cross section on the factorization/renormalization scale are analyzed. We adopt the PROSPINO scheme in the NLO QCD calculations to avoid double counting and to preserve the convergence of the perturbative QCD description. Using the subtraction scheme, the NLO QCD corrections enhance the leading-order cross section with a K-factor in the range 1.00 to 1.43 for the WH/ZH + q production process and in the range 1.09 to 1.22 for the WH-pair production process.

012055
The following article is Open access

We present a numerical simulation, performed in Mathematica 7, of the process pp → jet + E/T in the framework of a modified Randall-Sundrum brane-world model with one infinite and n compact extra dimensions. We compare the missing-energy signature with the Standard Model background pp → jet + νν̄, which was simulated with CompHEP. We show that models with more than four compact extra dimensions can be probed at a proton-proton center-of-mass energy of 14 TeV. We also find that testing the brane-world models at 7 TeV at the LHC appears to be hopeless.

012056
The following article is Open access

, , , , , , , , , et al

We present applications of the program GoSAM for the automated calculation of one-loop amplitudes. Results for NLO QCD corrections to beyond the Standard Model processes as well as Higgs plus up to three-jet production in gluon fusion are shown. We also discuss some new features of the program.

012057
The following article is Open access

, , and

In these proceedings we report our progress in the development of the publicly available C++ library NJet for accurate calculations of high-multiplicity one-loop amplitudes. As a phenomenological application, we present the first complete next-to-leading-order (NLO) calculation of the five-jet cross section at hadron colliders.

012058
The following article is Open access

, , , , and

We report on the OpenLoops generator for one-loop matrix elements and its application to four-lepton production in association with up to one jet. The OpenLoops algorithm uses a numerical recursion to construct the numerators of one-loop Feynman diagrams as functions of the loop momentum. In combination with tensor integrals, this results in a highly efficient and numerically stable matrix-element generator. In order to obtain a fully automated setup for the simulation of next-to-leading-order scattering processes, we interfaced OpenLoops to the Sherpa Monte Carlo event generator.

012060
The following article is Open access

, and

A survey is given of the mathematical structures that emerge in multi-loop Feynman diagrams. These are multiply nested sums and, associated with them by an inverse Mellin transform, specific iterated integrals. Both classes lead to sets of special numbers. Starting with harmonic sums and polylogarithms, we discuss recent extensions of these quantities, such as cyclotomic, generalized (cyclotomic) and binomially weighted sums, together with the associated iterated integrals, special constants and their relations.

012061
The following article is Open access

, and

We discuss recent progress in multi-loop integrand reduction methods. Motivated by the possibility of an automated construction of multi-loop amplitudes via generalized unitarity cuts we describe a procedure to obtain a general parameterisation of any multi-loop integrand in a renormalizable gauge theory. The method relies on computational algebraic geometry techniques such as Gröbner bases and primary decomposition of ideals. We present some results for two and three loop amplitudes obtained with the help of the MACAULAY2 computer algebra system and the Mathematica package BASISDET.

012062
The following article is Open access

We consider computational problems in the framework of non-power Analytic Perturbation Theory and Fractional Analytic Perturbation Theory, which are generalizations of standard QCD perturbation theory. The singularity-free, finite couplings appear in these approaches as analytic images of the standard QCD coupling powers αsν(Q2) in the Euclidean and Minkowski domains, respectively. We provide a package, "FAPT", based on Mathematica, for QCD calculations of these images up to N3LO of the renormalization group evolution. Applications of these approaches to the Bjorken sum rule analysis and to the Q2-evolution of the higher twist μ4p−n are considered.

012063
The following article is Open access

and

General one-loop integrals with arbitrary mass and kinematical parameters in d-dimensional space-time are studied. By using the Bernstein theorem, a recursion relation is obtained which connects (n + 1)-point to n-point functions. In solving this recursion relation, we show that one-loop integrals can be expressed in terms of a newly defined hypergeometric function, which is a special case of the Aomoto-Gelfand hypergeometric functions.

We have also obtained the coefficients of the power-series expansion around four-dimensional space-time for two-, three- and four-point functions. As examples, the numerical results for two- and three-point functions are compared with "LoopTools".

012064
The following article is Open access

, , and

We calculated the next-to-leading-order (NLO) quantum chromodynamics corrections to J/ψ production at B factories. The NLO corrections are very important for heavy-quarkonium production at B factories, and the NLO cross sections can be compared with the B-factory data. Some details of the NLO calculations within the non-relativistic quantum chromodynamics factorization framework are given here.

Discussions and Others

012065
The following article is Open access

I discuss some aspects of the use of computers in relativity, astrophysics and cosmology. For each area I provide two representative examples, including gravitational collapse, black-hole imagery, supernova explosions, star-black-hole tidal interactions, N-body cosmological simulations and the detection of cosmic topology.

012066
The following article is Open access

, and

Round table discussions are a tradition of ACAT. This year's plenary round table discussion was devoted to questions related to the use of scientific software in high-energy physics and beyond. The 90 minutes of discussion were lively, and quite a lot of diverse opinions were voiced. Although the discussion was in part controversial, the participants agreed unanimously on several basic issues in software sharing:

• The importance of having various licensing models in academic research;

• The basic value of proper recognition and attribution of intellectual property, including scientific software;

• Users' respect for the conditions of use, including licence statements, as formulated by the author.

The need for a similar discussion of the issues of data sharing was emphasized, and it was recommended to cover this subject in the round table discussion at the next ACAT. In this contribution, we summarise selected topics that were covered in the introductory talks and in the following discussion.