
Table of contents

Volume 513

2014

Accepted papers received: 11 April 2014
Published online: 11 June 2014

022001
The following article is Open access

The FairRoot framework is the standard framework for simulation, reconstruction and data analysis for the FAIR experiments. The framework is designed to be easily accessible to beginners and developers, to be flexible, and to cope with future developments; it also enhances the synergy between the different physics experiments. As a first step toward the simulation of free-streaming data, time-based simulation was introduced into the framework. The next step is the simulation of event sources, achieved via a client-server system: after digitization, so-called "samplers" can be started, each of which reads the data of the corresponding detector from the simulation files and makes it available to the reconstruction clients. This system makes it possible to develop and validate online reconstruction algorithms. In this work, the design and implementation of the new architecture and the communication layer are described.
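
As a rough illustration of the sampler side of such a client-server chain, the sketch below streams serialized digits over a ZeroMQ PUSH socket, a transport in the spirit of FairRoot's message-queue layer. This is a minimal sketch only: cppzmq (zmq.hpp) is assumed, and ReadNextDigis() is a hypothetical stand-in for reading one detector's digitized data from the simulation file.

    // Minimal "sampler" sketch: read digitized detector data and stream it
    // to reconstruction clients over a ZeroMQ PUSH socket (assumes cppzmq).
    #include <zmq.hpp>
    #include <cstring>
    #include <vector>

    // Hypothetical stand-in for reading serialized digits from the
    // simulation file; returns an empty buffer at end of file.
    std::vector<char> ReadNextDigis() {
        static int n = 0;
        if (++n > 3) return {};              // pretend three events, then EOF
        return std::vector<char>(64, 0);     // dummy payload
    }

    int main() {
        zmq::context_t ctx(1);
        zmq::socket_t sampler(ctx, ZMQ_PUSH);
        sampler.bind("tcp://*:5555");        // clients connect with ZMQ_PULL

        for (;;) {
            std::vector<char> buf = ReadNextDigis();
            if (buf.empty()) break;
            zmq::message_t msg(buf.size());
            std::memcpy(msg.data(), buf.data(), buf.size());
            sampler.send(msg);               // hand the event to a free client
        }
        return 0;
    }

A reconstruction client would be the mirror image: a ZMQ_PULL socket connecting to the sampler's endpoint and running its algorithm on each received message.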

022002
The following article is Open access

Softinex is the name of a software environment targeted at data analysis and visualization. It covers the C++ inlib and exlib "header-only" libraries, which, through GL-ES and a maximum of common code, permit building applications deliverable on the Apple AppStore (iOS) and GooglePlay (Android), on traditional laptops/desktops under Mac OS X, Linux and Windows, and also as a web service able to display in web browsers compatible with WebGL. In this paper we explain the coarse-grained ideas, choices and code organization of Softinex, along with a short presentation of some applications done so far (ioda, g4view, etc.). Finally, we present the "wall" programs, which permit visualizing HEP data (plots, geometries, events) on a large display surface built from an assembly of screens driven by a set of computers. The web portal for Softinex is http://softinex.lal.in2p3.fr.

022003
The following article is Open access

Due to conceptual differences between the geometry descriptions in Computer-Aided Design (CAD) systems and in particle-transport Monte Carlo (MC) codes, direct conversion of a detector geometry in either direction is not feasible. The paper presents an update on the functionality and application practice of the CATIA-GDML geometry builder, first introduced at CHEP2010. This set of CATIA v5 tools has been developed for building an MC-optimized, GEANT4/ROOT-compatible geometry based on an existing CAD model. The model can be exported via the Geometry Description Markup Language (GDML). The builder also allows importing and visualizing GEANT4/ROOT geometries in CATIA. The structure of a GDML file, including replicated volumes, volume assemblies and variables, is mapped into a part specification tree. A dedicated file template, a wide range of primitives, tools for measurement and implicit calculation of parameters, different types of multiple volume instantiation, mirroring, positioning and quality checking have been implemented. Several use cases are discussed.
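
Both target toolkits can consume the exported file directly; for instance, a GDML description can be pulled into ROOT's geometry modeller with a one-line import. A minimal sketch, assuming a file named detector.gdml exported by the builder:

    // ROOT macro: import a GDML description into a TGeo geometry and draw it.
    void load_gdml() {
        TGeoManager::Import("detector.gdml");          // builds the TGeo tree
        gGeoManager->GetTopVolume()->Draw("ogl");      // OpenGL viewer
    }

On the Geant4 side the equivalent entry point is G4GDMLParser, whose Read() method parses the file and whose GetWorldVolume() hands the world volume to the run manager.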

022004
The following article is Open access

We perform an LHC data analysis workflow using tools and data formats that are commonly used in the "Big Data" community outside High Energy Physics (HEP). These include Apache Avro for serialisation to binary files, Pig and Hadoop for mass data processing, and the Python library Scikit-Learn for multivariate analysis. A comparison is made with the same analysis performed with current HEP tools in ROOT.

022005
The following article is Open access

The Geant4 simulation toolkit reached maturity in the middle of the previous decade, providing a wide variety of established features coherently aggregated in a software product that has become the standard for detector simulation in HEP and is used in a variety of other application domains.

We review the most recent capabilities introduced in the kernel, highlighting those being prepared for the next major release (version 10.0), scheduled for the end of 2013. A significant new feature of this release is the integration of multi-threaded processing, aimed at the efficient use of modern many-core system architectures and at minimizing the memory footprint by exploiting event-level parallelism. We discuss its design features and its impact on the existing API and user interface of Geant4. Revisions balance the need to preserve backwards compatibility against consolidating and improving the interfaces, taking into account requirements from the multi-threaded extensions and from the evolution of the data-processing models of the LHC experiments.
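
In user code the multi-threaded mode surfaces mainly through the run manager. The sketch below shows the intended usage pattern of the 10.0-style API, where MyDetectorConstruction, MyPhysicsList and MyActionInitialization stand in for an application's own classes (assumptions, not part of the toolkit):

    // Event-level parallelism with the Geant4 10.0 multi-threaded run manager.
    #include "G4MTRunManager.hh"

    int main() {
        G4MTRunManager* runManager = new G4MTRunManager;
        runManager->SetNumberOfThreads(4);      // number of worker threads

        // Application-specific classes (assumed to exist in the user code).
        runManager->SetUserInitialization(new MyDetectorConstruction);
        runManager->SetUserInitialization(new MyPhysicsList);
        runManager->SetUserInitialization(new MyActionInitialization);

        runManager->Initialize();
        runManager->BeamOn(1000);               // events dispatched to workers
        delete runManager;
        return 0;
    }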

022006
The following article is Open access

The huge success of Run 1 of the LHC would not have been possible without detailed detector simulation by the experiments. The outstanding performance of the accelerator, with a delivered integrated luminosity of 25 fb−1, created an unprecedented demand for large simulated event samples. This stretched the capabilities of the experiments, constrained by their computing infrastructure and available resources. Modern, concurrent computing techniques optimised for new processor hardware are being exploited to boost future computing resources, but even the most optimistic scenarios predict that additional action needs to be taken to guarantee sufficient Monte Carlo production statistics for high-quality physics results during Run 2.

In recent years, the ATLAS collaboration has put dedicated effort into the development of a new Integrated Simulation Framework (ISF) that allows full and fast simulation approaches to run in parallel, even within one event. We present the main concepts of the ISF, which allows the detector simulation to be fine-tuned for specific physics cases, decreasing the CPU time per event by orders of magnitude. Additionally, we discuss the implications of a customised simulation in terms of validity and accuracy, and present new concepts in digitization and reconstruction to achieve a fast Monte Carlo chain with a per-event execution time of a few seconds.

022007
The following article is Open access

CMOS Monolithic Active Pixel Sensors (MAPS) have demonstrated excellent performance in the field of charged-particle tracking. They feature an excellent single-point resolution of a few μm and a light material budget of 0.05% X0, in combination with good radiation tolerance and time resolution. This makes the sensors a valuable technology for the micro vertex detectors (MVD) of various experiments in heavy-ion and particle physics such as STAR and CBM. State-of-the-art MAPS are equipped with a rolling-shutter readout; therefore, the data of one individual event is typically spread over more than one data train generated by the sensor. This paper presents a concept to introduce this feature into both simulation and data analysis, taking advantage of the sensor topology of the MVD. This topology allows for massively parallel data streaming and handling strategies within the FairRoot framework.

022008
The following article is Open access

The international Muon Ionisation Cooling Experiment (MICE) is designed to demonstrate the principle of muon ionization cooling, for application to a future Neutrino Factory or Muon Collider. In order to measure the change in emittance, MICE is equipped with a pair of high precision scintillating fibre trackers. The trackers are required to measure a 10% change in emittance to 1% accuracy (giving an overall precision of 0.1%).

This paper describes the tracker reconstruction software, a part of the overall MICE software framework, MAUS. Channel clustering is described first, proceeding to the formation of space-points, which are then associated with particle tracks using pattern-recognition algorithms. Finally, a full custom Kalman track fit is performed to account for energy loss and multiple scattering. Exemplar results are shown for Monte Carlo data.
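
For reference, the predict/update cycle at the heart of any such fit looks as follows in its simplest scalar form. This is an illustrative sketch only; the MAUS fit propagates a full helical track state and folds energy-loss and multiple-scattering terms into the noise at each tracker plane.

    // Generic scalar Kalman filter step, the building block of a track fit.
    struct KalmanState {
        double x;  // state estimate (e.g. one track parameter)
        double P;  // state variance
    };

    KalmanState Step(KalmanState s, double F, double Q,
                     double z, double H, double R) {
        // Predict: propagate state and variance, adding process noise Q
        // (in a track fit, Q models multiple scattering in the material).
        double x_pred = F * s.x;
        double P_pred = F * s.P * F + Q;

        // Update with measurement z (variance R) via the Kalman gain K.
        double K = P_pred * H / (H * P_pred * H + R);
        return { x_pred + K * (z - H * x_pred), (1.0 - K * H) * P_pred };
    }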

022009
The following article is Open access

We describe the software chain of the ATLAS muon optical alignment system, dedicated to measuring geometry corrections for the positions of the Muon Spectrometer chambers. The corrections are then used inside the reconstruction software. We detail in particular the architecture of the monitoring application, deployed in a J2EE server, and the monitoring tools that have been developed for the daily follow-up. The system was in production during the whole Run 1 period (2010-2013).

022010
The following article is Open access

The detector description is an essential component used to analyze data resulting from particle collisions in high-energy physics experiments. We present a generic detector-description toolkit and describe the guiding requirements and architectural design for such a toolkit, as well as the main implementation choices. The design is strongly driven by ease of use; developers of detector descriptions and of applications using them should have to provide only minimal information and minimal specific code to achieve the desired result. The toolkit is built by reusing existing components from the ROOT geometry package and provides the missing functional elements and interfaces to offer a complete and coherent detector-description solution. A natural integration with Geant4, the detector simulation program used in high-energy physics, is provided.

022011
The following article is Open access

One of the key requirements for Higgs physics at the International Linear Collider ILC is excellent track reconstruction with very good momentum and impact-parameter resolution. ILD is one of the two detector concepts at the ILC. Its central tracking system comprises an outer silicon tracker, a highly granular TPC, an intermediate silicon tracker and a pixel vertex detector, complemented by silicon tracking disks in the forward direction. Large hit densities from beam-induced coherent electron-positron pairs at the ILC pose an additional challenge to the pattern-recognition algorithms. We present the recently developed ILD tracking software and its pattern-recognition algorithms, which use clustering techniques, Cellular Automata, and Kalman-filter-based track extrapolation. The performance of the ILD tracking system is evaluated using a detailed simulation including dead material, gaps and imperfections.

022012
The following article is Open access

The CMS collaboration has developed a fast Monte Carlo simulation of the CMS detector with event production rates ~100 times higher than the GEANT4-based simulation ("full simulation"), yet with comparable accuracy for most of the physics objects typically considered in analyses. We discuss the basic technical principles of the CMS Fast Simulation and their implementation in the different components of the detector, and we illustrate the most recent developments towards a tighter integration with the full simulation and greater flexibility.

022013
The following article is Open access

In the past, the increasing demand for HEP processing resources could be fulfilled by ever-increasing clock frequencies and by distributing the work to more and more physical machines. Limitations in the power consumption of both CPUs and entire data centres are bringing an end to this era of easy scalability. To get the most CPU performance per watt, future hardware will be characterised by less and less memory per processor, by thinner, more specialized and more numerous cores per die, and by rather heterogeneous resources. To fully exploit the potential of the many cores, HEP data-processing frameworks need to allow for the parallel execution of reconstruction or simulation algorithms on several events simultaneously. We describe our experience in introducing concurrency-related capabilities into Gaudi, a generic data-processing software framework currently used by several HEP experiments, including the ATLAS and LHCb experiments at the LHC. After a description of the concurrent framework and the most relevant design choices driving its development, we describe the behaviour of the framework in a more realistic environment, using a subset of the real LHCb reconstruction workflow, and present our strategy and the tools used to validate the physics outcome of the parallel framework against the results of the present, purely sequential LHCb software. We then summarize measurements of the code performance of the multithreaded application in terms of memory and CPU usage.
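
The essence of processing several events simultaneously can be sketched with a task library such as Intel TBB, which the concurrent Gaudi prototype builds on. This is a minimal illustration; ProcessEvent() is a hypothetical stand-in for running the algorithm chain on one event.

    // Event-level parallelism sketch with Intel TBB.
    #include <tbb/task_group.h>
    #include <cstdio>

    // Hypothetical stand-in for running the full algorithm chain on event i.
    void ProcessEvent(int i) { std::printf("reconstructed event %d\n", i); }

    void ProcessRun(int nEvents) {
        tbb::task_group group;
        for (int i = 0; i < nEvents; ++i)
            group.run([i] { ProcessEvent(i); });  // events proceed concurrently
        group.wait();                             // block until all are done
    }

    int main() { ProcessRun(8); return 0; }

In the real framework the scheduling is finer-grained: individual algorithms within an event also run concurrently once their data dependencies are satisfied.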

022014
The following article is Open access

g4tools, originally part of the inlib and exlib packages, provides a very light and easy-to-install set of C++ classes that can be used to perform analysis in a Geant4 batch program. It allows creating and manipulating histograms and ntuples and writing them in the supported file formats (ROOT, AIDA XML, CSV and HBOOK).

It is integrated in Geant4 through analysis manager classes, which provide a uniform interface to the g4tools objects and hide the differences between the classes for the different supported output formats. Moreover, additional features, such as histogram activation or support for Geant4 units, are implemented in the analysis classes following user requests. A set of Geant4 user-interface commands allows the user to create histograms and set their properties interactively or in Geant4 macros. g4tools was first introduced in the Geant4 9.5 release, where its use was demonstrated in one basic example; it is already used in the majority of the Geant4 examples within the Geant4 9.6 release.
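
In user code, the manager is obtained as a singleton whose concrete output format is chosen by the included header, as in the following booking/filling sketch patterned on the Geant4 analysis examples (histogram name and binning here are illustrative):

    // Booking and filling a histogram through the Geant4 analysis manager.
    #include "g4root.hh"   // selects the ROOT flavour of G4AnalysisManager

    void BookAndFill() {
        G4AnalysisManager* man = G4AnalysisManager::Instance();
        man->OpenFile("run0");                                   // run0.root
        G4int id = man->CreateH1("Edep", "Energy deposit", 100, 0., 100.);
        man->FillH1(id, 42.0);                                   // example fill
        man->Write();
        man->CloseFile();
    }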

In this paper, we will give an overview and the present status of the integration of g4tools in Geant4 and report on upcoming new features.

022015
The following article is Open access

In this work we present recent progress in Geant4 electromagnetic physics modelling, with an emphasis on the new refinements for the processes of multiple and single scattering, ionisation, high energy muon interactions, and gamma induced processes. The future LHC upgrade to 13 TeV will bring new requirements regarding the quality of electromagnetic physics simulation: energy, particle multiplicity, and statistics will be increased. The evolution of CPU performance and developments for Geant4 multi-threading connected with Geant4 electromagnetic physics sub-packages will also be discussed.

022016
The following article is Open access

In 2018, data taking is planned to commence at the hadron-physics facility PANDA. It will be built at the antiproton storage ring HESR, which is itself part of the FAIR complex (GSI, Darmstadt, Germany). The luminosity at PANDA will be measured by a dedicated sub-detector, which will register scattered antiproton tracks from elastic antiproton-proton scattering. From a software point of view, the Luminosity Detector is a tracking system; therefore, most of its offline software is typical of track reconstruction. The basic concept and Monte Carlo based performance studies of each reconstruction step are presented in this paper.

022017
The following article is Open access

A Geant4-based Python/C++ simulation and coding framework, which has been developed and used in order to aid the R&D efforts for thermal neutron detectors at neutron scattering facilities, is described. Built upon configurable geometry and generator modules, it integrates a general purpose object oriented output file format with meta-data, developed to facilitate a faster turn-around time when setting up and analysing simulations. Also discussed are the extensions to Geant4 which have been implemented in order to include the effects of low-energy phenomena such as Bragg diffraction in the polycrystalline support materials of the neutron detectors. Finally, an example application of the framework is briefly shown.

022018
The following article is Open access

The track-reconstruction algorithms of the ATLAS experiment have demonstrated excellent performance in all of the data delivered so far by the LHC. The expected large increase in the number of interactions per bunch crossing introduces new challenges, both in the computational aspects and in the physics performance of the algorithms. A number of projects are being pursued with the aim of taking advantage of modern CPU design and of optimizing memory and CPU usage in the reconstruction algorithms. These include rationalization of the event data model, vectorization of the core components of the algorithms, and the removal of algorithmic bottlenecks using modern code-analysis tools. Recent results from these ongoing projects indicate up to a three-fold speedup in the optimized modules, while in some modules the code size could be reduced by up to 97%, leading to higher readability, better maintainability and decreased interface complexity.
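
A typical step in such vectorization work is reorganizing data so that the hot loop touches contiguous memory. The fragment below shows the common array-of-structs to struct-of-arrays rewrite that lets a compiler emit SIMD instructions (an illustrative pattern, not ATLAS code):

    // Struct-of-arrays layout: coordinates stored contiguously so the
    // loop below auto-vectorizes on any modern compiler.
    #include <cstddef>
    #include <vector>

    struct HitsSoA {
        std::vector<float> x, y, z;   // one array per coordinate
    };

    void ShiftX(HitsSoA& hits, float dx) {
        float* xs = hits.x.data();
        const std::size_t n = hits.x.size();
        for (std::size_t i = 0; i < n; ++i)
            xs[i] += dx;              // unit-stride access, SIMD-friendly
    }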

022019
The following article is Open access

The development of fast and efficient event-reconstruction algorithms is an important and challenging task in the Compressed Baryonic Matter (CBM) experiment at the future FAIR facility. The event-reconstruction algorithms have to process terabytes of input data produced in particle collisions. In this contribution, several event-reconstruction algorithms are presented, and their optimization in the following CBM detectors is discussed: the Ring Imaging Cherenkov (RICH) detector, the Transition Radiation Detector (TRD) and the Muon Chamber (MUCH). The ring-reconstruction algorithm in the RICH is discussed; in the TRD and MUCH, track-reconstruction algorithms are based on track-following and Kalman Filter methods. All algorithms were significantly optimized to achieve maximum speed-up and minimum memory consumption. The results show that a significant speed-up factor was achieved for all algorithms while the reconstruction efficiency remains high.

022020
The following article is Open access

At the mass scale of a proton, the strong force is not well understood. Various quark models exist, and it is important to determine which of them are most accurate. Experimentally, finding resonances predicted by some models and not others would give valuable insight into this fundamental interaction. Several labs around the world use photoproduction experiments to search for these missing resonances. The aim of this work is to develop a robust Bayesian data-analysis program for extracting polarisation observables from pseudoscalar meson photoproduction experiments using CLAS at Jefferson Lab. The method, known as nested sampling, has been compared to traditional methods and incorporates data parallelisation and GPU programming. It involves an event-by-event likelihood function, which suffers no loss of information from histogram binning, and results can easily be constrained to the physical region. One of the most important advantages of the nested-sampling approach is that data from different experiments can be combined and analysed simultaneously. Results on both simulated and previously analysed experimental data for the K+Λ channel are discussed.
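
The core of nested sampling is a simple loop: keep a set of "live points", repeatedly discard the one with the lowest likelihood while accumulating its shrinking prior-mass weight into the evidence, and replace it with a new prior draw constrained to higher likelihood. The toy sketch below shows this for a 1D Gaussian likelihood under a uniform prior (illustrative only; the actual analysis uses an event-by-event likelihood and GPU parallelism, and a real implementation would not use plain rejection sampling for the constrained draw):

    // Toy nested-sampling loop (Skilling's algorithm) in one dimension.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    double logL(double x) {                       // toy Gaussian likelihood
        double d = (x - 0.5) / 0.05;
        return -0.5 * d * d;
    }

    int main() {
        std::mt19937 rng(1);
        std::uniform_real_distribution<double> prior(0.0, 1.0);

        const int N = 200;                        // number of live points
        std::vector<double> pts(N);
        for (double& p : pts) p = prior(rng);

        double logZ = -1e300;                     // log-evidence accumulator
        for (int it = 0; it < 1000; ++it) {
            auto worst = std::min_element(pts.begin(), pts.end(),
                [](double a, double b){ return logL(a) < logL(b); });
            double logLstar = logL(*worst);
            // prior mass shrinks by ~e^{-1/N} per iteration
            double logw = -double(it) / N + std::log(1.0 - std::exp(-1.0 / N));
            double term = logLstar + logw;        // this point's contribution
            logZ = std::max(logZ, term) +
                   std::log1p(std::exp(-std::fabs(logZ - term)));
            // replace the worst point by a prior draw with L > L*
            double x;
            do { x = prior(rng); } while (logL(x) <= logLstar);
            *worst = x;
        }
        std::printf("log-evidence ~ %f\n", logZ);
        return 0;
    }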

022021
The following article is Open access

CMOS Monolithic Active Pixel Sensors (MAPS) have demonstrated excellent performance in the field of charged-particle tracking. Among their strong points are a single-point resolution of a few μm and a light material budget of 0.05% X0, in combination with good radiation tolerance and high rate capability. Those features make the sensors a valuable technology for the vertex detectors of various experiments in heavy-ion and particle physics. To reduce the load on the event builders and future mass-storage systems, we have developed algorithms suited for preprocessing and reducing the data streams generated by the MAPS. This real-time processing employs the remaining free resources of the FPGAs in the readout controllers of the detector and complements the on-chip data-reduction circuits of the MAPS.

022022
The following article is Open access

As part of the CERN openlab collaboration, a study was made into the possibility of performing analysis of the data collected by the experiments at the Large Hadron Collider (LHC) through SQL queries on data stored in a relational database. Currently, LHC physics analysis is done using data stored in centrally produced "ROOT ntuple" files that are distributed through the LHC computing grid. The SQL-based approach to LHC physics analysis presented in this paper allows calculations to be done inside the database and can make use of its built-in parallelism features. Using this approach it was possible to reproduce results for several physics-analysis benchmarks. The study shows the capability of the database to handle complex analysis tasks, but also illustrates the limits of using row-based storage for physics-analysis data, as performance was limited by the I/O read speed of the system.

022023
The following article is Open access

A small experiment must devote its limited computing expertise to writing physics code directly applicable to the experiment. A software "framework" is essential for providing an infrastructure that makes writing the physics-relevant code easy. In this paper, we describe a highly modular and easy to use framework for writing Geant4 based simulations called "artg4". This framework is a layer on top of the art framework.

022024
The following article is Open access

The high luminosity of the LHC results in a significant background to interesting physics events known as pile-up. ATLAS has adopted two independent methods for modeling pile-up and its effect on analyses. The first is a bottom-up approach, using a detailed simulation of the detector to recreate each component of the pile-up background. The second uses specially recorded data events to emulate it. This article reports on the experience using both of these methods, including performance considerations, for simulating pile-up in ATLAS.

022025
The following article is Open access

A large part of the physics program of the PANDA experiment at FAIR deals with the search for new conventional and exotic hadronic states such as hybrids and glueballs. For many analyses PANDA will need an amplitude analysis, e.g. a partial wave analysis (PWA), to identify possible candidates and to classify known states. Therefore, a new, agile and efficient amplitude-analysis framework, ComPWA, is under development. It is modularized to allow easy extension with models and formalisms, as well as the fitting of multiple datasets, even from different experiments. Experience from existing PWA programs was used to define the requirements of the framework and to avoid building in their restrictions. It will provide standard estimation and optimization routines such as Minuit2 and the Geneva library, and remains open to the insertion of additional ones. The challenges involve parallelization, fitting with a high number of free parameters, managing complex meta-fits, and the quality assurance and comparability of fits. To test and develop the software, it will be used with data from running experiments such as BaBar or BESIII. These proceedings show the status of the framework implementation as well as first test results.

022026
The following article is Open access

CMS faces real challenges with the upgrade of the CMS detector through 2020 and beyond. One of these challenges, from the software point of view, is managing upgrade simulations within the same software release as the 2013 scenario. We present the CMS geometry-description software model and its integration with the CMS event setup and core software. The CMS geometry configuration and selection are implemented in Python; the tools collect the Python configuration fragments into a script used in the CMS workflow. This flexible and automated geometry configuration allows choosing either a transient or a persistent version of a scenario, as well as a specific version of the same scenario. We describe how the geometries are integrated and validated, and how we define and handle different geometry scenarios in simulation and reconstruction. We discuss how to transparently manage multiple incompatible geometries in the same software release. Several examples based on the current implementation are shown, ensuring a consistent choice of scenario conditions. The consequences and implications for multiple/different code algorithms are discussed.

022027
The following article is Open access

We describe a graphical processing unit (GPU) implementation of the Hybrid Markov Chain Monte Carlo (HMC) method for training Bayesian Neural Networks (BNNs). Our implementation uses NVIDIA's parallel computing architecture, CUDA. We briefly review BNNs and the HMC method, describe our implementations, and give preliminary results.
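
The heart of HMC is a leapfrog integration of Hamiltonian dynamics in parameter space, followed by a Metropolis accept/reject step. The sketch below shows it for a one-dimensional standard-normal target; this is a method illustration only, whereas the paper's implementation evolves the weights of a BNN and offloads the gradient evaluations to CUDA.

    // Minimal Hamiltonian/Hybrid Monte Carlo for a 1D standard-normal target.
    #include <cmath>
    #include <cstdio>
    #include <random>

    double U(double q)     { return 0.5 * q * q; }  // potential = -log target
    double gradU(double q) { return q; }

    int main() {
        std::mt19937 rng(7);
        std::normal_distribution<double> gauss(0.0, 1.0);
        std::uniform_real_distribution<double> unif(0.0, 1.0);

        double q = 0.0, eps = 0.1;
        const int L = 20;                            // leapfrog steps
        for (int it = 0; it < 10000; ++it) {
            double p = gauss(rng);                   // fresh momentum
            double q0 = q, H0 = U(q) + 0.5 * p * p;
            p -= 0.5 * eps * gradU(q);               // initial half kick
            for (int s = 0; s < L; ++s) {
                q += eps * p;                        // drift
                p -= eps * gradU(q) * (s + 1 < L ? 1.0 : 0.5);  // kick
            }
            double H1 = U(q) + 0.5 * p * p;
            if (unif(rng) >= std::exp(H0 - H1)) q = q0;  // Metropolis reject
        }
        std::printf("last sample: %f\n", q);
        return 0;
    }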

022028
The following article is Open access

The Italian National Centre for Hadrontherapy (CNAO - Centro Nazionale di Adroterapia Oncologica) in Pavia, Italy, started the treatment of selected cancers with its first patients in late 2011. In the coming months, CNAO plans to activate a new dedicated treatment line for the irradiation of uveal melanomas using the available active beam scanning. The beam characteristics and the experimental setup must be tuned in order to reach the precision required for such treatments. A collaboration between the CNAO Foundation, the University of Pavia and INFN started in 2011 to study the feasibility of these specialised treatments by implementing an MC simulation of the beam transport line and comparing the simulation results with measurements at CNAO. The goal is to optimise an eye-dedicated beam transport line and to find the best conditions for ocular melanoma irradiation. This paper describes the Geant4 simulation of the CNAO setup as well as a model of a human eye with a tumour inside. The Geant4 application could also be used to test possible treatment-planning systems. Simulation results illustrate the possibility of adapting the standard CNAO beam transport line by optimising the position of the isocentre and adding some passive elements to better shape the beam for this dedicated study.

022029
The following article is Open access

The Generator Services (GENSER) provide ready-to-use Monte Carlo generators, compiled on multiple platforms or ready to be compiled, for the LHC experiments. In this paper we discuss the recent developments in the build machinery, which have allowed the installation process to be fully automated. The new system is based on, and integrated entirely with, the "LCG external software" infrastructure, which provides all the external packages needed by the LHC experiments.

022030
The following article is Open access

In this paper we present recent developments in the Geant4 hadronic framework. Geant4 is the main simulation toolkit used by the LHC experiments, and therefore a lot of effort is put into improving the physics models so that they have more predictive power. As a consequence, the code complexity increases, which requires constant improvement and optimization on the programming side. At the same time, we would like to review and eventually reduce the complexity of the hadronic software framework. As an example, a factory design pattern has been applied in Geant4 to avoid duplication of objects, such as cross sections, that can be used by several processes or physics models. This approach has also been applied to physics lists, to provide a flexible configuration mechanism at run-time, based on macro files. Moreover, these developments open the future possibility of building Geant4 with only a specified subset of physics models. Another technical development focused on the reproducibility of the simulation, i.e. the possibility to repeat an event once the status of the random-number generator at the beginning of the event is known. This is crucial for debugging rare situations that may occur after long simulations. Moreover, reproducibility in normal, sequential Geant4 simulation is an important prerequisite for verifying the equivalence with multithreaded Geant4 simulations.
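
The duplication-avoiding factory mentioned above can be pictured as a named registry that constructs each shared object once and then hands out the same instance. The following sketch shows the idea with hypothetical class names, not the actual Geant4 code:

    // Named factory/registry: one shared instance per key.
    #include <functional>
    #include <map>
    #include <memory>
    #include <string>
    #include <utility>

    class XSection { /* shared physics data, e.g. a cross-section set */ };

    class XSectionFactory {
    public:
        using Maker = std::function<std::unique_ptr<XSection>()>;

        static XSectionFactory& Instance() {
            static XSectionFactory f;
            return f;
        }
        void Register(const std::string& name, Maker m) {
            makers_[name] = std::move(m);
        }
        XSection* Get(const std::string& name) {   // create once, then share
            auto it = cache_.find(name);
            if (it != cache_.end()) return it->second.get();
            auto obj = makers_.at(name)();
            return (cache_[name] = std::move(obj)).get();
        }
    private:
        std::map<std::string, Maker> makers_;
        std::map<std::string, std::unique_ptr<XSection>> cache_;
    };

Processes and physics models would then ask the factory by name instead of constructing their own copies, so each shared object is instantiated only once per job.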

022031
The following article is Open access

We have improved the performance of HF GFlash, a very fast simulation of electromagnetic showers in the Hadronic Forward calorimeter that uses parameterizations of the shower profiles. HF GFlash shows good agreement with 7 TeV collision data and with previous test-beam results, and simulates showers about 10000 times faster than Geant4. We expect that HF GFlash can improve simulation performance for the Super Large Hadron Collider (SLHC).

022032
The following article is Open access

We have created 3D models of the CMS detector and particle collision events in SketchUp, a 3D modelling program. SketchUp provides a Ruby API which we use to interface with the CMS Detector Description to create 3D models of the CMS detector. With the Ruby API, we also have created an interface to the JSON-based event format used for the iSpy event display to create 3D models of CMS events. These models have many applications related to 3D representation of the CMS detector and events. Figures produced based on these models were used in conference presentations, journal publications, technical design reports for the detector upgrades, art projects, outreach programs, and other presentations.

022033
The following article is Open access

In the context of Monte Carlo (MC) simulation of particle transport, Uncertainty Quantification (UQ) addresses the issue of predicting the non-statistical errors affecting the physical results, i.e. errors deriving mainly from uncertainties in the physics data and/or in the models they embed. In the case of a single uncertainty, a simple analytical relation exists between its Probability Density Function (PDF) and the corresponding PDF for the output of the simulation: this allows a complete statistical analysis of the results of the simulation. We examine the extension of this result to the multivariate case, when more than one of the physical input parameters is affected by uncertainties: a typical scenario is predicting the dependence of the simulation on input cross-section tabulations.
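
For a single uncertain input x with density p_X, propagated through a response y = f(x) of the simulation, the analytical relation referred to here is presumably the standard change-of-variables formula (written for monotone f):

    p_Y(y) \;=\; p_X\!\left(f^{-1}(y)\right)\,
                 \left|\frac{\mathrm{d}\,f^{-1}(y)}{\mathrm{d}y}\right|

In the multivariate case the derivative becomes the Jacobian determinant of the inverse map, which in general no longer factorizes over the inputs; this is what makes the extension studied in the paper non-trivial.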

022034
The following article is Open access

Modern computing hardware is transitioning from a single high-frequency, complicated computing core to many lower-frequency, simpler cores. As part of that transition, hardware manufacturers are urging developers to exploit concurrency in their programs via operating-system threads. We present CMS' effort to evolve our single-threaded framework into a highly concurrent framework. We outline the design of the new framework and how it was constrained by the initial single-threaded design. We then discuss the tools we have used to identify and correct thread-unsafe user code. Finally, we describe the coding patterns we found useful when converting code to be thread-safe.
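
One pattern that recurs in such conversions is replacing a mutable shared member, previously safe only because a single thread ran at a time, with an atomic. A minimal generic illustration (not CMSSW code):

    // Before: a plain 'long n_;' updated during event processing was safe
    // only under single-threaded execution. After: an atomic counter is
    // safe when many events are processed concurrently.
    #include <atomic>

    class HitCounter {
    public:
        void hit() { n_.fetch_add(1, std::memory_order_relaxed); }
        long count() const { return n_.load(std::memory_order_relaxed); }
    private:
        std::atomic<long> n_{0};
    };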

022035
The following article is Open access

An algorithm for the reconstruction of the Higgs boson mass in H → ττ decays is presented. For each event, the algorithm computes a likelihood function P(Mττ) that quantifies the compatibility of a Higgs mass hypothesis Mττ with the measured momenta of the visible tau decay products plus the missing transverse energy reconstructed in the event. The algorithm is used in the CMS H → ττ analysis, where it is found to improve the sensitivity for discovering the Standard Model Higgs boson in this decay channel by about 30%.
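
Schematically, the per-event mass estimate is then the hypothesis that maximizes this likelihood, with the visible four-momenta and the missing transverse energy as the measured inputs (the notation below is ours, not the paper's):

    \hat{M}_{\tau\tau} \;=\; \operatorname*{arg\,max}_{M_{\tau\tau}}\;
        P\!\left(M_{\tau\tau}\,\middle|\,p_{1}^{\mathrm{vis}},\,
                 p_{2}^{\mathrm{vis}},\,\vec{E}_{T}^{\,\mathrm{miss}}\right)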

022036
The following article is Open access

The STAR experiment has adopted the Abstract Geometry Modeling Language (AgML) as the primary description of our geometry model. AgML establishes a level of abstraction, decoupling the definition of the detector from the software libraries used to create the concrete geometry model. Thus, AgML allows us to support both our legacy GEANT 3 simulation application and our ROOT/TGeo based reconstruction software from a single source, which is demonstrably self-consistent. While AgML was developed primarily as a tool to migrate away from our legacy FORTRAN-era geometry codes, it also provides a rich syntax geared towards the rapid development of detector models. AgML has been successfully employed by users to quickly develop and integrate the descriptions of several new detectors in the RHIC/STAR experiment, including the Forward GEM Tracker (FGT) and Heavy Flavor Tracker (HFT) upgrades installed in STAR for the 2012 and 2013 runs. AgML has furthermore been heavily utilized to study future upgrades to the STAR detector as it prepares for the eRHIC era.

With its track record of practical use in a live experiment in mind, we present the status, lessons learned and future of the AgML language, as well as our experience in bringing the code into our production and development environments. We discuss the path toward eRHIC and extending the current model to accommodate detector misalignment and high-precision physics.

022037
The following article is Open access

The STAR experiment pursues a broad range of physics topics in pp, pA and AA collisions produced by the Relativistic Heavy Ion Collider (RHIC). Such a diverse experimental program demands a simulation framework capable of supporting an equally diverse set of event generators, and a flexible event record capable of storing the (common) particle-wise and (varied) event-wise information provided by the external generators. With planning underway for the next round of upgrades to exploit ep and eA collisions from the electron-ion collider (eRHIC), these demands on the simulation infrastructure will only increase, requiring a versatile framework. STAR has developed a new event-generator framework based on best practices in the community (a survey of existing approaches was made, and the "best of all worlds" was kept in mind in our design). It provides a common set of base classes which establish the interface between event generators and the simulation, and it handles most of the bookkeeping associated with a simulation run. This streamlines the process of integrating and configuring an event generator within our software chain. Developers implement two classes: the interface for their event generator, and their event record; they only need to loop over all particles in their event and push them out into the event record, as sketched below. The framework is responsible for vertex assignment, stacking the particles for simulation, and event persistency. Events from multiple generators can be merged seamlessly, with an event record capable of tracing each particle back to its parent generator. We present our work and approach in detail and illustrate its usefulness with examples of event generators implemented within the STAR framework, covering very diverse physics topics. We also discuss support for event filtering, which allows users to prune the event record of particles outside our acceptance, and/or to abort events prior to the more computationally expensive digitization and reconstruction phases. Event filtering was supported in the previous framework and was shown to save an enormous amount of resources; the approach within the new framework is a generalization of that filtering.
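
The shape of the two user-implemented classes can be pictured as follows (hypothetical names, not the actual STAR base classes): the framework calls Generate(), and the concrete generator pushes its particles into the event record.

    // Sketch of an event-generator interface and event record.
    #include <vector>

    struct Particle { int pdg; double px, py, pz, e; };

    class EventRecord {
    public:
        void Push(const Particle& p) { particles_.push_back(p); }
        const std::vector<Particle>& Particles() const { return particles_; }
    private:
        std::vector<Particle> particles_;
    };

    class EventGenerator {              // base class provided by the framework
    public:
        virtual ~EventGenerator() = default;
        virtual void Generate(EventRecord& rec) = 0;
    };

    class MyGeneratorInterface : public EventGenerator {  // user-implemented
    public:
        void Generate(EventRecord& rec) override {
            // ... run the external generator, then loop over its particles:
            rec.Push({211, 0.1, 0.2, 5.0, 5.01});         // one pion, as an example
        }
    };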

022038
The following article is Open access

The Langton Ultimate Cosmic ray Intensity Detector (LUCID) experiment is a satellite-based device that will use five Timepix hybrid silicon pixel detectors to measure the radiation environment at an altitude of approximately 635 km, i.e. in Low Earth Orbit (LEO). The experiment is due to launch aboard Surrey Satellite Technology Limited's (SSTL's) TechDemoSat-1 in 2014. The Timepix detectors, developed by the Medipix Collaboration, are arranged to form five sides of a cube enclosed by a 0.7 mm thick aluminium "dome", and will be operated in Time-over-Threshold mode to allow the flux, energy and directionality of incident ionising radiation to be measured. To estimate the anticipated data rates with respect to these measurements, the LUCID experiment has been modelled using the GEANT4 software framework. As input to these simulations, SPENVIS, ESA's Space Environment Information System, was used to obtain the estimated flux of trapped protons and electrons in TechDemoSat-1's orbit with NASA's AP-8 and AE-8 models. A web portal, LUCIDITY, was developed to allow school students from the LUCID Collaboration to manage SPENVIS flux spectra and GEANT4 input cards. The initial results reported here confirm that LUCID's data transmission allowance is sufficient, and further work applying the techniques to more specific space radiation environments with a more sophisticated simulation is proposed.

022039
The following article is Open access

Analyses at the LHC which search for rare physics processes or determine Standard Model parameters with high precision require accurate simulations of the detector response and the event selection processes. The accurate determination of the trigger response is crucial for the determination of overall selection efficiencies and signal sensitivities. For the generation and reconstruction of simulated event data, the most recent software releases are usually used, to ensure the best agreement between simulated and real data. For the simulation of the trigger selection process, however, ideally the same software release that was deployed when the real data were taken should be used. This potentially requires running software dating many years back. Having a strategy for running old software in a modern environment thus becomes essential when data simulated for past years start to represent a sizable fraction of the total.

We examined the requirements and possibilities for such a simulation scheme within the ATLAS software framework and successfully implemented a proof-of-concept simulation chain. One of the greatest challenges was the choice of a data format which promises long-term compatibility with old and new software releases. Over the time periods envisaged, data-format incompatibilities are also likely to emerge in databases and other external support services. Software availability may become an issue when, for example, support for the underlying operating system stops. In this paper we present the problems encountered and the solutions developed, and discuss proposals for future development. Some ideas reach beyond the retrospective trigger-simulation scheme in ATLAS, as they also touch more general aspects of data preservation.