Generative Adversarial Networks for the fast simulation of the Time Projection Chamber responses at the MPD detector

Detailed detector simulation models are vital for the successful operation of modern high-energy physics experiments. In most cases, such detailed models require a significant amount of computing resources to run. Often these resources cannot be afforded, and less resource-intensive approaches are desired. In this work, we demonstrate the applicability of Generative Adversarial Networks (GAN) as the basis for such fast-simulation models for the case of the Time Projection Chamber (TPC) at the MPD detector at the NICA accelerator complex. Our prototype GAN-based model of the TPC works more than an order of magnitude faster than the detailed simulation, without any noticeable drop in the quality of the high-level reconstruction characteristics for the generated data. Approaches with direct and indirect quality metric optimization are compared.


Introduction
Simulation of particle detectors is inevitable in High Energy Physics (HEP) experiments. For a typical HEP data analysis, the limited size of simulated data samples often contributes directly to the uncertainty in the final result. Since the number of simulated events that one can afford to produce is constrained by the computational efficiency of the simulation algorithms, faster algorithms are always desired [1].
The computational efficiency of the detailed simulation is often limited by the fine granularity of the physics simulation steps being performed. Therefore, a speed-up may be achieved by aggregating a sequence of such steps into a single estimate of the probability distribution of the last step's output parameters, conditioned on the first step's inputs. An important requirement for such a probability distribution estimate is that it should allow for efficient sampling. Generative Adversarial Networks (GANs) [2] are a good candidate for such a parametric estimate, since they only require a forward pass through a neural network to generate new samples. In this work, we demonstrate an application of GANs for building a fast-simulation model of the Time Projection Chamber (TPC) detector at the MPD experiment at the NICA accelerator complex [3].

TPC fast simulation approach and validation
The TPC detector is a gas-filled cylindrical volume with a uniform electric field parallel to its main axis. It performs charged-particle tracking by measuring the 2D projections of the locations of ionization centers, as the electrons drift through the gas volume and reach the sensitive pads at the end-caps of the chamber; the third coordinate is reconstructed from the duration of the drift.
The main simulation steps that we aim to accelerate are the ionization process, the electron drift and the electronics response [3]. We achieve this goal by training a GAN to reproduce the distribution of the read-out signals, as modeled by the detailed simulation procedure, conditioned on the parameters of the track segment contributing to the given response, as shown in Fig. 1. For each track segment, we simulate a rectangular matrix of sensitive pad responses for a single row of pads in multiple discrete time intervals. Each generated response covers 8 pads and 16 time intervals, which makes a total of 128 numbers per segment. The track segments are parameterized by three spatial coordinates and two angles, as shown in Fig. 2. We validate our model at two different levels:
- The low-level evaluation involves direct comparison of the GAN-generated detector response distribution against a left-out sample of the training data.
- For the high-level evaluation, we compare the reconstructed track characteristics between the simulation configurations in which our GAN and the detailed simulation models are used, respectively.
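The conditional-generation setup described above can be sketched in a few lines of numpy. This is an illustrative stand-in, not the trained model: all layer sizes and weights are placeholders, and only the input/output shapes (5 segment parameters in, an 8-by-16 response out) follow the text.

```python
import numpy as np

rng = np.random.default_rng(42)

N_COND, N_LATENT, N_HIDDEN = 5, 32, 64
N_PADS, N_TIME = 8, 16

# Random stand-in weights for a small two-layer fully-connected generator.
W1 = rng.normal(scale=0.1, size=(N_COND + N_LATENT, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_PADS * N_TIME))
b2 = np.zeros(N_PADS * N_TIME)

def generate_response(segment_params):
    """Sample one 8x16 pad response conditioned on the 5 segment parameters."""
    z = rng.normal(size=N_LATENT)               # latent noise
    x = np.concatenate([segment_params, z])     # condition + latent
    h = np.maximum(0.0, x @ W1 + b1)            # ReLU hidden layer
    return (h @ W2 + b2).reshape(N_PADS, N_TIME)

# Three spatial coordinates and two angles of a track segment (arbitrary values).
segment = np.array([0.1, -0.3, 0.5, 0.2, 0.0])
response = generate_response(segment)
```

Sampling a new response is a single forward pass, which is the source of the speed-up over the step-by-step detailed simulation.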
For the low-level evaluation, we transform the 128-dimensional responses into a lower-dimensional representation, which is then compared between the fast and detailed simulation models, coordinate-wise, as a function of the input parameters. The lower-dimensional representation is obtained by calculating the coordinates of the centers of mass of the obtained responses, along with their widths, skew and the integrated amplitude: six numbers per response in total. These quantities are chosen based on our expectations of the factors that affect the TPC tracking characteristics the most. E.g., the center-of-mass locations translate directly to the reconstructed hit coordinates and therefore should affect the coordinate resolution of the detector. Similarly, the response widths should relate to the two-track resolution. We characterize the distributions of these numbers with their means and standard deviations in 1D bins of the input variables. Some examples of the low-level evaluation plots for such quantities are shown in Fig. 3. Also, for model selection and to control the convergence of the training process, we calculate a χ²-like quantity by accumulating the squared discrepancies between the mean ± one standard deviation values of such distributions in each 1D bin. The high-level evaluation is done by comparing the reconstructed track characteristics between the fast and detailed TPC simulation configurations. Fig. 4 shows an example of such a comparison plot. Overall, both validation strategies demonstrate good quality of the proposed fast simulation model. For a more detailed validation, see [3].
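The six-number summary can be computed as weighted moments of the response matrix. The sketch below is our reading of these definitions; in particular, which axis the skew is taken along is an assumption, not something stated in the text.

```python
import numpy as np

def response_summary(resp):
    """Six-number summary of an 8x16 response: integrated amplitude, two
    centers of mass, two widths, and a skew (axis choice is our assumption)."""
    pads = np.arange(resp.shape[0], dtype=float)
    times = np.arange(resp.shape[1], dtype=float)
    total = resp.sum()                           # integrated amplitude
    p = resp / total                             # normalized 2D distribution
    pp, pt = p.sum(axis=1), p.sum(axis=0)        # marginals: pads / time bins
    com_pad, com_time = pads @ pp, times @ pt    # centers of mass
    width_pad = np.sqrt(((pads - com_pad) ** 2) @ pp)
    width_time = np.sqrt(((times - com_time) ** 2) @ pt)
    skew_time = (((times - com_time) / width_time) ** 3) @ pt
    return np.array([total, com_pad, com_time,
                     width_pad, width_time, skew_time])

# A symmetric two-bin signal: center of mass at pad 3, time bin 7.
resp = np.zeros((8, 16))
resp[3, 6] = resp[3, 8] = 1.0
summary = response_summary(resp)
```

The χ²-like training-control quantity can then be built by binning these summaries in the input variables and accumulating the squared discrepancies between the fast and detailed simulation samples.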
Our model speeds up the TPC simulation by at least a factor of 12, with some room for further inference time optimization, as described in the following section.

Inference time optimization
Once the model is successfully trained and validated, it is crucial to deploy it in the most computationally efficient way. Having developed the baseline model with good enough quality and inference time, as described in the previous section, we then explore ways of further decreasing the inference time without a loss in quality. We study the model performance in terms of the time required to predict a batch of detector responses on a single CPU thread. This configuration corresponds to the expected target execution platform. For a single-point quality estimate, we use the χ²-like quantity defined in the previous section.
The generator of the baseline model is a fully-connected network consisting of four hidden layers with sizes ranging from 32 to 64, and a 32-dimensional normally distributed latent variable. By profiling the inference of this generator, we observe that, for such a small network, the latent-space random number generation takes a notable fraction of the inference time. This means that we can optimize the inference time by choosing a simpler latent representation. Fig. 5 shows that models based on uniformly distributed latent variables allow for faster inference without any loss in quality, unless we decrease the size of this representation down to 8.
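The cost of latent sampling can be illustrated with a standalone micro-benchmark comparing normal and uniform draws. Absolute timings are machine-dependent, and the sizes below are illustrative rather than taken from the text.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
n_events, n_latent = 100_000, 32

# Normally distributed latent variables (the baseline choice).
t0 = time.perf_counter()
z_normal = rng.normal(size=(n_events, n_latent))
t_normal = time.perf_counter() - t0

# Uniformly distributed latent variables (typically cheaper to sample).
t0 = time.perf_counter()
z_uniform = rng.uniform(-1.0, 1.0, size=(n_events, n_latent))
t_uniform = time.perf_counter() - t0

print(f"normal:  {t_normal * 1e3:.1f} ms")
print(f"uniform: {t_uniform * 1e3:.1f} ms")
```

Swapping the latent distribution leaves the network architecture untouched, which is why it is an attractive first optimization; the model does, of course, have to be retrained with the new latent space.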
Optimizing the architecture of a network that is already small is tricky. Further reducing the size of the model results in poor generation quality and unstable training within the GAN framework. We observe that the best quality for such smaller models can be achieved by training them with the knowledge distillation technique [4], with the baseline generator used as the teacher. The resulting experimental quality-speed trade-off curve is shown in Fig. 6. It can be seen that, while the latent-space optimizations do allow for faster models of the same quality, further simplifying the network architecture results in a quality drop.
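The distillation idea can be sketched as follows: a frozen "teacher" network labels random inputs, and a smaller "student" is fit to the teacher's outputs with a plain MSE objective. This is a minimal numpy illustration of the technique of [4], not the actual training code; all sizes, weights and learning rates are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 37, 64, 128   # e.g. 5 condition dims + 32 latent dims

# Frozen teacher: a one-hidden-layer network with fixed random weights.
Wt1 = rng.normal(scale=0.2, size=(n_in, n_hidden))
Wt2 = rng.normal(scale=0.2, size=(n_hidden, n_out))
teacher = lambda x: np.maximum(0.0, x @ Wt1) @ Wt2

# Smaller student: a single linear layer, trained by SGD to mimic the teacher.
Ws = np.zeros((n_in, n_out))
lr, n_steps, batch = 0.1, 300, 64

x_val = rng.normal(size=(256, n_in))
y_val = teacher(x_val)
loss_before = np.mean((x_val @ Ws - y_val) ** 2)

for _ in range(n_steps):
    x = rng.normal(size=(batch, n_in))
    residual = x @ Ws - teacher(x)        # student error against the teacher
    Ws -= lr * x.T @ residual / batch     # MSE gradient step

loss_after = np.mean((x_val @ Ws - y_val) ** 2)
```

In the GAN setting, this replaces the unstable adversarial objective for the small model with a stable regression target provided by the already-validated baseline generator.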

Direct low-level quality metric optimization
One may wonder: why bother generating the high-dimensional detector response if its major characteristics can be described with just the six numbers that we use in the low-level evaluation, as described in Sec. 2? Indeed, we may learn just the distribution of those six parameters, and then deterministically transform them into the higher-dimensional signals. In such a scenario, our generative model aims to directly reproduce the low-level quality metrics, as they get explicitly incorporated into the optimized objective function. In this section, we explore this low-level optimization approach and highlight some pitfalls that make it not as straightforward as one might expect.
First of all, when trying to learn the 6-dimensional distribution of the signal characteristics, we find that it is much harder to reach a reasonable quality level. The quality metrics for our best such model are shown in Fig. 7. One can clearly see that there are noticeable discrepancies between the GAN-generated and training samples. We acknowledge that this result may be a consequence of an imperfect hyper-parameter search and therefore continue to look for a more optimal configuration.
Another issue with this approach is that it is not obvious how to transform the six low-level characteristics back into the 128-dimensional (8-by-16) response matrix. Since the chosen characteristics can be thought of as the 0th (integral), 1st (center-of-mass location) and 2nd (widths and skew) moments of a 2D distribution, it may be natural to describe the inverse transformation by the probability density function (normalized to the required integral) of a 2D distribution that has no higher cumulants, i.e., the Gaussian distribution, parameterized by the six mentioned quantities. The problem is that the target 2D distribution has to be quantized onto a rectangular 8-by-16 grid, and this operation does not preserve the moments. Fig. 8 shows the distribution of the two widths for the signals in the training dataset, superimposed with arrows denoting the distortion for given pairs of widths, if we were to convert the signals to quantized Gaussians. Each arrow has the length of the average distortion magnitude and points towards the most popular distortion direction within each 2D bin. One can clearly see that the level of distortion is too high to be ignored. A solution that we are looking into is to incorporate the distorting operation between the generator and the discriminator networks during training, so that the generator learns to compensate for it automatically.
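The moment-distortion effect can be reproduced in a few lines: place a 2D Gaussian with given target widths on the 8-by-16 grid and re-measure the widths from the quantized signal. The parameter values below are illustrative, not taken from the dataset.

```python
import numpy as np

def quantized_gaussian(mu_pad, mu_t, sig_pad, sig_t, n_pads=8, n_t=16):
    """Evaluate a 2D Gaussian at the grid cell centers and normalize it."""
    pads = np.arange(n_pads, dtype=float)[:, None]
    times = np.arange(n_t, dtype=float)[None, :]
    w = np.exp(-0.5 * ((pads - mu_pad) / sig_pad) ** 2
               - 0.5 * ((times - mu_t) / sig_t) ** 2)
    return w / w.sum()

def measured_widths(resp):
    """Re-measure the widths (square roots of second moments) of a signal."""
    pads = np.arange(resp.shape[0], dtype=float)
    times = np.arange(resp.shape[1], dtype=float)
    pp, pt = resp.sum(axis=1), resp.sum(axis=0)
    com_pad, com_t = pads @ pp, times @ pt
    return (np.sqrt(((pads - com_pad) ** 2) @ pp),
            np.sqrt(((times - com_t) ** 2) @ pt))

# A narrow signal centered between two pads: the grid cannot represent the
# target pad width of 0.3, and the measured width comes out close to 0.5,
# while the wide time direction (sigma of 2 bins) is almost undistorted.
resp = quantized_gaussian(mu_pad=3.5, mu_t=8.0, sig_pad=0.3, sig_t=2.0)
w_pad, w_t = measured_widths(resp)
```

The fix discussed in the text would insert this quantization step between the generator and the discriminator, so that the generator is trained to emit moments that are pre-compensated for the distortion.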

Summary
In this work, we present a fast simulation approach for the TPC detector using GANs. We demonstrate that our model accelerates the detailed simulation by at least an order of magnitude, and that it is capable of producing detector responses that look authentic in both low- and high-level validation procedures. The trade-off between the inference time and the generation quality is further explored by optimizing the model architecture. An alternative solution with direct low-level quality metric optimization is also outlined.

Figure 1.
Figure 1. The transverse projection of a track on top of a sector of sensitive pads of the detector. The track is split into segments contributing to each of the rows of sensitive pads.

Figure 2.
Figure 2. A track passing through the volume of the detector. An intersection of this track with a selected pad plane forms a track segment and is highlighted in the picture. The angles that define its direction are marked.

Figure 3.
Figure 3. Examples of low-level evaluation plots for our model. Shown are profiles for one of the six components of the lower-dimensional signal representation, compared between GAN and the detailed simulation as a function of three (out of five) parameters defining the track segment. For more details, see [3].

Figure 4.
Figure 4. An example of the high-level evaluation: comparison of reconstructed track characteristics between the fast and detailed simulation configurations (see text).

Figure 5.
Figure 5. Inference time versus quality for various latent space configurations.

Figure 6.
Figure 6. Inference time versus quality for different network architectures.

Figure 7.
Figure 7. Examples of low-level evaluation plots for a model generating the low-level metrics directly. Shown are the profiles for one of the low-level characteristics.

Figure 8.
Figure 8. The squared pad and time widths distribution, with the distortion directions (see text).