Design and performance of the CMS High Granularity Calorimeter Level 1 trigger

The high luminosity (HL) LHC will pose significant detector challenges for radiation tolerance and event pileup, especially for forward calorimetry, and this will provide a benchmark for future hadron colliders. The CMS experiment has chosen a novel high granularity calorimeter (HGCAL) for the forward region as part of its planned Phase 2 upgrade for the HL-LHC. Based largely on silicon sensors, the HGCAL features unprecedented transverse and longitudinal readout segmentation, which will be exploited in the upgraded Level 1 (L1) trigger system. Together with the tracking information that will also be available at L1, this will open the possibility of pioneering particle-flow-based techniques in the L1 trigger. The high channel granularity results in around one million trigger channels in total, compared with the 2000 channels in the endcaps of the current detector, and so presents a significant challenge in terms of data manipulation and processing for the trigger. In addition, the high luminosity will result in an average of 140 interactions per bunch crossing, giving a large background rate in the forward region that will need to be efficiently rejected by the trigger algorithms. Furthermore, the 3-dimensional reconstruction of HGCAL clusters, to be used for particle flow, in events with high hit rates is a complex computational problem for the trigger, without precedent in the 2-dimensional reconstruction of the current CMS calorimeter trigger. The status of the trigger architecture and design, as well as the concepts for the algorithms needed to tackle these major issues and their impact on trigger object performance, are presented here.


Introduction
The high luminosity (HL) phase of the LHC, planned to start in 2026, will introduce major changes to the collision conditions with respect to those of the current LHC Phase 1. The instantaneous luminosity will be increased by up to a factor of four, and the number of interactions per bunch crossing could reach values of up to 200. These are very challenging conditions for the design of the Level 1 (L1) trigger system. Furthermore, the CMS physics programme for Phase 2 will still include the study of rare electroweak processes, which will require trigger thresholds similar to those currently used in Phase 1.
To face these challenges, the CMS collaboration is undertaking an ambitious upgrade of the detector. Part of this upgrade is the installation of two new endcap calorimeters, the high granularity calorimeters (HGCAL) [1]. They will be more radiation tolerant and have a finer granularity than the current endcap calorimeters. Each of the new calorimeters will have approximately three million readout channels, split over 52 layers to provide both lateral and longitudinal information on the showers. In operation, they will produce a very large volume of data that will need to be processed by the trigger system.
The overall CMS L1 trigger system will be upgraded for Phase 2, including, in particular, an increase in its acceptance rate from 100 to 750 kHz and an increase in the latency from 3.2 to 12.5 µs, allowing for more complex triggers at L1. In addition, the new L1 trigger system will use information from the tracker, opening up for the first time the possibility of implementing particle-flow algorithms [2] at the L1 trigger level. These algorithms need good position resolution and good shower separation of the calorimeter clusters in order to associate tracks with showers in the HGCAL. However, tracking information at L1 will be limited to pseudorapidities |η| < 2.4, so the trigger for the region 2.4 < |η| < 3.0 will be based only on information from the HGCAL.
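The track-to-cluster association that such particle-flow algorithms rely on can be sketched as a nearest-neighbour matching in (η, φ) within the tracker acceptance. Everything in this sketch (the cone size, the data layout, the function names) is an illustrative assumption, not the CMS implementation:

```python
import math

MATCH_DR = 0.05          # illustrative matching cone in (eta, phi)
TRACKER_ETA_MAX = 2.4    # L1 tracking acceptance quoted above


def dphi(a, b):
    """Signed azimuthal difference wrapped into (-pi, pi]."""
    d = (a - b) % (2 * math.pi)
    return d - 2 * math.pi if d > math.pi else d


def match(tracks, clusters):
    """tracks, clusters: lists of (eta, phi). Returns {track index: cluster index}."""
    pairs = {}
    for ti, (teta, tphi) in enumerate(tracks):
        if abs(teta) >= TRACKER_ETA_MAX:
            continue  # beyond |eta| = 2.4 the trigger relies on HGCAL alone
        best, best_dr = None, MATCH_DR
        for ci, (ceta, cphi) in enumerate(clusters):
            dr = math.hypot(teta - ceta, dphi(tphi, cphi))
            if dr <= best_dr:
                best, best_dr = ci, dr
        if best is not None:
            pairs[ti] = best
    return pairs
```

The second track below falls outside the tracking acceptance and is therefore left unmatched, as the text describes for the 2.4 < |η| < 3.0 region.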

HGCAL trigger processing
The trigger processing for HGCAL will consist of both on- and off-detector components; the hardware structure is illustrated in Fig. 1. The on-detector processing is performed in the front-end ASICs, which send trigger data at 40 MHz to the off-detector electronics for back-end processing. The front-end electronics is mostly used to reduce the data rate and is kept relatively simple, both to minimize power consumption and to maximize the flexibility of the system. In addition, only half of the layers from the electromagnetic calorimeter are used in the trigger, to reduce the data volume. In the front-end electronics, trigger cells are formed by summing the energy of four, or nine, neighbouring sensor cells, depending on the local granularity of the detector, corresponding to an area of about 4.5 cm². Only data from trigger cells with transverse energy above a configurable threshold are sent to the off-detector electronics. In addition, energy sums are formed from the energy of all the trigger cells covering an area of around 36 cm² and transmitted to the off-detector electronics. These are used in the estimation of event-level energy sums, such as the missing transverse energy.

In the first stage of the off-detector, or back-end, processing, two-dimensional clusters are formed for each layer, using a dynamic clustering method, as used in the CMS Phase 1 calorimeter trigger [3]. The algorithm first identifies seed trigger cells associated with high energy deposits and then clusters the topologically connected trigger cells within a limited radius. Topological variables quantifying the transverse extension of the shower are computed at this stage, to be used for background discrimination.
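The seeded two-dimensional clustering can be sketched as follows. This is a simplified model, not the firmware implementation: the thresholds, the clustering radius, and the Cartesian cell coordinates are all assumptions made for the example.

```python
import math

# Illustrative values only; the real thresholds are configurable.
CELL_THRESHOLD = 0.5   # front-end readout threshold on trigger-cell Et (GeV)
SEED_THRESHOLD = 5.0   # minimum Et for a trigger cell to seed a cluster (GeV)
MAX_RADIUS = 6.0       # clustering radius around the seed (cm)


def cluster_layer(cells):
    """Seed-based 2D clustering on one layer.

    cells: list of (x [cm], y [cm], Et [GeV]) trigger cells.
    Returns a list of (x, y, Et) clusters, highest-Et seed first.
    """
    cells = [c for c in cells if c[2] >= CELL_THRESHOLD]
    order = sorted(range(len(cells)), key=lambda i: -cells[i][2])
    assigned, clusters = set(), []
    for i in order:
        if i in assigned or cells[i][2] < SEED_THRESHOLD:
            continue
        sx, sy, _ = cells[i]
        # collect unassigned cells within the radius around the seed
        members = [j for j in range(len(cells)) if j not in assigned
                   and math.hypot(cells[j][0] - sx, cells[j][1] - sy) <= MAX_RADIUS]
        assigned.update(members)
        et = sum(cells[j][2] for j in members)
        # energy-weighted barycentre gives the cluster position
        x = sum(cells[j][0] * cells[j][2] for j in members) / et
        y = sum(cells[j][1] * cells[j][2] for j in members) / et
        clusters.append((x, y, et))
    return clusters
```

The transverse shower-extension variables mentioned above would be computed from the spread of the member cells around this barycentre.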
Three-dimensional clusters are formed in the second stage of the back-end processing by combining the two-dimensional clusters along the depth of the calorimeter, with the transverse energy defined as a weighted sum of the transverse energy of the two-dimensional clusters. The detailed implementation of this clustering is currently under study and several options are being investigated, from a simple geometrical cone-based implementation to a more complex likelihood-based approach. Additional topological variables are computed at this stage to determine, for example, the length of the shower or the layer associated with the highest energy deposit. Also, different energy estimates can be computed for a three-dimensional cluster, depending on the identification of the shower as electromagnetic or hadronic, and potentially pile-up subtraction could be included by assigning negative weights to some layers.
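A minimal model of the cone-based option could look like the sketch below, assuming a fixed (η, φ) cone and unit layer weights, both of which are illustrative choices rather than the values under study. It also shows where per-layer weights, including the negative weights mentioned for pile-up subtraction, would enter:

```python
import math

CONE_DR = 0.1       # illustrative (eta, phi) cone size
LAYER_WEIGHT = {}   # per-layer weights, default 1.0; a negative weight on a
                    # layer would implement the pile-up subtraction idea


def dphi(a, b):
    """Signed azimuthal difference wrapped into (-pi, pi]."""
    d = (a - b) % (2 * math.pi)
    return d - 2 * math.pi if d > math.pi else d


def cluster_3d(clusters_2d):
    """clusters_2d: list of (layer, eta, phi, Et) two-dimensional clusters."""
    remaining = sorted(clusters_2d, key=lambda c: -c[3])
    clusters_3d = []
    while remaining:
        seed = remaining[0]
        in_cone = [c for c in remaining
                   if math.hypot(c[1] - seed[1], dphi(c[2], seed[2])) <= CONE_DR]
        for c in in_cone:
            remaining.remove(c)
        # transverse energy as a weighted sum over the 2D clusters
        et = sum(LAYER_WEIGHT.get(c[0], 1.0) * c[3] for c in in_cone)
        clusters_3d.append({
            "eta": seed[1], "phi": seed[2], "et": et,
            "first_layer": min(c[0] for c in in_cone),          # shower start
            "max_layer": max(in_cone, key=lambda c: c[3])[0],   # shower maximum
            "length": max(c[0] for c in in_cone) - min(c[0] for c in in_cone) + 1,
        })
    return clusters_3d
```

The topological variables computed here (first layer, layer of maximum deposit, shower length) are the ones the text later uses for electromagnetic/hadronic discrimination.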
Various energy thresholds are used in the processing chain, both to limit the impact of electronic noise and pile-up, and to keep the number of objects produced within the bandwidth constraints. The resulting loss of cluster energy response, from the components falling below these thresholds, can be corrected in the cluster calibration, while the impact on the energy resolution of hadronic objects could be recovered by combining the information from the energy sums, which are not affected by those thresholds.

Trigger object performance
In order to assess the impact of the HGCAL trigger design choices on the trigger performance, reconstruction and identification algorithms have been developed using only information from the calorimeter.

Electrons and photons
In the algorithm used to select electrons and photons, single three-dimensional clusters are combined with topological variables used to discriminate electromagnetic from hadronic objects. The variables used are the shower width along the radial direction, the first layer of the shower, the layer with the maximum energy deposit, and the shower length. They are combined in a Boosted Decision Tree that is used to define the working points. These variables can later be complemented with variables derived from the tracker in the central L1 trigger.
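As an illustration of this approach, the following trains a small BDT on toy distributions of the four variables and defines a working point as a cut on the BDT score. The toy samples, the scikit-learn model, and the 95%-efficiency working point are all assumptions made for the example and do not reproduce the actual CMS training:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000


def toy(width_mu, first_mu, max_mu, len_mu):
    """Toy shower-shape variables drawn from Gaussians (illustrative)."""
    return np.column_stack([
        rng.normal(width_mu, 0.5, n),   # radial shower width
        rng.normal(first_mu, 1.5, n),   # first layer of the shower
        rng.normal(max_mu, 2.0, n),     # layer with the maximum deposit
        rng.normal(len_mu, 3.0, n),     # shower length in layers
    ])


em = toy(1.5, 1.0, 6.0, 10.0)    # electromagnetic: narrow, early, short
had = toy(3.0, 4.0, 12.0, 20.0)  # hadronic: wide, late, long
X = np.vstack([em, had])
y = np.array([1] * n + [0] * n)

bdt = GradientBoostingClassifier(n_estimators=50, max_depth=3, random_state=0)
bdt.fit(X, y)

# A working point is a cut on the BDT score chosen for a target efficiency.
scores = bdt.predict_proba(em)[:, 1]
wp = np.quantile(scores, 0.05)   # keep ~95% of electromagnetic showers
```

Tightening or loosening the quantile moves along the efficiency/rejection trade-off, which is how multiple working points would be defined.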
The performance of the electromagnetic (e/γ) trigger derived from the HGCAL is illustrated in Figs. 2 and 3, where a sharp turn-on curve and a 99% plateau efficiency can be seen. The expected increase in rate between the pile-up 140 and 200 scenarios is small, owing to the limited size of the two-dimensional clusters, in particular in the forward region of the detector where there is large hadronic activity.

Jets
In the jet algorithm, jets are built using the anti-kT clustering algorithm with the three-dimensional clusters as input, with a small radius parameter R = 0.2 used to reduce the effect of pile-up. Energy corrections are applied in two steps: an η-dependent pile-up subtraction is applied first, and then a pT-dependent calibration is used to correct the energy response with respect to generator-level jets formed with a radius parameter R = 0.4. The performance of the jet trigger derived from the HGCAL, without pile-up and with a pile-up of 200, for a 150 GeV jet pT trigger requirement is shown in Fig. 4; the limited impact of pile-up can be seen in the figure. The jet energy resolution will be further improved with information from the tracker in a particle-flow event reconstruction. The rates for different jet algorithms are shown in Fig. 5, demonstrating the benefit of topological requirements based on the di-jet invariant mass, which is relevant for vector boson fusion events with two forward jets.
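The two-step correction can be sketched as follows, with entirely made-up constants standing in for the simulation-derived ones; only the structure (subtract an η-dependent pile-up expectation scaled by the jet area, then divide by a pT-dependent response) reflects the text:

```python
import bisect
import math

# All constants below are illustrative; the real corrections come from simulation.
PU_ETA_EDGES = [1.5, 2.0, 2.5, 3.0]   # |eta| bin edges in the HGCAL region
PU_DENSITY = [4.0, 6.0, 9.0]          # expected pile-up Et per unit area (GeV)
JET_AREA = math.pi * 0.2 ** 2         # area of an R = 0.2 jet

PT_EDGES = [0.0, 50.0, 100.0, float("inf")]  # pT bins for the calibration
RESPONSE = [0.75, 0.85, 0.92]                # reco/gen response per pT bin


def bin_index(edges, x):
    """Index of the bin containing x, clamped to the valid range."""
    return min(max(bisect.bisect_right(edges, x) - 1, 0), len(edges) - 2)


def correct_jet(pt, abseta):
    # step 1: eta-dependent pile-up subtraction (density times jet area)
    pt_sub = max(pt - PU_DENSITY[bin_index(PU_ETA_EDGES, abseta)] * JET_AREA, 0.0)
    # step 2: pt-dependent calibration to the generator-level response
    return pt_sub / RESPONSE[bin_index(PT_EDGES, pt_sub)]
```

The small jet area of R = 0.2 is what keeps the subtracted pile-up term, and hence its fluctuations, small.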

Conclusion
The high luminosity conditions during LHC Phase 2 place significant demands on the design of the new CMS trigger system. The new HGCAL detector, with its very large number of channels, presents new challenges in trigger data bandwidth and processing. We have developed effective data reduction strategies to contend with these novel conditions, and the HGCAL detector offers new opportunities to be exploited in the trigger design. We have shown that the longitudinal development of showers can be exploited to mitigate pile-up and to reduce the data rate. The fine granularity of the detector can be expected to play a major role when the information from all the subdetectors is combined in the trigger reconstruction. The trigger object performance obtained from the HGCAL alone is already very promising. Our design studies have proved very useful in assessing the impact of different options for the architecture of the HGCAL trigger.