Operational experience with the ALICE Pixel Detector

The ALICE Silicon Pixel Detector (SPD) constitutes the innermost detector of the ALICE experiment, the LHC experiment dedicated to the investigation of strongly interacting matter in heavy-ion collisions. The SPD consists of ∼10 million pixels organized in two layers at radii of 39 mm and 76 mm, covering pseudorapidity ranges of |η| < 2 and |η| < 1.4, respectively. It provides the position of the primary and secondary vertices, and it has the unique feature of generating a trigger signal that contributes to the L0 trigger of the ALICE experiment. Installed in 2007, the SPD has recorded data since the first LHC collisions. This contribution presents the main features of the SPD, the detector performance and the operational experience, including calibration and optimization activities, since installation in ALICE. The ongoing consolidation activities carried out to prepare the detector for data taking during Run 2 of the LHC will also be described.

data are transmitted on optical fibers to the off-detector electronic boards (called Routers) located in a counting room.
Each of the front-end chips of the detector has the possibility of generating a prompt signal that contributes to the first level of trigger (L0) of the experiment. The trigger signal, called Fast-OR, is generated when at least one pixel inside a chip is hit by a particle. The Fast-OR bits are processed by a Pixel Trigger system, which generates 10 programmable outputs and sends them to the Central Trigger Processor of the experiment within 800 ns of the collision.
The trigger algorithms are based on Boolean functions of different multiplicity thresholds set on the number of hit chips in the inner layer, in the outer layer, or in the overall detector. All the thresholds are programmable, depending on the trigger requirements of the experiment.
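The logic described above can be illustrated with a short sketch. This is not ALICE software: the output names, chip counts and default thresholds are hypothetical, chosen only to show how Boolean combinations of programmable multiplicity thresholds yield the trigger outputs.

```python
# Illustrative sketch (not ALICE code) of Fast-OR trigger algorithms:
# Boolean functions of multiplicity thresholds on the number of hit chips
# in the inner layer, the outer layer, or the whole detector.
# Output names and threshold defaults are assumptions for illustration.

def trigger_outputs(fastor_bits_inner, fastor_bits_outer,
                    thr_inner=1, thr_outer=1, thr_total=2):
    """Return example trigger decisions from per-chip Fast-OR bits.

    fastor_bits_inner/outer: iterables of 0/1, one entry per front-end chip.
    """
    n_inner = sum(fastor_bits_inner)   # hit chips in the inner layer
    n_outer = sum(fastor_bits_outer)   # hit chips in the outer layer
    n_total = n_inner + n_outer        # hit chips in the whole detector

    return {
        # minimum-bias-like condition: activity in both layers
        "MB": n_inner >= thr_inner and n_outer >= thr_outer,
        # high-multiplicity condition on the whole detector
        "HM": n_total >= thr_total,
    }
```

Since all thresholds are programmable, the same function evaluated with different `thr_*` values mimics reconfiguring the outputs for different trigger requirements.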

Detector operation
The pixel detector and its trigger system have been operated in the experiment since 2008, for the acquisition of cosmic ray tracks used for the offline alignment of the detectors before the first circulating beams in the LHC. The operation of the powered modules has continued since then without major problems and with an efficiency of ∼99%.
Since the pixel detector is the closest detector to the interaction point and to the beam, a safety policy has been implemented to protect the sensors from the effects of beam instabilities and potential beam losses. While the beams are being injected, adjusted or ramped in energy, the 50 V reverse bias applied to the sensor is lowered to 2 V and the sensor is not depleted. This prevents a large charge deposition from creating a short circuit between the sensor bias voltage and the input of the preamplifiers in the front-end chips. The full depletion voltage of 50 V is restored once the beams are declared stable.
At the start of each run, the detector and trigger configurations are checked. The different detectors store their configurations in a dedicated database, the ALICE Configuration Tool (ACT) [3], and associate them to various trigger conditions. The SPD uses the ACT to store the latest configuration of the front-end chips (42 pixel DACs and 8 reference DACs, as described above) and the mask of noisy pixels; the Pixel Trigger uses the ACT to store the parameters of the trigger outputs and the chips included in the trigger logic. This reduces the probability of manual mistakes: even if some settings are changed by the operator for test purposes, the ACT automatically restores the correct configurations.
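The restore step can be pictured as follows. This is a hedged sketch, not the actual ACT interface: the parameter names, dictionary layout and `apply` callback are hypothetical, and stand in for whatever mechanism writes a DAC setting to the hardware.

```python
# Minimal sketch (names hypothetical, not the real ACT API) of the
# start-of-run configuration check: every stored reference setting that
# differs from the live one is re-applied, so operator test changes
# cannot survive into a physics run.

def restore_configuration(stored, current, apply):
    """Re-apply every stored setting that differs from the live value.

    stored/current: dicts of parameter name -> value (e.g. DAC settings).
    apply: callback writing one parameter to the hardware.
    Returns the list of parameters that were restored.
    """
    restored = []
    for name, value in stored.items():
        if current.get(name) != value:
            apply(name, value)      # push the reference value to the detector
            restored.append(name)
    return restored
```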
There is no difference in terms of detector operation between pp and Pb-Pb collisions. The configuration of the front-end chips remains unchanged; only the parameters of the trigger algorithms are modified to match the new trigger requirements.

Run statistics
Since the first collisions, the pixel detector has been included in more than 96% of the runs with colliding beams (see figure 2) considered for the physics analysis.
Since 2011, the reasons that caused the runs to end are recorded in the electronic database that stores the details of all the runs taken in the experiment. Collecting all the data available there, more than 40% of the End-of-Run reasons are related to normal beam operation, being due, for instance, to beam dumps or interlocks. Among the remaining abnormal End-of-Runs that are due to problems with the 18 ALICE sub-detectors, the pixel detector is responsible for only 3.4% of them (see figure 2).

Error handler
An error handler [4] has been regularly used during the data acquisition to identify and store the errors of the pixel detector system. This tool is able to detect errors coming from different sources:
• from the pixel detector, such as missing communication from a half-stave, or overflow/underflow errors of the FIFOs in the pixel chips;
• from the trigger board, such as a missing Fast-OR bit in the output data of a chip that has a hit in the pixel matrix;
• from the off-detector electronics (Routers), such as errors in the format of the event data detected before sending the event to the Data Acquisition system (missing header, wrong chip number, missing trailer, etc.);
• from the optical connections, such as missing communication across the trigger and clock optical link, or a data acquisition link not ready.
When errors are detected, they are formatted and stored in a memory inside the Router. Every class of error has an assigned priority; this is used to rank the errors and to prevent a possible cascade of secondary errors from quickly filling the memory. The driver layer of the pixel detector reads the errors from the Routers and stores them in an Oracle database. This happens in parallel with the data acquisition, so the data taking is not disturbed by the error handling procedure. The errors are also automatically notified to the shifters in the control room through the standard alarm interface.
The errors can be later fetched from the database, and they are used either for debugging or for statistics.
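The priority mechanism can be sketched as a bounded buffer. This is an illustration, not the Router firmware: the class shape, priority values and error messages are assumptions, meant only to show how ranking errors keeps a cascade of low-priority secondary errors from evicting the primary one.

```python
import heapq

# Hedged sketch (not the actual Router firmware) of a bounded,
# priority-ordered error memory: when full, the lowest-priority stored
# error is dropped first, so a cascade of secondary errors cannot
# displace the primary error that triggered it.

class ErrorBuffer:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self._heap = []   # min-heap of (priority, seq, message)
        self._seq = 0     # tie-breaker preserving arrival order

    def push(self, priority, message):
        """Store an error; higher priority values are kept preferentially."""
        self._seq += 1
        entry = (priority, self._seq, message)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        elif priority > self._heap[0][0]:
            # buffer full: replace the lowest-priority stored error
            heapq.heapreplace(self._heap, entry)
        # else: drop the new, lower-priority error

    def drain(self):
        """Read out all stored errors (e.g. to a database), highest first."""
        out = sorted(self._heap, reverse=True)
        self._heap.clear()
        return [(priority, message) for priority, _, message in out]
```

In this sketch `drain` plays the role of the driver layer reading the Router memory in parallel with data taking.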

Detector calibration
In order to achieve a good detector performance (efficiency above 99%), it is important to check that the response of the matrix is uniform throughout the detector. This check is performed regularly, at every technical stop of the LHC (approximately every two months). The uniformity of the matrix response is checked using an internally generated pulse: an automatic scan injects into every pixel a pulse of known amplitude, corresponding approximately to the signal generated by a Minimum Ionising Particle (MIP).
Two DACs are used to set voltage references corresponding to the supply voltage of the pixel chips (1.85 V) and to roughly half this value (900 mV). Every half-stave has a slightly different optimal working point around these two values in terms of efficiency and noise; the two voltage references are therefore adjusted by a few mV to optimize the half-stave performance.
Two other parameters that can be changed to optimize the uniformity of the matrix response are the threshold of the discriminator, set at the chip level, and the bias of the first preamplifier stage in the in-pixel electronics.
The threshold is set with an internal DAC; since its value has a direct impact on the trigger rate generated by the pixel detector, this parameter is regularly monitored both through the trigger rate at every data acquisition and with a dedicated scan at every technical stop.
In 2010 the threshold underwent an extensive study and optimization [5]; it now requires only minor adjustments. The default setting before the main optimization was around 3100 electrons, while after the optimization the average threshold is around 2500 electrons, as shown in figure 3. An increase of the DAC value corresponds to a lower threshold and thus to a higher detection efficiency; the value of 2500 electrons gives the best compromise between efficiency and noise.
During the threshold adjustment, the noisy pixels are also checked. A pixel is defined as noisy if it fires in more than 0.2% of the total number of triggers during a run in self-triggering mode. With the reduction of the threshold, the fraction of noisy pixels increased from 0.006% to 0.01%; this is still a negligible fraction of the total number of pixels.
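The noisy-pixel rule above is simple enough to state in a few lines of code. The data layout (a mapping from pixel address to hit count) is a hypothetical stand-in for the actual scan output; only the 0.2% criterion comes from the text.

```python
# Illustrative sketch of the noisy-pixel selection rule: a pixel is
# flagged noisy if it fires in more than 0.2% of the triggers of a run
# taken in self-triggering mode. The data layout is hypothetical.

NOISE_FRACTION = 0.002  # 0.2% of the total number of triggers

def noisy_pixels(hit_counts, n_triggers):
    """Return the addresses of pixels exceeding the noise cut.

    hit_counts: dict mapping pixel address -> number of times it fired.
    n_triggers: total number of triggers in the self-triggered run.
    """
    cut = NOISE_FRACTION * n_triggers
    return {addr for addr, n_hits in hit_counts.items() if n_hits > cut}
```

The resulting set would correspond to the noisy-pixel mask that the SPD stores in the ACT.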
The average noise of the pixels is around 300 electrons.

Trigger calibration
The rate of the trigger signals coming from the pixel detector is constantly monitored by the shifters in the control room during the whole data taking. Depending on the physics that the experiment wants to address and on the beam configuration, different thresholds can be configured for the trigger algorithms. Every time the detector is switched on again, a more detailed verification of the trigger conditions is performed. The purity of the trigger is very high, above 99%.
In addition to the physics data taking, cosmic ray runs are periodically taken to verify the correct functioning of the full experiment. In these runs, the trigger rates are checked and compared to the reference value of 0.3 Hz expected for cosmic rays in the cavern.

Cooling interventions
Although the pixel detector works at room temperature, a cooling system is needed to prevent the temperature from rising, since the detector dissipates a total power of 1.35 kW within a low material budget of ∼1.1% X0 per layer. The detector has an embedded evaporative cooling system using freon (C4F10), based on cooling pipes running under each half-stave in thermal contact with the back side of the pixel chips.
After the installation in the ALICE experiment, the cooling system started to show reduced performance [6], which was tackled with a number of corrective actions, such as the installation of new input lines, periodic counter-flow cleaning of all the lines, and the installation of additional monitoring and tuning devices. However, these actions were only partially successful, and the cooling flow in some lines decreased to such a level that in 2011 only 63% of the detector could be powered on.
Many studies were carried out, and it became clear that some filters in the cooling lines were partially clogged. These filters cannot be replaced, because they are located in a patch panel that is not accessible unless the TPC is moved from its position, which requires many weeks of intervention. Other filters located upstream at an accessible point have been removed and studied with SEM analysis; traces of metals and graphite were found, which may explain the clogging of the inaccessible filters downstream.
A stable solution to this problem was to drill through the filters, in order to re-establish the correct cooling flow, after installing new, clean and accessible filters at the pump level. The drilling procedure was particularly challenging, because the last accessible point before the clogged filters is located at a distance of 4.5 m from them, as shown in figure 4, and the inner diameter of the cooling pipe is only 4 mm.
The team working on the cooling assembled a dedicated tool consisting of a tungsten carbide tip welded onto a stainless steel wire 5 m long and 2.5 mm thick. After the drilling, a long cleaning procedure was carried out to remove all the fragments generated by the drilling: the cleaning included the use of a vacuum pipe, a magnet inserted up to the filters, and counter-flow cleaning.
After this operation the nominal flow of 2.1 g/s was re-established in all 10 cooling lines, well above the minimum value of 1.8 g/s required to remove all the heat when all the half-staves of a sector are powered on. During 2012 and the p-Pb run in 2013, only 5% of the detector, i.e. 6 half-staves, was off due to high temperature, including 2 half-staves with a bad thermal contact with the cooling pipe.