Commissioning and operation of the readout system for the SoLid neutrino detector

The SoLid experiment aims to measure neutrino oscillation at a baseline of 6.4 m from the BR2 nuclear reactor in Belgium. Anti-neutrinos interact via inverse beta decay (IBD), resulting in a positron and neutron signal that are correlated in time and space. The detector operates in a surface building, with modest shielding, and relies on extremely efficient online rejection of backgrounds in order to identify these interactions. A novel detector design has been developed using 12800 5 cm cubes for high segmentation. Each cube is formed of a sandwich of two scintillators, PVT and ⁶LiF:ZnS(Ag), allowing the detection and identification of positrons and neutrons respectively. The active volume of the detector is an array of cubes measuring 80 × 80 × 250 cm (corresponding to a fiducial mass of 1.6 T), which is read out in layers using two dimensional arrays of wavelength shifting fibres and silicon photomultipliers, for a total of 3200 readout channels. Signals are recorded with 14 bit resolution, and at 40 MHz sampling frequency, for a total raw data rate of over 2 Tbit/s. In this paper, we describe a novel readout and trigger system built for the experiment, which satisfies requirements on compactness, low power, high performance, and very low cost per channel. The system uses a combination of high price-performance FPGAs with a gigabit Ethernet based readout system, and its total power consumption is under 1 kW. The use of zero suppression techniques, combined with pulse shape discrimination trigger algorithms to detect neutrons, results in an online data reduction factor of around 10000. The neutron trigger is combined with a large per-channel history time buffer, allowing for unbiased positron detection. The system was commissioned in late 2017, with successful physics data taking established in early 2018.



Keywords: Data acquisition circuits; Trigger algorithms; Front-end electronics for detector readout; Control and monitor systems online

Introduction

The SoLid experiment
SoLid is designed to measure neutrino oscillations at very short baselines, O(10) m, from a nuclear reactor core. Hints of reactor neutrino oscillations at this energy and distance scale arise from both the reactor and gallium anomalies [1]. Measurements of the rate of ν̄e emitted by reactor cores show a deficit at ∼3σ significance when compared with expectations. Deficits of a similar significance are also observed in measurements of the νe rate emitted by radioactive sources. One proposed explanation is the existence of an additional non-interacting 'sterile' neutrino state, and a corresponding mass eigenstate; this fourth neutrino could influence the neutrino flavour transitions (oscillations) at very short distances. The existence of such oscillations can be tested using measurements of the ν̄e energy spectrum as a function of distance from a neutrino source.
The BR2 reactor core is especially suitable given its small core diameter of 0.5 m (see figure 1). The fuel composition is predominantly (> 90%) ²³⁵U, which is of particular interest to the nuclear physics community, who await updated ν̄e energy spectrum measurements from highly enriched fuels to resolve the 5 MeV distortion observed by previous reactor experiments [2]. The reactor is operated at around 60 MW for typically half the year, in evenly spread one-month cycles. The space in the reactor hall is sufficient for a relatively compact detector to be placed with modest passive shielding.
Compared to previous neutrino oscillation experiments that operate at longer baselines, this environment is particularly challenging. The detector has to be placed on the surface, with negligible overburden to shield from cosmic-ray backgrounds. Additionally, the reactor itself produces a large rate of gamma rays during operation that can further contribute to backgrounds. Previous experience of running a 288 kg (20% scale) prototype of the detector in spring 2015 [3], demonstrated the need for a controlled temperature environment to reduce and stabilise the SiPM dark count rate, as well as a need for electromagnetic shielding to prevent pickup of electronic noise. The detector is scheduled to run for around three years. Efficient online signal tagging is required in order to reach the experiment's physics aims in this time period.

Detector design
Anti-neutrinos are detected in SoLid via inverse beta decay (IBD), resulting in a positron and neutron signal that are correlated in space and time. To take advantage of this spatial correlation, the SoLid detector is highly segmented, with its detection volume formed of 12800 5 cm cubes. This corresponds to the scale of the mean separation between the positron and neutron interaction. The bulk of each cube is polyvinyltoluene (PVT) based scintillator that offers high light output and a linear energy response. Sheets of ⁶LiF:ZnS(Ag) are placed on two faces of each cube to facilitate neutron detection. In order for each cube to be optically isolated, it is wrapped in white Tyvek®. Neutrons may be captured on the lithium via the interaction

⁶Li + n → ³H + α (Q = 4.78 MeV).

The alpha and tritium particles deposit energy in the ZnS(Ag) causing scintillation. These heavy nuclei scintillations are referred to as nuclear signals (NS). Crucially, the scintillation timescale of nuclear signals is considerably slower, at O(1) µs, than the PVT scintillation at O(1) ns. Nuclear signals are characterised by a set of sporadic pulses emitted over several microseconds. Pulse shape discrimination (PSD) techniques can be used to identify the nuclear signals with high efficiency and purity. These are used both in offline data analysis and, in simplified form, in the trigger. The full signal waveforms are therefore required for offline analysis. A sampling speed of around 40 MHz is sufficiently fast to perform effective PSD, whilst providing adequate time resolution. This has been demonstrated with smaller scale lab setups and by a large scale prototype of the full detector [3].

The detector itself is placed in a customised shipping container, and surrounded by a 50 cm wall of passive water shielding. A roof structure supports 50 cm of high-density polyethylene (HDPE) passive shielding.
In IBD interactions, the positron is detected immediately via scintillation in PVT, and the neutron is detected after thermalisation and capture. The separation of the positron and neutron is 2 cubes or less in 90% of IBD interactions. The gamma rays resulting from the annihilation of the positron travel up to 30 cm, and can deposit energy in other neighbouring cubes. The mean time interval between the positron and neutron scintillation signals is around 60 µs, and the neutron capture efficiency of this configuration is around 66%. For a reactor power of 60 MW, the expected rate of neutrino captures in the detector is approximately 1200 per day (around 1 per 100 seconds).
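These time and space correlations are the basis of the offline IBD selection. As a purely illustrative sketch (the `Candidate` container, function names, and default cut values are assumptions chosen to match the scales quoted above, not the experiment's actual analysis code), candidate pairing could look like:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    t_us: float    # signal time in microseconds
    cube: tuple    # (x, y, z) cube indices

def pair_ibd(positrons, neutrons, max_dt_us=500.0, max_cubes=2):
    """Pair positron and neutron candidates that are correlated in
    time (neutron after positron, within max_dt_us) and in space
    (within max_cubes in each coordinate). Illustrative only."""
    pairs = []
    for n in neutrons:
        for p in positrons:
            dt = n.t_us - p.t_us
            close = all(abs(a - b) <= max_cubes
                        for a, b in zip(p.cube, n.cube))
            if 0 < dt < max_dt_us and close:
                pairs.append((p, n))
    return pairs
```

With a mean positron-neutron time interval of around 60 µs, most true pairs fall well inside the assumed 500 µs window, while the spatial cut exploits the 5 cm segmentation.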
The cubes are arranged in 50 planes of 16×16 cubes. Light from each detector plane is read out via a 2D array of vertical and horizontal wavelength shifting fibres that sit in grooves machined in the PVT. Two fibres sit along each row and column of cubes, giving 64 fibres per plane. The use of two fibres per vertical/horizontal direction enhances the overall light collection efficiency, as well as providing channel redundancy. One end of each fibre is coupled to a second-generation Hamamatsu silicon photo-multiplier (SiPM), whilst on the other end there is a mirror. The light yield has been measured to be 15 pixel avalanches (PA) per fibre per MeV of deposited energy, for a typical operational SiPM over-voltage of 1.5 V. At this over-voltage, in order to avoid saturated signals in the digitised waveforms for high energy signals such as vertical muons, a dynamic range of O(10⁴) ADC counts is required (i.e. 14 bit resolution).
The entire detector is placed in a light-tight and thermally insulated shipping container. The inside of the container is cooled to 10 °C, which reduces the dark count rate of the SiPMs, whilst also providing a thermally controlled environment.

Readout system requirements
The physics program described above results in the following requirements for the SoLid readout system:

Functional requirements
• Efficient online tagging of the anti-neutrino signal, with effective discrimination of signal from background
• High reliability for unattended operation and restricted access

This paper describes a readout system for the SoLid experiment that fulfils all these requirements. Section 2 describes the hardware design of the system. First results and characterisation of the detector channels are presented in section 3. Section 4 describes the strategy and implementation of online triggers and data reduction techniques, including their optimisation. Finally, section 5 gives in situ performance results of the system as a whole, including the live-time of the detector as a function of trigger rate, and stability over time.

Readout system design
In order to achieve the above requirements, particularly for mechanical flexibility at a low cost, most components of the readout system are of custom design. This allows the system to be integrated with the detector directly inside the chilled container, removing the need for long analogue cabling, which both reduces the risk of electromagnetic interference and improves the ease of installation. The system modularity is split into three levels: a single detector plane, a set of ten detector planes (i.e. a detector module), and multiple modules (i.e. the full detector). Partitioned running of the system in each of these modularities is supported. Each level is now discussed in turn.

Detector planes
For each detector plane, the 64 SiPMs are placed in a hollow aluminium frame. One side of the frame has an interface plate with connectors for the SiPMs. The SiPMs connect to this plate using twisted-pair ribbon cables that terminate into insulation displacement connectors (IDC). The other side of the interface plate connects to an aluminium electronics enclosure, which is mounted directly on the frame side, and houses all required electronics to run a single detector plane (see figure 2). Each enclosure is electrically connected to its frame, and so each detector plane acts as a Faraday cage, providing shielding from outside electronics noise contributions. The enclosures can be quickly attached and detached from each frame, allowing for ease of integration and replacement.
Each enclosure contains two analogue boards, each containing amplification and biasing circuitry, and a single digital board (see below). Additionally, there are two smaller boards: one for I²C communication with four in-frame environmental sensors, and another board for power distribution. A picture of a complete assembled electronics enclosure is shown in figure 3. The top and bottom sides of the enclosure have openings to allow air flow to cool the electronics.

Analogue electronics
The 64 frame SiPMs connect to two 32-channel analogue boards. The cathodes of all SiPMs connected to one board are biased from a common 70 V supply. The breakdown voltage and corresponding gain variations of the individual SiPMs are equalised by applying individual trim bias voltages (0-4 V) to each SiPM. The setting of SiPM over-voltage is a compromise between photon detection efficiency, pixel cross-talk and thermal dark count rate. This can have a significant impact on the neutron detection efficiency, given the low amplitude and sporadic nature of nuclear signals. Experience from prototype setups suggests values between 1.5 V and 1.8 V are suitable, and this is optimised later for neutron efficiency.
Since the SiPM pulses are very fast (a few ns) compared to the digital sampling (25 ns), the pulses are shaped to stretch the signal over several samples, allowing a more accurate timestamp and amplitude to be measured offline. Due to the AC coupling between the SiPMs and the analogue board amplifier, the mean signal over long time periods is forced to be zero; as a consequence, the pedestal of each channel depends on that channel's signal rate. In practice, it was observed that large changes (>0.5 V) to the bias voltage, which can change the SiPM dark count rate, can cause significant changes in the pedestal, requiring the pedestals to be re-measured for input to the trigger. However, other factors that affect the signal rate, including the presence of intense calibration sources, or the reactor on-off transition, resulted in less significant changes, and do not require a re-calibration.

Digital electronics
The two analogue boards connect directly to a 64-channel digital board for digitisation and online data processing. Each digital board has eight 8-channel 14 bit 40 Msample/s ADCs, and is controlled and read out over a 1 Gb/s optical Ethernet connection. A Phase-Locked Loop (PLL) is included, which allows the digital boards to operate in standalone mode using an internally generated clock, or run synchronised to an external clock signal. Two duplex 2.5 Gb/s links carried on copper cables allow data (e.g. trigger signals) from each digital board to be propagated to all other detector planes along a daisy chain. An on-board over-temperature shutdown is included. JTAG connectors are included for remote firmware programming.
The trigger and readout logic is implemented in a Xilinx Artix-7 FPGA device (XC7A200). The sizing of the FPGA device is driven by the buffer RAM requirement, which is in turn dictated by the trigger rate and the space-time region read out, combined with the waveform compression factor due to zero suppression. This device offers sufficient block RAM resources for likely running conditions at a modest cost, and was conveniently available in commercial sub-assemblies [4].

Detector modules
Ten planes are grouped together to form a mechanically independent detector module (see figure 4). A module clock-board, which can operate in both master and slave mode, provides a common clock fan-out, synchronising the ten associated digital boards. The clock-board is placed in a module services box that is mounted above the ten electronics enclosures. The services box also includes a DC-DC converter to power the module, as well as a Minnow single board computer to program the clock-board and detector plane FPGA firmware via a JTAG fan-out.
In order to cool the enclosures, fans are mounted between the services box and the plane electronics enclosures, pushing air downwards. A heat exchanger is placed below the enclosures, which removes heat generated by the electronics (∼200 W per module), and also acts as the cooling source for the enclosing container.

Full detector configuration
Finally, modules are grouped and operate synchronously once connected to an additional master clock-board, which drives the module slave clocks. The final SoLid detector configuration comprises five detector modules. Table 1 shows the hardware costs of the readout system. The final cost of the system is around $220k, or $70 per channel, with 80% of the budget dedicated to the per-plane electronics.

Prototyping
The digital and analogue electronics underwent two stages of prototyping. A small 8-channel system served as the first prototype [7], and a second prototyping round saw full-scale boards. Once electronically commissioned, a 64-channel setup was used for detector quality assurance (QA) tests of each 256 cube detector plane. In addition to testing the long term stability of the electronics, this QA setup served as a test platform for trigger development, data acquisition, and detector monitoring. Each cube of each detector plane was studied with several radioactive sources including ²²Na and Cf, which were moved across the plane face using a dedicated calibration robot: Calipso [5]. A picture of the setup is shown in figure 5.

Figure 6 shows the SoLid data flow, beginning with the digital board ADCs, through to data recorded on disk at the on-site DAQ server. Each digital board can be read out over a 1 Gb/s optical Ethernet port. During normal data taking operations, the data rate is considerably lower, at around 2 Mb/s per plane. During source runs, planes near the source can reach the maximum limit. The 2 Tb/s total data rate out of the ADCs is reduced down to ∼200 Mb/s recorded to disk.

Readout software and communication
The FPGAs are configured and read out using the IPbus protocol, which operates on top of UDP/IP and is designed for reliability and high performance [6]. The readout software is written in the Golang programming language, to take advantage of the language's intrinsic multi-threading and memory management tools. As a result, a Golang implementation of the IPbus software is now available: goIPbus. The software can configure all readout electronics components for data taking, and performs continuous retrieval of data from all fifty detector planes in parallel. A minimal parsing of the data is performed to check the format is as expected, and data from all planes are concatenated into a single data file, in the order of retrieval from the FPGAs. The readout software has been tested up to 1 Gb/s; test results are shown in section 5.1.

ADC alignment
For the FPGA to be able to correctly receive the serial data streams from each ADC channel, the delay of each digital stream must be tuned so that the received data from all channels is aligned. Misalignments can be caused by the different placements of these devices on the large digital board, and are not expected to change significantly board-to-board. The alignment of each channel can be set using two parameters known as slip and tap. The slip parameter allows data to be delayed by an integer number of bits, and the tap parameter allows for finer increments. The ADCs can be configured to return a fixed value (i.e. a test pattern). Suitable alignment parameters can be found by comparing the data, which is read out via the FPGA, with the input test pattern. A typical example output from a 2D scan of tap and slip is shown in figure 7 for the first detector channel. There is a wide range of alignments that consistently return the input test pattern; the central value is chosen. This alignment procedure has been repeated for all detector channels. The spread in alignment parameters is small within both individual ADC chips, and all chips in the same position on the digital boards. A default set of alignments can be used for the majority of detector channels based only on the position of its ADC, although approximately 10% of channels required specific per-channel alignments.
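The selection of a working (slip, tap) setting from such a 2D scan can be sketched as follows. This is illustrative only: `read_pattern`, the test-pattern word, and the parameter ranges are placeholders for the actual hardware interface; the choice of the centre of the widest passing window mirrors the "central value" strategy described above.

```python
def align_channel(read_pattern, slips=range(8), taps=range(32),
                  expected=0x2A5A):
    """Scan (slip, tap) delay settings and return one that works.
    For each slip, collect the taps whose read-back matches the ADC
    test pattern, then choose the centre of the widest contiguous
    passing window, maximising the timing margin.
    `read_pattern(slip, tap)` stands in for the hardware read."""
    best = None
    for slip in slips:
        passing = [t for t in taps if read_pattern(slip, tap=t) == expected]
        # group the passing taps into contiguous runs
        runs, run = [], []
        for t in passing:
            if run and t != run[-1] + 1:
                runs.append(run)
                run = []
            run.append(t)
        if run:
            runs.append(run)
        for r in runs:
            if best is None or len(r) > len(best[1]):
                best = (slip, r)
    if best is None:
        raise RuntimeError("no alignment found")
    slip, r = best
    return slip, r[len(r) // 2]   # central tap of the widest window
```

In this sketch, a channel whose read-back matches only for slip 2 and taps 10-20 would be assigned slip 2 and the central tap 15.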

SiPM characterisation
Figure 6. SoLid data flow from ADCs to disk. The rates stated are typical for physics data taking mode. The upper limits are also marked: the network sets the upper limit for each detector plane, whereas the recorded data rate limit is due to the readout software.

Example SiPM waveforms are shown in figure 8, and the distribution of all waveform samples from a single detector channel is shown in figure 9. The quantisation of the pixel avalanche peaks is clearly visible, particularly when a local maximum filter is applied. The width of the pedestal is relatively small at around 2 ADC counts rms, implying a low level of external noise pickup.
In addition to 15 channels masked due to difficulties in ADC alignment, 18 other channels are masked due to broken connections between the SiPMs and the readout electronics, and due to one damaged ADC on one of the digital boards. This results in a total of 33/3200 masked channels (1%). There are no cases where a pair of SiPMs coupled to fibres along the same column/row of cubes are both masked, allowing all cubes to be read out with at least 3 active fibres.

Gain equalisation
The SiPMs are equalised using the per-channel trim voltage to achieve a uniform single-photon amplitude response. The dominant source of non-uniformity is the large spread in SiPM breakdown voltage, where the standard deviation is around 2 V for those in use in the detector (cf. an operational over-voltage of around 1.8 V). Additional non-uniformities can be caused by variation in the differential gain with respect to over-voltage. This function is measured individually for every detector channel.

Figure 9. Left: spectrum of ADC samples for the first detector channel, with and without a local maxima filter applied (pedestal subtracted). The first pixel avalanche peak can be parameterised using a Gaussian curve, and the mean parameter is used as the channel gain measurement. Right: spread of gain values across all operational SiPMs after the final iteration of the equalisation procedure.

The gain is taken as the position of the 1 PA peak of the waveform amplitude distribution with a local maximum filter applied. This peak can be well fitted with a Gaussian curve, as demonstrated in figure 9, and the mean of the Gaussian is taken as the gain value. The amplitude of the fit is constrained by the number of entries in the distribution, and the width is constrained by the typical fit width. These constraints result in a highly robust gain finding technique, with a failure rate of around 0.1% in the range 1-3 V over-voltage, and a precision of around 0.5 ADC counts.

The variation in breakdown voltage and the over-voltage sensitivity necessitated an iterative voltage scanning procedure, with the appropriate parameters for a given SiPM channel successively refined between scans. Figure 10 shows the resulting mappings for the channels with extremal gain differentials (i.e. maximum and minimum gradient of gain vs. voltage). A linear fit is performed for each channel, and the output parameters are stored in a database. Upon setting the bias voltage, the database is queried to look up the required bias voltage for a desired gain. Figure 9 shows that the spread in gain across channels after this procedure is 1.4%, where the dominant uncertainty is the precision of the gain-finder itself. The desired gain is set to 32 ADC counts per PA, which corresponds to a mean over-voltage of 1.8 V. The spread increases to around 7% when applying a bias voltage that depends only on the breakdown voltage (i.e. a fixed over-voltage for all channels, without taking into account the per-channel gradient of the linear fit). The whole procedure is 100% successful for all non-masked detector channels. The measured breakdown voltages are highly correlated with manufacturer values supplied by Hamamatsu, with a residual standard deviation of 0.1 V.
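The per-channel linear mapping from bias voltage to gain, and its inversion to pick the bias for the 32 ADC counts/PA target, can be sketched as below. This is a minimal illustration: the function names and the synthetic numbers are assumptions, not the experiment's calibration code, and a real fit would use the Gaussian-fitted gain values described above.

```python
def fit_gain_curve(voltages, gains):
    """Least-squares linear fit gain = m*V + c for one channel."""
    n = len(voltages)
    mv = sum(voltages) / n
    mg = sum(gains) / n
    m = (sum((v - mv) * (g - mg) for v, g in zip(voltages, gains))
         / sum((v - mv) ** 2 for v in voltages))
    c = mg - m * mv
    return m, c

def bias_for_gain(m, c, target_gain=32.0):
    """Invert the fit to find the bias voltage giving the desired
    gain (e.g. 32 ADC counts per PA)."""
    return (target_gain - c) / m
```

The stored (m, c) parameters play the role of the per-channel database entries: applying only a fixed over-voltage (ignoring m) is what inflates the gain spread from 1.4% to around 7%.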

Dark count rate
The mean and spread of both the dark count rate and SiPM pixel cross-talk probability across detector channels is shown in figure 11 as a function of over-voltage. The mean dark count rate is 110 kHz per channel at an over-voltage of 1.8 V and at 11 °C, which is around a factor of three lower than at room temperature (20 °C) at the same over-voltage. The corresponding cross-talk probability is 20%. These values are consistent with expectations from studies of SiPMs in environmentally controlled lab setups [8]. The spread in dark count rate is approximately uniform across the detector volume, with no apparent hot spots.

Zero suppression reduction factor
The majority of waveform sample values are near the pedestal, and are thus free of any SiPM signals. The fraction of samples above an applied lower threshold is shown in figure 12 for different over-voltages. A zero suppression (ZS) threshold of 0.5 PA is found to remove the pedestal contribution, whilst retaining all SiPM signals, resulting in a waveform compression factor of around 50. Increasing the threshold further to 1.5 PA can provide another order of magnitude of waveform compression, at the expense of removing the single PA signals.
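The compression factor follows directly from the surviving-sample fraction; a minimal sketch, assuming a known pedestal and gain per channel (function and parameter names are illustrative):

```python
def zs_compression(samples, pedestal, gain_adc_per_pa, threshold_pa=0.5):
    """Fraction of samples surviving a zero-suppression threshold
    expressed in pixel avalanches (PA), and the implied waveform
    compression factor (ignoring any record-keeping overhead)."""
    thr = pedestal + threshold_pa * gain_adc_per_pa
    kept = sum(1 for s in samples if s > thr)
    frac = kept / len(samples)
    return frac, (1.0 / frac if frac else float("inf"))
```

For a waveform where 2% of samples exceed the 0.5 PA threshold, this gives a factor of 50, consistent with the figure quoted above.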

Trigger strategy and implementation
The required IBD signal efficiency and data reduction factor can be achieved with a single level of triggering and event building, which can be implemented in the per-plane FPGAs. This section begins with a description of the different per-plane trigger algorithms, followed by a description of the options for readout and event building. The section ends describing the implementation in firmware, and presents a summary of default settings used during physics data taking mode. The trigger rates stated are the sum across all detector planes.

IBD (i.e. neutron)
The sources and magnitudes of rates for various detector signals in physics mode are shown in table 2. During reactor operation, the rate of gamma rays from the reactor that interact in the cube PVT, which can mimic a positron signal, is too high to allow efficient self-triggering on IBD positrons. Instead, the trigger strategy for IBDs relies solely on triggering on the IBD neutron. For each neutron trigger, a space-time region is read out, which is set large enough to encapsulate all signals from the IBD interaction. This allows the positron signals to be recorded without any trigger bias. The readout region recorded is considerably larger than the expected extent of the IBD interaction in both space and time. This allows the recording of very low amplitude associated signals, which can be used in offline analysis to aid discrimination of signal (e.g. annihilation gammas) and background (e.g. proton recoils).

This strategy is possible if neutron signals can be efficiently identified at a sustainable rate. The amplitude of neutron ZnS(Ag) signals is low and similar to that of both reactor gammas and SiPM dark counts. The scintillation signals themselves are sporadic, and spread over several microseconds; an example waveform is shown in figure 13. Several trigger algorithms, each suitable for FPGA implementation, have been compared [7]. The algorithm offering the best discrimination involves tracking the time density of peaks in the waveforms. Specifically, for each detector channel, the number of waveform local maxima, whose amplitudes are above a peak-finding threshold, is counted in a rolling time window. The algorithm triggers when the number of peaks in the window exceeds a threshold. The algorithm has the following parameters to be set during deployment:

• T: amplitude requirement on waveform local maxima to be counted as a peak (0.5 or 1.5 PA).
• W: the size of the rolling time window. The window width should be set at a scale near the characteristic decay time of the ZnS(Ag) (i.e. a few microseconds).
• N_peaks: the number of peaks required in the window to trigger.
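A software model of this rolling-window trigger might look as follows. This is a simplified sketch, not the FPGA implementation: it assumes a peak is any sample above threshold that is a local maximum of its immediate neighbours, with the threshold given pedestal-subtracted in ADC counts.

```python
from collections import deque

def neutron_trigger(waveform, peak_thr, n_peaks, window=256):
    """Rolling-window peak-density trigger (simplified model of the
    algorithm described above). Counts local maxima above peak_thr
    within the last `window` samples (256 samples = 6.4 us at 40 MHz)
    and fires when the count reaches n_peaks."""
    peak_times = deque()
    for i in range(1, len(waveform) - 1):
        s = waveform[i]
        if s > peak_thr and s >= waveform[i - 1] and s > waveform[i + 1]:
            peak_times.append(i)
        # drop peaks that have fallen out of the rolling window
        while peak_times and peak_times[0] <= i - window:
            peak_times.popleft()
        if len(peak_times) >= n_peaks:
            return i   # sample index at which the trigger fires
    return None
```

The asymmetric neighbour comparison (>= on the left, > on the right) counts a flat-topped peak exactly once, which matters when sporadic ZnS(Ag) pulses saturate or plateau.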
W is fixed at 256 waveform samples (6.4 µs). Scans of trigger efficiency vs. trigger purity (i.e. ROC curves) for different values of N_peaks are shown in figure 14. An offline neutron ID algorithm, using pulse shape discrimination techniques, has been used to calculate the ROC curve values. This algorithm facilitates highly accurate tagging at negligible efficiency loss, with values of efficiency and purity that far exceed those of the trigger. Additional curves are included to scan both T and the detector over-voltage. Optimal performance is achieved with the lower value of T that includes counting 1 PA peaks, and with the over-voltage at 1.8 V. Further increases to the over-voltage result in unacceptable increases in SiPM pixel cross-talk. For N_peaks, the physics mode default values are taken from near the cusp of the optimal ROC curve, on the side of higher efficiency with a sacrifice in purity, corresponding to a trigger efficiency of 81 ± 3% (using measurements from a single position only). The neutron efficiency measurement arises from a comparison of the detected neutron rate with that expected from simulations, where the uncertainty is dominated by the systematic error assigned given the level of agreement between repeated measurements using multiple neutron sources. The corresponding purity is around 40%. Further decreases in this threshold would result in too high a data rate given long term data storage limitations. The distribution of the trigger variable N_peaks for this configuration is shown in figure 15. This study used neutrons tagged using the offline algorithm, for a data sample obtained using a less restrictive trigger threshold of N_peaks > 14. The muon identification used another offline ID based on topological information and energy deposits.

2019 JINST 14 P11003
A significant fraction of fake triggers are coincident with muon signals, particularly those muons whose incident angle is near the vertical or horizontal, and which therefore deposit a large amount of energy along one particular fibre. Upon inspection of the waveform data, the most likely explanation is that these triggers arise from after-pulsing of the SiPMs due to the extremely large signals.

Simulations of neutrino IBDs and dominant backgrounds suggest that the majority of induced detector signals are contained within a few planes either side of the triggered detector plane. The default number of planes read out is three planes either side of the triggered plane. The neutron capture and thermalisation time follows an exponential distribution, with a mean lifetime of around 60 µs. However, some backgrounds have slower decay times, around 120 µs. In order to study these backgrounds without bias from signal, a large time window of 500 µs before the trigger is read out. Finally, in order to study accidental backgrounds, a window of 200 µs after the trigger is also read out. The large time windows compared to the neutron thermalisation time also allow for isolation selections to be applied, which can be used for background tagging. An example IBD candidate event is shown in figure 16. The corresponding data rate of the neutron trigger is 15 MB/s for a trigger rate of around 40 Hz, and does not change significantly depending on reactor operations.

High energy
High energy signals, such as muons that enter the detector preceding IBD candidates, can be used to discriminate backgrounds from signal. An amplitude threshold trigger, targeted at PVT signals, has been implemented. An X-Y coincidence condition is also imposed to remove triggers caused by SiPM dark counts. The default physics mode threshold is 2 MeV, which gives a trigger rate of 2.1 kHz and data rate of 2 MB/s during reactor-on periods. These rates decrease by around 10% during reactor-off periods.
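The X-Y coincidence logic can be sketched as follows. The channel-numbering convention (horizontal fibres 0-31, vertical fibres 32-63) is an assumption for illustration only; the point is that a single dark-count avalanche lights up only one fibre, so requiring both orientations above threshold suppresses such triggers.

```python
def high_energy_trigger(plane_amplitudes_pa, threshold_pa):
    """Amplitude-threshold trigger with an X-Y coincidence: fires only
    if at least one horizontal-fibre channel AND one vertical-fibre
    channel in the plane exceed the threshold. `plane_amplitudes_pa`
    maps channel index (0-31 horizontal, 32-63 vertical; an assumed
    convention) to amplitude in PA."""
    over = [ch for ch, amp in plane_amplitudes_pa.items()
            if amp > threshold_pa]
    return any(ch < 32 for ch in over) and any(ch >= 32 for ch in over)
```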

Zero-bias trigger
A free running periodic trigger is used to allow continuous unbiased monitoring of the stability of the SiPMs, and any external noise contributions. The default trigger rate is 1.2 Hz, and upon triggering, the entire detector is read out for a time window of 512 samples (non-zero-suppressed), giving a data rate of 3.9 MB/s.
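The quoted zero-bias data rate can be verified from the numbers in the text. The only assumption below is that each 14 bit sample is stored in a 16 bit (2 byte) word.

```python
# Cross-check of the zero-bias data rate: a full-detector readout of
# 512 non-zero-suppressed samples per channel at 1.2 Hz.

TRIGGER_RATE_HZ = 1.2
SAMPLES_PER_CHANNEL = 512
N_CHANNELS = 3200
BYTES_PER_SAMPLE = 2  # 14-bit ADC value packed in a 16-bit word (assumption)

rate_mb_s = TRIGGER_RATE_HZ * SAMPLES_PER_CHANNEL * N_CHANNELS * BYTES_PER_SAMPLE / 1e6
print(round(rate_mb_s, 1))  # 3.9, matching the quoted 3.9 MB/s
```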

Event building and zero suppression
By default, all detector channels in the triggered detector plane are read out for each trigger. This is useful to reconstruct complicated event topologies offline, which include low amplitude signals that are shared between multiple fibres. A history time buffer is present for each channel, allowing data from both before and after the trigger to be read out. The exact width of the time window can be set for each trigger type, with limits depending on the buffer size and waveform compression rate due to zero suppression. The readout space region is set by propagating trigger signals to neighbouring detector planes, where the number of planes read out either side of the triggered plane depends on the trigger type. In cases of overlapping space-time regions due to triggers in close proximity, the readout time window is extended as required for each affected detector plane.
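The extension of readout windows for triggers in close proximity amounts to taking the union of overlapping time intervals. A minimal sketch, with windows given as (start, end) sample pairs (the interval-merge logic is standard; it is not the firmware implementation):

```python
# Sketch of merging overlapping readout time windows when triggers occur
# in close proximity within one detector plane.

def merge_windows(windows):
    """Union of possibly overlapping (start, end) readout windows."""
    merged = []
    for start, end in sorted(windows):
        if merged and start <= merged[-1][1]:
            # Overlap with the previous window: extend it as required.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Two nearby triggers coalesce into one extended readout window.
print(merge_windows([(0, 256), (200, 456), (1000, 1256)]))
# [(0, 456), (1000, 1256)]
```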
All channels are zero-suppressed with a default threshold of 1.5 PA in the absence of a trigger. The threshold can be lowered to 0.5 PA for a short time region before and after a trigger for all channels in the plane, depending on the trigger type. In cases of overlapping triggers, the lowest of the ZS thresholds is applied. This is especially useful for the neutron trigger, since a substantial fraction of the waveform is formed of very low amplitude signals, which can thus be retained for offline PSD. This is demonstrated in figure 13, which shows the regions where the ZS was reduced to 0.5 PA from 1.5 PA.
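The trigger-dependent zero suppression described above can be sketched as follows. The function and window parameter are illustrative assumptions; amplitudes are in PA (pixel avalanches).

```python
# Sketch of zero suppression with a trigger-dependent threshold: samples
# below the locally applicable threshold are dropped, and the threshold
# is lowered near a trigger so low-amplitude ZnS scintillation survives
# for offline PSD. Values are illustrative.

def zero_suppress(samples, trigger_index, window, low_thr=0.5, high_thr=1.5):
    """Keep (index, amplitude) pairs above the locally applicable threshold."""
    kept = []
    for i, amp in enumerate(samples):
        near_trigger = trigger_index is not None and abs(i - trigger_index) <= window
        thr = low_thr if near_trigger else high_thr
        if amp > thr:
            kept.append((i, amp))
    return kept

# A 0.8 PA sample near the trigger survives; the same amplitude far from
# any trigger is suppressed.
print(zero_suppress([0.2, 0.8, 2.0, 0.8], trigger_index=2, window=1))
# [(1, 0.8), (2, 2.0), (3, 0.8)]
```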

Firmware implementation
A block diagram of the per-plane FPGA firmware implementation is shown in figure 18. Data from the ADCs are deserialised and first placed in a FIFO latency buffer, which is non-zero-suppressed (NZS) and can store up to 512 waveform samples. This buffer is used to delay the data while a trigger decision is made. Alternative data sources are possible for debugging, such as a playback mode for pre-recorded data, as well as a signal generator capable of producing positron and neutron like signals. In parallel to filling the NZS buffer, the channel trigger operates for the threshold and neutron triggers, producing trigger primitive quantities on a sample-by-sample basis.
Each detector plane has a trigger sequencer that is responsible for the handling of triggers. In order to save FPGA resources, instead of triggering on a sample-by-sample basis, plane-level trigger decisions are made at a reduced rate based on the channel trigger primitives. A grouping of 256 sequential waveform samples is known as a 'block', and trigger decisions operate on a block-by-block basis. The use of blocking has no effect on the physics, but substantially reduces the complexity and resource requirements of the firmware. A trigger occurs when a channel trigger fires in that block, or if the per-plane random trigger fired. Each trigger type can be configured to send a remote trigger to a pre-set number of detector planes either side of the trigger plane, up to the full detector. Given the long decay time of neutron signals, the neutron trigger often causes two sequential blocks to trigger. In these cases, the readout region is extended in time, and the trigger rates quoted previously are corrected for this double counting effect.
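The block-level decision logic can be sketched by mapping the sample indices at which channel triggers fire onto block indices. This is an illustration of the blocking principle only; the function names are assumptions.

```python
# Sketch of block-based triggering: channel trigger primitives are OR-ed
# within each block of 256 samples, so the plane decides once per block
# rather than once per 25 ns sample.

BLOCK_SIZE = 256  # waveform samples per block (6.4 µs at 40 MHz)

def triggered_blocks(channel_fire_samples, n_samples):
    """Return the sorted block indices containing at least one channel trigger."""
    return sorted({s // BLOCK_SIZE for s in channel_fire_samples if s < n_samples})

# A slow neutron signal spanning a block boundary fires two sequential blocks.
print(triggered_blocks([250, 260, 270], 1024))  # [0, 1]
```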
Data leaving the NZS buffer are zero-suppressed and enter the window buffer. A default ZS threshold is always applied, but the trigger sequencer can change the threshold depending on the trigger type. The duration stored in this buffer depends on its size and the ZS compression rate: a compression factor of 50, due to ZS at 0.5 PA, allows up to 2 ms to be stored. In practice, to avoid buffer overflow, the buffer is limited to 500 µs, which is suitably large for the IBD buffer for each neutron trigger.
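The 2 ms figure can be reproduced with a short calculation. The uncompressed buffer depth below is an assumption chosen to match the quoted numbers; only the 40 MHz sampling rate and the compression factor of 50 come from the text.

```python
# Back-of-envelope check of the window-buffer depth: a compression
# factor of 50 stretches the buffered duration by the same factor.
# RAW_BUFFER_SAMPLES is an assumed uncompressed depth (40 us of data).

SAMPLE_RATE_HZ = 40e6
RAW_BUFFER_SAMPLES = 1600   # assumption: 40 us of uncompressed samples
COMPRESSION_FACTOR = 50     # from ZS at 0.5 PA (quoted in the text)

buffered_s = RAW_BUFFER_SAMPLES * COMPRESSION_FACTOR / SAMPLE_RATE_HZ
print(buffered_s * 1e3)  # 2.0 ms
```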
Each plane has a readout sequencer that determines which data are recorded for each trigger (either local or remote triggers). This sequencer sets the duration of data readout from both before the trigger (i.e. stored in the ZS buffer), and after the trigger (i.e. not yet entered into the ZS buffer), in units of blocks. The smallest amount of data that can be read out is therefore a single block, corresponding to 256 waveform samples (6.4 µs). Blocks are discarded or transferred to a per-channel derandomiser buffer depending on the decision of the readout sequencer. Finally, for each block, data from all channels are concatenated and stored in a data buffer for readout over IPbus by the local DAQ disk server. In cases where the derandomiser or data buffer overflows (e.g. due to a high trigger rate), triggers are halted and the detector plane is in plane dead time. Should the overflow occur during the concatenation of channel data, some channels may be excluded, incurring channel dead time. Once the overflow has been cleared, triggers resume, and the dead time periods are encoded in the data stream. Table 3 gives a summary of the different trigger settings presented. The mean recorded data rate of the experiment is 21 MB/s during physics mode, which is dominated by the neutron trigger, and corresponds to around 1.6 TB/day. A potential future upgrade of the experiment would be the addition of a software-level trigger, which could perform online event filtering and more sophisticated neutron ID techniques. This would increase flexibility during the optimisation of readout space-time regions and zero suppression thresholds, as well as providing further online data reduction.

DAQ performance
The readout system has been stress tested to find the maximum data rate before significant dead time is incurred. This has been performed for each trigger type separately by adjusting the respective trigger thresholds to increase the data flow. The data rate as a function of dead time fraction is shown in figure 17. The default readout options, as presented in the previous section, were used. In all trigger configurations tested, the dead time fractions are negligible below a data rate of 50 MB/s. During default physics mode running, the combined effect of plane and channel dead time results in a data loss of < 3%. At higher data rates, the bottleneck currently arises in the readout software. Further optimisations are being investigated, but the current rates are acceptable given the system requirements.
Table 3. Summary of trigger settings during reactor-on physics data taking. The only value that changes with the reactor power is the threshold trigger rate, which decreases by around 10% for reactor-off. The ZS threshold values are valid in the time window of ±2 blocks around the trigger; otherwise, such as in the IBD buffer, the default ZS threshold of 1.5 PA is applied.

Run control
During physics mode, runs are taken sequentially and typically last 10 minutes in duration. This is set for convenience of offline file transfer: the single file for a run is around 10 GB in size. A hot start option is implemented for a quick restart between runs where no re-configuration is required: a firmware soft-reset is performed, which resets the data pipeline and empties the firmware buffers, and the next run begins. In this configuration, the up-time of the detector is around 95%, with the 5% loss due to run hot starts. The data acquisition system is robust, with no major technical failures since physics running was established. The readout software runs on a disk server located next to the detector, which provides 50 TB of local storage. This is split into two data partitions, which are periodically swapped and cleared. Data are first transferred to the Brussels VUB HEP Tier 2 data centre, and subsequently backed up at two further sites in France and the U.K. GRID tools [9], originally developed for LHC experiments, are utilised for data transfer and offline processing tasks.

Online monitoring
Run control operations are managed via a dedicated Python-based web application, the SoLid Data Quality Monitor (SDQM), which also provides an interface to inspect data quality. A small fraction of each run is processed online using the SoLid reconstruction and analysis program (Saffron2), and the output measurements and distributions can be inspected via the web application. Long term trends of certain measurements, such as SiPM gains and detector dead time, are also stored in an online database, and input to an automated alarm system.
In addition to detector measurements, several in situ environmental sensors are placed in the detector container and its immediate vicinity, and are read out periodically via the SDQM and stored in the online database. Figure 19 shows long term trends of both trigger rates and SiPM measurements. The transition between reactor-on and reactor-off can be seen in the small changes in the threshold trigger rate. Whilst the reactor is on, the standard deviations of the trigger rates for the neutron and threshold triggers, measured over a 1 hr period, are 2% and 1% respectively. The SiPM measurements show small changes over the long term, as well as day-night variations. These changes are correlated with temperature changes of up to 0.5 °C inside the detector container: the small increase in temperature increases the dark count rate of the SiPMs, causing the pedestal to decrease slightly, although the trigger rates remain unaffected.

Conclusion
The SoLid detector readout system was successfully commissioned in late 2017. Custom electronics have been designed, constructed, checked for quality assurance, and deployed at the experiment site. Multiple triggers and data reduction techniques have been implemented at the FPGA level, including a novel PSD algorithm to trigger on neutron-like signals, with a trigger efficiency of around 80% after optimisation. The corresponding data reduction factor is 10⁴, and includes storing SiPM waveforms for offline analysis. Flexible space-time regions can be read out for each trigger type. Of the 3200 detector channels, 99% are operational, with a pedestal rms of 2 ADC counts (6% compared to a gain of 32 ADC/PA). The detector response is highly uniform across all detector channels. Physics data taking was established in early 2018. The total cost of the system is $220k, or $70 per channel. The system is highly stable, with an up-time of 95% during physics running.