Biologically plausible information propagation in a complementary metal-oxide semiconductor integrate-and-fire artificial neuron circuit with memristive synapses

Spike-based neuromorphic circuits are currently envisioned as a viable option to achieve brain-like computation capabilities in specific electronic implementations while limiting power dissipation, given their ability to mimic energy-efficient bioinspired mechanisms. While several network architectures have been developed to embed in hardware the bioinspired learning rules found in the biological brain, such as spike timing-dependent plasticity, it is still unclear whether hardware spiking neural network architectures can handle and transfer information akin to biological networks. In this work, we investigate, from a theoretical perspective, the analogies between an artificial neuron combining memristor synapses with a rate-based learning rule and the response of a biological neuron in terms of information propagation. Bioinspired experiments have been reproduced by linking the biological probability of release with the artificial synapse conductance. Mutual information and surprise have been chosen as metrics to show how, for different values of synaptic weights, an artificial neuron enables the development of a reliable neural network that resembles its biological counterpart in terms of information propagation and analysis.


Introduction
Neuromorphic technologies have been designed to support large-scale spiking neural networks (SNNs) encompassing bioinspired mechanisms. Unlike conventional artificial intelligence systems, these networks base their activity on the transfer of binary units (spikes) through synaptic contacts. The latter, in turn, can undergo persistent changes of their strength upon specific patterns of stimulation. The modifications expressed by synaptic contacts following the induction of long-term plasticity [1,2] can be reliably reproduced by memristors [3][4][5], which can be designed to change their conductance according to their past activity [6]. In this respect, recent advancements in memristive device technology development have brought them closer to full integration in standard complementary metal-oxide semiconductor (CMOS) platforms, which is per se a tough challenge, as these devices must fulfill very stringent requirements for integration with current integrated circuits. Among these requirements are integration densities of up to 1 gigabyte mm⁻², writing voltages <3 V, switching energy <10 pJ, switching time <10 ns, writing endurance >10¹⁰ cycles (or full potentiation/depression cycles), dynamic range >10, and low conductance fluctuations over time if no bias is applied (<10% for >10 years) [7]. Notably, some memristive devices have fulfilled such stringent criteria [7,8], but they still exhibit high manufacturing costs despite the simplicity of individual memristive cells, due to the need for additional elements (series transistor, selector, or resistor) and for specific beyond-CMOS back-end-of-line interconnects. Still, these devices are gradually triggering the interest of the semiconductor industry and are currently considered front-runners in the race to realize a CMOS-compatible, cost-effective synaptic element for hardware SNNs.
Interestingly, several network architectures have been developed to embed in hardware the bioinspired learning rules that are required to exploit SNN functionalities, such as spike timing-dependent plasticity, rate-based plasticity, and the Bienenstock-Cooper-Munro learning rule [6]. Nevertheless, while a significant amount of work has been published in this domain, it is still unclear whether currently proposed neuromorphic hardware architectures for SNNs can handle and transfer information in a way that resembles what happens in the corresponding biological networks. In a neuronal microcircuit, in fact, information is exchanged between neurons in the form of input spike series that are conveyed as output temporal spike series. Therefore, the amount of transferred information can be estimated by looking at the input-output relationship, which is computed by analyzing the stimulus patterns and the neural responses. This dependency can be formalized in several ways, such as tuning [9,10], gain [11], and selectivity curves [12,13], and these approaches provide quantitative estimates of the information content independently of the neural code. The language employed by neurons to communicate can be cracked by adopting parameters from communication and information theory [14]. Among these, mutual information (MI) has already been adapted to neuroscience to estimate the information transmitted by circuits [15], neurons [16], or single synapses [17] without specific knowledge of neural code semantics. MI is directly derived from the response and noise entropies [18], which are correlated to the variability of responses to different inputs or to the same input. Given this premise, the calculation of MI allows us to evaluate the capacity of a neuronal system to separate different inputs, and thus to transmit information [19,20].
For this reason, MI has been used consistently in neuroscience to show the modalities of information propagation in biological neural networks. On the other hand, much of the effort in neuromorphic electronics has been devoted to the design, development, and implementation of artificial CMOS [21] or memristive [22] neurons and either CMOS or memristive synapses [23][24][25] in circuits that embed specific learning rules and electrophysiological properties, paying less or no attention to the overall performance of the system under investigation from the standpoint of information transmission. In fact, understanding whether the currently proposed artificial SNNs can at least qualitatively replicate the extreme efficiency with which biological networks handle and transfer information represents an important step toward the development of brain-inspired and ultra-low-power artificial processing systems.
In this work, we provide for the first time an in-depth analysis of information transmission in an SNN that encompasses CMOS leaky integrate-and-fire (LIF) neurons and memristive synapses by focusing on how MI is transmitted through the network. Specifically, we focus on a simplified network, with the neuron circuit mimicking a cerebellar granule cell (GC), found to be the optimal benchmark to calculate MI [26]. In fact, in the case of the cerebellar GC, the input-output combination is particularly convenient, given the small number of dendrites (four) and the limited number of inputs received, compared to the thousands of contacts received, for instance, by cortical and hippocampal pyramidal neurons. In addition, besides having few dendrites, GCs respond, when activated, with a limited number of spikes (typically two or fewer [27]) confined in a narrow time window regulated by synaptic inhibition [28]. This peculiarity reduces the complexity of the calculations and suggests the use of this microcircuit as a model to investigate changes in the transmission properties induced by internal or external agents. We compare not only how MI changes with specific stimuli and input patterns, but also how it evolves with changes induced by altering the synaptic strength (i.e. upon learning), finding a striking qualitative resemblance with the results found experimentally in biological networks [29,30] and in simulations with biologically realistic neurons [26]. This paper is organized as follows. In section 2, we illustrate the methods used to compute specific quantities related to information propagation. In section 3, we report the details of the synaptic devices used in this study, clarifying how the latter were characterized and modeled; the details of the electronic neuron model are given as well. In section 4, we provide the details of the proposed artificial network and of the analogies and differences with respect to its biological counterpart.
In section 5, results are reported and discussed. Conclusions follow.

Methods
Information theory has been extensively used in neuroscience to estimate the amount of information transmitted within neuronal circuits [14,16,20,26], where a set of input stimuli can be correlated with output responses to estimate the information conveyed by neurons. The level of correlation primarily depends on the input variability, which, in turn, grows with the number of afferent fibers. In the central nervous system, there is large variability in the number of input synapses a neuron can receive: from a few units (cerebellar GCs [29]) to hundreds of thousands (200 000 in cerebellar Purkinje cells [30]). In terms of information transfer, only neurons with limited fan-in connections can be efficiently analyzed, avoiding the explosion of combinations with the input space size. Following our recent work [31], we have simulated an artificial architecture composed of individual neurons with only four synaptic inputs, and the level of correlation was estimated by first dividing the neuronal responses into temporal bins, which were digitized according to the presence of a spike. This discretization allowed us to convert a spike train into a binary word of length N = T/∆t (where ∆t is the temporal bin and T is the spike train duration) containing only digital labels ('0' means no spike and '1' means spike). Neurons respond to input stimuli with a variety of binary words, generating a neuronal vocabulary that can be explored by varying the input stimuli. The larger the vocabulary, the richer the conveyed information. However, efficient communication requires a correlation between input stimuli and output words. In information theory, two factors determine the amount of information a neuron conveys about its inputs, namely the response entropy (i.e. the neuronal vocabulary size) and the noise entropy (i.e. the reliability of responses when stimuli are given).
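As a concrete illustration of the digitization step above, the sketch below converts a spike train into a binary word of length N = T/∆t (the function name is ours, and the default bin values are chosen to match the 50 ms input bins used later in section 4):

```python
import numpy as np

def digitize_spike_train(spike_times, T=0.2, dt=0.05):
    """Convert spike times (in seconds) into a binary word of length N = T/dt:
    '1' if at least one spike falls inside a bin, '0' otherwise."""
    n_bins = int(round(T / dt))
    word = np.zeros(n_bins, dtype=int)
    for t in spike_times:
        if 0 <= t < T:
            word[int(t // dt)] = 1
    return "".join(map(str, word))

# Example: spikes at 30 ms and 120 ms with 50 ms bins over 200 ms -> '1010'
print(digitize_spike_train([0.03, 0.12]))
```

With four bins, each response is thus reduced to one of 2⁴ = 16 possible binary words, which is the vocabulary explored in the analysis.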
The quantity that considers these two factors simultaneously by subtracting the noise entropy from the response entropy is Shannon MI, which is measured in bits and can be calculated through the following equation:

MI = \sum_{s} p(s) \sum_{r} p(r|s) \log_2 \frac{p(r|s)}{p(r)},   (1)

where r and s are the response and the stimulus pattern, respectively; p(r) and p(s) are the probabilities that r and s occur within a single acquisition. Finally, p(r|s) is the probability of obtaining the response pattern r given the stimulus pattern s. MI is intrinsically an average property of all inputs, and it can be interesting to decompose MI into a single-stimulus contribution (stimulus-specific surprise (SSS)) or even a single-spike contribution (surprise per spike (SpS)). These two quantities can be computed as:

SSS(s) = \sum_{r} p(r|s) \log_2 \frac{p(r|s)}{p(r)},   (2)

SpS(s) = \frac{SSS(s)}{\langle N_{spikes}(s) \rangle},   (3)

where \langle N_{spikes}(s) \rangle is the mean number of spikes elicited by stimulus s. Experimental issues associated with high data dimensionality limit the estimation of all the probabilities in the MI formula. Estimating the conditional entropy requires determining the response probability given any input stimulus. If the neural response shows sufficiently low variability, the response probability can be assessed with a tractable amount of data [26,31].
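Given estimated probability tables, equations (1)-(3) can be evaluated directly; the sketch below (illustrative function and array layout, not the code used in this work) shows one way to do it:

```python
import numpy as np

def mi_sss_sps(p_s, p_r_given_s, n_spikes):
    """p_s: (S,) stimulus priors; p_r_given_s: (S, R) conditional response
    probabilities; n_spikes: (R,) spike count of each response word.
    Returns MI in bits (equation (1)), SSS per stimulus (equation (2)),
    and SpS per stimulus (equation (3))."""
    p_s = np.asarray(p_s, float)
    p_rs = np.asarray(p_r_given_s, float)
    p_r = p_s @ p_rs                                   # marginal p(r)
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.where(p_rs > 0, np.log2(p_rs / p_r), 0.0)
    sss = (p_rs * log_ratio).sum(axis=1)               # equation (2)
    mi = float(p_s @ sss)                              # equation (1)
    mean_spikes = p_rs @ np.asarray(n_spikes, float)   # <N_spikes> per stimulus
    sps = np.where(mean_spikes > 0, sss / mean_spikes, 0.0)  # equation (3)
    return mi, sss, sps

# Two equiprobable stimuli, two responses ('no spike' and 'one spike'),
# each stimulus answered correctly 90% of the time:
mi, sss, sps = mi_sss_sps([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [0, 1])
```

In the limiting case where each stimulus maps deterministically to its own response, the noise entropy vanishes and MI reaches the full response entropy (1 bit for two equiprobable stimuli).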

Synaptic devices and experiments
According to the configuration adopted for biological experiments [31] and for simulations with biologically realistic neurons [26], we investigated how information is propagated in a cerebellar GC-like artificial CMOS neuron with four memristor-based synaptic inputs. In this respect, we ran circuit simulations using the Cadence Virtuoso software, in which the response of the artificial CMOS neuron was abstracted by using a Verilog-A behavioral description of its constituent building blocks (as specified in section 3.2), while the characteristics of the memristive elements (i.e. the artificial synapses) were carefully reproduced by means of a compact model developed internally, i.e. the UNIMORE resistive random access memory (RRAM) compact model [32]. The latter is a physics-based compact model supported by the results of advanced multiscale simulations [33] that has been shown to reproduce both the quasi-static and dynamic behavior of different memristor technologies with a single set of parameters [34], and it accounts for the intrinsic device stochastic response, thermal effects, and random telegraph noise [35]. Specifically, the memristive elements adopted in this study are commercially available C-doped self-directed channel (SDC) memristors by Knowm [36], available in a dual in-line package. These devices were chosen because, to the best of the authors' knowledge, they are the only commercially available packaged RRAM devices to date. This choice allows us to show that MI propagates through an SNN with CMOS LIF neurons and memristive synapses akin to what happens in biological networks, and that such behavior can be achieved with available commercial-grade RRAM devices, requiring no specific advancements in technology development.
As shown in figure 1(a), the SDC memristor consists of a stack composed of W/Ge₂Se₃/Ag/Ge₂Se₃/SnSe/Ge₂Se₃:C/W, where Ge₂Se₃:C is the active layer [36]. During fabrication, the three layers below the top electrode are mixed and form the Ag source [36]. The SnSe layer acts as a barrier to avoid Ag saturation in the active layer and is responsible for the production of Sn ions and their migration into the active layer during the initial operation of the device (typically referred to as 'forming'), which promotes Ag agglomeration at specific sites [36]. The details of the mechanism at the basis of the resistive switching in these devices are available in [36]. To fully capture the behavior of these devices in circuit simulations, we carefully calibrated the parameters of the UNIMORE RRAM compact model against experimental data, as elucidated in figure 1. The electrical measurements were performed using a Keithley 4200-SCS. To analyze and then model the behavior of the memristors, we performed a sequence of quasi-static I-V measurements by applying voltage sweeps from −0.8 to 0.4 V with a current compliance of 10 µA enforced by the Keithley 4200-SCS. These measurements drive the device to a low resistive state (LRS) with a SET operation (V > 0) and to a high resistive state (HRS) with an ensuing RESET operation (V < 0). Results are shown in figure 1(b) (red traces) and reveal that the RESET curves are characterized by an abrupt transition from the LRS to the HRS and a strong cycle-to-cycle variability of the switching voltage, while the SET operation is associated with a more predictable and gradual transition from HRS to LRS. Then, to experimentally evaluate the synaptic functionality of the memristors (i.e. the capability to respond to spike-like voltage stimuli rather than to quasi-static voltage sweeps), we designed a suitable pulsed voltage sequence (figure 1(c)), which gradually drives the device toward higher or lower resistance (or, equivalently, conductance) states. In this experiment, a 10 kΩ resistor was connected in series with the device to prevent accidental current overshoots, because the Keithley 4200-SCS does not support the enforcement of current compliance when performing pulsed tests. (The series resistor can be removed in the actual circuit implementation.) The device was initially driven into the LRS by means of 20 rectangular pulses (V = 0.6 V; T = 100 µs, initial set). Then, long-term depression (LTD) and long-term potentiation (LTP) were obtained by applying trains of 20 depression pulses (V = −0.2 V; T = 10 µs) followed by 20 potentiation pulses (V = 0.55 V; T = 30 µs). To evaluate the transition smoothness, each potentiation or depression pulse is followed by a small reading pulse (V_READ = 50 mV; T_READ = 50 µs) that is used to retrieve the evolution of the resistance during LTD and LTP. Figure 1(c) reports the resistance evolution for 15 identical depression-potentiation cycles, revealing that smooth and reproducible analog synaptic behavior is achievable with these devices.
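The depression-potentiation cycling described above can be sketched with a simple soft-bound conductance update. This is a phenomenological illustration with made-up parameter values (G_MIN, G_MAX, ALPHA are ours), not the UNIMORE compact model used in the simulations:

```python
# Phenomenological sketch: each potentiation pulse moves the conductance toward
# G_MAX and each depression pulse toward G_MIN, with a step that shrinks near
# the bounds (soft-bound update). Parameter values are illustrative only.
G_MIN, G_MAX, ALPHA = 2e-6, 15e-6, 0.15  # siemens; hypothetical values

def apply_pulse(g, potentiate):
    """Return the conductance after one potentiation or depression pulse."""
    if potentiate:
        return g + ALPHA * (G_MAX - g)
    return g - ALPHA * (g - G_MIN)

g = G_MAX          # device initially driven into the LRS
trace = []
for _ in range(15):            # 15 depression-potentiation cycles
    for _ in range(20):        # 20 depression pulses
        g = apply_pulse(g, potentiate=False)
        trace.append(g)
    for _ in range(20):        # 20 potentiation pulses
        g = apply_pulse(g, potentiate=True)
        trace.append(g)
```

The soft-bound form reproduces the qualitative shape of figure 1(c): gradual, saturating transitions that repeat identically over the cycles.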
Although SDC memristors are ion-conducting devices that change their resistance due to the movement of Ag+ ions into the device structure [36], their behavior is well replicated (figures 1(b) and (d), black traces) by the modulation of an equivalent conducting filament (CF) barrier (figure 1(e)) [32][33][34][35], which is the typical behavior of filamentary memristive devices. The barrier thickness (x in figure 1(e)) is in fact directly correlated to the memristor conductance, which represents the synaptic strength. Further details of the compact model and the extracted parameters for this technology are reported in [6].

Figure 2(a) shows the model of the LIF neuron [6], designed and simulated in this work, that supports a rate-dependent plasticity rule on the synaptic memristive devices. In this neuron model, the input terminal (see neuron input in figure 2(a)) is kept at virtual ground, and input spikes from presynaptic neurons are integrated on a capacitor. When the voltage across the capacitor passes a threshold, an output spike is generated at the neuron output, and after a predefined delay (i.e. T spike delay in figure 2(a)), the capacitor is discharged to reset the system to its initial state. The rate at which the capacitor charges depends both on the input spike rate and on the presynaptic strength. Because the neuron is leaky, in the absence of input spikes the capacitor discharges with an appropriate time constant. This feature mirrors the leaky membrane of biological neurons, which drives the membrane potential back toward rest when no spikes are fed at the inputs. The effect of the time constant value in the LIF model was already evaluated in [37], in which it is clearly shown that the presence of leakage provides enhanced noise robustness in SNNs. The synaptic plasticity mechanism implemented in the neuron model is purely rate based and depends only on the rate of the presynaptic stimulation.
Thus, a high (low) rate of presynaptic stimulation leads to the potentiation (depression) of the associated synapse. In the adopted neuron model, this learning rule is implemented by appropriately designing the shape of the spike (figure 2(c)), so that each presynaptic spike results in a small potentiation of the associated synaptic memristor, and by introducing a back spike, which results in a small synaptic depression (figure 2(c)). It is worth noting that this back spike serves only to implement the rate-based learning rule: it is not related to the firing activity of the postsynaptic neuron and does not appear at its output.
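The integrate, fire, and reset behavior described above can be sketched in a few lines. This discrete-time model with illustrative parameters (tau, v_th, dt are ours) is only a conceptual analogue of the Verilog-A neuron, not its implementation:

```python
def lif_response(spike_times, weights, tau=0.02, v_th=1.0, dt=1e-4, t_end=0.2):
    """Leaky integrate-and-fire sketch. spike_times is a list of per-input
    spike-time lists; each presynaptic spike injects a charge proportional to
    its synaptic weight (memristor conductance). The membrane leaks with time
    constant tau and is reset after crossing v_th. Returns output spike times."""
    # flatten (time, weight) events across the inputs and sort by time
    events = sorted((t, weights[i])
                    for i, times in enumerate(spike_times) for t in times)
    v, out, k, t = 0.0, [], 0, 0.0
    while t < t_end:
        v *= (1 - dt / tau)                    # leak toward rest
        while k < len(events) and events[k][0] <= t:
            v += events[k][1]                  # integrate weighted input spike
            k += 1
        if v >= v_th:
            out.append(t)                      # fire...
            v = 0.0                            # ...and reset
        t += dt
    return out

# Two coincident strong inputs cross the threshold; one weak input does not.
spikes = lif_response([[0.01], [0.01], [], []], [0.6, 0.6, 0.0, 0.0])
```

The same structure shows why coincident inputs on potentiated synapses fire the neuron while isolated weak inputs leak away without producing an output.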

Neuron model and simulations
When firing the back spike from its input terminal, the postsynaptic neuron also outputs two complementary control signals (Dep and its complement) that are connected to the gates of two MOSFET devices (see figures 2(a) and (c)); these devices disconnect the synapses from their presynaptic neurons and connect their bottom electrodes to ground (figure 2(c)).
When the neuron fires the back spike, the propagation of information from the presynaptic neurons is therefore temporarily disabled. The occurrence of simultaneous presynaptic spikes and back spikes is minimized by modeling the time interval between the back spikes as a random variable following a Poisson distribution (λ = 1 s was used in the simulations) and by designing a back spike with a short duration. Because the back spikes do not propagate any information, their shape can be designed with some degree of flexibility and can be adjusted as required. The back spike used in the simulation is shown in figure 2(c) and has a pulse width of 300 µs, which is much shorter than the presynaptic spike. The mean time interval between successive back spikes determines the characteristics of the implemented rate-based learning rule. As shown in figure 2(b), a stimulation rate ν₀ exists at which potentiation and depression effects balance out, leading to no average change in synaptic strength. Presynaptic stimulation rates higher than ν₀ result in a net synapse potentiation, while lower stimulation rates result in a net depression, as shown in figure 2(b). Although the spikes used in this work were designed to provide a system response on a timescale similar to that of biological neurons and to be compatible with the employed memristor technology, it is worth noting that the pulse shape and the parameters of the neuron circuit (e.g. threshold, integrator time constant) can be scaled appropriately to adapt the system response to possible application requirements, highlighting the flexibility of the electronic implementation of the bioinspired neural network.
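The existence of the balance rate can be illustrated with a linear-rate sketch: if each presynaptic spike potentiates the weight by a fixed increment and each back spike depresses it by another, the average drift cancels at one stimulation rate. The increments and the back-spike rate below are hypothetical, not the values used in the circuit:

```python
# Hedged sketch of the rate-based rule: each presynaptic spike potentiates the
# weight by dw_plus; each back spike (mean rate nu_back) depresses it by
# dw_minus. All three values are illustrative only.
dw_plus, dw_minus, nu_back = 0.002, 0.010, 1.0

def mean_drift(nu_pre):
    """Average weight change per second at presynaptic rate nu_pre (Hz)."""
    return dw_plus * nu_pre - dw_minus * nu_back

# Balance rate: potentiation and depression cancel on average.
nu_0 = dw_minus * nu_back / dw_plus   # 5 Hz with the values above
```

Rates above nu_0 give a positive drift (net potentiation), rates below give a negative drift (net depression), matching the trend in figure 2(b).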

Simulated network
To understand whether the artificial SNN features information transmission properties akin to those found in biological systems, we implemented a circuit that resembles the GC morphology, shown schematically in figure 3(a), as well as the spike digitization principle used in [26,31]. GCs are typically studied because they constitute more than half of the neurons in the brain and, in particular, they present an exceptionally low number of synapses (four on average) [38,39], which makes for a lean electronic implementation. In neuroscience experiments, stimulation is typically performed by applying spike trains through the mossy fibers (MFs) [40], and the experiment time is divided into temporal bins, where the presence of a spike is coded as logic 1 (0 otherwise). Figure 3(b) reports the schematic of the implemented artificial neuron, with the related memristor-based synapses emulating the structure on the left. Each synapse may receive up to four spikes over time, i.e. the experiment time during which the inputs are delivered to the network is composed of four temporal bins. In [26,31], time bins of 10 and 6 ms were used to stimulate the input and digitize the output train, respectively. Due to the technological constraints of our memristors and to design choices, spikes with a longer duration are needed, which leads us to define time bins of 50 and 10 ms for input and output, respectively, with no loss of generality. As in [31,36], we used four time bins on four inputs for the stimulation (as in figure 3(b)), giving 2^(N_bin·N_input) = 2^(4·4) = 65 536 possible input combinations and a total stimulation period of 50 ms × 4 = 200 ms.
Spike stimulations (10 ms long) are applied at a random time (jitter) within each time bin (50 ms long), consistently with the idea of digitizing random input spike trains and with the need to introduce into an otherwise deterministic artificial network the stochastic features observed in the biological counterpart. Specifically, in vitro experiments on GCs consist of applying specific (coded) stimuli through the neuron's MFs and repeating the experiments several times for each stimulus to sample the intrinsic neuron variability, which leads to a stochastic output. To reproduce the same stochastic response using a deterministic neuron model, in circuit simulations each stimulus is delivered with a Poisson-distributed delay (jitter) inside each time bin, thereby mimicking stimuli provided by four presynaptic neurons.
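The jittered delivery can be sketched as follows. For simplicity the offset is drawn uniformly inside the bin (the simulations used a Poisson-distributed delay), and the function name and seed handling are our own:

```python
import random

T_BIN_IN = 0.050   # 50 ms input time bin
T_SPIKE = 0.010    # 10 ms spike duration

def jittered_spike_times(word, seed=None):
    """Map a 4-bit input word (e.g. '1010') to spike onset times: one spike
    per '1' bin, placed at a random offset inside its 50 ms bin so that the
    10 ms spike still fits entirely within the bin."""
    rng = random.Random(seed)
    times = []
    for i, bit in enumerate(word):
        if bit == "1":
            times.append(i * T_BIN_IN + rng.uniform(0, T_BIN_IN - T_SPIKE))
    return times
```

Repeating the same word with different seeds yields different spike placements, which is exactly the trial-to-trial variability the jitter is meant to emulate.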
In this study, we aimed to quantify how theoretical quantities related to information propagation through the network are affected by learning (i.e. plasticity of the synaptic weights), to confirm the analogies between the biological and artificial frameworks. To do so, it is imperative to understand how synaptic strength is represented in the two frameworks. Although in biological experiments the synaptic efficacy (weight) is measured in terms of the release probability p, a stochastic parameter quantifying the synaptic strength [26], in memristor-based neuromorphic networks the weight is represented by the memristor conductance. In biological synapses, the efficacy is increased (decreased) by applying LTP (LTD) theta burst trains, which, in our approach, results in an increased (decreased) memristor conductance. Therefore, in this study, we simulated the system for different values of memristor conductance, and we looked at how MI, SSS, and SpS are affected by synaptic plasticity. To keep consistency with what was observed in [26], in which the release probability was equal for all four MF synapses at the GC (i.e. any permutation of the four inputs was equivalent), we reduced the number of different stimuli from 65 536 (all possible input combinations) to 3876 by treating the four synaptic inputs as equivalent. Also, to repeat simulations for different values of memristor conductance while following bioinspired protocols, the conductance values used in the simulations (and related to p) were increased between consecutive simulations by applying specifically designed LTP theta bursts (as depicted in figure 4(a)) and consequently updating the synaptic weights based on the conductance variations (figure 4(b)). For each synaptic weight, the 3876 stimuli are then delivered 15 times through the inputs and ranked on the basis of their relative SpS.
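The stimulus counts quoted above can be verified directly: 16 possible 4-bit words per input give 16⁴ = 65 536 ordered combinations, and treating the four inputs as interchangeable reduces them to the multisets of four words, C(16+4−1, 4) = 3876:

```python
from itertools import combinations_with_replacement, product

# All 4-bit words one input line can carry over the four input time bins:
words = ["".join(bits) for bits in product("01", repeat=4)]   # 16 words

# Ordered combinations over the four inputs:
assert len(words) ** 4 == 65536

# With four equivalent synapses, input permutations are indistinguishable,
# so the distinct stimuli are the multisets of 4 words: C(19, 4) = 3876.
distinct_stimuli = list(combinations_with_replacement(words, 4))
print(len(distinct_stimuli))   # 3876
```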
As the goal of this study was to understand how information propagates through an artificial network when synaptic strength values are in a fixed configuration, we excluded possible influence of the synaptic weight variations induced by the input transmission by disabling the plasticity mechanism in the Verilog-A model of the memristor during the application of input spike trains, leaving it enabled only during the application of the theta bursts in figure 4. This implies that the results concerning the information transfer analysis, reported in the following section, can be considered valid regardless of the specific device employed to represent the synaptic weights. Naturally, such a device needs to show potentiation and depression capabilities to fulfill the role of a synaptic element in an SNN. Nevertheless, information propagation through the network will show the features discussed in the following regardless of the technological specifications and of the peculiar learning features of the synaptic device.

Results and discussion
It is now possible to look at the effect of synaptic plasticity on a spiking neuron with memristive synapses by analyzing the key quantities related to information transfer, such as entropy, MI, and surprise when stimulating the network, as described in section 4.

Information transfer analysis
Shannon MI provides a mathematical framework to quantify the amount of information transmitted by a neuron during neural stimulation. Because our aim was to investigate whether the envisaged neuromorphic architecture could reliably reproduce neuronal performance, we explored the dependence of MI on synaptic efficacy (i.e. memristor conductance). In analogy with [26,31], the reduced number of inputs allowed us to calculate MI: as explained in section 2 and shown in figure 3, we first digitized the spike trains, and then a controlled set of stimuli S was chosen. Second, responses r were detected when stimuli with known a priori probabilities p(s) were repeatedly presented. Once all the data were collected, the corresponding conditional probabilities p(r|s) and the probability distribution of responses averaged over the stimuli, p(r), were estimated. MI was computed with equation (1), and because in biological systems its value has been shown to change with variations of the release probability [26], we investigated the relationship between MI and the memristor conductance, which in our assumption is the equivalent of the synaptic efficacy. Figure 5 shows the correlation between the calculated MI and the memristor conductance values. The overall information transfer is enhanced upon an increase in synaptic efficacy (memristor conductance), in accordance with expectations. Furthermore, upon visual inspection, the dependence of MI on the memristor conductance revealed a good correlation with the corresponding p-MI curve (p: release probability) obtained with both real biological and simulated neurons (see figures 2(b) and 3(b) in [26]). These results, besides demonstrating the validity of this approach in mimicking neuronal information transfer, also indirectly support the close relationship between the release probability p and the memristive conductance. We were also interested in identifying the stimuli that were best encoded by the electronic neuron.
The stimulus-specific contribution to the MI (SSS; equation (2)) and the SpS (equation (3)) were therefore computed, allowing us to identify the most informative set of stimuli (figure 6). Specifically, for a given value of synaptic strength, the network was stimulated with the 3876 inputs, which were then ranked by descending SpS (black curves in figure 6). The same procedure was replicated for different memristor conductance values (2.6, 6.2, and 13.6 µS), and the results are reported in figures 6(b) and (c). We initially focused on the results obtained with the lowest memristor conductance value. As shown in figure 6(c), we identified the stimuli with the highest and lowest SpS, respectively (i.e. the blue and red markers on the bottom black curve in figure 6(c)). We then tracked how these specific stimuli changed their ranking when the simulations were repeated after synaptic potentiation (achieved by means of theta bursts, as explained in section 4). Both markers moved in qualitative agreement with what was reported in [26], which is also shown in figure 6(a) for convenience. Although the stimuli with the highest and lowest SpS at the lowest memristor conductance did not coincide with those found in [26] at the lowest p value, the qualitative trend was the same. In addition, we verified that the same trend is obtained when tracking exactly those stimuli in our simulations, as reported in figure 6(b). Both cases confirmed the expected trends, revealing the dependability of an artificial memristor-based neuro-synaptic circuit in quantifying the information content of a specific spike train given a determined network strength.
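The SpS-based ranking used to build the black curves in figure 6 amounts to a descending sort over the per-stimulus surprise values; a minimal sketch (illustrative function name):

```python
import numpy as np

def rank_by_sps(sps):
    """Return stimulus indices sorted by descending surprise per spike;
    a stable sort keeps ties in their original order."""
    sps = np.asarray(sps, float)
    return np.argsort(-sps, kind="stable")

# order[0] is the most informative stimulus, order[-1] the least
order = rank_by_sps([0.2, 1.5, 0.7])
```

Tracking a marker across conductance values then reduces to locating the same stimulus index in the ranking recomputed after each potentiation step.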

Discussion
The results shown thus far confirm the similarities between a neuromorphic microcircuit, composed of a neuron with a limited number of synapses endowed with a rate-based learning rule, and its biological counterpart. Bioinspired experiments have been reproduced by assuming a dependency between the release probability and the conductance of an electronic synapse. Three parameters, namely MI, SSS, and SpS, have been computed for different synaptic weights, with the aim of analyzing the ability of the implemented neuron to retrieve and quantify the information content of a specific stimulus based on the synaptic strength and input sparseness. Despite the differences between the neuromorphic and the biological neuron, such as (a) the noise/variability level, (b) the stochastic mechanisms underlying the opening of ion channels, and (c) the stochastic processes involved in neurotransmitter release, these results demonstrate, from an information transmission perspective, that artificial neurons can be adopted as elements performing complex computational tasks such as those performed by biological neurons. This proof of principle is a first milestone in the development of advanced neuronal networks with performance compatible with brain circuits, given their capability to compute sparse and temporally uncorrelated information. Furthermore, differently from conventional hardware, neuromorphic electronic circuits [41] can be designed to operate with limited power consumption in multiple time domains according to the circuit architecture. These advantages deriving from an electronic implementation of biologically plausible SNNs (e.g. multiple timescales and reduced area and power consumption) prove remarkably useful for different applications.

Conclusions
In this work, we investigated, from a theoretical perspective, the analogies between an artificial neuron combining memristor synapses with a rate-based learning rule and the response of a biological neuron in terms of information propagation. Bioinspired experiments have been reproduced by linking the biological release probability p with the artificial synapse conductance. MI, SSS, and SpS have been computed for different synaptic weights, with the aim of analyzing the ability of the implemented neuron to retrieve and quantify the information content of a specific stimulus based on the synaptic strength. The results highlight that an artificial neuron enables the development of a reliable neural network that resembles its biological counterpart in terms of information analysis. The advantages deriving from an electronic implementation (e.g. timescale and area) provide a remarkably useful tool for different applications.

Data availability statement
The data cannot be made publicly available upon publication because they are not available in a format that is sufficiently accessible or reusable by other researchers. The data that support the findings of this study are available upon reasonable request from the authors.