
Qualitative switches in single-neuron spike dynamics on neuromorphic hardware: implementation, impact on network synchronization and relevance for plasticity


Published 6 March 2024 © 2024 The Author(s). Published by IOP Publishing Ltd
Focus Issue on Spike-Based Plasticity. Citation: Liz Weerdmeester et al 2024 Neuromorph. Comput. Eng. 4 014009. DOI: 10.1088/2634-4386/ad2afc


Abstract

Most efforts on spike-based learning on neuromorphic hardware focus on synaptic plasticity and do not yet exploit the potential of altering the spike-generating dynamics themselves. Biological neurons show distinct mechanisms of spike generation, which affect single-neuron and network computations. Such a variety of spiking mechanisms can only be mimicked on chips with more advanced, nonlinear single-neuron dynamics than the commonly implemented leaky integrate-and-fire neurons. Here, we demonstrate that neurons on the BrainScaleS-2 chip configured for exponential leaky integrate-and-fire dynamics can be tuned to undergo a qualitative switch in spike generation via a modulation of the reset voltage. This switch is accompanied by altered synchronization properties of neurons in a network and thereby captures a main characteristic of the unfolding of the saddle-node loop bifurcation—a qualitative transition that was recently demonstrated in biological neurons. Using this switch, cell-intrinsic properties alone provide a means to control whether small networks of all-to-all coupled neurons on the chip exhibit synchronized firing or splayed-out spiking patterns. We use an example from a central pattern generating circuit in the fruit fly to show that such dynamics can be induced and controlled on the chip. Our study thereby demonstrates the potential of neuromorphic chips with relatively complex and tunable single-neuron dynamics, such as the BrainScaleS-2 chip, to generate computationally distinct single-unit dynamics. We conclude with a discussion of the utility of versatile spike-generating mechanisms on neuromorphic chips.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

Glossary

ADC: analog-to-digital converter
AdEx: adaptive exponential leaky integrate-and-fire
CPG: central pattern generator
CV: coefficient of variation
EIF: exponential leaky integrate-and-fire
HOM: homoclinic
HOPF: Andronov–Hopf
ISI: interspike interval
LIF: leaky integrate-and-fire
MN: motoneuron
PRC: phase response curve
QIF: quadratic integrate-and-fire
SNIC: saddle-node on an invariant cycle
SNL: saddle-node loop
SNN: spiking neural net
SNR: signal-to-noise ratio
STDP: spike-timing-dependent plasticity
WTA: winner-take-all

1. Introduction

The design of neuromorphic chips incorporates two main observations from biological brains: neurons as analog spiking units and synapses as plastic connections between them. Currently, most on-chip learning paradigms focus on optimizing synaptic weights while keeping the dynamics of individual neurons simple and constant. In this approach, individual units are often described by highly reduced models of spiking neurons, such as the LIF neuron.

This contrasts with biological networks, where intrinsic neuron properties are diverse and plastic. The spiking dynamics of biological neurons are a function of intrinsic neuron properties such as ion channel composition and morphology, and vary within as well as between cell types [1, 2]. Additionally, spiking dynamics are subject to fluctuations in environmental parameters such as temperature [3], energy availability [4] and extracellular ion concentrations [5]. Moreover, spiking dynamics can be actively regulated through altered expression levels of ion channels [6]. Neuron-intrinsic parameters have also been shown to adapt in common biological learning paradigms [7]. Correspondingly, first studies of in silico SNNs have shown that such heterogeneous neuron-intrinsic properties, and the plasticity thereof, improve learning, especially for tasks with a rich temporal structure [8, 9].

A hallmark of parameter variations in nonlinear dynamical systems is that system properties do not always change smoothly, but can undergo critical qualitative transitions. This has been shown in biological neurons [3, 10, 11], but is also expected in neuromorphic hardware with nonlinear single-neuron dynamics. Here, we show that neuron-intrinsic parameter variation on nonlinear neuromorphic hardware has a significant effect on single-neuron computation and thereby on network dynamics.

To demonstrate this, we draw inspiration from a small CPG network in the Drosophila wing motor system. In this system, neuron-intrinsic properties have recently been shown to be crucial to the network synchronization state that allows the animal to fly stably [12]. Specifically, the neurons in this network innervate the muscle that is responsible for the wing down-stroke and thus control the frequency of the wingbeat. The intuition is as follows: If firing of the motoneurons is splayed out in time, i.e. each neuron fires regularly but at times distinct from the firing of all other neurons, their impulses arrive at the muscle one after the other and the wingbeat frequency is stable (figure 1, right). If the network of motoneurons synchronizes, there are periods during which many impulses reach the muscle and periods where no impulses reach the muscle. These periods produce high and low wingbeat frequencies respectively, and thus result in less stable flight (figure 1, left). Recently, it has been shown that the mechanism that generates the splayed-out state for stable flight relies on a particular spike-generating dynamics [12]. When the properties of the cell membrane were genetically manipulated to change the spiking dynamics, in-phase synchrony (i.e. close to simultaneous firing of several neurons) was increased in the network.


Figure 1. Controlling network synchronisation by neuron-intrinsic properties alone. Schematics show how networks with identical synaptic all-to-all coupling (top) but distinct neuron-intrinsic parameter sets A (blue) and B (green) can display synchronized (middle, blue spikes) and splayed-out firing patterns (middle, green spikes). Phase synchrony of network spikes has behavioural relevance in the Drosophila motor circuit (bottom), for example, where the splayed out firing pattern encourages a stable wingbeat frequency (right), while a synchronized firing pattern does not (left).


As an illustration of using advanced spiking dynamics in neuromorphic applications, we implement this neuron-intrinsic mechanism of tuning network synchronisation, as observed in Drosophila, on the chip. To this end, we emulate a small network of four all-to-all coupled neurons on a BrainScaleS-2 chip [13], which has sufficiently complex single-neuron dynamics [14] to allow for a qualitative change in spike generation. Instead of genetically manipulating the cell membrane as in the biological system, we here alter the reset voltage of the spiking units. The aim is identical: to change the spiking dynamics of the individual neurons. By exclusively modifying the intrinsic properties, while leaving synaptic connections intact, we completely change the synchronization state of the network. We thus demonstrate the importance and technical implementation of different intrinsic spiking dynamics on the chip and highlight their potential regulation via mechanisms of intrinsic spike-related plasticity.

2. Results

From a dynamical systems perspective, regular spiking at threshold results from mathematical bifurcations that give rise to so-called limit cycles, which are traversed repetitively. In any regularly spiking biophysical neuron model, the emergence of limit cycles at spike onset, referred to here as the onset-bifurcation, can only occur via a small set of distinct bifurcation classes (and their sub-types) [15]. The resting state of the neuron can either be destabilized or destroyed at the same time the limit cycles are born, or with a lag. Prominent among the onset-bifurcation structures are the SNIC, HOM, and HOPF-related bifurcations. Each of these bifurcations leads to a qualitatively different mechanism of spike generation with unique synchronization properties and is related to the concept of excitability classes [16].

The functional properties of such distinct excitability classes have been experimentally observed in the last century [17, 18]. The onset-bifurcation for example affects the firing rate vs. input relations (tuning curve) and also the way neurons shift their spike times in response to short perturbations placed at different phases in their spiking period. The latter is characteristic of the underlying bifurcation, can be experimentally measured, and is commonly referred to as the PRC [15, 19, 20] (for a quick intuition see section 4, figure 5).

PRCs of neurons with a HOPF-related onset-bifurcation, for example, have a negative and a positive component, which means that these neurons respond to excitatory presynaptic spikes with either an advance or a delay in spike-timing, depending on the relative timing between the presynaptic and postsynaptic spikes. Neurons with a SNIC or HOM onset, on the other hand, are characterized by PRCs that are largely positive, and therefore mostly respond to excitatory presynaptic spikes with an advance in spike-timing. A neuron with a SNIC onset is characterized by a symmetric or right-tilted PRC, while a HOM onset-bifurcation is characterized by a left-tilted PRC. The spike-timing of a neuron with a HOM onset-bifurcation is therefore most sensitive to presynaptic spikes that arrive directly after a postsynaptic spike, while a neuron with a SNIC onset-bifurcation is more sensitive to presynaptic spikes that arrive later relative to the postsynaptic spike (for a schematic overview, see figure 1 in [3]). The onset-bifurcation thereby determines the impact of presynaptic spikes on the spike-timing of the postsynaptic neuron and thus influences the relative spike-timing of neurons in a network.

Plasticity in neuron-intrinsic properties allows for transitions in onset-bifurcations of neurons in a network and consequently influences the network state. This means that not only synaptic plasticity but also neuron-intrinsic plasticity serves as a mechanism that alters network states and can alter neural computation. Moreover, it suggests that for learning, next to traditional STDP, also plasticity of the intrinsic spiking dynamics is likely to be of relevance.

In this study, we explore the effects of cell-intrinsic properties on network states on a neuromorphic chip. Specifically, we demonstrate that tuning a single parameter in the AdEx neuron on the BrainScaleS-2 chip [13, 14], configured to display simplified EIF dynamics [21] without adaptation, can mimic a qualitative switch in spike generation found in biophysical models. This switch is accompanied by altered synchronization properties. A neuron-intrinsic parameter can thus be used to control the synchrony of small networks of all-to-all coupled neurons on a neuromorphic chip, which highlights the significance of single-neuron complexity and the plasticity thereof on neuromorphic hardware.

2.1. A switch in spiking dynamics on a neuromorphic chip

In biology, many physiological parameters such as temperature and extracellular ion concentrations, which can be controlled through neuromodulation, can bring about a transition in the onset-bifurcation of a neuron [3, 10]. The qualitative impact of neuron-intrinsic plasticity on a neuron's onset-bifurcation can be modeled by any computational neuron model that is dynamically rich enough to allow for a transition in onset-bifurcation upon certain parameter changes.

The EIF neuron is simple, yet sufficiently complex to allow for such a transition. This may seem counter-intuitive, as the EIF model always passes through a local saddle-node bifurcation when the resting state loses stability. However, both SNIC and HOM spike-generating bifurcations in high-dimensional conductance-based models are global bifurcations of trajectories that are linked to different stable and semi-stable manifolds of a local saddle-node bifurcation [16]. It turns out that these two onset-bifurcations can be mimicked with the EIF equation by exploiting the reset value. Roughly speaking, low reset values mimic a SNIC-like onset-bifurcation where, in a conductance-based model, the spike trajectory would enter and leave through the slow center manifold (i.e. the semi-stable direction of slowest dynamics). In contrast, high reset values mimic a HOM-like onset-bifurcation where the spike trajectory enters fast through one of the stable manifolds and then leaves slowly through the center manifold [22]. The fast manifold is not modeled explicitly, but its effect, which is that only the later part of the EIF dynamics is traversed, is achieved through alteration of the reset value. In this way, low and high reset values preserve the distinct properties of spikes originating from SNIC and HOM onset-bifurcations, set by how much of their trajectories' dynamics are spent in the center manifold [23].

Formally, the center manifold reduction of SNIC and HOM bifurcations leads to a quadratic differential equation which is the normal form of a local saddle-node bifurcation, and which, with the help of the reset, can mimic switches between these onset-bifurcations [24, 25]. The switch itself is termed the SNL bifurcation. The distinction between orbits originating from SNIC and HOM bifurcations is how much of their trajectories' dynamics are spent in the center manifold. In other words, the quadratic normal form captures the behaviour of the global homoclinic loops existing near the unfolding of an SNL point [22, 25]. Although the EIF model considered here is not formally the normal form of a biophysical model, it in fact shares the same quadratic center manifold and can, in that sense, produce SNIC-like and HOM-like onset-bifurcations as well as the critical switch between them.

The EIF neuron is described by the following one-dimensional ordinary differential equation for the membrane voltage V [21],

Equation (1): $\tau \frac{\mathrm{d}V}{\mathrm{d}t} = E - V + \Delta_\textrm{T}\,\exp\left(\frac{V - V_\textrm{T}}{\Delta_\textrm{T}}\right)$,

parameterized by the membrane time constant τ, the effective rest potential E and the variables of the exponential term which are the effective threshold potential $V_\textrm{T}$ and the threshold slope factor $\Delta_\textrm{T}$ (figure 2(I)-(A)). When V reaches the threshold $V_\textrm{threshold}$, a spike is recorded and V is reset to $V_\textrm{reset}$.
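To make these dynamics concrete, the following minimal sketch integrates equation (1) with a simple forward-Euler scheme and applies the reset rule at threshold. It uses the parameter values listed in table 1 of section 4; the function name, integration step and simulated duration are illustrative choices and not taken from the published code.

```python
import numpy as np

def simulate_eif(v_reset, t_max=200.0, dt=0.001, tau=10.0, E=-10.0,
                 delta_t=5.0, v_t=-10.0, v_thresh=1.0):
    """Forward-Euler integration of the EIF neuron (voltages in mV, times in ms)."""
    n_steps = int(t_max / dt)
    v = v_reset
    spike_times = []
    for step in range(n_steps):
        dv = (E - v + delta_t * np.exp((v - v_t) / delta_t)) / tau
        v += dt * dv
        if v >= v_thresh:            # spike detected: record the time and reset
            spike_times.append(step * dt)
            v = v_reset
    return np.array(spike_times)

# low reset values mimic a SNIC-like onset, high reset values a HOM-like onset
for v_reset in (-60.0, -17.0, -10.0):
    spikes = simulate_eif(v_reset)
    print(f"V_reset = {v_reset:6.1f} mV -> {len(spikes) / 0.2:.0f} spikes per second")
```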


Figure 2. Switch of spiking regimes in an exponential leaky integrate-and-fire (EIF) neuron upon altering the reset voltage $V_\textrm{reset}$. Numerical simulation—I: phase portrait, $\dot V$ versus V (A), and voltage trajectories (B) for the EIF model with low (blue), middle (orange) and high voltage reset values (green). Firing rates increase as the reset approaches the firing threshold. Neuronal response properties also undergo a qualitative switch reflecting a transition from a SNIC-like to an SNL-like onset-bifurcation. (C)–(E): higher voltage reset values effectively truncate the voltage trajectory between two spikes (top). This changes the time/phase advances (dashed grey lines) in response to voltage perturbations (black arrows). These responses are summarized in the PRCs (bottom), which change from a right-tilted (C), to a symmetric (D), to a left-tilted (E) shape. Emulation on a neuromorphic chip—II: the same switch of spiking regimes can be induced on a BrainScaleS-2 neuron. Here, the phase portrait as measured from the recorded voltage trajectories shows that the chosen reset values are either below or on/above the minimum of the voltage derivative where the switch occurs. The switch is confirmed by the predicted alteration of PRC shapes, which in the experimental case were obtained by perturbation protocols (not shown, see section 4).


The voltage reset $V_\textrm{reset}$ controls the switch between the aforementioned spiking regimes resulting from SNIC- and HOM-like onset-bifurcations. A directly visible effect of increasing the reset voltage of a spiking EIF neuron is a higher firing rate, because the neuron starts closer to threshold after firing a spike (figures 2(I)-(A–B)). In addition to this quantitative change, the increased voltage reset and the correspondingly "shortened" voltage trace between two spikes alter the response properties to external stimuli in a qualitative way, best illustrated by an analysis of the PRC.

For a tonically firing neuron, the PRC describes how an input at a time t shifts the timing of the next spike from the expected interspike interval (ISI) T. In terms of the perturbation phase $\phi = \frac{t}{T}$, the PRC for a small, instantaneous voltage input thus corresponds to the phase shift $\frac{\textrm{d}\phi}{\textrm{d}v}$. For the EIF neuron, the phase shift is directly related to the voltage trajectory as illustrated in figures 2(I)-(C–E): the phase shift is larger when the input arrives at a time-point where the voltage is changing slowly, and smaller when the input arrives at a time-point where the voltage changes rapidly. Generally, for one-dimensional neuron models, the PRC is a scaled version of the inverse of the time derivative of the membrane voltage $\dot V^{-1}$ between two consecutive spikes (see section 4, figure 5).
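Because the model is one-dimensional, this relation can be evaluated directly. The sketch below integrates the deterministic voltage trajectory over one ISI, evaluates $\dot V$ from equation (1), and normalizes its inverse to obtain the PRC up to an arbitrary scale; parameter values follow table 1 and the helper name is illustrative.

```python
import numpy as np

def prc_from_trajectory(v_reset, dt=0.001, tau=10.0, E=-10.0,
                        delta_t=5.0, v_t=-10.0, v_thresh=1.0):
    """PRC of the EIF neuron as a (normalized) inverse of dV/dt over one period."""
    trace, v = [], v_reset
    while v < v_thresh:              # integrate one interspike interval
        trace.append(v)
        v += dt * (E - v + delta_t * np.exp((v - v_t) / delta_t)) / tau
    trace = np.array(trace)
    vdot = (E - trace + delta_t * np.exp((trace - v_t) / delta_t)) / tau
    phases = np.linspace(0.0, 1.0, len(trace))
    prc = 1.0 / vdot
    return phases, prc / prc.max()   # arbitrary normalization

for v_reset in (-60.0, -17.0, -10.0):
    phases, prc = prc_from_trajectory(v_reset)
    print(f"V_reset = {v_reset:6.1f} mV -> PRC peak at phase {phases[np.argmax(prc)]:.2f}")
```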

Correspondingly, the peak of the PRC, where a neuron is most susceptible to an input, is located at the phase with the smallest slope of the voltage trace. In terms of voltage, this smallest slope translates to the minimum $V_{\mathrm{min}}$ of the right-hand side of the voltage equation (equation (1), figure 2(I)-(A)). Therefore, the changes of the voltage trace induced by going from low to high reset values lead to distinct PRCs with different peak positions.

For low $V_\textrm{reset}$ values below $V_{\mathrm{min}}$, the PRC peaks at a central phase because the slope of the voltage trace is non-monotonic (figure 2(I)-(A), blue and orange dashed lines). Very low values of $V_\textrm{reset}$ lead to a right-tilted PRC as the minimum voltage slope is reached relatively late (figures 2(I)-(A) and (I)-(C)), whereas a higher $V_\textrm{reset}$ leads to a more symmetric PRC (figures 2(I)-(A) and (I)-(D)). Such monophasic (strictly positive) symmetric or right-tilted PRCs are hallmarks of biological neurons that undergo a SNIC bifurcation when transitioning from a silent to a tonically spiking regime.

For high $V_\textrm{reset}$ on or above $V_{\mathrm{min}}$ (figure 2(I)-(A) dashed green line), the PRC is maximal at phase zero, because the voltage slope is monotonically increasing (figure 2(I)-(E) top). Such asymmetric left-tilted PRCs are characteristic for neurons with a HOM onset-bifurcation. The critical SNL-point at which a neuron transitions from a SNIC to a HOM onset-bifurcation thus occurs when the reset is exactly at the minimum $V_\textrm{reset}^{\mathrm{SNL}} = V_{\mathrm{min}}$.
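A quick numerical check of this criterion, using the right-hand side of equation (1) with the table 1 parameters (a sketch; the minimum can also be located analytically, see section 4.3):

```python
import numpy as np

tau, E, delta_t, v_t = 10.0, -10.0, 5.0, -10.0     # table 1 values (ms, mV)

def vdot(v):
    """dV/dt of the EIF neuron, i.e. the right-hand side of equation (1) divided by tau."""
    return (E - v + delta_t * np.exp((v - v_t) / delta_t)) / tau

v_grid = np.linspace(-80.0, 1.0, 100001)
v_min = v_grid[np.argmin(vdot(v_grid))]            # voltage of slowest dynamics
print(f"V_min is approximately {v_min:.1f} mV")

for v_reset in (-60.0, -17.0, -10.0):
    regime = ("HOM-like (PRC peak at phase 0)" if v_reset >= v_min
              else "SNIC-like (PRC peak at a later phase)")
    print(f"V_reset = {v_reset:6.1f} mV -> {regime}")
```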

Hence, tuning the reset voltage, a single parameter in the EIF neuron, is sufficient to change from one spiking regime to another. Next, the switch is demonstrated on-chip, followed by studying a small network to understand how this neuron-intrinsic response property impacts network synchronisation.

As all AdEx parameters on the BrainScaleS-2 chip can be individually tuned for each neuron, the adaptive component can be switched off to emulate neurons with pure EIF dynamics. Each neuron can then be tuned to undergo a qualitative switch in spike generation by changing $V_\textrm{reset}$ (figure 2(II)). As an alternative to measuring each model parameter individually, the voltage equation was determined in a more reliable manner by taking the derivative of voltage traces recorded from the silicon neuron (figures 2(II)-(A–B)). Setting $V_\textrm{reset}$ at dynamically relevant positions with respect to this voltage equation produces qualitatively different neuron dynamics with right-tilted, symmetric and left-tilted PRCs and consequently captures the characteristics of qualitatively different onset-bifurcations (figures 2(II)-(C–E), bottom).

It is noteworthy that, due to the intrinsic noise in the analog circuits of the BrainScaleS-2 neurons, the derivative of the raw voltage traces is too noisy to reconstruct the PRC. We therefore opted to reconstruct PRCs of on-chip neurons experimentally by performing perturbation protocols and recording the resulting shifts in spike-timing [26]. Furthermore, PRCs were mildly affected by the small refractory period that BrainScaleS-2 neurons require to spike reliably in the investigated regime.

2.2. Single-neuron dynamics affect synchronization—two weakly coupled neurons

A qualitative switch in spike generation alters the global behaviour of a neuron in a network. This is exemplified by the influence of the PRC of single neurons, which is determined by the onset-bifurcation, on the relative spike-timing of two mutually weakly coupled neurons.

The evolution of the relative spike-timing of two neurons with identical intrinsic frequencies, f, and identical PRCs, $Z(\phi)$, that are mutually coupled via weak excitatory delta synapses with a weight, ε, can be derived from the pairwise phase interaction function

Equation (2)

where,

Equation (3)

Here, ψij describes the phase difference between neuron i and j, where the stable fixed point in $\frac{d\psi_{ij}}{dt}$, $\psi_{ij}^*$, determines the steady state phase locking between the neurons. As the phase interaction function between two weakly coupled neurons with delta synapses is essentially a scaled version of the odd part of the PRC, a right-tilted PRC leads to a stable fixed point in $\frac{d \psi_{ij}}{dt}$ at 0 and more left-tilted PRCs lead to a stable fixed point at 0.5 (figure 3(I)). Altered intrinsic neuron properties, which affect a neuron's PRC shape, can therefore switch the synchrony of two weakly coupled neurons from a synchronous regime ($\psi_{ij}^* = 0$) to an asynchronous regime ($\psi_{ij}^* = 0.5$).
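The sketch below illustrates this relation numerically: given a PRC sampled on a phase grid, it forms a delta-synapse interaction as a scaled odd part of the PRC and locates stable fixed points as downward zero crossings. The PRC shapes and the sign convention are illustrative stand-ins, not the measured curves or the exact expressions of equations (2) and (3).

```python
import numpy as np

def stable_fixed_points(phi, dpsi_dt):
    """Downward zero crossings of dpsi/dt on the periodic phase grid (stable lockings)."""
    n = len(phi)
    return [round(float(phi[(k + 1) % n]), 2) for k in range(n)
            if dpsi_dt[k] > 0 >= dpsi_dt[(k + 1) % n]]

phi = np.linspace(0.0, 1.0, 1000, endpoint=False)
eps, f = 0.01, 50.0                                 # weak coupling, firing rate (a.u.)

# synthetic monophasic PRCs, used only to illustrate the tilt argument
prcs = {"right-tilted (SNIC-like)": np.sin(np.pi * phi) ** 4 * phi,
        "left-tilted (HOM-like)": np.sin(np.pi * phi) ** 4 * (1.0 - phi)}

for label, Z in prcs.items():
    Z_reflected = np.roll(Z[::-1], 1)               # Z(-phi) on the periodic grid
    odd = 0.5 * (Z - Z_reflected)                   # odd part of the PRC
    dpsi_dt = 2.0 * eps * f * odd                   # delta-synapse interaction (one sign convention)
    print(f"{label:24s} stable phase difference(s): {stable_fixed_points(phi, dpsi_dt)}")
```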


Figure 3. Spike onset-bifurcation determines pairwise phase synchronization between weakly connected neurons. Numerical simulation—I: tuning the spike onset type of four exponential leaky integrate-and-fire (EIF) neurons by increasing their reset voltage $V_\textrm{reset}$ (A)–(C) alters their firing frequency (top) and their phase response curves (PRCs) (middle), from which the respective pairwise phase interaction functions can be calculated (bottom). The interaction functions describe the evolution of the phase differences between neurons, so that the preferred phase synchrony is set by the marked location of the stable fixed points, e.g. in-phase $\Delta \phi = 0$ for a saddle-node on an invariant cycle (SNIC)-like neuron (blue) and anti-phase $\Delta \phi = 0.5$ for a saddle-node loop (SNL)-like neuron (green). Emulation on a neuromorphic chip—II: on-chip neurons display inherent noise in firing frequency and single-neuron dynamics, but nevertheless display a switch in synchronisation behaviour (same labelling as in I). Color coding of reset values as in figure 2.


Due to the temporal noise on top of the fixed-pattern noise of the neurons implemented on BrainScaleS-2 (see section 4.2), neurons cannot be reliably tuned to have perfectly identical intrinsic frequencies and PRCs (figure 3(II) (top, middle)). The pairwise phase interaction function of two neurons on the chip is therefore slightly more complex

Equation (4)

where fi and Zi are the intrinsic frequency and experimentally recorded PRC of neuron i, and fj and Zj are the intrinsic frequency and PRC of neuron j. As the experimental PRCs were obtained through perturbation protocols using synapses with the same properties as those in the all-to-all coupled network, the synaptic weights ε are already absorbed in the experimental PRCs and do not appear in the experimental pairwise phase interaction equation (equation (4)). Since the PRCs could be tuned to be very similar in size and shape, the biggest impact of the heterogeneity of the two neurons on the pairwise phase interaction function is the vertical translation caused by their frequency mismatch fi − fj (figure 3(II) (bottom)).

The odd part of the right- and left-tilted PRCs is relatively large compared to that of more symmetric PRCs. This relatively large odd PRC component renders the fixed points in the pairwise phase interaction function more stable and more invariant to vertical translations caused by the frequency mismatch of the coupled neurons. Two weakly coupled neurons on the BrainScaleS-2 chip with low $V_\textrm{reset}$, a right-tilted PRC, and fixed points in the pairwise phase interaction function close to 0 are expected to fire rather synchronously (figure 3(II)-(A)). Two weakly coupled neurons on the BrainScaleS-2 chip with high $V_\textrm{reset}$, a left-tilted PRC, and fixed points in the pairwise phase interaction function close to 0.5, on the other hand, are expected to fire asynchronously, with the spikes of the two coupled neurons splayed out in time (figure 3(II)-(C)). Fixed points in the pairwise phase interaction function for coupled on-chip neuron pairs with intermediate $V_\textrm{reset}$ and a rather symmetric PRC are very sensitive to the frequency detuning of the two coupled neurons, which would therefore fire more or less synchronously depending on the frequency mismatch (figure 3(II)-(B)).
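To illustrate this sensitivity argument, the following sketch adds a constant offset, standing in for the frequency mismatch $f_i - f_j$, to interaction functions derived from a nearly symmetric and from a strongly tilted synthetic PRC, and checks whether a stable zero crossing survives. All shapes and values are made up for illustration and are not the measured quantities.

```python
import numpy as np

phi = np.linspace(0.0, 1.0, 1000, endpoint=False)
eps, f = 0.01, 50.0                                 # weak coupling, firing rate (a.u.)

def stable_fixed_points(dpsi_dt):
    """Downward zero crossings of dpsi/dt on the periodic phase grid."""
    n = len(dpsi_dt)
    return [round(float(phi[(k + 1) % n]), 2) for k in range(n)
            if dpsi_dt[k] > 0 >= dpsi_dt[(k + 1) % n]]

# synthetic monophasic PRCs: a nearly symmetric and a strongly left-tilted one
prcs = {"nearly symmetric": np.sin(np.pi * phi) ** 2 * (1.0 - 0.3 * phi),
        "strongly left-tilted": np.sin(np.pi * phi) ** 2 * (1.0 - phi)}

for label, Z in prcs.items():
    odd = 0.5 * (Z - np.roll(Z[::-1], 1))           # odd part of the PRC
    coupling = 2.0 * eps * f * odd                  # interaction for identical frequencies
    for detuning in (0.0, 0.05):                    # stands in for f_i - f_j (a.u.)
        fps = stable_fixed_points(detuning + coupling)
        print(f"{label:22s} detuning {detuning:.2f}: "
              f"{fps if fps else 'no phase locking (drift)'}")
```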

2.3. Single-neuron dynamics affect synchronization—a small all-to-all coupled network

Spike statistics of a small network of all-to-all coupled neurons consequently differ depending on single-neuron dynamics. We showcase this by implementing a CPG network of four neurons that exhibit the same changes in network synchrony upon the alteration of $V_\textrm{reset}$ as the Drosophila wing motor circuit [12] (figure 1) shows upon genetic manipulations of intrinsic neuron parameters.

If $\psi_{ij}^* = 0$ for all neuron pairs in a network, all neurons in the network will fire in synchrony (figures 4(I)-(A) and (II)-(A)). If $\psi_{ij}^* = 0.5$ for all neuron pairs in an all-to-all coupled network with more than two neurons, a frustrated system arises [27, 28]. As a thought experiment, imagine three all-to-all coupled neurons with $\psi_{ij}^* = 0.5$ for all pairs. If all neurons fire at the same frequency and the phase differences between neurons 1 and 2 and between neurons 1 and 3 are both 0.5, then the phase difference between neurons 2 and 3 has to be 0. As not all phase differences in the network can go to $\psi_{ij}^*$ at the same time, the system is frustrated. One solution to this frustrated system is the splay state, where the phase differences between all neurons are multiples of $\frac{1}{N}$, where N is the number of neurons in the network [29].


Figure 4. Synchronization of networks as a function of neuron-intrinsic spike onset type. Networks of four all-to-all coupled neurons with increasing $V_\textrm{reset}$ ((A)–(C), values and color coding as in figure 2) change from a synchronous (top, blue) to a splayed-out firing pattern (top, orange and green), as visible in spike raster plots and binned spike counts. (D)–(G) Uncoupled networks serve as a baseline with random phase relations due to phase diffusion. Compared to this baseline, coupling increases in-phase synchrony for a saddle-node on an invariant cycle (SNIC)-like onset (blue) and splays out spiking for a saddle-node loop (SNL)-like onset (green), as visible in the respective network interspike interval (ISI) distributions (D)–(F) and the coefficient of variation (CV) thereof (G). In the model simulations (I), a mild amount of input noise was added for better comparison to the on-chip experiments (II).


We show here that tuning a single parameter in the EIF neuron, the reset voltage, is sufficient to alter the network synchrony of four all-to-all weakly coupled neurons from a synchronous regime to a splay state. This holds for a computationally simulated network of four identical EIF neurons with a small noise component (figure 4(I)), but also for the slightly heterogeneous yet qualitatively similar EIF neurons on the BrainScaleS-2 chip (figure 4(II)). To visualize the different network states, we show a snapshot of 50 network spikes, and count the total number of spikes in 50 bins evenly distributed over time. In the synchronous regime, all neurons fire approximately at the same time, which results in multiple spikes being counted in the same bin followed by N − 1 empty bins (figures 4(I)-(A) and (II)-(A)). In the splay state, spikes occur splayed out in time, which results in a more uniform distribution of binned network spikes (figures 4(I)-(B–C) and (II)-(B–C)). A more quantitative analysis shows the qualitative differences in network statistics over long simulation times through the ISI distribution of the network and the coefficient of variation (CV) thereof. The network ISI distribution in the synchronous regime is bimodal, with a peak at low ISIs reflecting the small difference in spike times between synchronized neurons and a peak at high ISIs corresponding to the interval between two clusters of spikes, close to the single-neuron firing rates (figures 4(I)-(D) and (II)-(D)). Due to this large separation between peaks in the ISI distribution, the variance is relatively high, which leads to a CV greater than one. This exceeds the CV of the same network without synaptic coupling (figures 4(I)-(G) and (II)-(G) (blue)). In the splay state, the network spikes are splayed out in time, so the network ISI distribution is unimodal and the CV is relatively low (figures 4(I)-(E–G) and (II)-(E–G) (orange, green)).
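The network statistics used here are straightforward to compute from pooled spike times. The sketch below applies them to two made-up spike patterns, a synchronous one and a splay state, purely to illustrate the resulting difference in the network ISI CV; none of the values correspond to recorded data.

```python
import numpy as np

def network_isi_cv(spike_trains):
    """Pool all spikes of the network, sort them, and return the ISI CV."""
    pooled = np.sort(np.concatenate(spike_trains))
    isis = np.diff(pooled)
    return isis.std() / isis.mean()

period, n_neurons, n_cycles = 10.0, 4, 200          # ms; made-up illustration values
rng = np.random.default_rng(0)

# synchronous regime: every neuron fires within a small jitter of the cluster times
sync = [np.arange(n_cycles) * period + 0.1 * rng.random(n_cycles)
        for _ in range(n_neurons)]
# splay state: firing times are offset by multiples of 1/N of the period
splay = [np.arange(n_cycles) * period + i * period / n_neurons
         for i in range(n_neurons)]

for label, trains in (("synchronous", sync), ("splay state", splay)):
    print(f"{label:12s}: network ISI CV = {network_isi_cv(trains):.2f}")
```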

We furthermore show that $V_\textrm{reset}$ is not merely a binary switch for network synchrony, but has a graded effect on the 'splayness' of the network, as shown by the CV of the network ISIs, which gets increasingly smaller for increasing $V_\textrm{reset}$ (figures 4(I)-(G) and (II)-(G)). One reason for this is that the stable fixed points in the pairwise phase interaction function become increasingly more stable as $V_\textrm{reset}$ increases and the PRC gets increasingly more left-tilted (figure 3). This reduces the effects of frequency detuning and reduces noise-induced transitions between different spiking sequences.

We hereby show that, guided by the theory of weakly coupled oscillators and computational neuroscience, a single parameter of an EIF neuron on a neuromorphic chip can be tuned in a highly controlled manner to alter network states. This is beneficial, e.g. for tuning networks with winner-take-all dynamics, but also has implications for STDP-based learning rules, as neurons that 'wire together' do not necessarily 'fire together'.

3. Discussion

The present study shows that neuron-intrinsic properties that trigger a transition in nonlinear dynamics, in addition to synaptic weights, do affect network activity on neuromorphic hardware. Specifically, control of this neuronal switch on the BrainScaleS-2 chip enabled emulating the splayed-out spiking pattern of the small Drosophila wing motor network. This is a first step towards using plasticity of nonlinear neuron dynamics for learning paradigms on such hardware. Additionally, it opens a discussion about the most suitable level of abstraction of single neurons in neuromorphic hardware, as the often implemented LIF neuron is not dynamically complex enough to allow for a significant effect of intrinsic neuron properties on network dynamics.

On neuromorphic hardware, like in biological networks, neurons are inherently noisy and heterogeneous. Still, our results demonstrate that the impact of the neuron-intrinsic properties on the network is approximately predicted by the noise-free network theory for identical phase oscillators designed for regularly firing neurons. Hence, even in the presence of noise, the single-neuron dynamics plays a significant role in determining the network synchronization state. The presence of unavoidable heterogeneity in neuron properties mimics the challenges of the biological CPG in Drosophila's flight motor circuit. Achieving a splayed out spiking pattern by making use of a frustrated network architecture [28, 30] instead of ring structures with tuned delay-lines [31] suggests that evolution found a more robust implementation of CPGs, which could be of use in neuromorphic hardware.

3.1. Impact of advanced single-neuron dynamics on neuromorphic applications

The neuronal switch demonstrated here on-chip is available to a broad class of biological neurons, but how neural systems can exploit it functionally is only partially understood. Here, the switch between different spiking regimes was used to tune synchronization in a small network as inspired by the observed effect of an experimentally induced switch in spiking dynamics on Drosophila flight.

The mechanism of desynchronizing small networks via neurons that operate close to the SNL point is potentially shared across many flying insects [12] and provides an interesting example of how cellular properties (and not just network connectivity) shape network dynamics and, consequently, animal behaviour. In terms of neuromorphic applications, such a small and resource-efficient biological CPG can serve as a model for robotic locomotion control. CPGs from other animal models have been implemented on various neuromorphic architectures, e.g. lamprey swimming control networks on SpiNNaker and Loihi [32]. The novel, direct and simple control of phase relations between neurons via a single cell-intrinsic parameter demonstrated here further enriches the repertoire for emulating insect and other resource-efficient neural systems for neuro-robotics [33, 34] or edge processing [35].

For applications with larger networks, the impact of the neuronal switch is more complex and its potential functions, also in biological networks, are actively researched [3, 36–38]. The altered response properties and firing behaviour, however, are expected to impact the dynamics and computational properties of larger networks on multiple levels including synaptic plasticity, pathological synchronisation and memory.

Since the single-neuron dynamics shapes the PRC, it influences the relative spike-timing of pre- and post-synaptic neurons in a network and thereby the synaptic updates that result from STDP-like learning rules. For example, if all neurons in a network have a SNIC onset with a right-tilted PRC, excitatory synaptic connections will be reinforced by STDP because neurons that are 'wired together' fire in synchrony [39, 40]. Neurons that operate close to the SNL point however, fire asynchronously when coupled excitatorily. Applying STDP based learning rules to a network of such neurons leads to interesting effects such as clustered and 'wireless' synchrony [39].

The different spike-onset-bifurcations might also be relevant for existing spike-based computations affected by synchronization, like in WTA networks. WTA networks are implemented in neuromorphic hardware to aid in the classification of sensory data [41] and the coupling of this processed sensory data to motor dynamics [42]. WTA architectures in neuromorphic hardware are usually built with LIF neurons, which favour in-phase synchrony when excitatorily coupled. This can make WTA architectures in SNNs difficult to tune, as phase synchrony of neurons in a population impedes the encoding of inputs [43]. Implementing WTA architectures with slightly more sophisticated single-neuron dynamics that allow control of the onset-bifurcation could alleviate this issue, because neurons can then easily be tuned to avoid in-phase synchrony.

Lastly, neuron models such as the EIF considered here or the QIF model, tuned to be beyond the SNL-point in the HOM spiking regime, exhibit bistable firing [44]. This single-neuron bistability is a form of cell-intrinsic memory [45], which is especially useful for learning tasks that involve time delays or other long time-scales [46].

3.2. Implementation of advanced neural dynamics on neuromorphic chips

In this work, we used the reset voltage to switch the single neurons from a SNIC-like to a SNL-like regime. As the BrainScaleS-2 chip provides control over all AdEx parameters for each neuron on the chip, it is natural to ask which other parameters can implement qualitative switches in single-neuron dynamics.

If only the exponential part of the AdEx neuron is switched on, i.e. the adaptive dynamics are disabled, there are only two parameters that can induce a switch in neuron dynamics. These parameters are $V_\textrm{reset}$ and $V_\textrm{T}$ ([44], see section 4.3). To explain the concept of a neural switch in an intuitive way, we chose the reset parameter $V_\textrm{reset}$, which does not alter the voltage equation. The effective threshold potential $V_\textrm{T}$ might, however, be a more suitable parameter to induce neuronal switches in actual applications, as it does not alter the dynamic range of the neuron, which improves the SNR, and also has a less drastic effect on the firing rate.

In this study, the adaptation variable of the AdEx neuron was switched off to emulate an EIF neuron. Without the adaptation variable, the AdEx neuron is a one-dimensional differential equation. Hence, the HOPF bifurcation, also referred to as class 2 excitability [18], cannot occur [20]. If the adaptation variable is used, the system becomes two-dimensional and all three main onset-bifurcations can be realized. One can switch between them by tuning a, $g_\textrm{l}$, τw or τm [47]. The influence of intrinsic neuronal parameters in the AdEx neuron on network synchrony has been shown in theoretical work [48]. The full functionality of the AdEx neuron on the BrainScaleS-2 chip could therefore be harnessed to obtain controlled alterations in network synchrony, which might be suitable for certain biological learning paradigms. Full control over the excitability classes of individual neurons on a neuromorphic chip furthermore enables the study of neural networks with biologically accurate heterogeneous dynamics. This is especially exciting because large-scale efforts to map and quantify single-cell properties, such as the Allen database [49], are currently being extended to uncover statistics of excitability classes in certain brain areas [50, 51]. These statistics can then be used and studied on neuromorphic hardware.

In addition to control over the advanced neural dynamics, the BrainScaleS-2 chip allows network synchronization properties to be altered by setting the time constants of the exponentially decaying synapses. To simplify the analysis and to isolate the effects of single-neuron dynamics on network synchrony, synaptic time constants were here set as low as possible to approach delta synapses. As the phase interaction function, which determines the synchronization properties of a neuron in a network, depends both on the intrinsic neuron dynamics and on the shape of the synaptic input [29], altering synaptic time constants can also create switches in synchrony [39].

3.3. Comparison between biological and on-chip fly circuit

While the simulations on chip were inspired by the Drosophila wing motor system, there are some important differences to the biological MN1–5 network. Firstly, the synapses on the chip are implementations of an exponential synapse model, mimicking the action of very fast chemical synapses. In the biological fly circuit, the neurons are instead coupled via gap junctions [12], which are ion channels spanning the membranes of two neurons to form a direct electrical conductance between them. In principle, gap junction coupling on the BrainScaleS-2 chip can be emulated via the available resistance linking, originally designed for emulating multi-compartment neurons. However, because resistance linking is limited to neighboring units and thus constrained by the two-row layout of the chip, there is no straightforward way to implement all-to-all gap junction coupling as observed in the fly circuit for more than two neurons. Additionally, as the main aim of this project was to demonstrate the effect of intrinsic properties on synchronization on chips rather than a perfect representation of the biological network, we here chose to use the simpler and perhaps more widely applicable case of exponential synapses.

Another prominent difference is that here we emulate four neurons on chip whereas there are five neurons in the Drosophila network. The fifth motoneuron MN5 is anatomically and functionally slightly different from the other MNs. It innervates two muscle fibers, whereas MN1–4 innervate only one muscle fiber [52, 53]. The MN5 soma is contralateral to the innervated muscle, whereas MN1–4 are ipsilateral to the muscle [52, 53]. Furthermore, the coupling of MN5 to MN1–4 is weaker than the coupling of MN1–4 among one another. Historically, MN5 is not included in some analyses, such as firing sequence statistics [12, 54]. For these reasons and in order to keep it simple, we here emulated only four neurons on chip. For emulations of five motoneurons we would expect qualitatively similar results.

Furthermore, the nature of the manipulations to change neuron-intrinsic properties was different. Instead of changing the reset value, in fruit flies targeted over-expression of ion channels was used to alter intrinsic neuron dynamics [12]. The analysis of a biophysical model of these motoneurons had suggested that increasing the expression of a particular channel called Shab would transition the neuron from the SNL to the SNIC firing regime. Just as demonstrated here, such a transition is accompanied by a more symmetric PRC, and therefore the model predicted an increased amount of synchronization in the MN network following targeted over-expression of Shab channels. Subsequently, this increased synchronisation was confirmed experimentally using pairwise recordings of MNs [12]. Therefore, although the coupling is different, the biological MN network and our model on the chip show an analogous transition in the network synchronization state based on a change of intrinsic parameters.

Lastly, the recently published model of the MN network assumed identical intrinsic properties for all motoneurons and explored how splay states can be generated with biophysically realistic membrane properties [12]. However, when emulating the network on chip, the neuron-intrinsic properties of the individual neurons deviate slightly from one another. For example, the intrinsic frequencies of the neurons in figure 3(II) differ by a few kHz, and this causes a change in the fixed points of the coupling function compared to the idealized model (figure 3(I)). Presumably, Drosophila networks are faced with similar heterogeneities and nonetheless need to generate stable splay states. Therefore, the implementation on chip and the associated added degree of realism vis-à-vis numerical integration of ODEs highlighted previously unexplored complexity. The result that splay states can robustly be generated in the presence of these frequency heterogeneities (figure 4(II)-(C)) corroborates that the underlying mechanism is suitable for Drosophila. Running these emulations therefore not only allows the study of synchronization on chips, but also informs our understanding of insect flight.

Taken together, we have shown that the often overlooked single-neuron dynamics on neuromorphic hardware are relevant for real life applications and that they deepen our understanding of biological networks. We demonstrated (i) that it is possible to implement a switch in single-neuron dynamics through neuron-intrinsic parameter modification on the chip (for example, via the value of the reset voltage after each spike), and (ii) that this switch results in an alteration in the observable network firing statistics. These findings bear implications for our current view on learning in SNNs via synaptic plasticity, underlining that not only synaptic plasticity plays a role in setting network dynamics, but also neuron-intrinsic plasticity.

4. Methods

The following equations describe the network dynamics that were simulated numerically and emulated on the BrainScaleS-2 neuromorphic chip:

Equation (5)

Equation (6)

Here, Vi is the membrane voltage of neuron i, τ is the membrane time constant, E is the reversal potential, $\Delta_\textrm{T}$ is the exponential 'sharpness' parameter, and $V_\textrm{T}$ is the exponential threshold [21]. All neurons are coupled to all other neurons in the network via delta synapses δ with weight w, which transmit a synaptic current whenever a connected neuron j spikes at time tj . A Gaussian current noise $\xi(t)$ with variance σ reflects the noise that is present not only in biological neurons, but also in the analog circuits of the BrainScaleS-2 chip.

When the membrane voltage of individual neuron Vi reaches a threshold $V_\textrm{threshold}$, a spike is recorded and Vi is reset to the reset voltage $V_\textrm{reset}$.
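A minimal sketch of how the network dynamics described above (equations (5) and (6)) can be integrated numerically: Euler–Maruyama for the noise term, with delta synapses implemented as instantaneous voltage jumps of size w at presynaptic spike times. Parameter values follow table 1; the noise scaling, the handling of simultaneous spikes and all names are simplifying assumptions rather than a reproduction of the published code [55].

```python
import numpy as np

def simulate_network(v_reset, t_max=1000.0, dt=0.01, n=4, tau=10.0, E=-10.0,
                     delta_t=5.0, v_t=-10.0, v_thresh=1.0, w=0.1, sigma=0.1, seed=0):
    """Euler-Maruyama integration of n all-to-all coupled noisy EIF neurons (mV, ms)."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(v_reset, v_thresh, size=n)      # crude random initial conditions
    spike_trains = [[] for _ in range(n)]
    for step in range(int(t_max / dt)):
        drift = (E - v + delta_t * np.exp((v - v_t) / delta_t)) / tau
        v = v + dt * drift + sigma * np.sqrt(dt) * rng.standard_normal(n)
        fired = np.flatnonzero(v >= v_thresh)
        if fired.size:
            for i in fired:
                spike_trains[i].append(step * dt)
                v[i] = v_reset                      # reset the neurons that spiked
            receivers = np.ones(n, dtype=bool)
            receivers[fired] = False
            v[receivers] += w * fired.size          # delta synapses: jump of w per spike
    return spike_trains

trains = simulate_network(v_reset=-60.0)
rates = [len(t) for t in trains]                    # t_max = 1000 ms, so counts equal rates in Hz
print("per-neuron firing rates (Hz):", rates)
```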

4.1. Computational simulations of the EIF neuron

Computational simulations were performed in Brian2 [56] with a simulation time-step of 0.001 ms for single-neuron simulations and a time-step of 0.01 ms for network simulations. The EIF neuron was simulated with the parameters listed in table 1, and PRCs were generated directly from the single-neuron simulated voltage traces as shown in figure 5.

Table 1. Parameters of the computationally simulated EIF neuron.

τ = 10 ms
$\Delta_\textrm{T}$ = 5 mV
$V_\textrm{T}$ = −10 mV
E = −10 mV
$V_\textrm{threshold}$ = 1 mV
$V_\textrm{reset}$ = −60, −17, −10 mV
w = 0.1 mV
σ = 0.1 mV
N = 4
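For reference, a compact Brian2 sketch of the single-neuron setup with the parameters of table 1 (noise and coupling omitted for brevity); the variable names and integration choices are illustrative and not a reproduction of the published code [55].

```python
from brian2 import NeuronGroup, SpikeMonitor, defaultclock, run, ms, mV

# table 1 parameters
tau = 10 * ms
Delta_T = 5 * mV
V_T = -10 * mV
EL = -10 * mV          # effective rest potential E
V_thresh = 1 * mV
V_reset = -60 * mV     # try -17 mV or -10 mV for the other spiking regimes

defaultclock.dt = 0.001 * ms
eqs = 'dV/dt = (EL - V + Delta_T * exp((V - V_T) / Delta_T)) / tau : volt'
neuron = NeuronGroup(1, eqs, threshold='V > V_thresh', reset='V = V_reset', method='rk4')
neuron.V = V_reset

spikes = SpikeMonitor(neuron)
run(200 * ms)
print('firing rate:', spikes.num_spikes / (200 * ms))
```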

Figure 5. A schematic representation of the influence of neuron dynamics on the phase response curve (PRC) in a one-dimensional neuron model. (A) A voltage perturbation is applied at various locations between two neuron spikes $t_{\mathrm{spike}_\mathrm{n}}$ and $t_{\mathrm{spike}_\mathrm{n+1}}$, and the corresponding influence of this perturbation on the spike-timing of $t_{\mathrm{spike}_\mathrm{n+1}}$ is recorded as the phase shift. The shape of the voltage trace influences this phase shift, as perturbations in regions with a shallow voltage slope induce a big phase shift (top) and perturbations in regions with a steep voltage slope induce a small phase shift (bottom). (B) The results of this perturbation protocol can be summarized by the experimental PRC. In the limit of infinitesimally small perturbations, the phase response is a scaled version of the inverse of the time derivative of the membrane voltage. The latter limit is often assumed when approximating neuron behaviour in the weakly coupled regime, where perturbations are assumed to be small.

Standard image High-resolution image

Figures 2 and 3 show simulations of unconnected neurons without noise. For these simulations, synaptic weights w and noise variance σ were set to zero. As these simulations were done without noise, the Runge–Kutta integration method was used [57, 58]. The ODEs for network simulations with noise were integrated by the Heun method [59].

The membrane voltages of the neurons in the network simulations were initialized by drawing samples from a uniform phase distribution and then mapping these phases onto the neurons' voltage spike waveform. Network simulations were run for 100 seconds.

All simulations can be reproduced via the published code [55].

4.2. Emulation of EIF dynamics on BrainScaleS-2

All experiments on BrainScaleS-2 were performed online via the EBRAINS service [60] and relied on the high-level PyNN interface [61]. The latter was used to define the network topology—consisting of a population of four neurons with an optional all-to-all connectivity for the coupled network study—as well as for the parameterization of the individual neuron circuits. For that purpose, the BrainScaleS-2-specific implementation of PyNN features a custom neuron type which directly exposes all circuit parameters to the user. This allowed us to, first, set the circuit up for EIF dynamics and, then, tweak each of the neurons for widely homogeneous dynamics across the population.

Specifically, we enabled the leak and synaptic input circuits as well as the exponential current term, while leaving the adaptation term disabled. The neurons were configured for a maximum dynamic range by setting the reset potential (≈0.2 V) and spike detection threshold (≈1.1 V) appropriately, as well as by choosing a large $\Delta_\textrm{T}$ and a low $V_\textrm{T}$ through the respective bias currents. With $V_\textrm{T}$ determining the minimum of the $\dot V$-V-curve of the membrane, this choice guaranteed a reasonable voltage swing and thus SNR even for high $V_\textrm{reset}$. This choice was significant, especially in light of $V_\textrm{reset}$ being the parameter determining the neuron dynamics: increasing $V_\textrm{reset}$ up to the SNL point inherently reduces the signal amplitude. A large $\Delta_\textrm{T}$, furthermore, reduced the strength of the exponential term and thus ensured that firing frequencies increased only reasonably with rising $V_\textrm{reset}$.

The synaptic input circuits of the silicon neurons were configured for delta-like behavior by setting the shortest possible τsyn. With the model assuming only weak interaction, the synaptic gain was reduced accordingly, at the same time also improving the noise characteristics of the neuron.

The neuron circuits were tuned to yield homogeneous dynamics, combatting fixed-pattern deviations induced by manufacturing mismatch. Automated calibration routines for leak, reset, and threshold potentials, the membrane time constant as well as the exponential term parameters [14, 62] were used as a starting point. Additional manual adjustments were then applied to further homogenize the $\dot V$-V-curves of the individual neurons: these included tuning of $V_\textrm{T}$ for horizontal shifts and changes to the leak potential for vertical translations, whereby especially the latter also impacted the overall firing rates.

Membrane recordings obtained via an on-chip 30 MSample s−1 ADC allowed for the extraction of limit cycle traces as well as $\dot V$-V-curves (figure 3). The latter required post-processing to suppress high frequency noise components induced by the ADC itself: sample-to-sample fluctuations on the order of 1 LSB ~ 1 mV already correspond to voltage slopes of ${\approx}\mathrm{30}\, \mathrm{V} \mathrm{ms}^{-1}$, equivalent to approximately half of the exponential term's maximum current recorded at the spike detection threshold. This readout noise could be combatted by binning the individual data points $\dot V = \frac{\Delta V}{\Delta t}$ in regular intervals of V and then averaging the samples within each of the latter.
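A sketch of this post-processing step under simplified assumptions: a synthetic noisy trace in model units stands in for the ADC recording, and all names, noise levels and the reset filter are illustrative.

```python
import numpy as np

def binned_vdot_curve(v, dt, n_bins=40):
    """Suppress readout noise by averaging dV/dt within regular voltage bins."""
    dv = np.diff(v)
    keep = dv > -5.0                                # crude filter: drop reset transitions
    vdot_raw, v_ref = dv[keep] / dt, v[:-1][keep]
    edges = np.linspace(v_ref.min(), v_ref.max(), n_bins + 1)
    idx = np.clip(np.digitize(v_ref, edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([vdot_raw[idx == k].mean() if np.any(idx == k) else np.nan
                      for k in range(n_bins)])
    return centers, means

# synthetic EIF trace over several ISIs, plus additive readout noise (model units: mV, ms)
dt, tau, E, delta_t, v_t, v_thresh, v_reset = 0.01, 10.0, -10.0, 5.0, -10.0, 1.0, -60.0
trace, v = [], v_reset
for _ in range(50):
    while v < v_thresh:
        trace.append(v)
        v += dt * (E - v + delta_t * np.exp((v - v_t) / delta_t)) / tau
    v = v_reset
noisy = np.array(trace) + 0.05 * np.random.default_rng(1).standard_normal(len(trace))

centers, vdot_binned = binned_vdot_curve(noisy, dt)
print("binned V-dot curve has its minimum near",
      round(float(centers[np.nanargmin(vdot_binned)]), 1), "mV")
```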

PRCs were obtained through the well-established pulse perturbation protocol [26]. For this protocol, a periodically spiking neuron is perturbed with randomly timed pulses that cause the spike times of the neuron to shift. From the firing phases at which the neuron was perturbed and the resulting phase shifts, the PRC can be experimentally estimated. For a more detailed explanation of this procedure see [26], figure 5 and the corresponding section of the published code [55]. Previous works used a Fourier basis to fit PRCs [26]. However, due to the jump around firing phase zero, fitting the SNL PRC accurately requires many Fourier components. Therefore, here PRCs were instead fitted using a fifth-degree polynomial.
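A simulated version of this perturbation protocol is sketched below: a deterministic EIF neuron receives one small voltage pulse per trial at a random phase, the resulting phase advances are collected, and a fifth-degree polynomial is fitted. The trial structure, pulse size and names are illustrative simplifications of the actual on-chip protocol.

```python
import numpy as np

tau, E, delta_t, v_t, v_thresh, v_reset = 10.0, -10.0, 5.0, -10.0, 1.0, -60.0
dt, pulse = 0.01, 0.2                               # step (ms) and pulse amplitude (mV)

def time_to_spike(perturb_phase=None, T_free=None):
    """Time from reset to the next spike, optionally with one voltage pulse."""
    v, t = v_reset, 0.0
    while v < v_thresh:
        if perturb_phase is not None and abs(t - perturb_phase * T_free) < 0.5 * dt:
            v += pulse                              # instantaneous perturbation
        v += dt * (E - v + delta_t * np.exp((v - v_t) / delta_t)) / tau
        t += dt
    return t

T0 = time_to_spike()                                # unperturbed period
rng = np.random.default_rng(0)
phases = rng.random(150)
advances = np.array([(T0 - time_to_spike(p, T0)) / T0 for p in phases])

prc_fit = np.poly1d(np.polyfit(phases, advances, deg=5))   # fifth-degree polynomial fit
grid = np.linspace(0.0, 1.0, 1001)
print("fitted PRC peak near phase", round(float(grid[np.argmax(prc_fit(grid))]), 2))
```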

Pulses were applied by connecting an external, virtual spike source with user-provided spike times to all neurons in the network via an excitatory synapse. During perturbation protocols, the connections within the all-to-all coupled network were set to zero. The coupling strengths and time constants of the synapses used to generate the PRCs were the same as those in the all-to-all coupled network, which means that the PRCs in this case are identical to the phase interaction functions [29].

4.3. Derivation of an SNL-like switch

In one-dimensional neuron models, the peak of the PRC is located at the firing phase with the smallest membrane voltage derivative. If the slope of the membrane voltage is monotonically increasing as the firing phase progresses, the minimum voltage derivative, and thus the PRC peak, will be at phase zero. If the slope of the membrane voltage is not increasing monotonically, the PRC peak is located at a firing phase that is greater than zero.

The membrane voltage at phase zero is determined by the reset voltage, $V_\textrm{reset}$. If $V_\textrm{reset}$ is set at the global minimum of the $\dot V$-V-curve, the PRC peak will be at phase zero. As EIF dynamics is convex, any $V_\textrm{reset}$ value above this minimum leads to a PRC peak at phase zero, while any $V_\textrm{reset}$ value below this minimum does not.

A PRC with a peak at phase zero is a hallmark of a neuron with a HOM onset-bifurcation, while a monophasic PRC with a peak at a phase above zero is a hallmark of a neuron with a SNIC onset-bifurcation. The transition point between these onset-bifurcations, which behaves like an SNL bifurcation, occurs when $V_\textrm{reset}$ is set to the minimum of the EIF voltage equation

Equation (7): $\frac{\mathrm{d}V}{\mathrm{d}t} = \frac{1}{\tau}\left(E - V + \Delta_\textrm{T}\exp\left(\frac{V - V_\textrm{T}}{\Delta_\textrm{T}}\right)\right).$

At the minimum of this equation, the derivative with respect to V is zero. The derivative of the voltage equation is

Equation (8): $\frac{\mathrm{d}\dot V}{\mathrm{d}V} = \frac{1}{\tau}\left(-1 + \exp\left(\frac{V - V_\textrm{T}}{\Delta_\textrm{T}}\right)\right).$

Setting this equation to zero and rearranging to derive an equation for V gives

Equation (9): $V_{\mathrm{min}} = V_\textrm{T}.$

Setting $V_\textrm{reset}$ below or above this voltage level determines whether the time derivative of the voltage trace $\dot V$ will be monotonically increasing or not, and therefore determines the onset-bifurcation of the neuron. This equation also shows that the SNL-like point can be crossed by using $V_\textrm{T}$ as a tuning parameter, as it shifts the minimum of the voltage equation. It also shows that $\Delta_\textrm{T}$, τ, E, and $V_\textrm{threshold}$ cannot induce a switch in spiking dynamics, since they have no influence on the location of the minimum in the voltage equation.
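A brief numerical check of these statements, using a simple grid search over the voltage equation (values are the table 1 defaults unless overridden; the helper name is illustrative):

```python
import numpy as np

def v_min(tau=10.0, E=-10.0, delta_t=5.0, v_t=-10.0):
    """Grid search for the minimum of the EIF voltage equation (equation (7))."""
    v = np.linspace(-80.0, 0.0, 200001)
    vdot = (E - v + delta_t * np.exp((v - v_t) / delta_t)) / tau
    return float(v[np.argmin(vdot)])

print("shifting V_T moves the minimum (mV):",
      [round(v_min(v_t=x), 1) for x in (-30.0, -20.0, -10.0)])
print("changing Delta_T, tau or E does not (mV):",
      [round(v_min(delta_t=2.0), 1), round(v_min(tau=20.0), 1), round(v_min(E=-30.0), 1)])
```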

4.4. Derivation of the pairwise phase interaction function

In order to predict the phase relations between two coupled neurons based on their phase response curves the following assumptions are utilized:

Assumption 1. Two weakly ($|Z_{1,2}\Pi|\ll f_{1,2}$) pulse-coupled oscillators with similar frequencies, $f_1-f_2\ll f_1,f_2$. Without loss of generality, assume $f_1\gt f_2$; otherwise relabel. The coupled phase equations read,

Equation (10)

Equation (11)

Assumption 2. The phase evolution is expanded as $\phi_1(t) = f_1t + \int^tZ_1(f_1t)\Pi(f_2t) + \cdots.$ For the analysis below, only the lowest-order term is retained.

Assumption 3. In order to be compatible with empirically measured PRCs, the pulses are normalized to deliver short unit perturbation in time, so it is required

Equation (12)

The second equality results from the scaling properties of the delta function. Identifying the phase to lower order from assumption 2 and with some abuse of notation, the perturbation is chosen as $\Pi(\phi_1(t)) = f_1\delta(\phi_1(t))$.

The coupled system of two phase oscillators then reads

Equation (13)

Equation (14)

The phase difference $\psi = \phi_1-\phi_2$ evolves as

Equation (15)

or

Equation (16)

Due to assumption 1, entrainment will happen on a time scale of the evolution of the phase difference, which will be longer than several spike periods. More precisely, the natural timescale on which convergence to an entrained situation would happen is

Equation (17)

where $T_{1,2}$ is the reciprocal of the respective frequency; by assumption 1, this quantity is long compared to individual periods. Physiologically, this means that the slow phase difference will see many spike periods during the time it converges. Therefore, one may apply the method of averaging and define the time-averaged phase as

Equation (18)

Its time evolution is governed by

Equation (19)

Equation (20)

Equation (21)

Equation (22)

The integral is a T1-periodic function of time. Now define

Equation (23)

For large N the error in the prefactor introduced by substituting N for $T_2/(T_2-T_1)$ vanishes and one has

Equation (24)

Equation (25)

The last term is of order $\mathcal O(1/N)$ and vanishes as the time scale separation increases. Together we have

Equation (26)

Equation (27)

Applying integration by substitution with $\phi(t) = ft$, the integral can be rewritten as

Equation (28)

The averaged slow phase, hence, evolves as

Equation (29)

Equation (30)

If the two frequencies are equal, one recovers that the phase difference evolves according to the odd part of the PRC.

Acknowledgments

We thank the organizers of the Capo Caccia Cognitive Neuromorphic Engineering Workshop for the inspiring meeting, where the idea for this project was born. We are grateful for the technical support by the BrainScaleS-2 support team, especially Joscha Ilmberger, Yannik Stradmann and Julian Göltz. We also thank Moritz Drangmeister and Robert P Gowers for their conceptual and technical contributions to the prototype of this project at the Capo Caccia Workshop.

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 864243) and from the Einstein Foundation Berlin (EP-2021-621).

Data availability statement

All results presented in this article can be reproduced via the code published at https://itbgit.biologie.hu-berlin.de/computational_neurophysiology/qualitative-switches-in-single-neuron-spike-dynamics-on-neuromorphic-hardware.
