
Weakly nonlinear response of noisy neurons

Sergej O Voronenko and Benjamin Lindner

Published 28 March 2017 © 2017 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft
Citation: Sergej O Voronenko and Benjamin Lindner 2017 New J. Phys. 19 033038. DOI: 10.1088/1367-2630/aa5b81


Abstract

We calculate the instantaneous firing rate of a stochastic integrate-and-fire neuron driven by an arbitrary time-dependent signal up to second order in the signal amplitude. For cosine signals, this weakly nonlinear theory reveals: (i) a frequency-dependent change of the time-averaged firing rate reminiscent of frequency locking in deterministic oscillators; (ii) higher harmonics in the rate that may exceed the linear response; (iii) a strong nonlinear response to two cosines even when the response to a single cosine is linear. We also measure the second-order response numerically for a neuron model with excitable voltage dynamics and channel noise, and find a strong similarity to the second-order response that we obtain analytically for the leaky integrate-and-fire model. Finally, we illustrate how the transition of neural dynamics from the linear rate response regime to a mode-locking regime is captured by the second-order response. Our results highlight the importance and robustness of the weakly nonlinear regime in neural dynamics.


Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Linear response theory is a successful tool for analyzing how a nonlinear dynamical system reacts to external perturbations [1]. It was originally developed in physics but has also been applied in various other fields [2, 3]. The response to time-dependent signals is of particular importance for nerve cells (neurons) because the response properties of these highly nonlinear and noisy elements determine the way that our brain perceives sensory stimuli, makes decisions, and generates behavior. The linear response of neurons has been measured in experiments [4–6] and calculated for simple models [7–9]; it has been used to explore, for instance, transmission of fast signals [10, 11] and the emergence of stable activity in recurrent networks [12].

Despite its success, linear response theory inherently cannot fully characterize the behavior of nonlinear systems and, consequently, extensions of the theory to second and higher orders have been studied, e.g., in nonlinear optics [13–15] and magnetic particle imaging [16]. In neuroscience, the second-order response of nerve cells has been measured experimentally [17–20], used in the weakly nonlinear analysis of network stability [21], and calculated for a simple Poisson neuron model [22]. However, a complete theory for the second-order response for a neuron model with explicit voltage dynamics and an understanding of the significance of the second-order response for biological neurons is still missing.

In this paper, we analytically derive the second-order response of a noisy integrate-and-fire neuron to a general time-dependent external signal. Calculated in this way, the rate yields interesting single-neuron statistics that can be compared with experimental data; it can also be regarded as the instantaneous population rate of a group of uncoupled neurons subject to the same external drive. Hence, our results are relevant for population coding of time-dependent signals. We illustrate the significance of the weakly nonlinear response using signals consisting of one or two cosine functions. By means of extensive numerical simulations, we verify that our results are valid for a broad range of noise intensities.

2. Methods

2.1. LIF model

We study the leaky integrate-and-fire (LIF) model in which the subthreshold voltage $v(t)$ obeys

Equation (1): $\dot{v}=-v+\mu +\varepsilon s(t)+\sqrt{2D}\,\xi (t)$

Whenever $v(t)$ hits the threshold ${v}_{T}=1$, we register a spike time (figures 1(b), (c)) and set $v(t)$ to ${v}_{R}=0$ for an absolute refractory period τ. Time is measured in units of the membrane time constant and the voltage in multiples of the distance between reset voltage and threshold voltage (for details, see [23]). The voltage dynamics are subject to a constant input current μ and to white Gaussian noise $\xi (t)$ with zero mean and autocorrelation $\langle \xi (t)\xi (t+{t}^{\prime })\rangle =2D\delta ({t}^{\prime })$. The noise represents intrinsic fluctuations (e.g., channel noise) or external background input from other neurons in the network [24]. The strength of the driving signal $\varepsilon s(t)$ (figure 1(a)) is quantified by the small parameter $\varepsilon $. We consider the instantaneous firing rate $r(t)$, i.e., the probability density of a spike at time t. The rate can be obtained numerically by averaging N trials of the spike train (figure 1(c)). For this, we divide the fraction of trials that have fired in an interval $[t,t+{\rm{\Delta }}t]$ by the size of the interval ${\rm{\Delta }}t$ (depending on parameters, we used $N={10}^{8}\mbox{--}{10}^{9}$). The rate can also be determined analytically from a perturbation approach (see below). Already for small $\varepsilon $, the rate can exhibit a strongly nonlinear response to the signal as can be seen from the higher harmonic in figure 1(d).

Figure 1.

Figure 1. Nonlinear response of the LIF model. (a) Cosine signal with frequency $f=0.215;$ (b) membrane potential; (c) spike times for different trials (those for (b) are shown in orange); (d) instantaneous firing rate from simulations (red), linear theory (dashed line), and nonlinear theory (solid black line) given in equation (8). Parameters: $\mu =1.1$, $D=0.001$, $\tau =0$, $\varepsilon =0.05$.


The voltage $v(t)$ in equation (1) is a stochastic process whose probability density, $P(v,t)$, obeys the Fokker–Planck equation [3]

Equation (2): ${\partial }_{t}P(v,t)={\hat{L}}_{0}P(v,t)-\varepsilon s(t)\,{\partial }_{v}P(v,t)+r(t-\tau )\,\delta (v-{v}_{R})$

with the operator ${\hat{L}}_{0}={\partial }_{v}(v-\mu +D{\partial }_{v})$ and appropriate boundary conditions (more details in the appendix). The solution of equation (2) in principle allows us to fully determine the density $P(v,t)$ and the time-dependent firing rate $r(t)$, but in practice the equation can only be solved via a perturbation approach.
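For the unperturbed problem ($\varepsilon =0$), however, the stationary solution is straightforward to obtain numerically: integrating the stationary version of equation (2) backward from the threshold (where the probability flux equals the firing rate) yields the density and the rate. The sketch below uses our own discretization and variable names; it is not the scheme used in the paper.

```python
import numpy as np

def stationary_lif(mu=1.1, D=0.001, v_T=1.0, v_R=0.0, v_min=-3.0, dv=1e-4):
    """Stationary density P0(v) and rate r0 of the unperturbed LIF model.
    The stationary probability flux equals r0 between reset and threshold and
    vanishes below the reset; setting the flux to 1, we integrate
    D*q'(v) = -(v - mu)*q(v) - J(v) downward from q(v_T) = 0 and recover r0
    from the normalization of the density (refractory period tau = 0)."""
    v = np.arange(v_T, v_min, -dv)
    q = np.zeros_like(v)                   # unnormalized density
    for i in range(len(v) - 1):
        J = 1.0 if v[i] > v_R else 0.0     # unit flux between reset and threshold
        dq_dv = (-(v[i] - mu) * q[i] - J) / D
        q[i + 1] = q[i] - dv * dq_dv       # step downward in v
    r0 = 1.0 / (q.sum() * dv)              # normalization fixes the rate
    return r0, v, r0 * q

r0, v, P0 = stationary_lif()
print("r0 =", round(r0, 3))                # about 0.42 for mu=1.1, D=0.001
```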

2.2. Izhikevich model with ion channel noise

We also perform numerical simulations of a model of excitable membrane dynamics that was proposed by Izhikevich [25] and is similar to the two-dimensional Morris–Lecar model [26]. Following reference [27], the model is endowed with channel noise, which arises from the finite number of potassium-selective channels. The voltage dynamics of the Izhikevich model are given by

Equation (3): $C\,\dot{v}={I}_{0}+\varepsilon s(t)-{I}_{L}-{I}_{{Na}}-{I}_{K}$

where I0 is a constant input current, IL is a passive leak current, INa is a deterministic 'persistent sodium' current, and IK is a potassium current gated by the number of open potassium channels $0\leqslant n\leqslant N$. The number of open channels $n(t)$ constitutes a time-dependent stochastic process with voltage-dependent per capita transition rates ${\rho }_{o}(v)$ for channel opening and ${\rho }_{c}(v)$ for channel closing. In our simulations, we consider a potassium current with a high threshold and standard parameters: passive leak current ${I}_{L}={g}_{L}(v-{E}_{L})$ with ${g}_{L}=8\,{\rm{mS}}/{{\rm{cm}}}^{2}$ and ${E}_{L}=-80\,{\rm{mV}}$; 'persistent sodium' current ${I}_{{Na}}={g}_{{Na}}{m}_{\infty }(v)(v-{E}_{{Na}})$ with ${g}_{{Na}}=20\,{\rm{mS}}/{{\rm{cm}}}^{2}$, ${E}_{{Na}}=60\,{\rm{mV}}$, and voltage-dependent activation ${m}_{\infty }(v)=1/\{1+{\rm{\exp }}[(-20\,{\rm{mV}}-v)/15\,{\rm{mV}}]\};$ potassium current ${I}_{K}={g}_{K}(n/N)(v-{E}_{K})$ with ${g}_{K}=10\,{\rm{mS}}/{{\rm{cm}}}^{2}$, ${E}_{K}=-90\,{\rm{mV}}$, and a total number of channels N = 100. The membrane capacitance is $C=1\,\mu {\rm{F}}/{{\rm{cm}}}^{2}$. The per capita transition rate for channel opening is ${\rho }_{o}(v)=1/\{1+{\rm{\exp }}[(-25\,{\rm{mV}}-v)/5\,{\rm{mV}}]\}$ and for closing is ${\rho }_{c}(v)=1-{\rho }_{o}(v)$.
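A minimal simulation sketch of this model is given below. The integration details are our own assumptions and are not specified in the paper: millisecond units for the transition rates, a simple Euler step for the voltage, binomial channel updates per time step, and spike detection by upward crossings of 0 mV.

```python
import numpy as np

# Parameters as given in the text (mS/cm^2, mV, uF/cm^2)
C, N_chan = 1.0, 100
g_L, E_L = 8.0, -80.0
g_Na, E_Na = 20.0, 60.0
g_K, E_K = 10.0, -90.0

m_inf = lambda v: 1.0 / (1.0 + np.exp((-20.0 - v) / 15.0))   # Na activation
rho_o = lambda v: 1.0 / (1.0 + np.exp((-25.0 - v) / 5.0))    # opening rate
rho_c = lambda v: 1.0 - rho_o(v)                             # closing rate

def simulate_nak(I0=5.0, eps=0.3, f=0.01, T=1000.0, dt=0.01, seed=2):
    """One realization of equation (3) with stochastic potassium channels:
    Euler step for v, binomial opening/closing of the N_chan channels."""
    rng = np.random.default_rng(seed)
    v, n = -65.0, 0
    spikes = []
    for i in range(int(T / dt)):
        t = i * dt
        I_ion = (g_L * (v - E_L) + g_Na * m_inf(v) * (v - E_Na)
                 + g_K * (n / N_chan) * (v - E_K))
        v_new = v + dt / C * (I0 + eps * np.cos(2.0 * np.pi * f * t) - I_ion)
        # each closed channel opens with prob. rho_o*dt, each open one closes
        # with prob. rho_c*dt (valid for small dt)
        n += rng.binomial(N_chan - n, rho_o(v) * dt) - rng.binomial(n, rho_c(v) * dt)
        if v < 0.0 <= v_new:                 # upward crossing of 0 mV = spike
            spikes.append(t)
        v = v_new
    return np.array(spikes)

print(simulate_nak().size, "spikes")
```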

2.3. Weakly nonlinear response theory

For weak signals ($\varepsilon \ll 1$), the firing rate $r(t)$ can be approximated by the first terms of a Volterra series [28]

Equation (4): $r(t)={r}_{0}+\varepsilon \int {{\rm{d}}t}_{1}\,{K}_{1}({t}_{1})\,s(t-{t}_{1})+{\varepsilon }^{2}\iint {{\rm{d}}t}_{1}\,{{\rm{d}}t}_{2}\,{K}_{2}({t}_{1},{t}_{2})\,s(t-{t}_{1})\,s(t-{t}_{2})$

Here, ${r}_{0}={\left[\tau +\sqrt{\pi }{\int }_{(\mu -{v}_{T})/\sqrt{2D}}^{(\mu -{v}_{R})/\sqrt{2D}}{dx}\,{e}^{{x}^{2}}\mathrm{erfc}(x)\right]}^{-1}$ is the unperturbed firing rate (i.e., for $\varepsilon =0$), and K1 and K2 in equation (4) are the first- and second-order response kernels, respectively. It is generally more convenient to use equation (4) in the frequency domain:

Equation (5)

where the response functions ${\chi }_{1}$ and ${\chi }_{2}$ are defined as the Fourier transforms of the respective kernels

Equation (6)

The linear response function ${\chi }_{1}(\omega )$ for the leaky integrate-and-fire model has been calculated in [8, 9]; in terms of the parabolic cylinder functions ${{ \mathcal D }}_{k}(x)$ [29], it reads [8]

Equation (7)

with ${\rm{\Delta }}=[{v}_{R}^{2}-{v}_{T}^{2}+2\mu ({v}_{T}-{v}_{R})]/(4D)$.
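The unperturbed rate ${r}_{0}$ quoted above can be evaluated directly; a minimal sketch using scipy (our implementation, using the scaled complementary error function to avoid overflow):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx   # erfcx(x) = exp(x**2)*erfc(x)

def r0(mu, D, v_T=1.0, v_R=0.0, tau=0.0):
    """Unperturbed LIF firing rate from the mean first-passage-time formula."""
    integral, _ = quad(erfcx, (mu - v_T) / np.sqrt(2.0 * D),
                       (mu - v_R) / np.sqrt(2.0 * D))
    return 1.0 / (tau + np.sqrt(np.pi) * integral)

print(round(r0(1.1, 0.001), 3))   # suprathreshold regime, about 0.42
print(round(r0(0.9, 0.005), 3))   # subthreshold regime, about 0.14
```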

To illustrate the second-order effects that are quantified by ${\chi }_{2}$, we consider the sum of two cosines as an input signal, $s(t)=\alpha \cos ({\omega }_{1}t)+\beta \cos ({\omega }_{2}t+\phi )$, with a relative phase difference ϕ. Inserting this into equation (4), expressing the cosine functions by complex exponentials, identifying the integrals as the respective susceptibilities, and neglecting terms beyond second order in $\varepsilon $ yields

Equation (8)

where ${\phi }_{1}$ and ${\phi }_{2}$ denote the complex arguments of the first- and second-order response functions, respectively. The underlined terms represent the time-averaged firing rate, which is affected by the cosine signals at second order, as quantified by ${\chi }_{2}(\omega ,-\omega )$. By time averaging, we mean integration over a sufficiently long time window and division by the time window. The time-averaged firing rate obtained in this way will contain only contributions from the time-independent terms in equation (8), while each of the time-dependent terms will average out to zero. The bracket ${[...]}_{{LR}}$ encloses the well-known linear response terms at the driving frequencies (the ground modes of the response), which are quantified by ${\chi }_{1}(\omega )$. Higher harmonics of each driving frequency (terms that oscillate twice as fast as the driving oscillations) are contained in ${(...)}_{{HH}}$ and are quantified by ${\chi }_{2}(\omega ,\omega )$. Last but not least, nonlinear interactions of the two signals result in a mixed response (terms in $\{...\}{}_{{MR}}$) at sums and differences of the two signal frequencies, quantified by ${\chi }_{2}({\omega }_{1},\pm {\omega }_{2})$. Note that if the signal consists of two cosines with equal frequencies (${\omega }_{1}={\omega }_{2}$), the last term in $\{...\}{}_{{MR}}$ will be time-independent and will contribute to the time-averaged firing rate.
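For reference, the structure of equation (8) described above can be written out explicitly. The following is our reconstruction from this description and from the amplitude factors quoted in the caption of figure 2 (the signs with which the phases enter depend on the Fourier convention of equation (6)); here $\phi_1(\omega)=\arg \chi_1(\omega)$ and $\phi_2(\omega_1,\omega_2)=\arg \chi_2(\omega_1,\omega_2)$:

$$\begin{aligned} r(t)\approx{}& \underline{r_0+\tfrac{\varepsilon^2}{2}\left[\alpha^2\chi_2(\omega_1,-\omega_1)+\beta^2\chi_2(\omega_2,-\omega_2)\right]}\\ &+\varepsilon\Big[\alpha|\chi_1(\omega_1)|\cos\big(\omega_1 t+\phi_1(\omega_1)\big)+\beta|\chi_1(\omega_2)|\cos\big(\omega_2 t+\phi+\phi_1(\omega_2)\big)\Big]_{LR}\\ &+\tfrac{\varepsilon^2}{2}\Big(\alpha^2|\chi_2(\omega_1,\omega_1)|\cos\big(2\omega_1 t+\phi_2(\omega_1,\omega_1)\big)+\beta^2|\chi_2(\omega_2,\omega_2)|\cos\big(2\omega_2 t+2\phi+\phi_2(\omega_2,\omega_2)\big)\Big)_{HH}\\ &+\varepsilon^2\alpha\beta\Big\{|\chi_2(\omega_1,\omega_2)|\cos\big[(\omega_1+\omega_2)t+\phi+\phi_2(\omega_1,\omega_2)\big]+|\chi_2(\omega_1,-\omega_2)|\cos\big[(\omega_1-\omega_2)t-\phi+\phi_2(\omega_1,-\omega_2)\big]\Big\}_{MR} \end{aligned}$$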

2.4. Numerical measurement of the first- and second-order response functions

The first- and second-order response functions can also be determined from numerical simulations (here applied to the Izhikevich model) or from experimental measurements of neural spike trains in response to a sum of two cosine stimuli with $\alpha =\beta =1$, $\phi =0$, and ${\omega }_{1}\gt {\omega }_{2}$. To this end, we consider the Fourier transform of equation (8) (here only for $\omega \gt 0$)

Equation (9)

To determine the response functions numerically, we first estimate the time-dependent firing rate by simulating N realizations of a spike train and dividing the fraction of trials that have fired in an interval $[t,t+{\rm{\Delta }}t]$ by the size of the interval ${\rm{\Delta }}t$. Then, by performing a discrete Fourier transform of the estimated rate, we obtain $\tilde{r}$; finally, we determine the numerical estimates of the response functions by inverting equation (9):

Equation (10)

Equation (11)

Equation (12)

where we neglected higher-order terms in $\varepsilon $. Employing the symmetry relations for ${\chi }_{2}({\omega }_{1},{\omega }_{2})$ that we explain below, we can extract the second-order response function for all frequency arguments.
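A sketch of this estimation in Python is given below (our implementation; the normalization factors follow from the reconstruction of equation (8) sketched in section 2.3 and should be checked against equations (10)–(12); the time window is assumed to contain an integer number of periods of all relevant frequencies).

```python
import numpy as np

def fourier_coeff(r, t, f):
    """Complex amplitude c(f) = (2/T) * sum_t r(t)*exp(-2j*pi*f*t)*dt, such that
    a component A*cos(2*pi*f*t + psi) of r(t) yields c(f) = A*exp(1j*psi)."""
    dt = t[1] - t[0]
    return 2.0 * np.sum(r * np.exp(-2j * np.pi * f * t)) * dt / (t[-1] + dt)

def estimate_response(r, t, eps, f1, f2):
    """Read off the response functions from the rate estimated for the probe
    signal s(t) = cos(2*pi*f1*t) + cos(2*pi*f2*t) (alpha = beta = 1, phi = 0)."""
    chi1 = fourier_coeff(r, t, f1) / eps                    # ground mode, chi1(w1)
    chi2_hh = 2.0 * fourier_coeff(r, t, 2 * f1) / eps**2    # chi2(w1, w1)
    chi2_sum = fourier_coeff(r, t, f1 + f2) / eps**2        # chi2(w1, w2)
    chi2_diff = fourier_coeff(r, t, f1 - f2) / eps**2       # chi2(w1, -w2)
    return chi1, chi2_hh, chi2_sum, chi2_diff
```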

3. Results

3.1. Weakly nonlinear response of the LIF model

Using a perturbation ansatz for $r(t)$ and $P(v,t)$ in equation (2), we obtain a hierarchy of equations (details in the appendix) that determine the second-order response of the LIF model (see [21] for the treatment of a similar problem). Solving this hierarchy with the appropriate boundary conditions, we arrive at the Fourier transform of the second-order response kernel

Equation (13)

The function ${\chi }_{2}$ obeys the symmetries ${\chi }_{2}({\omega }_{1},{\omega }_{2})={\chi }_{2}({\omega }_{2},{\omega }_{1})$ and ${\chi }_{2}({\omega }_{1},{\omega }_{2})={\chi }_{2}^{* }(-{\omega }_{1},-{\omega }_{2})$ (where the asterisk denotes the complex conjugate), which implies in particular that ${\chi }_{2}(\omega ,-\omega )$ is real. These symmetries become apparent in figure 2, where we plot the absolute value and the complex argument for two distinct firing regimes of the LIF model [23]. Values along the white solid diagonal (in figures 2(a), (b), (c), (d)) quantify the second-order contribution of a cosine signal to the mean firing rate (see underlined terms in equation (8)), values along the white dashed diagonal quantify the second-order contributions of a single cosine to higher harmonics in the output (see terms contained in ${(...)}_{{HH}}$ in equation (8)), and the area off the white diagonals quantifies the nonlinear interactions of two cosine signals (see terms contained in $\{...\}{}_{{MR}}$ in equation (8)). Values of the second-order response function along the black lines in figures 2(a) and (c) (${\chi }_{2}(\omega ,0)={\chi }_{2}(0,\omega )$) are proportional to the derivative of the linear response function w.r.t. the base current, $d{\chi }_{1}(\omega )/d\mu $ (see Appendix B for details). For a subthreshold base current $\mu \lt 1$, noise is required to reach the threshold and spike trains are typically irregular [interspike intervals have a high coefficient of variation (CV)], whereas, for a suprathreshold base current $\mu \gt 1$, the model exhibits sustained firing even in the absence of noise and spike trains are more regular (the CV is low). In the subthreshold regime (${r}_{0}\approx 0.14$, ${CV}\approx 0.4$), the dependence of ${\chi }_{2}$ on the two frequencies is limited to two broad peaks that for our choice of parameters are located at ${f}_{1}={f}_{2}\approx \pm 0.2$ (figures 2(a), (b)). In the suprathreshold regime (${r}_{0}\approx 0.42$, ${CV}\approx 0.02$), the second-order response displays a rather different shape in the form of triangular structures (figures 2(c), (d)).
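The bookkeeping implied by these symmetries can be made explicit; a small helper (our sketch) that extends values measured in the quadrant $\omega_1,\omega_2\geqslant 0$ to the opposite quadrant:

```python
import numpy as np

def extend_by_symmetry(chi2_pp):
    """Extend chi2, given on a grid with omega_1, omega_2 >= 0 (index 0 = zero
    frequency), to negative frequencies using chi2(w1, w2) = chi2(w2, w1) and
    chi2(-w1, -w2) = conj(chi2(w1, w2)). Mixed-sign arguments chi2(w1, -w2)
    must be measured separately (difference-frequency response) and stay NaN."""
    n = chi2_pp.shape[0]
    sym = 0.5 * (chi2_pp + chi2_pp.T)                 # enforce exchange symmetry
    full = np.full((2 * n - 1, 2 * n - 1), np.nan + 0j)
    c = n - 1                                         # index of zero frequency
    full[c:, c:] = sym                                # (+, +) quadrant
    full[:c + 1, :c + 1] = np.conj(sym[::-1, ::-1])   # (-, -) quadrant
    return full
```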

Figure 2.

Figure 2. (a), (c) Amplitude and (b), (d) complex argument of the second-order response function (equation (13)) as a function of the two frequency arguments. (a) and (b) show the subthreshold regime with $\mu =0.9$, $D=0.005$, and $\tau =0$. (c) and (d) show the suprathreshold regime with $\mu =1.1$, $D=0.001$, and $\tau =0$. Note the different scales in (a) and (c). (e) Amplitude of the ground mode $\varepsilon \alpha | {\chi }_{1}(\omega )| $ (diamonds) and amplitude of the second harmonic $(1/2){\varepsilon }^{2}{\alpha }^{2}| {\chi }_{2}(\omega ,\omega )| $ (circles) for a single cosine ($\alpha =1$, $\beta =0$, $\varepsilon =0.05$) in the subthreshold regime. (f) Same as figure 2(e) but for the suprathreshold regime. Nonlinear response depends on signal frequency and can sometimes even exceed the linear response. The theory (black) in figures 2(e) and (f) is obtained by evaluating equations (7) and (13). Note that the frequency dependence of the amplitude of the second harmonic in figures 2(e) and (f) can be obtained by looking at the dashed white diagonal in figures 2(a) and (c).


Because the magnitude of ${\chi }_{2}$ is much larger in the suprathreshold regime than in the subthreshold regime (see figures 2(a), (c)), we mainly focus our subsequent analysis on the suprathreshold regime and proceed to discuss quantitatively the nonlinear effects of the sum of cosine signals on higher harmonics, the time-averaged firing rate, and the mixed response.

3.2. Excitation of harmonic oscillations

We compare the amplitude of the harmonic oscillation to the amplitude of the ground mode (orange circles and blue diamonds, respectively, in figure 2(e)) that is excited by a single cosine ($\alpha =1$, $\beta =0$ in equation (8)). The amplitude of the second harmonic exhibits one pronounced peak at ${f}_{1}\approx 0.42$ (at the unperturbed firing rate) and another at ${f}_{1}\approx 0.21$ (half the unperturbed firing rate), where it even slightly exceeds the amplitude of the ground mode, implying that for these frequencies the excitation of the harmonic oscillation is most pronounced. This is exactly the case that we illustrated in figure 1. The linear response theory is in this case particularly misleading because the higher harmonic is stronger than the ground mode. Note that the theory put forward by Ostojic and Brunel [30], which is based on the incorporation of a static nonlinearity (linear-nonlinear model), cannot capture the dominance of the higher harmonics for certain values of the driving frequency. In fact, for the parameter set used in figure 1, the Ostojic/Brunel theory yields practically the same curve as the linear response (not shown) because the stationary firing rate (i.e., the static nonlinearity) is essentially linear over the relevant range of signal values. The observed nonlinear effect in our theory and in the simulations arises from the true dynamic second-order response to signals with a time scale that is comparable to the time scale of the system (inverse firing rate).

3.3. Change of the time-averaged firing rate

The time-averaged firing rate does not change in linear response. Second-order effects for a single cosine signal ($\alpha =1$, $\beta =0$ in equation (8)) are illustrated in figure 3 for the two different firing regimes of the model.

Figure 3.

Figure 3. Weakly nonlinear effects of a sinusoidal stimulation. (a) Nonlinear change of the time-averaged firing rate versus signal frequency for a single cosine ($\alpha =1$, $\beta =0$) in the subthreshold regime ($\mu =0.9$, $D=0.005$, $\tau =0$). Simulations (symbols) are compared to theory (solid lines, underlined terms in equation (8)) for $\varepsilon =0$ (diamonds), $\varepsilon =0.05$ (circles), and $\varepsilon =0.1$ (squares). (b) As in (a) but for the suprathreshold regime ($\mu =1.1$, $D=0.001$, $\tau =0$). In the suprathreshold regime, a cosine signal can both increase and decrease the time-averaged rate. Inset in (b): For a strong signal and f close to ${r}_{0}\approx 0.42$, the mean rate follows closely the input frequency (identity shown in green), reminiscent of 1:1 frequency locking. (c), (d), and (e) show nonlinear response to a sum of cosines in the suprathreshold regime ($\varepsilon =0.05$ and remaining parameters as in (b)). (c) Input signal with $\alpha =1$, $\beta =0$, and ${f}_{1}=0.1;$ (d) input signal with $\alpha =0$, $\beta =1$, and ${f}_{2}=0.33;$ (e) sum of responses to the individual signals from (c) and (d) with subtracted unperturbed rate r0 (grey) versus the response to the sum of the two signals (red). Only the nonlinear theory, equation (8) with $\alpha =\beta =1$ (black solid line), describes the response to the sum of cosines correctly.


In the subthreshold regime ($\mu =0.9$), a signal with a low or intermediate frequency can only increase the time-averaged rate (figure 3(a)). High-frequency signals have practically no influence on the rate, as expected from the low-pass nature of the LIF model. A maximum is attained at an intermediate frequency that is larger than the firing rate and depends on the specific choice of μ and D.

In the suprathreshold regime ($\mu =1.1$), the signal can both increase and decrease the rate depending on the signal frequency (figure 3(b)). While fast signals do not change the time-averaged rate, as for $\mu =0.9$, a low-frequency signal ($f\to 0$) decreases the time-averaged rate in marked contrast to the subthreshold case. The different effects of slow periodic signals on the time-averaged rate can be understood from a Taylor expansion of the firing rate. For slow signals, the time-dependent firing rate can be written as the unperturbed firing rate with a modulated base current, ${r}_{0}(\mu +\varepsilon s(t))$. A Taylor expansion then yields

Equation (14): $r(t)\approx {r}_{0}(\mu )+\varepsilon s(t)\,{r}_{0}^{\prime }(\mu )+\tfrac{1}{2}{\varepsilon }^{2}{s}^{2}(t)\,{r}_{0}^{\prime\prime }(\mu )$

For the time-averaged rate of an LIF driven by a slow cosine signal, this leads to

Equation (15): $\bar{r}\approx {r}_{0}(\mu )+\tfrac{{\varepsilon }^{2}}{4}\,{r}_{0}^{\prime\prime }(\mu )$

Hence, the change of the time-averaged rate due to periodic low-frequency signals is related to the curvature of the sigmoidal rate function ${r}_{0}(\mu )$. The curvature of this rate function is positive for the subthreshold regime (at the foot of the sigmoid) and negative for the suprathreshold regime (close to the plateau of the sigmoid).
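This relation is easy to check numerically against the rate formula from section 2.3 (a sketch using our r0 helper and a finite-difference estimate of the curvature; the sign of the predicted change distinguishes the two regimes):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx

def r0(mu, D, v_T=1.0, v_R=0.0):
    integral, _ = quad(erfcx, (mu - v_T) / np.sqrt(2.0 * D),
                       (mu - v_R) / np.sqrt(2.0 * D))
    return 1.0 / (np.sqrt(np.pi) * integral)

def slow_signal_rate_change(mu, D, eps, h=1e-2):
    """Second-order shift of the time-averaged rate for a slow cosine,
    (eps**2/4) * r0''(mu) as in equation (15), with the curvature estimated
    by a central finite difference."""
    curvature = (r0(mu + h, D) - 2.0 * r0(mu, D) + r0(mu - h, D)) / h**2
    return 0.25 * eps**2 * curvature

print(slow_signal_rate_change(0.9, 0.005, 0.1))   # positive: subthreshold regime
print(slow_signal_rate_change(1.1, 0.001, 0.1))   # negative: suprathreshold regime
```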

Another feature in the suprathreshold regime is a precursor of frequency locking, which has already been studied in the deterministic [31] and stochastic [32] versions of the LIF model. A signal with a frequency close to the model's unperturbed firing rate can entrain the model within a limited frequency range (inset in figure 3(b)). A signal with $f\gt {r}_{0}$ will speed up the firing, while a signal with $f\lt {r}_{0}$ will slow down the model. Hence, the second-order response kernel provides a link between the deterministic nonlinear behavior of the LIF model and its stochastic firing rate.

3.4. Interaction between two periodic signals

In order to study the mixed response, we now consider two cosines ($\alpha =\beta =1$ in equation (8)). For the subthreshold regime ($\mu =0.9$), the second-order response function has a pronounced peak on the white dashed diagonal (quantifying the amplitude of the higher harmonics) but only weak contributions off the white diagonals, indicating a weak mixed response (figure 2(a)). However, in the suprathreshold regime ($\mu =1.1$), the second-order response function exhibits pronounced peaks and stripes off the white diagonals, indicating a strong mixed response (figure 2(c)).

In the latter regime, we first stimulate the neuron with the sum of two cosines (figures 3(c), (d)) and record the rate response (red noisy curve in figure 3(e)). The two signal frequencies are chosen such that they lie on the off-diagonal stripe in figure 2(c), satisfying ${f}_{1}+{f}_{2}\approx {r}_{0}$. To assess the significance of the mixed response (terms indicated by $\{...\}{}_{{MR}}$ in equation (8)), we then stimulate the neuron with the two cosines separately. The grey noisy curve in figure 3(e), obtained by summing up the respective responses and subtracting the mean firing rate, r0, is well represented by the linear theory (keeping only the zeroth- and first-order terms in equation (8)), indicating that the harmonics induced by the cosines are negligible. The striking difference between the grey and red lines shows that the response to the sum of two cosines can be strongly nonlinear even in a regime where the responses to the single cosines are linear. Linear response theory in this case (dashed line) fails to grasp the modulation of the firing rate (red noisy line), both with respect to its overall amplitude and to the timing of the local minima and maxima of the rate. The weakly nonlinear theory (equation (8), solid line), in contrast, captures both of these aspects quite well. The mixed response is the strongest second-order effect we observe in the weakly nonlinear regime of the LIF model.

3.5. Weakly nonlinear response of the Izhikevich model with ion channel noise

In order to test the robustness of the results obtained for the LIF neuron model, we performed numerical simulations for a biophysically more complex neuron model with ion channel noise, equation (3), that is driven by a cosine stimulus (figure 4(a)). In contrast to the integrate-and-fire neuron, the membrane voltage (figure 4(b)) of the Izhikevich model exhibits excitability (i.e., it generates spikes) and does not rely on a fire-and-reset rule. As for the LIF model, we find that for a suprathreshold mean current (${I}_{0}\gt 4.51\,\mu {\rm{A}}/{{\rm{cm}}}^{2}$), the firing rate of the Izhikevich model exhibits pronounced higher harmonics (figure 4(d)), i.e., the response $\tilde{r}$ at $\omega =2{\omega }_{1}$ is as strong as or even stronger than the response at the driving frequency itself ($\omega ={\omega }_{1}$).

Figure 4.

Figure 4. Weakly nonlinear response of the NaK model. (a) Input signal; (b) voltage trace obtained from numerical simulations of equation (3) in the suprathreshold regime ($\varepsilon =0.3$, ${I}_{0}=5\mu {\rm{A}}/{{\rm{cm}}}^{2}$); (c) raster plot; (d) firing rate; (e) and (f) show absolute value and argument of the second-order response function for the subthreshold regime (${I}_{0}=4.4\mu {\rm{A}}/{{\rm{cm}}}^{2}$). (g) and (h) show absolute value and argument of the second-order response function for the suprathreshold regime (I0 as in figures 4(b)–(c)). Figures 4(e)–(h) were obtained from simulations with $\varepsilon =0.05$ and $\varepsilon =0.1$, employing equations (11) and (12).


The numerically measured second-order response function (figures 4(e)–(h)) shows a striking resemblance to the second-order response function that we calculated analytically for the LIF model (figure 2), indicating a high robustness of the weakly nonlinear theory for the response of spiking neurons. In particular, we observe the following similarities: the triangular structure of the second-order response amplitude (with peaks located at half the firing rate) in the suprathreshold case, the broadly peaked structure in the subthreshold case, and the overall dependence of the phase on the two frequency arguments in both regimes. However, there are also differences: (1) for the Izhikevich model, the amplitude is larger in the subthreshold than in the suprathreshold regime (the other way around for the LIF model); (2) the detailed behavior of the phase along the diagonal in the suprathreshold regime, which is almost constant for the Izhikevich model but decreases by π in the same range for the LIF model; (3) in the suprathreshold regime, the peak on the diagonal is larger than the peaks on the frequency axes. Points (1) and (2) could be due to the specific choice of parameters or could reflect more substantial differences between the two models. Note in particular that we used the same number of channels N for the two regimes for the Izhikevich model in figure 4 but different values of the noise intensity D for the LIF model in figure 2. The difference (3) could be caused by the finite-amplitude simulation from which the second-order response is estimated for the Izhikevich model. As we have also verified for the LIF model, a too-large signal amplitude $\varepsilon $ leads to an underestimation of the second-order response at the maxima on the frequency axes (not shown). For the computationally costly Izhikevich model, it is not feasible to reduce $\varepsilon $ much below the values used here.

3.6. Transition from linear response to stochastic mode locking via the weakly nonlinear regime

Let us return to the LIF model and illustrate under which circumstances the weakly nonlinear response becomes manifest in the activity of spiking neurons.

First, we consider an LIF neuron that is driven by a cosine stimulus of increasing amplitude (figure 5(a)) and is subject to a background noise of constant strength (figure 5(b)). (Because we cannot directly plot the white noise $\xi (t)$ (it possesses an infinite variance), we use in figures 5(b) and (f) a filtered version of the noise, $\eta (t)={\int }_{t}^{t+{\rm{\Delta }}t}{{dt}}^{\prime }\sqrt{2D}\xi ({t}^{\prime })$.) For weak signal amplitudes ($\varepsilon \lt 0.08$), the firing rate of the neuron (red noisy curve in figure 5(d)) is well described by the linear response theory (dashed black line). For large signal amplitudes ($\varepsilon \gt 0.15$), the firing of the neuron shows pronounced periods of silence, which become apparent from the raster plot (figure 5(c)). In this regime, which we call stochastic mode locking, the neuron fires twice within each period of the driving signal (2:1 mode locking) and noise only shifts the spike times but does not induce any skipping of firing. The transition ($0.08\lt \varepsilon \lt 0.15$) between the regimes of linear response and stochastic mode locking is well described by the second-order theory (black solid line in figure 5(d)). For $\varepsilon \gt 0.15$, the second-order theory still predicts the position of the peaks in the firing rate quite well but attains unphysiological negative values between the peaks (results not shown). The boundaries between the different regimes cannot be sharply defined; here, they are determined by computing the relative mean squared error between theory and simulation over one signal period and terminating the theoretical curves where the error exceeds 10%.
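The criterion can be stated compactly (our sketch; the precise normalization of the error is our assumption, since only a relative mean squared error with a 10% cutoff is specified):

```python
import numpy as np

def relative_mse(r_sim, r_theory):
    """Relative mean squared error between simulated and theoretical rate,
    evaluated over one signal period (arrays on the same time grid)."""
    return np.mean((r_sim - r_theory) ** 2) / np.mean(r_sim ** 2)

def theory_applies(r_sim, r_theory, cutoff=0.10):
    """Regime criterion used in figure 5: a theory curve is drawn only while
    the relative mean squared error over one period stays below the cutoff."""
    return relative_mse(r_sim, r_theory) < cutoff
```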

Figure 5.

Figure 5. In the suprathreshold regime ($\mu =1.1$, $\tau =0$), the transition from linear response to mode locking is well described via the weakly nonlinear theory for a signal of changing amplitude (left) and a constant signal amplitude but changing noise strength (right). (a) Cosine signal ($\alpha =1$, $\beta =0$, ${f}_{1}=0.21$) with increasing amplitude ($0\lt \varepsilon (t)\lt 0.18$); (b) a filtered version of the white noise, $\eta (t)={\int }_{t}^{t+{\rm{\Delta }}t}{dt}^{\prime} \sqrt{2D}\xi (t^{\prime} )$, shown at multiples of the time step ${\rm{\Delta }}t$ with ($D=0.002,{\rm{\Delta }}t=0.01$); (c) raster plot; (d) firing rate (red) with the linear theory for small signal amplitudes (dashed black line) and weakly nonlinear theory for intermediate signal amplitudes (solid black line); (e) cosine signal ($\alpha =1$, $\beta =0$, ${f}_{1}=0.21$, $\varepsilon =0.18$); (f) filtered version of the white noise, $\eta (t)={\int }_{t}^{t+{\rm{\Delta }}t}{dt}^{\prime} \sqrt{2D}\xi (t^{\prime} )$, shown at multiples of the time step ${\rm{\Delta }}t$ with decreasing noise strength ($0.014\gt D(t)\gt 0.001$, ${\rm{\Delta }}t=0.01$); (g) raster plot; (h) firing rate (red) with the linear theory for large noise intensity (dashed black line) and weakly nonlinear theory for intermediate noise intensity (solid black line). For the theoretical prediction of the rate in figures 5(a)–(d) we employed equation (8), where we updated the parameter $\varepsilon $ in every time step; for the theoretical prediction in figures 5(e)–(h), we also employed equation (8) and updated the parameter D in equations (7) and (13) in every time step.


Another transition in which the weakly nonlinear response becomes manifest is depicted in figures 5(e)–(h). Here, we consider an LIF neuron that is driven by a cosine stimulus of constant amplitude (figure 5(e)) but where the strength of the intrinsic noise decreases slowly with time (figure 5(f)). While the rate response is well described by the linear theory for large noise strengths (dashed line in figure 5(h)) and the activity exhibits 2:1 mode locking for low noise strengths, the transition between the two regimes is well captured by the second-order theory (black solid line in figure 5(h)). The transition from the linear to the nonlinear regime controlled by noise is consistent with the known linearizing effect of fluctuations [33].

4. Discussion

In this study, we derived an analytical expression for the second-order response function of the LIF model and illustrated the significance of the weakly nonlinear regime. Combined with the linear theory, our result (equation (13)) predicts the firing rate of a neuron (or, equivalently, the population rate of uncoupled neurons) for an arbitrary signal up to the second order in the signal amplitude. Since the temporal modulation of the rate is thought to be an essential channel in neural information transmission [28], understanding and predicting it theoretically for a broad range of signal frequencies is crucial.

The response of noisy systems to weak periodic stimuli can be successfully described by linear response theory. In particular in computational neuroscience, this theory has been used to investigate, for example, the influence of noise on signal transmission [8, 9, 3436], the stability of the autonomous activity in recurrent neural networks [12], and the magnitude and speed of information transmission in neural populations [10, 11]. However, in the case of strong periodic stimulation, linear response theory fails and tools from nonlinear dynamics are predominantly employed; in particular, measures of mode locking and synchronization [37]. For general nonlinear systems, some features of the transition between these extremes, linear response and mode locking, can be described by the theory of stochastic synchronization [38, 39]. However, in the context of neural oscillators, the complete transition is analytically poorly understood.

Here, we described in detail the transition between the linear-response regime and the stochastic mode-locking regime for a noisy neuron. This transition emerges in two ways: either with increasing signal amplitude or by decreasing the intrinsic noise of the neuron (both illustrated in figure 5). Both manifestations of this transition are well captured by the second-order response that we derived in this paper. We also demonstrated a number of striking features of the second-order response such as a higher harmonic that can exceed the ground mode (figure 1), or the highly nonlinear response to a sum of cosine signals (figure 3(c)). Beyond the analytically tractable LIF model with white Gaussian current noise, we also measured numerically the second-order response function in a biophysically more realistic spiking neuron model with channel noise. We found the features of this response function to be very similar to those calculated for the LIF model. Given the differences in the noise model (discrete multiplicative temporally correlated noise instead of additive uncorrelated Gaussian noise) and the differences in the neural dynamics, this similarity is surprising. It suggests that the discussed features do not hinge on the particularities of the model and are, thus, general.

Many papers [4, 40–45] have studied the response of biological neurons to periodic stimulation in different firing regimes, and it is crucial to find a theoretical description that can capture the regimes' response properties. We expect the weakly nonlinear regime to be relevant in experimental preparations where neurons are subject to weak noise, e.g., in vitro where a small-amplitude periodic current stimulation is applied and where channel noise is the main source of fluctuations. However, the weakly nonlinear response may also be relevant in situations where network fluctuations are weak, e.g., in networks with very low firing rates [46] and where network mechanisms lead to the emergence of either one or multiple global oscillations [47].

Possible extensions of the framework presented here are the application of the Richardson method [48] for an efficient numerical estimation of higher-order response functions, the computation of the second-order response for neural models with synaptic filtering [10, 36], and the inclusion of finite synaptic potentials [49].

Acknowledgments

This work was supported by the BMBF (FKZ: 01GQ1001A) and the DFG (research training group GRK 1589/2).

Appendix A.: Calculation of the second-order response function

Here, we present the calculation of the second-order response function for an LIF neuron with white Gaussian noise. For completeness, we also sketch the first-order response (see [50] for a detailed presentation or [36] for an alternative approach to the perturbation calculation). The nonlinear response to a single cosine signal (an important part of, but not the complete, second-order response) was already calculated by Brunel and Hakim in [21].

The voltage $v(t)$ in equation (1) is a stochastic process whose probability density, $P(v,t)$, obeys the Fokker–Planck equation [3]

Equation (A.1): ${\partial }_{t}P(v,t)={\hat{L}}_{0}P(v,t)-\varepsilon s(t)\,{\partial }_{v}P(v,t)+r(t-\tau )\,\delta (v-{v}_{R})$

with the operator ${\hat{L}}_{0}={\partial }_{v}(v-\mu +D{\partial }_{v})$ and an absorbing boundary at the threshold, which leads to the boundary conditions $P({v}_{T},t)=0$ and $-{{DP}}^{\prime }({v}_{T},t)=r(t)$, where the prime denotes a partial derivative with respect to v (see [50]). Additionally, the density is continuous everywhere, integrable, and normalized to 1. For the computation of the first- and second-order response functions in equations (4) and (8), we choose a signal that consists of a sum of cosines. Note that, in general, the first- and second-order response functions describe the response to signals with arbitrary time dependence. The signal here (a sum of two cosines with different frequencies), however, is chosen because it is amenable to an analytic calculation and still completely probes the second-order response properties of the neuronal dynamics (i.e., it allows computation of the response function ${\chi }_{2}$ for all possible combinations of frequency arguments). This is also the main difference from a similar calculation performed for the second-order response function [21] where, due to a different choice of probing signal, the mixed response (see equation (8)) is not present and hence the second-order theory is incomplete. With $s(t)=\alpha \cos ({\omega }_{1}t+{\varphi }_{1})+\beta \cos ({\omega }_{2}t+{\varphi }_{2})$, the Volterra expansion of the firing rate (equation (4)) can be written as

Equation (A.2)

where $c.c.$ denotes the complex conjugate. In analogy to equation (A.2), we make the following ansatz for the density $P(v,t)$:

Equation (A.3)

Inserting equations (A.2) and (A.3) into equation (A.1) and sorting out the terms with the corresponding time dependence, we obtain the following hierarchy of equations:

Equation (A.4)

Equation (A.5)

Equation (A.6)

Dropping the frequency arguments, the corresponding boundary conditions read

Equation (A.7)

Equation (A.8)

Additionally, every Pk is continuous everywhere. The zeroth-order equation, equation (A.4), with the respective boundary conditions can be integrated twice to yield a result for ${P}_{0}(v)$. Enforcing the normalization condition for the zeroth-order density ($\int {{dvP}}_{0}(v)+{r}_{0}\tau =1$), one can determine r0. The well-known result [51, 52] reads

Equation (A.9): ${r}_{0}={\left[\tau +\sqrt{\pi }{\int }_{(\mu -{v}_{T})/\sqrt{2D}}^{(\mu -{v}_{R})/\sqrt{2D}}{dx}\,{e}^{{x}^{2}}\mathrm{erfc}(x)\right]}^{-1}$

For the solution of the first- and second-order equations, equations (A.5) and (A.6), we introduce the notation

Equation (A.10)

where we again drop the frequency arguments for notational simplicity and consider the homogeneous equation

Equation (A.11)

which, after enforcing continuity everywhere and using the boundary condition, equation (A.7), is solved by the Green's function

Equation (A.12)

Here,

Equation (A.13)

are parabolic cylinder functions [29] and linearly independent solutions of

Equation (A.14)

The factor A in equation (A.13) is chosen such that

Equation (A.15)

The relations in equations (A.10) and (A.12) can now be employed to find solutions of the inhomogeneous equations (A.5) and (A.6) and, after exploiting the boundary condition, equation (A.8), and the relation, equation (A.15), we can find explicit expressions for the first- and second-order response functions

Equation (A.16)

Equation (A.17)

First, for the sake of simplicity, we employ the notation $f(v,\omega )={e}^{\tfrac{{(v-\mu )}^{2}-{({v}_{T}-\mu )}^{2}}{4D}}{{ \mathcal D }}_{i\omega }\left(\tfrac{\mu -v}{\sqrt{D}}\right)$. Secondly, we note that the function f obeys the relation

Equation (A.18)

which can be verified by insertion of the parabolic cylinder function. Finally, by partial integration, employing the differential equation equation (A.4) with the respective boundary conditions, assuming that the functions P0 and f decay sufficiently fast for $v\to -\infty $, and employing the relation equation (A.18), we find

Equation (A.19)

Equation (A.20)

Equation (A.21)

Using equations (A.19)–(A.21) and exploiting some properties of the parabolic cylinder functions [29], we can simplify equations (A.16) and (A.17) to obtain the well-known result for the first-order response function (equation (7)) and the main result of this paper, the second-order response function (equation (13)).

Appendix B.: Response to the sum of fast and slow cosine

To gain a better interpretation of the second-order response function, we consider a signal consisting of a sum of two cosines, $s(t)=\alpha \cos ({\omega }_{1}t)+\beta \cos ({\omega }_{2}t+\phi )$, where we take ${\omega }_{2}\to 0$. In this limit, the time-dependent term $\beta \cos ({\omega }_{2}t+\phi )$ can be interpreted as a modulation of the constant base current μ in equation (1), such that the response to the sum of two cosines can be reduced to a response to a single cosine (equivalent to equation (8) with $\beta =0$) but with a time-dependent parameter ${\mu }_{{sig}}(t)=\mu +\varepsilon \beta \cos ({\omega }_{2}t+\phi )$:

Equation (B.1)

Expanding the time-dependent functions in equation (B.1) with respect to $\varepsilon $ and neglecting terms beyond second order yields

Equation (B.2)

Comparing equation (8) in the limit ${\omega }_{2}\to 0$ with equation (B.2), we find

Equation (B.3)

Equation (B.4)
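Matching the terms of equation (B.2) with equation (8) in the limit ${\omega }_{2}\to 0$ then suggests relations of the following form (our sketch; the precise prefactors depend on the conventions fixed in equations (6) and (8)):

$$\chi_1(\omega)\Big|_{\omega\to 0}=\frac{{\rm d}r_0}{{\rm d}\mu},\qquad \chi_2(\omega,0)=\chi_2(0,\omega)=\frac{1}{2}\,\frac{{\rm d}\chi_1(\omega)}{{\rm d}\mu},\qquad \chi_2(0,0)=\frac{1}{2}\,\frac{{\rm d}^2 r_0}{{\rm d}\mu^2},$$

which is consistent with the statement in section 3.1 that $\chi_2(\omega,0)$ is proportional to ${\rm d}\chi_1(\omega)/{\rm d}\mu$.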
