EDITORIAL

Special issue on applied neurodynamics: from neural dynamics to neural engineering


Published under licence by IOP Publishing Ltd
Citation: Hillel J Chiel and Peter J Thomas 2011 J. Neural Eng. 8 060201. DOI: 10.1088/1741-2552/8/6/060201


Abstract

Tracing technologies back in time to their scientific and mathematical origins reveals surprising connections between the pure pursuit of knowledge and the opportunities afforded by that pursuit for new and unexpected applications. For example, Einstein's desire to eliminate the disparity between electricity and magnetism in Maxwell's equations impelled him to develop the special theory of relativity (Einstein 1922, p 41: 'The advance in method arises from the fact that the electric and magnetic fields lose their separate existences through the relativity of motion. A field which appears to be purely an electric field, judged from one system, has also magnetic field components when judged from another inertial system.'). His conviction that there should be no privileged inertial frame of reference (Einstein 1922, p 58: 'The possibility of explaining the numerical equality of inertia and gravitation by the unity of their nature gives to the general theory of relativity, according to my conviction, such a superiority over the conceptions of classical mechanics, that all the difficulties encountered must be considered as small in comparison with this progress.') further impelled him to adopt the non-Euclidean geometry originally developed by Riemann and others as a purely hypothetical alternative to classical geometry as the foundation for the general theory of relativity. Nowadays, anyone who depends on a global positioning system—which now includes many people who own smartphones—uses a system that would not work effectively without incorporating corrections from both special and general relativity (Ashby 2003).

As another example, G H Hardy famously proclaimed his conviction that his work on number theory, which he pursued for the sheer love of exploring the beauty of mathematical structures, was unlikely to find any practical applications (Hardy 1940, pp 135–6: 'The general conclusion, surely, stands out plainly enough. If useful knowledge is, as we agreed provisionally to say, knowledge which is likely, now or in the comparatively near future, to contribute to the material comfort of mankind, so that mere intellectual satisfaction is irrelevant, then the great bulk of higher mathematics is useless. Modern geometry and algebra, the theory of numbers, the theory of aggregates and functions, relativity, quantum mechanics—no one of them stands the test much better than another, and there is no real mathematician whose life can be justified on this ground. If this be the test, then Abel, Riemann and Poincaré wasted their lives; their contribution to human comfort was negligible, and the world would have been as happy a place without them.'). Ironically, the famous Rivest, Shamir and Adleman (RSA) algorithm, which currently underpins much of modern cryptography, depends on fundamental ideas from number theory (Cormen et al 2001).

Finally, the indeterminacy of the quantum states of light, atoms and molecules, a source of great theoretical interest in the first quarter of the last century, is now in the process of being harnessed for creating algorithms, and novel computers, that can solve problems that could not be addressed by current computing devices (Steane 1998, Ralph and Pryde 2010).

Thus, perhaps we should not be surprised that a focus on whether a three-body system (such as the sun, earth and moon) would remain stable over time ultimately became the basis for a new geometrical way of thinking about nonlinear dynamical systems, and that this approach has begun to find practical applications in the understanding and control of nervous systems, including novel ideas for brain–computer interfaces.

Classical dynamical systems theory began with the work of Newton on the motion of the planets. He was able to solve a two-body problem, the motion of the earth around the sun (Newton 1687, Chandrasekhar 1995). Finding explicit solutions for the slightly more complicated problem of three bodies (for example, the sun, earth and moon) proved to be far more difficult. In the late nineteenth century, Poincaré made significant progress on this problem, introducing a geometric method of reasoning about solutions to differential equations (Diacu and Holmes 1996).

This work had a powerful impact on mathematicians and physicists, and also began to influence biology. In his 1925 book, based on his work starting in 1907, and that of others, Lotka used nonlinear differential equations and concepts from dynamical systems theory to analyze a wide variety of biological problems, including oscillations in the numbers of predators and prey (Lotka 1925). Although little was known in detail about the function of the nervous system, Lotka concluded his book with speculations about consciousness and the implications this might have for creating a mathematical formulation of biological systems. Much experimental work in the 1930s and 1940s focused on the biophysical mechanisms of excitability in neural tissue, and Rashevsky and others continued to apply tools and concepts from nonlinear dynamical systems theory as a means of providing a more general framework for understanding these results (Rashevsky 1960, Landahl and Podolsky 1949).
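The predator–prey oscillations that Lotka analyzed can still be reproduced in a few lines. The sketch below is our own illustration (the parameter values are arbitrary choices, not taken from Lotka's text): it integrates the classic Lotka–Volterra equations dx/dt = αx − βxy, dy/dt = δxy − γy with forward Euler and shows that prey numbers rise and fall repeatedly rather than settling to equilibrium.

```python
def lotka_volterra(x0, y0, alpha=1.0, beta=0.5, delta=0.2, gamma=0.4,
                   dt=0.001, steps=60000):
    """Integrate dx/dt = alpha*x - beta*x*y, dy/dt = delta*x*y - gamma*y
    with forward Euler (adequate for illustration at this small dt)."""
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
        xs.append(x)
        ys.append(y)
    return xs, ys

# Start away from the coexistence equilibrium (x* = gamma/delta = 2,
# y* = alpha/beta = 2); the populations cycle around it indefinitely.
xs, ys = lotka_volterra(3.0, 1.0)
```

With these parameters the small-oscillation period is roughly 2π/√(αγ) ≈ 10 time units, so the 60-unit run contains several full cycles.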

The publication of Hodgkin and Huxley's classic quantitative model of the action potential in 1952 created a new impetus for these studies (Hodgkin and Huxley 1952). In 1955, FitzHugh published an important paper that summarized much of the earlier literature, and used concepts from phase plane analysis such as asymptotic stability, saddle points, separatrices and the role of noise to provide a deeper theoretical and conceptual understanding of threshold phenomena (FitzHugh 1955, Izhikevich and FitzHugh 2006). The FitzHugh–Nagumo equations constituted an important two-dimensional simplification of the four-dimensional Hodgkin and Huxley equations, and gave rise to an extensive literature of analysis. Many of the papers in this special issue build on tools directly descended from the analysis of the Hodgkin and Huxley equations in FitzHugh and Nagumo's early work.
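The two-dimensional reduction is simple enough to integrate directly. The sketch below is our illustration, using the standard textbook parameter values a = 0.7, b = 0.8, ε = 0.08 and an arbitrarily chosen drive current I; for this choice the fixed point sits on the unstable middle branch of the cubic nullcline and the model produces relaxation oscillations.

```python
def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=20000):
    """Forward-Euler integration of the FitzHugh-Nagumo equations:
    dv/dt = v - v**3/3 - w + I,  dw/dt = eps*(v + a - b*w).
    Returns the voltage-like variable v over time."""
    v, w = -1.0, 1.0
    vs = []
    for _ in range(steps):
        dv = (v - v**3 / 3.0 - w + I) * dt
        dw = eps * (v + a - b * w) * dt
        v, w = v + dv, w + dw
        vs.append(v)
    return vs

vs = fitzhugh_nagumo()
```

The trace alternates between an excited plateau (v near +2) and a recovered state (v near −2), the hallmark of a relaxation limit cycle produced by the separation of time scales between v and w.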

Mathematicians became increasingly interested in biological problems in general, and in the function of the nervous system in particular, during the latter part of the twentieth century. The natural tool for describing more complex neural systems whose patterns of activity unfold in time was nonlinear dynamical systems theory. Classic work from such investigators as Kolmogorov, Arnol'd, Moser, Malkin, Andronov, Hopf, Birkhoff, Hartman and others (reviewed in Izhikevich 2006) served as the basis for understanding the dynamics of neural models such as the coupling of oscillators for rhythmic behavior, leading to work such as that of Kopell and Ermentrout on the lamprey swimming system (Kopell and Ermentrout 1986, 1990), based on earlier models of Cohen et al (1982). Exploration of nonlinear interactions in neuronal populations, especially those that might be related to vision, led to the development of the Wilson–Cowan equations in the 1970s (Wilson and Cowan 1972, 1973). The advent of increasingly powerful personal computers also made it feasible to combine theoretical analyses with extensive numerical investigations of nonlinear dynamical systems. An important and influential example of such work was the detailed bifurcation analysis of Morris and Lecar's two-dimensional model of nonlinear dynamical behavior in the giant muscle fiber of the Pacific barnacle Balanus nubilis (Morris and Lecar 1981), done by Rinzel and Ermentrout in the late 1980s (Rinzel and Ermentrout 1989). The mathematical analysis of bursting behavior based on decomposition of a dynamical system into fast and slow subsystems, an application of Fenichel's geometric singular perturbation theory (Fenichel 1979, Jones 1995), continues to play an important role. Recent work on dynamical analyses of neurons and neural circuits is described in Izhikevich's book (Izhikevich 2006), which is based in part on his own work in this area.
This is a very small glimpse of a much larger literature; these mathematical themes recur throughout this issue. Practitioners of neural engineering who want to explore the language and role of dynamics further can find accessible introductions to the key ideas in works such as Strogatz (1994) and Izhikevich (2006).

In this special issue of Journal of Neural Engineering, we provide a sample of the vigor and excitement of the recent developments in the applications of nonlinear dynamical systems theory to the understanding and control of the nervous system. Four of the papers demonstrate the power of dynamical systems theory to analyze and understand neural systems, both in isolation and within a neuromechanical context (Coggan et al 2011, Nadim et al 2011, Spardy et al 2011a, 2011b). One paper focuses on the importance of noise and delay in dynamical systems for control (Milton 2011). Two papers focus on the dynamics of ion channels—in one paper, new approaches for estimating their parameters are described (Meng et al 2011), and in a second, the time courses of sodium ion channels are used to understand conduction block due to high-frequency stimulation (Ackermann et al 2011). Two papers focus on the use of optimal control theory to develop approaches for understanding (DeWolf and Eliasmith 2011) and controlling (Nabi and Moehlis 2011) the nervous system. Finally, two papers begin to explore longer time scale neural dynamics through a combination of modeling and experiments, examining how animals learn to reduce the time required to forage for food at multiple sites (de Jong et al 2011), and how the dynamics of the respiratory system change with development (Fietkiewicz et al 2011).

The first four papers of this special issue illustrate the use of dynamical systems theory to analyze and understand neural circuitry and neuromechanical systems. The first of these papers uses the phase response curve (PRC) of an oscillator, a conceptual tool rooted in the analysis of systems of nonlinear differential equations that quantifies the effect of internally generated or externally applied perturbations on the phase of an ongoing oscillation. Nadim et al (2011) elegantly apply PRCs and related techniques to shed light on mechanisms for stabilizing the period of a particular central pattern generator circuit, responsible for the pyloric rhythm in the stomatogastric ganglion of the crab. Although the digestive system of Cancer borealis may seem somewhat removed from the concerns of neural engineers, this system has provided the basis for both experimental and theoretical work on the role of neuromodulation in neural circuitry. Neuromodulators can functionally alter the dynamics of a neural circuit on a moment-to-moment basis, 'carving out' distinct functional circuits from a single anatomical circuit (Marder and Thirumalai 2002). Furthermore, a hallmark of central pattern generator (CPG) systems in humans and other animals is a balance of robustness to perturbation and adaptability to changing conditions. Here Nadim et al (2011) focus on robustness, both to intrinsic perturbations such as barrages of irregular synaptic activity (incorporated into a dynamical systems model of the circuit as a Poisson input train), and to perturbing inputs from other rhythmic processes internal to the animal, such as a slow modulatory input from the animal's gastric mill. Experimentally identified inhibitory feedback from a particular part of the circuit to a pair of pacemaker cells appears extraneous at first, inasmuch as suppressing it seems to have no effect on the period of the pyloric rhythm.
But while the mean period is unaffected by removing this synapse's effect, the variance of the period shows great sensitivity. Using an experimental approach that allows them to artificially remove the inhibition—dynamic clamp—and a model that is simplified but based in a principled fashion on the original system, the authors use the structure of the phase response curves with and without the synapse present to explain the mechanism underlying this variance-suppressing inhibitory synapse. Their results can be explained at a conceptual level using phase plane analysis to show that inhibitory synaptic input and the intrinsic properties of the neuron act to cancel out the changes in phase induced by perturbations. These results may have intriguing implications for the role of inhibition in stabilizing vertebrate nervous systems, as well as artificial neural networks.
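The notion of a phase response curve can be made concrete on the simplest tractable oscillator. The sketch below is our own illustration, not the model of Nadim et al: it computes the PRC of a leaky integrate-and-fire neuron driven above threshold, for which spike times are available in closed form. A depolarizing kick advances the next spike, and the size of the advance depends on the phase at which the kick arrives.

```python
import math

def lif_period(I=2.0, tau=1.0, v_th=1.0):
    """Interspike period of a leaky integrate-and-fire neuron driven above
    threshold: dv/dt = (I - v)/tau, with reset v -> 0 when v reaches v_th."""
    return tau * math.log(I / (I - v_th))

def lif_prc(phi, dv=0.05, I=2.0, tau=1.0, v_th=1.0):
    """Phase advance produced by a small depolarizing kick dv delivered at
    phase phi (fraction of the period elapsed since the last spike),
    measured as (unperturbed spike time - perturbed spike time) / period."""
    T = lif_period(I, tau, v_th)
    t0 = phi * T
    v = I * (1.0 - math.exp(-t0 / tau))           # voltage just before the kick
    v += dv                                       # apply the perturbation
    t_rem = tau * math.log((I - v) / (I - v_th))  # remaining time to threshold
    return (T - (t0 + t_rem)) / T

prc = [lif_prc(phi) for phi in (0.1, 0.5, 0.9)]
```

For this model the PRC is positive and monotonically increasing: the same kick advances the next spike more when delivered late in the cycle, because the membrane is closer to threshold.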

The second paper in this special issue uses dynamical analysis to shed light on the dysfunctional activation of peripheral neurons, for instance in paroxysmal attacks of pain or spasticity. Coggan et al (2011) provide insight into changes in axonal excitability through dynamical analysis of conductance-based models. These authors make elegant use of fast/slow analysis to explain the initiation and termination of ectopic spiking that may underlie paroxysmal neurological symptoms. They work both with a multi-compartment conductance-based model and a lower dimensional, single-compartment model based on the Morris–Lecar model mentioned above. They find that axonal susceptibility to after-discharge depends on dynamical properties such as bistability, in which a dynamical system has more than one stable attractor for a given set of parameters (in this case, a 'quiet' stable fixed point and an 'active' stable limit cycle). Moreover, they find that the system's behavior depends on geometrical features of the dynamics, such as the distance between stable and unstable (saddle) fixed points in the phase plane. Based on their models, they make observations that may have clinical relevance: an axon susceptible to paroxysmal discharges due to disease or mutation may be able to operate normally unless an appropriate 'trigger' is encountered, accounting for the intermittency in the phenomenon that is often observed. Moreover, absolute values of currents may not be as relevant as relative time scales of underlying currents for determining whether paroxysmal discharges will occur.
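The coexistence of a quiet fixed point and an active limit cycle that Coggan et al invoke is captured in its simplest form by the subcritical Hopf normal form. The sketch below is an illustrative toy, not the authors' axon model: with the same parameters, a small push decays back to rest while a larger push lands on the coexisting limit cycle, mimicking a 'trigger' for sustained discharge.

```python
def radial_hopf(r0, mu=-0.1, dt=0.01, steps=20000):
    """Radial part r' = r*(mu + r**2 - r**4) of the subcritical Hopf normal
    form; for -1/4 < mu < 0 a stable rest state at r = 0 coexists with a
    stable limit cycle (here at r ~ 0.94), separated by an unstable orbit
    (here at r ~ 0.34) that acts as the threshold."""
    r = r0
    for _ in range(steps):
        r += r * (mu + r**2 - r**4) * dt
    return r

quiet = radial_hopf(0.2)   # sub-threshold push: decays back to rest
active = radial_hopf(0.8)  # supra-threshold push: settles onto the cycle
```

The unstable orbit between the two attractors plays the role of the separatrix mentioned above: its distance from the rest state sets how large a perturbation is needed to switch the system into the active state.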

As Coggan et al (2011) observe, 'clinically relevant changes in excitability can be replicated in surprisingly simple models, and can be explained on the basis of a relatively small number of complex nonlinear interactions'. In what could be a theme for this special issue, they further write 'With increases in computing power, there is less and less practical need to keep models simple. However, as models become more complicated, they become harder to analyze (in any formal mathematical manner at least) and ultimately harder to understand. Especially when one's goal is to explain the basis for some well-characterized phenomenon, building a model with the minimally sufficient components may be a good approach. Such models afford the best opportunity to apply tools from dynamical systems theory to formally characterize the nonlinear dynamical basis for the phenomenon'.

The third and fourth papers in this special issue, by Spardy et al (2011a, 2011b), use dynamical systems theory to understand the underlying dynamics of vertebrate locomotion. Within the theoretical framework of dynamical systems, a central pattern generator circuit is typically thought of as a stable limit cycle, or isolated periodic orbit. From this point of view, it is natural to see the relationship between the central circuit and the peripheral musculo-skeletal system as feed-forward only. However, real neuromechanical systems often include feedback from the periphery to the central generator, and these interactions may have nontrivial consequences for rhythm generation (Chiel and Beer 1997, Chiel et al 2009). Incorporating peripheral feedback into a neuromechanical central pattern generator complicates the model, but also can provide a closer match to empirical behavior. As an example, Markin et al (2010) recently showed that fictive locomotion (in an isolated spinal cord preparation from the cat) occurred only over a narrow range of supra-spinal drives compared to those supporting normal locomotion in a preparation in which feedback from the limb to the spinal cord was kept intact. The mechanisms by which locomotory behavior can be maintained, both under normal conditions and under reduced supraspinal drive (representing the effects of spinal cord injury (SCI)), are important building blocks for rehabilitative therapy post-SCI. Spardy et al (2011a, 2011b) analyze afferent control in a neuromechanical model of limbed locomotion that addresses these experimental results directly. Using sophisticated mathematical techniques from dynamical systems analysis, such as dissection of the dynamics into fast and slow subsystems, they are able to stitch together the geometry underlying transitions between different combinations of flexor/extensor activation and stance/swing phases of limb movement.
Their analysis explains how peripheral feedback extends the range of supraspinal drives for which stable oscillations occur. In fact, they identify qualitatively distinct mechanisms by which the CPG creates rhythmic behavior in the presence and absence of feedback: in one case, the timing is determined by release from inhibition, and in the other, by escape from inhibition.

In a companion paper, this same group carries out a further analysis of the locomotory control system, allowing them to resolve the following issue. As running speed increases, the durations of different component phases of the motion (stance/swing) do not change symmetrically. Instead, increase in velocity of forward motion is accomplished by decreasing the stance phase duration while the swing phase duration remains roughly constant. How is it that a CPG built of symmetrically related elements (flexor/extensor related rhythm generation neurons and interneurons), and symmetric descending drive, is able to produce asymmetric cycles? By finding a reduced model that retains the essential dynamical elements of the larger computational model, they are able to explain why neither a CPG asymmetry nor a drive asymmetry is needed to generate asymmetric changes in the duration of the swing/stance phases as running speed increases, as observed experimentally. As part of the analysis, they raise and answer an important question: does the reduced model indeed possess limit cycle dynamics? They are able to prove mathematically (using averaging methods) that the move to a more analytically tractable model has not eliminated the main phenomenon of interest, and that a limit cycle does indeed exist in the simplified model.

The fifth paper in this special issue, by Milton (2011), extends the classical dynamical systems approach by incorporating the 'reality' of imperfect actuators and sensors, unexpected events in the world, noise and delays. From an engineering viewpoint, delays can cause deleterious complications within control systems. For example, sufficiently long delays in a feedback control loop can be destabilizing. However, the existence of delays in a neural control system is often ignored because it complicates mathematical analysis of the system. The coexistence of delays and random perturbations (or noise) further complicates system analysis. Milton (2011) gives an elegant review of this topic with applications to human neural control problems, for example the stability and response time during postural sway in healthy adults, visuomotor stabilization of an inverted rigid rod (stick balancing), delay-induced transient oscillations and anticipatory synchronization. As Milton (2011) points out, in order to connect insights from analysis of noisy delayed control systems with human neuromotor control, it is essential to understand the nature of the cost functions that the nervous system uses to manage its resources.
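The destabilizing effect of feedback delay is easy to demonstrate on the simplest linear example, x′(t) = −k·x(t − τ), which is stable for kτ < π/2 and oscillates with growing amplitude beyond that threshold. The sketch below is a generic illustration, not drawn from Milton's paper: it integrates the delay equation with Euler's method and a constant initial history, once with a short delay and once with a long one.

```python
def delayed_feedback(k=1.0, tau=2.0, dt=0.01, steps=4000, x0=1.0):
    """Euler integration of delayed negative feedback x'(t) = -k*x(t - tau).
    The history buffer holds x on [-tau, t]; the delayed value is read
    'delay' samples back from the current end of the buffer."""
    delay = int(tau / dt)
    hist = [x0] * (delay + 1)   # constant history before t = 0
    for _ in range(steps):
        x = hist[-1] + dt * (-k * hist[-1 - delay])
        hist.append(x)
    return hist

short_delay = delayed_feedback(tau=0.5)  # k*tau = 0.5 < pi/2: decays to zero
long_delay = delayed_feedback(tau=2.0)   # k*tau = 2.0 > pi/2: growing oscillation
```

The same feedback gain that is benign with a short loop delay becomes destabilizing with a long one, which is the qualitative point at the heart of delayed control problems.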

The sixth paper in this special issue, by Meng et al (2011), focuses on the problem of parameter estimation for models of nerve cells that have multiple ionic conductances. The richness of the dynamics of nerve cells, which are far more than on/off switches, comes from the many ion channels within their membranes, which allow them to be spontaneously active, to be bistable (e.g. silent or firing repetitively based on inputs), to fire in rhythmic bursts and to have their dynamical properties altered by exogenous neuromodulators. In many extracellular recordings, however, only information about the timing of spikes is available. What can be inferred from these data? Meng et al (2011) use the statistical theory of point processes and Monte Carlo methods to infer the parameters for a proposed dynamical model of a nerve cell. They show that it is possible to estimate two 'unknown' conductances (gNa and gK) for a standard Hodgkin–Huxley model from two sets of spike train data with two different simulated resting currents, and suggest ways in which this approach could be generalized to more complex spike train data.
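The likelihood-based logic behind such spike-train estimators can be conveyed with the simplest possible point process. The toy below is a stand-in for, not a reimplementation of, the sequential Monte Carlo method of Meng et al: it simulates a homogeneous Poisson spike train with a known rate and then recovers that rate by maximizing the point-process log likelihood over a grid of candidate values.

```python
import math
import random

def poisson_spikes(rate, T, seed=1):
    """Draw spike times from a homogeneous Poisson process on [0, T] by
    accumulating exponentially distributed interspike intervals."""
    random.seed(seed)
    t, spikes = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t > T:
            return spikes
        spikes.append(t)

def log_likelihood(rate, spikes, T):
    """Point-process log likelihood for a constant-rate model:
    n*log(rate) - rate*T, dropping terms that do not depend on rate."""
    return len(spikes) * math.log(rate) - rate * T

spikes = poisson_spikes(rate=5.0, T=200.0)
grid = [0.5 * k for k in range(1, 41)]   # candidate rates 0.5 .. 20.0 Hz
best = max(grid, key=lambda r: log_likelihood(r, spikes, T=200.0))
```

The likelihood surface peaks near the true rate (the maximum likelihood estimate for this model is simply the spike count divided by the recording length); real conductance estimation replaces the constant rate with an intensity driven by a dynamical neuron model, which is where Monte Carlo methods become necessary.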

The seventh paper, by Ackermann et al (2011), also focuses on the dynamics of ion channels to shed new light on a controversy concerning the mechanism by which high-frequency stimulation can reversibly block conduction of action potentials in peripheral nerves. High-frequency block (HFB) techniques show promise for remediating the effects of chronic spasticity by preventing pathological peripheral activity from propagating to central sensation centers. Using the known dynamics of sodium and potassium channels, and a sensitivity analysis of a dynamical conduction model, the authors are able to narrow down the likely mechanism by which HFB acts to stop signals from propagating into the nervous system.

The eighth paper, by Nabi and Moehlis (2011), uses approaches from control theory to explore the control or remediation of pathophysiological states. In many cases, pathological neurological conditions involve a dynamical component. For example, debilitating akinesia and involuntary tremor associated with Parkinson's disease (PD) are believed to involve atypical synchronized activity in populations of neurons in regions such as the subthalamic nucleus within the basal ganglia (Hauptmann and Tass 2010, Rosin et al 2007). Remediation of PD symptoms via deep brain stimulation (DBS) is limited in that it can only 'force' the system in a small number of independently tunable directions. How can one most effectively desynchronize a high-dimensional dynamical system using an input that is limited to a much smaller dimension? To make things worse, the details of the system for any given patient—number, connectivity and physiology of the neurons involved—are largely unknown. Nabi and Moehlis (2011) take a step toward addressing these issues by considering optimal desynchronizing control for a simple system that retains essential elements of the problem: given three coupled oscillators with a tendency to synchronize (that is, a globally attracting stable synchronized state), and a control signal that can only be applied to one of the oscillators, what is the optimal desynchronizing control signal under a quadratic control penalty? Put another way, with how light a touch is it possible to bring the oscillators away from their preferred synchronized state by a given amount? The answer depends in part on the form taken by the coupling between the cells. For instance, if the coupling between the stimulated cell and the others is identical, then from symmetry one can see that no desynchronizing control signal exists (at least, if the cells are identical). 
Encouragingly, for reasonable control stimuli, the authors find that control signals of modest size can effectively desynchronize the population for a large fraction of possible couplings.
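The flavor of the single-input desynchronization problem can be conveyed with three coupled phase oscillators. The sketch below is a generic Kuramoto toy with a hand-picked sinusoidal input, not the optimal control of Nabi and Moehlis: it injects a control signal into one oscillator only and measures synchrony with the Kuramoto order parameter r, which equals 1 at perfect synchrony.

```python
import math

def mean_order_parameter(control_amp=0.0, K=1.0, dt=0.01, steps=5000):
    """Three identical Kuramoto phase oscillators with all-to-all coupling;
    a sinusoidal control input is injected into oscillator 0 only.  Returns
    the order parameter r (1 = perfect synchrony) averaged over the last
    2000 steps, after transients have settled."""
    theta = [0.0, 0.3, 0.6]            # start close to synchrony
    rs = []
    for n in range(steps):
        t = n * dt
        new_theta = []
        for i in range(3):
            coupling = sum(math.sin(theta[j] - theta[i]) for j in range(3))
            u = control_amp * math.sin(2.5 * t) if i == 0 else 0.0
            new_theta.append(theta[i] + (1.0 + (K / 3.0) * coupling + u) * dt)
        theta = new_theta
        re = sum(math.cos(th) for th in theta) / 3.0
        im = sum(math.sin(th) for th in theta) / 3.0
        rs.append(math.hypot(re, im))
    return sum(rs[-2000:]) / 2000.0

r_free = mean_order_parameter(0.0)    # no control: the cells lock together
r_forced = mean_order_parameter(3.0)  # driving one cell degrades synchrony
```

Even this crude open-loop input, applied to a single cell, measurably pulls the population away from its preferred synchronized state; the optimal control question is how to achieve a prescribed degree of desynchronization at minimal cost.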

The ninth paper addresses one of the most ambitious areas within neural engineering: the application of control theoretic techniques to dynamics of circuits in the central nervous system. Control of high-dimensional systems (often accessed through low-dimensional interventions) and the complexity of interactions intrinsic to central neural circuits make this problem especially challenging. The paper by DeWolf and Eliasmith (2011) proposes a general theoretical framework for modeling neural control, the neural optimal control hierarchy (NOCH), relying on concepts from optimal control theory. Using the Bellman equation, they define key control concepts for trajectories, argue for a motor hierarchy and suggest ways in which this could be mapped onto actual neural systems, and apply their framework to understand arm reaching, both in normal subjects and in those affected by Huntington's disease and cerebellar injury. They also argue that some of the highly nonlinear responses of motor cortical neurons during reaching behavior are a natural consequence of the control problem solved by the neural circuitry, as they have defined it. Although much experimental work remains to validate their approach, they discuss the likelihood that ideas from control theory could already be incorporated into novel brain–machine interfaces.

Finally, the last two papers begin to empirically address two important aspects of slower time scale dynamics in the nervous system: learning and development. The paper by de Jong et al (2011) develops a new paradigm for understanding how animals respond to spatial problems by studying the paths that rats take among several different food sources. After repeated exposure, the paths that animals take shorten, suggesting that they are able to find more efficient ways of exploiting resources in their environment. The problem is challenging because the animals need to use local cues to determine the nature of the task itself, and then to determine how to improve their responses over time. If a food source is removed, animals are able to rapidly adjust their routes to continue to take the shortest path among the food sources. As the authors point out, this task is related to a classic problem in computational complexity theory, the traveling salesman problem, and may shed light on how biological systems rapidly obtain good (if not globally optimal) solutions to such problems, as well as creating a new paradigm for understanding the mapping of spatial problems within brain areas such as the hippocampus. The paper by Fietkiewicz et al (2011) uses both empirical and modeling studies to understand the developmental dynamics of the respiratory system. Phase relationships among motor neurons driving the respiratory system change early in development, and play an important role in generating stable breathing rhythms. Using cross-correlation techniques, the authors demonstrate the stage of development at which these changes occur, and then create a model that provides some insight into how this change may be instantiated neurally.
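The foraging task's connection to the traveling salesman problem can be illustrated with the simplest greedy heuristic, which, like the rats, yields good but not necessarily optimal routes. The sketch below is our illustration (the site coordinates are made up, and no claim is made that rats use this rule): it visits hypothetical food sources in nearest-neighbor order and measures the resulting closed tour.

```python
import math

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor(points, start=0):
    """Greedy heuristic: from the current site, always go to the closest
    site not yet visited (ties broken by index order)."""
    remaining = [i for i in range(len(points)) if i != start]
    order = [start]
    while remaining:
        here = points[order[-1]]
        nxt = min(remaining, key=lambda j: math.dist(here, points[j]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

sites = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 3)]  # five hypothetical feeders
route = nearest_neighbor(sites)
```

For this configuration the greedy route happens to coincide with the optimum (the convex hull perimeter), but in general nearest-neighbor tours can be substantially longer than optimal, which is why the rapid, near-optimal routes found by animals are of such interest.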

These papers also suggest some important directions for future work. As Milton (2011) emphasizes, noise and delays are inherent in the nervous system, the body and the environment, and since all have co-evolved, subject to energetic constraints, many aspects of the control of the overall system will not be clear unless these stochastic processes are properly modeled and understood (Goldwyn et al 2011, Thomas 2011, White et al 2000). Similarly, the two papers by Spardy et al (2011a, 2011b) emphasize that nervous systems are embodied, and that neural dynamics is shaped by the neuromechanical properties of the periphery. The paper by DeWolf and Eliasmith (2011) poses an interesting challenge, suggesting that optimal control theory and hierarchical structures may provide insight into the nervous system. It remains to be seen whether this approach can capture the highly recurrent nature of biological nervous systems, and the complex spatial and temporal dynamics of their elements. The papers by de Jong et al (2011) and Fietkiewicz et al (2011) suggest that understanding more about learning and development will provide deep insights into the slow dynamics of the nervous system that allow it to adapt over the lifespan of an individual. They pose the challenge of relating local plastic changes to the global dynamics of the system.

Many of the papers suggest fruitful and potentially novel ways to begin to develop new brain–computer interfaces based on an understanding of neural dynamics: extracting parameters for the underlying dynamics using the approaches suggested by Meng et al (2011); using an understanding of the dynamics of ion channels to develop new protocols for enhancing or blocking signaling in the nervous system, using the approaches suggested by Ackermann et al (2011); exploiting an understanding of the dynamics of neural populations to find the lowest-dimensional driving signals that could synchronize or desynchronize population activity, using the approach of Nabi and Moehlis (2011); recognizing the importance of appropriately timed inhibition to stabilize a neural circuit in response to perturbations, using the approach of Nadim et al (2011); and defining rational treatments for paroxysmal bursting in axons based on the dynamical analysis of Coggan et al (2011). Although it may take time, the approaches exemplified by these papers suggest that future brain–computer interfaces will not only exploit correlations between brain activity and sensory input or motor output, but will take full advantage of the actual transformations performed by the dynamics of the nervous system.

References

Ackermann D M, Bhadra N, Gerges M and Thomas P J 2011 Dynamics and sensitivity analysis of high-frequency conduction block J. Neural Eng. 8 065007
Ashby N 2003 Relativity in the global positioning system Liv. Rev. Relat. 6 lrr-2003-1
Chandrasekhar S 1995 Newton's Principia for the Common Reader (Oxford: Oxford University Press) chapter 12
Chiel H J and Beer R D 1997 The brain has a body: adaptive behavior emerges from interactions of nervous system, body and environment Trends Neurosci. 20 553–7
Chiel H J, Ting L H, Ekeberg O and Hartmann M J Z 2009 The brain in its body: motor control and sensing in a biomechanical context J. Neurosci. 29 12807–14
Coggan J S, Ocker G K, Sejnowski T J and Prescott S A 2011 Explaining pathological changes in axonal excitability through dynamical analysis of conductance-based models J. Neural Eng. 8 065002
Cohen A H, Holmes P J and Rand R H 1982 The nature of the coupling between segmental oscillators of the lamprey spinal generator for locomotion: a mathematical model J. Math. Biol. 13 345–69
Cormen T H, Leiserson C E, Rivest R L and Stein C 2001 Introduction to Algorithms 2nd edn (Cambridge, MA: MIT Press)
de Jong L W, Gereke B, Martin G M and Fellous J-M 2011 The traveling salesrat: insights into the dynamics of efficient spatial navigation in the rodent J. Neural Eng. 8 065010
DeWolf T and Eliasmith C 2011 The neural optimal control hierarchy for motor control J. Neural Eng. 8 065009
Diacu F and Holmes P 1996 Celestial Encounters: The Origins of Chaos and Stability (Princeton, NJ: Princeton University Press)
Einstein A 1922 The Meaning of Relativity (Expanded Princeton Science Library Edition 2005) (Princeton, NJ: Princeton University Press)
Fenichel N 1979 Geometric singular perturbation theory for ordinary differential equations J. Differ. Equ. 31 53–98
Fietkiewicz C, Loparo K A and Wilson C G 2011 Drive latencies in hypoglossal motoneurons indicate developmental change in the brainstem respiratory network J. Neural Eng. 8 065011
FitzHugh R 1955 Mathematical models of threshold phenomena in the nerve membrane Bull. Math. Biophys. 17 257–78
Goldwyn J H, Imennov N S, Famulare M and Shea-Brown E 2011 Stochastic differential equation models for ion channel noise in Hodgkin–Huxley neurons Phys. Rev. E 83 041908
Hardy G H 1940 A Mathematician's Apology (new edition with a foreword by C P Snow 1969) (Cambridge: Cambridge University Press)
Hauptmann C and Tass P A 2010 Restoration of segregated, physiological neuronal connectivity by desynchronizing stimulation J. Neural Eng. 7 056008
Hodgkin A L and Huxley A F 1952 A quantitative description of membrane current and its application to conduction and excitation in nerve J. Physiol. 117 500–44
Izhikevich E M 2006 Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting (Cambridge, MA: MIT Press)
Izhikevich E M and FitzHugh R 2006 FitzHugh–Nagumo model Scholarpedia 1 1349
Jones C K R T 1995 Geometric singular perturbation theory Dynamical Systems (Lecture Notes in Mathematics vol 1609) pp 44–118
Kopell N and Ermentrout G B 1986 Symmetry and phaselocking in chains of weakly coupled oscillators Commun. Pure Appl. Math. 39 623–60
Kopell N and Ermentrout G B 1990 Phase transitions and other phenomena in chains of coupled oscillators SIAM J. Appl. Math. 50 1014–52
Landahl H D and Podolsky R J 1949 On the velocity of conduction in nerve fibers with saltatory transmission Bull. Math. Biophys. 11 19–27
Lotka A J 1925 Elements of Physical Biology (Baltimore, MD: Williams and Wilkins)
Marder E and Thirumalai V 2002 Cellular, synaptic and network effects of neuromodulation Neural Netw. 15 479–93
Markin S N, Klishko A N, Shevtsova N A, Lemay M A, Prilutsky B I and Rybak I A 2010 Afferent control of locomotor CPG: insights from a simple neuro-mechanical model Ann. New York Acad. Sci. 1198 21–34
Meng L, Kramer M A and Eden U T 2011 A sequential Monte Carlo approach to estimate biophysical neural models from spikes J. Neural Eng. 8 065006
Milton J G 2011 The delayed and noisy nervous system: implications for neural control J. Neural Eng. 8 065005
Morris C and Lecar H 1981 Voltage oscillations in the barnacle giant muscle fiber Biophys. J. 35 193–213
Nabi A and Moehlis J 2011 Single input optimal control for globally coupled neuron networks J. Neural Eng. 8 065008
Nadim F, Zhao S, Zhou L and Bose A 2011 Inhibitory feedback promotes stability in an oscillatory network J. Neural Eng. 8 065001
Newton I 1687 Philosophiae Naturalis Principia Mathematica Section XI, propositions LVII–LXIII (London: Royal Society)
Ralph T C and Pryde G J 2010 Progress in Optics vol 54, ed E Wolf (New York: Elsevier) pp 209–79 (arXiv:1103.6071)
Rashevsky N 1960 Mathematical Biophysics: Physico-Mathematical Foundations of Biology vol 1 3rd edn (New York: Dover) pp 375–462 (first edition 1938)
Rinzel J and Ermentrout G B 1989 Analysis of neuronal excitability and oscillations Methods in Neuronal Modeling ed C Koch and I Segev (Cambridge, MA: MIT Press) pp 135–69
Rosin B, Nevet A, Elias S, Rivlin-Etzion M, Israel Z and Bergman H 2007 Physiology and pathophysiology of the basal ganglia–thalamo-cortical networks Parkinsonism Relat. Disord. 13 S437–9
Spardy L E, Markin S N, Shevtsova N A, Prilutsky B I, Rybak I A and Rubin J E 2011a A dynamical systems analysis of afferent control in a neuromechanical model of locomotion: I. Rhythm generation J. Neural Eng. 8 065003
Spardy L E, Markin S N, Shevtsova N A, Prilutsky B I, Rybak I A and Rubin J E 2011b A dynamical systems analysis of afferent control in a neuromechanical model of locomotion: II. Phase asymmetry J. Neural Eng. 8 065004
Steane A 1998 Quantum computing Rep. Prog. Phys. 61 117–73
Strogatz S H 1994 Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering (Cambridge, MA: Perseus)
Thomas P J 2011 A lower bound for the first passage time density of the suprathreshold Ornstein–Uhlenbeck process J. Appl. Probab. 48 420–34
White J A, Rubinstein J T and Kay A R 2000 Channel noise in neurons Trends Neurosci. 23 131–7
Wilson H R and Cowan J D 1972 Excitatory and inhibitory interactions in localized populations of model neurons Biophys. J. 12 1–24
Wilson H R and Cowan J D 1973 A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue Biol. Cybern. 13 55–80
