A perspective on the neuromorphic control of legged locomotion in past, present, and future insect-like robots

This article is a historical perspective on how the study of the neuromechanics of insects and other arthropods has inspired the construction, and especially the control, of hexapod robots. Many hexapod robots’ control systems share common features, including: 1. The direction of motor output (i.e. to flex or extend) at each joint in the leg is gated by an oscillatory or bistable gating mechanism; 2. The relative phasing between joints is influenced by proprioceptive feedback from the periphery (e.g. joint angles, leg load) or central connections between joint controllers; and 3. Behavior can be directed (e.g. transition from walking along a straight path to walking along a curve) via low-dimensional, broadly-acting descending inputs to the network. These distributed control schemes are inspired by, and in some robots closely mimic, the organization of the nervous systems of insects, the natural hexapods, as well as crustaceans. Nearly a century of research has revealed organizational principles such as central pattern generators, the role of proprioceptive feedback in control, and command neurons. These concepts have inspired the control systems of hexapod robots in the past, in which these structures were applied to robot controllers with neuromorphic (i.e. distributed) organization, but not neuromorphic computational units (i.e. neurons) or computational hardware (i.e. hardware-accelerated neurons). Presently, several hexapod robots are controlled with neuromorphic computational units with or without neuromorphic organization, almost always without neuromorphic hardware. In the near future, we expect to see hexapod robots whose controllers include neuromorphic organization, computational units, and hardware. Such robots may exhibit the full mobility of their insect counterparts thanks to a ‘biology-first’ approach to controller design.
This perspective article is not a comprehensive review of the neuroscientific literature but is meant to give those with engineering backgrounds a gentle introduction to the neuroscientific principles that underlie models and inspire neuromorphic robot controllers. A historical summary of hexapod robots whose control systems and behaviors use neuromorphic elements is provided. Robots whose controllers closely model animals and may be used to generate concrete hypotheses for future animal experiments are of particular interest to the authors. The authors hope that by highlighting the decades of experimental research that has led to today's accepted organizational principles of arthropod nervous systems, engineers may better understand these systems and more fully apply biological details in their robots. To assist the interested reader, deeper reviews of particular topics from biology are suggested throughout.


Introduction
Nature has often presented solutions to engineering problems since the time of Da Vinci (Brioist 2020). One such engineering problem is creating legged robots with the agility and payload of animals but without the expiration or stubbornness of their biological counterparts. Despite the appeal of natural examples for engineering solutions, engineers have often lacked the neuromechanical knowledge, controller structure, or electromechanical hardware necessary to fully implement them. However, due to a variety of advances in the neuroscience of motor control and the field of neurorobotics, machines as agile and adaptable as animals appear more in reach than ever before. The authors' perspective is that incorporating more biological detail into robot control architecture, computational hardware, and mechanics will ultimately improve the capability of robots. The goal of this article is to share some of these biological details with an engineering audience, describe how these ideas have shaped the control of hexapod robots in the past, and identify future areas of research that may further facilitate the application of invertebrate neuromechanics to robotics.
This perspective focuses on hexapod robots (with some exceptions) with multi-jointed legs and neuromorphic controllers. The authors define 'neuromorphic' as broadly nervous system-like, including control systems whose organization is distributed, with pattern generating structures, sensory feedback, and global orientation cues combining to direct behavior; whose computational units mimic neurons, synapses, and/or groups of neurons; and whose computational hardware is inspired by the nervous system. Because of the progression of engineering technology over the past 35 years, some robots whose controllers may not be considered neuromorphic today were considered so when they were developed, and thus will be included in our discussion. For example, multiple hexapod robots were developed in the late 1980s and early 1990s whose distributed control system architecture was inspired by the organization of arthropod nervous systems, even though they were implemented as finite state machines (FSMs) on board von Neumann computers (Brooks 1989). For the sake of this article, such robots will be classified as having neuromorphic controller architecture but not neuromorphic computational units or neuromorphic computers. A robot will be classified as having neuromorphic computational units if its controller is built from simulations of neurons, whether the controller architecture is neuromorphic or not (Ayers and Rulkov 2008, Manoonpong et al 2008). Robots will be classified as having neuromorphic hardware if the control system is implemented via analog very large-scale integration (VLSI) circuit models of neurons (Mead and Ismail 1989) or neuromorphic computers (Akopyan et al 2015, Davies et al 2018). Why adopt such a broad definition of 'neuromorphic'? Robots may either be intended as neuromechanical models of insects (or other arthropods), intended to implement novel robotic techniques, or both.
The authors of this perspective are primarily interested in robots that model a particular insect closely and will pay the most attention to those. However, studies that apply abstracted or general principles from insect motor control to advance robot control without direct connections back to neuroscience are still of interest for two reasons. First, such studies may serve as hypotheses for the types of calculations that motor control networks in animals perform during locomotion. Second, the primary goal of such studies is to improve robot mobility and autonomy, which is still often a secondary goal of robot studies that model animals. As a result, this perspective notes many studies that may not result from direct collaborations with neuroscientists, but may inject new ideas into the field of neuroscience.
Why focus on insects and the hexapod robots they inspire? Due to their proliferation in terms of species (ca. 5.5 million, (Stork 2018)) and individuals (ca. 10 billion billion, (Hall 2008)), it has been argued that Insecta is the most successful class on the planet (May 1986, Ritzmann and Zill 2017). Despite their small size and 'limited' computational power, insects exhibit a diversity of behavior that rivals that of vertebrate animals. Such success draws attention from engineers seeking to build robots that perform varied tasks in diverse environments (Ritzmann et al 2000). Insects also exhibit statically stable locomotion at all speeds with the minimum number of legs. Keeping at least three legs in stance at any given time (except in extraordinary circumstances (Full and Tu 1991)) enables insects to support their body weight on a broad, stable base (Cruse 1990, Ting et al 1994, Szczecinski et al 2018). Like mammals, as the speed of an insect increases, the number of legs on the ground during stance decreases down to a minimum of three (Hughes 1952, Wendler 1964, Wilson 1966, Wosnitza et al 2013). In contrast, because mammals have four legs and often swing two at once, they must temporarily assume statically unstable postures during which they must actively balance (e.g. a galloping horse) (Alexander 1984). Other classes of arthropods, such as crustaceans (e.g. decapods) and myriapods also avoid the need for active balance, and many notable robots have been built to mimic them (e.g. (Ayers and Rulkov 2008, Hoffman and Wood 2011)). However, these classes have redundant legs, which increases the scale of the controller and cost of building robots modeled after them.
Despite the appeal of studying insect motor control, some technical challenges exist that complicate experiments. As we will discuss, exposing the insect nervous system to record intracellular neural activity can affect the electrical potential of the neurons themselves, altering the neural activity under investigation (Treherne and Maddrell 1967). However, this is not true in crustaceans, meaning that many circuits underlying locomotion were first investigated in crustaceans instead of insects. Fortunately, due to their phylogenetic proximity (together they make up the Pancrustacea), insects and crustaceans likely share some conserved circuits that control leg motion (Dohle 2001), and history has shown that similar organization and neural mechanisms can be found in the nervous system of each (Delcomyn 1980, Mulloney and Smarandache 2010, Smarandache-Wellmann 2016). Furthermore, insects are again the focus of much research due to the recent explosion of connectomic and optogenetic tools being applied to dissect the nervous system of the fruit fly Drosophila melanogaster (Riemensperger et al 2016, Kohsaka and Nose 2021).
This article is not meant to be a review of any particular field, but instead shares the authors' perspective on how hexapod robot walking has been controlled in the past, what current approaches seem most promising, and what technologies may advance this field in the future. The topic of neuromorphic control of robots is interdisciplinary, drawing from neuroscience and its many subfields, biomechanics, electronics, mechanics of machines, control theory, and robotics. The authors reside at the interface of these fields and leave thorough reviews of each field to the experts therein. Thorough reviews have been published regarding the organization of invertebrate nervous systems (Kennedy and Davis 1977), the evidence for pattern generating elements throughout the nervous systems of arthropods and other phyla (Delcomyn 1980), the history of pattern generating networks and their discovery in crustaceans and insects (Mulloney and Smarandache 2010), past and more recent surveys of broad locomotor principles (Stein 1978, Ritzmann and Zill 2017), as well as central pattern generating networks that underlie insect walking (Bidaye et al 2018, Mantziaris et al 2020). Further reviews discuss the close phylogenetic proximity of insects and crustaceans (Dohle 2001) and common principles of nervous system organization between them (Smarandache-Wellmann 2016). Finally, reviews and perspectives describe the application of principles from neurobiology to robot control (Ijspeert 2014, Buschmann et al 2015, Webb 2020) and survey legged locomotion in hexapod robots of all types (Manoonpong et al 2021). Each of these articles provides more depth regarding the neuroscience of arthropod legged locomotion than would be possible in this brief article.

Foundational biological principles
Throughout the second half of the twentieth century, many organizing principles of the nervous system were discovered through experiments on invertebrate animals (Kennedy and Davis 1977). Many of these phenomena supported a view of the arthropod nervous system as highly distributed, in which coordinated behaviors emerged from a network of central pattern generators (CPGs) whose relative phases were controlled by sensory feedback and whose activity could be activated, coordinated, or deactivated by command neurons (Stein 1978, Ritzmann and Zill 2017). These insights provided the knowledge base that was used to develop controllers for biologically inspired walking robots. The following background is not a comprehensive review of the neurophysiology of arthropod motor control and behavior. Instead, it is intended to orient readers with an engineering background to the neuroscientific work that underpins models and inspires robotic implementations today. Many wonderful review articles listed in the introduction present these fields in greater detail than would be possible in this brief perspective article.
To aid the presentation of these ideas, a series of definitions is provided. First, control architectures may be referred to as 'centralized', 'distributed', or 'decentralized'. These terms may mean different things to robotics engineers and experimental biologists. This article defines a centralized architecture as one in which all sensory feedback bypasses motor networks and heads straight to the brain, which formulates motor commands for the entire body at once. This is an approach common in control systems engineering, in which measurements are taken to formulate the full state of the system, this state is used to update an internal model, and then all outputs are calculated at once to drive the system state to its desired value(s). Of course, animals' brains can and do integrate sensory information from across the body to direct behavior. However, the nervous system is decidedly more distributed.
This article defines a distributed architecture as one in which sensory feedback affects motor neurons, CPGs, command and coordinating neurons, and higher processing centers in the brain. Motor commands are the result of activity from all across the network, i.e. distributed. This model is widely accepted due to the fact that complete animal behavior requires coordination between distributed rhythm generating networks, sensory feedback from the periphery, and descending commands from the supra- or suboesophageal ganglion (Bidaye et al 2018, Ritzmann and Zill 2017).
This article defines a decentralized architecture as one in which motor activity is primarily driven by sensory feedback in the form of reflexes, with no CPGs, no command and coordinating neurons, and no higher processing centers in the brain processing sensory information. In such an architecture, the reflex is the fundamental unit of the nervous system, and multi-phase motions such as walking emerge from 'chains' of reflexes that activate one another sequentially (Sherrington 1906, 1910). There is overwhelming evidence that distributed parts of the nervous system (e.g. spinal cord in vertebrates, ventral nerve cord in insects) can generate rhythmic motor output without sensory feedback, meaning that reflexes alone cannot explain motor activity (Delcomyn 1980). What Sherrington and others were likely observing was the effect of sensory feedback pathways that activate motor neurons and coordinate CPGs, which is only a portion of the motor control network.
Although many of the principles described below were applied to insect-inspired robots, most were first discovered in crustaceans, due to experimental amenability. Clearly demonstrating neural mechanisms that underlie behavior relies on the ability to control external ionic compositions during experimentation. However, insects have a ganglionic sheath that separates the hemolymph from the extracellular fluid within the ganglion, which results in a sheath potential of about 20 mV (Treherne and Maddrell 1967). Removing this sheath would eliminate this potential, but doing so profoundly degrades the integrative properties within the ganglion, meaning this technique cannot be used broadly. As a result, the use of ionic substitutions and channel blockers is not feasible in insects. Furthermore, phenomena such as neuromodulation, endogenous bursting, and bistability, which are key to central pattern generation, are difficult to assess mechanistically in insects. In contrast, these phenomena do survive in vitro in crustacea, which is why their underlying ionic and neuromodulatory mechanisms have been studied so closely.

Features of distributed control: CPGs, sensory feedback, and command neurons
CPGs have been found to underlie many rhythmic behaviors in the nervous systems of both invertebrates and vertebrates (Delcomyn 1980). CPGs are neuronal units capable of producing rhythmic outputs given non-rhythmic inputs from elsewhere in the nervous system. Despite being called 'central' pattern generators ('central' refers to the central nervous system), there is not one central rhythmic signal underlying complex behaviors such as walking. Instead, walking arises from the interplay of multiple rhythmic units whose relative phasing is controlled by central and peripheral influences. CPGs that drive limb motions were first studied in the crayfish, a crustacean (Hughes and Wiersma 1960a, 1960b). This work demonstrated that the swimmerets of the crayfish could beat in a coordinated manner even once the nerve cord was isolated from proprioceptive inputs, the defining characteristic of a CPG. Subsequent work studying the flight wingbeats of locusts revealed coordinated neural activity closely resembling normal flight motor patterns by stimulating a decapitated animal's nerve cord ((Wilson 1961, Wilson and Wyman 1965); reviewed in (Mantziaris et al 2020)). These independent studies from a crustacean and an insect suggested that CPG networks may underlie diverse rhythmic motions in many animal species (Mulloney and Smarandache 2010), and spurred decades of research that confirmed this suggestion (Delcomyn 1980). Furthermore, these and subsequent studies into the central generation of motor output countered the reigning hypothesis of the day, that a reflex is the simplest unit of the nervous system and that complex behaviors such as walking were the result of 'chains' of reflexes (Sherrington 1906). This idea that endogenously oscillating networks contribute to the control of walking has been highly influential in the field of robotic locomotion (Ijspeert 2008).
Subsequent work in other species revealed similar rhythmic behavior from deafferented networks, with some networks proving more amenable to experimentation than others. Certain networks, such as those controlling the gastric mill of the spiny lobster (Perkel and Mulloney 1974, Mulloney and Selverston 1974a, 1974b, Selverston et al 1976) and the heartbeat of the medicinal leech (Stent 1976a, 1976b), became model systems from which broader principles of biological pattern generation were extracted (Pinsker and Ayers 1983). One example mechanism is von Holst's 'magnet effect', in which one CPG synapses onto another, forcing the second CPG to oscillate with the same frequency as the first with a phase lag (Holst 1939, Ayers and Selverston 1979). The name 'magnet effect' describes the stable phase locking between the CPGs. Another example mechanism is that CPG phasing may be controlled by sensory feedback. To ensure the joints of a walking leg flex and extend in the proper relative phase, feedback from movement sensors (Ayers and Davis 1977a, 1977b) and load sensors (Pearson 1972) can reset the phase of a CPG when the limit of that phase of the motion has been reached. In this way, the motor program remains adaptable despite being driven by a central network.
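The 'magnet effect' can be illustrated with a minimal pair of coupled phase oscillators. This is a generic sketch, not a model of any particular circuit described above; the frequencies and coupling gain are arbitrary illustrative values:

```python
import math

def entrained_phase_lag(f_leader=1.0, f_follower=1.3, k=8.0,
                        dt=0.001, steps=20000):
    """A 'leader' CPG coupled to a 'follower' with a different natural
    frequency. The coupling term pulls the follower into oscillation at
    the leader's frequency with a stable phase lag -- a minimal cartoon
    of von Holst's 'magnet effect'. All parameter values are illustrative.
    """
    phi_l, phi_f = 0.0, 0.0  # oscillator phases, radians
    for _ in range(steps):
        d_l = 2 * math.pi * f_leader * dt
        # follower: its own frequency plus a pull toward the leader's phase
        d_f = (2 * math.pi * f_follower + k * math.sin(phi_l - phi_f)) * dt
        phi_l += d_l
        phi_f += d_f
    # wrap the final phase difference into (-pi, pi]
    return math.atan2(math.sin(phi_l - phi_f), math.cos(phi_l - phi_f))
```

At steady state the lag settles near asin(2π(f_leader − f_follower)/k), independent of the initial phases. A sensory event such as a joint reaching its limit could be modeled as an instantaneous reset of the follower's phase, analogous to the movement- and load-sensor pathways described above.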
Inputs from other parts of the nervous system influence CPG activity on longer timescales. So-called 'command neurons', or interneurons whose stimulation activates, modulates, or halts a local network's activity, were first discovered in crayfish. Stimulating any one of a collection of five interneurons caused the crayfish's swimmerets to beat at a frequency observed during intact behavior (Wiersma and Ikeda 1964). Subsequent studies identified neurons whose activity altered the posture of the abdomen (Kennedy 1967, Kennedy et al 1967), the walking legs (Kennedy 1969a, 1969b), and the swimmerets (Davis and Kennedy 1972a, 1972b). Increasingly, it appeared that the activity of single neurons may be sufficient to elicit complete behaviors. However, as crayfish were examined more closely and as these experiments were replicated in other organisms, the picture became more complicated (Kupfermann and Weiss 1978). For example, although some single neurons in the crayfish could evoke complete behaviors, many others were found whose activity only slightly moved some body parts or launched segments of behaviors (Atwood and Wiersma 1967, Bowerman and Larimer 1974). For another example, it was found in the locust that 70 bilateral pairs of neurons that connected the suboesophageal ganglion and the thoracic ganglia could elicit similar postural adjustments, but only when activated in concert, suggesting a population-based code for issuing commands in insects (Kien and Altman 1984). Despite these nuances, it became clear that the nervous system has ways to issue low-dimensional commands to rhythm generating networks and produce behaviors much more complicated than the command would suggest (Pinsker and Ayers 1983, Harris-Warrick 1989).

Insect discoveries
Although the majority of biological results discussed so far come from work on crustaceans, work on insects has revealed similar network function. Early work in the cockroach showed that CPG networks underlie their locomotion (Pearson and Iles 1970, Pearson and Fourtner 1975). In subsequent decades, experiments showed that rhythmic activity could be evoked from the thoracic networks of other insects after the application of the muscarinic agonist pilocarpine ((Ryckebusch and Laurent 1993, Büschges et al 1995); for a review see (Bidaye et al 2018)). How much these motor patterns resemble in vivo behavior is a controversial question, but these results suggest that insect walking is supported by rhythm generating networks like those in crustaceans.
In addition to demonstrating the existence of CPGs, these and other studies showed how synaptic inputs to these networks can alter the phasing of their rhythms, a necessary element for the control of rhythmic motions such as walking. In insect locomotion, many such inputs come either directly or indirectly from sensory neurons, ensuring that CPGs change phase when a particular leg state has been reached (Wong and Pearson 1976, Cruse 1985, Schmitz 1986, Hess and Büschges 1999, Akay et al 2001, Bucher et al 2003). Coupling CPGs in this manner enforces interjoint and interleg coordination between structures (e.g. joints, legs) operating at their own frequencies with their own CPG units (Bidaye et al 2018, Ritzmann and Zill 2017). The distributed nature of such systems gives them the flexibility to produce different coordination patterns in different contexts. Modern approaches have helped identify neurons with command-like function that alter walking direction, apparently by changing the way sensory feedback coordinates CPGs that drive leg motion (Bidaye et al 2014, Martin et al 2015).
Discoveries made in both insects (Cruse 1985) and crustaceans (Cruse and Muller 1986) were synthesized in perhaps the most famous example of insect-inspired coordination rules, the 'Cruse Rules' for interleg coordination during walking (Cruse 1990). These behavioral rules do not explicitly incorporate CPGs, but they have inspired inter-CPG connections for neuromechanical models of walking insects (Rubeo et al 2017). The Cruse Rules describe how the points at which a leg lifts off at the end of a step (i.e. its posterior extreme position [PEP]) and at which it touches down at the beginning of a step (i.e. its anterior extreme position [AEP]) shift based on the motion of other, adjacent legs. For example, a leg's PEP is moved posterior (prolonging stance) while the adjacent posterior leg is in swing, and a leg's AEP is moved posterior (shortening swing) when the adjacent anterior leg begins swing. Speed-dependent interleg coordination patterns observed in insects and other arthropods emerged from these and other sensory-based rules (Cruse 1990), demonstrating the adaptability that is possible when sensory feedback is utilized in control. Subsequent robotic studies have demonstrated that these rules can be generalized to any direction of walking (Espenschied et al 1995, Manoonpong et al 2008, Schilling and Cruse 2020), simultaneously showing the generalizability of the Cruse Rules and proposing neural mechanisms that may underlie them in vivo.
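The two example rules above can be sketched as a simple update to a leg's target extreme positions. This is a toy illustration with invented shift magnitudes and signal names, not an implementation of any published controller:

```python
def adjust_extreme_positions(pep, aep, posterior_in_swing, anterior_began_swing,
                             pep_shift=0.5, aep_shift=0.5):
    """Toy sketch of two of the interleg coordination rules (Cruse 1990).

    `pep` and `aep` are one leg's lift-off and touch-down positions along
    the body axis (positive = anterior). The shift magnitudes are
    illustrative, not measured insect values.
    """
    if posterior_in_swing:
        pep -= pep_shift   # prolong stance: delay lift-off while the
                           # posterior neighbor is swinging
    if anterior_began_swing:
        aep -= aep_shift   # shorten swing: touch down earlier once the
                           # anterior neighbor begins its swing
    return pep, aep

# example: a middle leg influenced by both neighbors at once
new_pep, new_aep = adjust_extreme_positions(-2.0, 2.0, True, True)
# -> (-2.5, 1.5): stance prolonged posteriorly, swing shortened
```

Because each leg applies these updates using only signals from its immediate neighbors, coherent whole-body gaits emerge without any central gait plan, which is the key point of the Cruse Rules.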

Early robotic implementations
The experiments from the previous section provided a foundation for the application of neurobiological principles to the control of legged robots. The following sections provide a condensed timeline of the development of neuromorphic robot controllers over the last several decades. We emphasize studies that tested biological principles or applied biological principles in novel ways to control.

Controllers with neuromorphic organization: mimicking the nervous system's distributed nature
Early efforts to apply principles of a distributed arthropod-like nervous system to robotics were successful in demonstrating their potential power. The posture and locomotion of Rodney Brooks' hexapod robot Genghis was controlled by a network of augmented finite state machines (AFSMs) assembled according to subsumption architecture (Brooks 1989). In this architecture, higher-level behaviors (e.g. walking) could be built atop simpler behaviors (e.g. standing), mimicking the hierarchical organization of the nervous system.
Transitions between states within the AFSM could be driven by sensory feedback or by the timing of a central clock, similar to a CPG network. Genghis was the first robot to utilize abstractions of biological concepts to control walking, demonstrating the potential of a biologically-inspired approach to controlling robot legged locomotion.
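As a loose illustration of this style of control, a single leg's stepping can be reduced to a two-state machine whose transitions may be driven either by a central clock or by sensory feedback. The states and signals below are our invention for illustration, not Brooks' actual AFSM network:

```python
class LegStateMachine:
    """Minimal finite state machine for one leg, loosely in the spirit of
    Genghis' subsumption controller (Brooks 1989). The two-state
    reduction and the signal names are illustrative simplifications.
    """
    def __init__(self):
        self.state = "stance"

    def update(self, clock_says_swing, foot_touched_down):
        if self.state == "stance" and clock_says_swing:
            # a central clock tick can initiate swing (CPG-like timing)
            self.state = "swing"
        elif self.state == "swing" and foot_touched_down:
            # sensory feedback terminates swing (reflex-like timing)
            self.state = "stance"
        return self.state

leg = LegStateMachine()
leg.update(clock_says_swing=True, foot_touched_down=False)   # -> "swing"
leg.update(clock_says_swing=False, foot_touched_down=True)   # -> "stance"
```

In a subsumption architecture, many such machines run concurrently, and higher layers (e.g. walking) modulate or override the outputs of lower layers (e.g. standing) rather than replacing them.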
At about the same time, Beer, Chiel, and Sterling developed a recurrent neural network (RNN) for controlling insect walking based upon the literature of the day. This network of 37 neurons (6 for each leg and a command neuron) controlled a simulated insect such that it walked in the continuum of speed-dependent insect gaits described by Wilson (1966). Robot I was then developed by Espenschied, Quinn, Beer, and Chiel to be controlled by this RNN such that it, too, mimicked the functional outputs of insect walking. The RNN controller for Robot I was more distributed than that of Genghis, with patterning arising from the interaction between leg-specific networks, not set by a central clock. Sensory feedback signaled the beginning and end of stance phase, but otherwise the coordination arose from central coupling.
The Cruse Rules (Cruse 1990) were also implemented to control Robot I as an FSM in place of the RNN, resulting in the same speed-dependent continuum of insect gaits (Espenschied et al 1993). The Cruse Rules implementation was resilient against 'lesions' within the network, that is, blocked transmission of sensory information between legs. This resiliency did not alter the robot's coordination under nominal conditions, but prevented the robot from being incapacitated when sensors or computers were damaged, improving the robot's performance overall.
The application of biological principles also aided the tuning of robot controller parameters. A subsequent functional neural controller for Robot I was evolved using a genetic algorithm (in simulation (Beer and Gallagher 1992) and implemented on Robot I (Gallagher et al 1996)). The evolved networks coordinated the legs of the simulated insect and the robot both when sensory feedback was provided and when it was disabled, demonstrating their resiliency against robot damage. Such features arising from simulated evolution demonstrated the usefulness of distributed, biologically-inspired algorithms for tuning network parameters for functional robot behavior.
To increase the flexibility and adaptability of Robot I's controller, the subsequent Robot II's posture and locomotion were enhanced by the addition of another degree of freedom to each leg and several leg-local insect-inspired reflexes, for example, a reflex to step in response to large perturbations and a reflex to search for a foothold if none was found (Espenschied et al 1995). When superimposed, these reflexes enabled the robot to walk over extremely cluttered terrain with no prior knowledge of the obstacles. The Cruse Rules were generalized for omnidirectional walking by setting the AEP in the direction of desired motion and the PEP opposite it, both shifted according to the Cruse Rules. Thus, Robot II could walk in any direction (including lateral 'crab walking') and exhibited a speed-dependent continuum of insect gaits. This biologically-inspired approach of incorporating sensory-driven reflexes into Robot II's controller improved its mobility and adaptability relative to Robot I's.
Around the same time, the Technical University of Munich (TUM) Walking Machine was developed based on the stick insect. The TUM Walking Machine (Pfeiffer et al 1995, Steuer and Pfeiffer 1997) featured a highly distributed controller that took close inspiration from the studies of Cruse et al regarding interleg coordination (Cruse 1976a, 1990, Cruse et al 1995). Like Robot I and Robot II described above, the controller utilized a distributed leg coordination module that passed the proprioceptive information of each leg to the adjacent legs and altered their AEPs and PEPs. Each leg controller possessed reflexes that enabled the system to react to obstacles on a single-leg basis with no prior knowledge. Using its biologically-inspired controller, the robot was able to walk stably over even terrain as well as obstacles and unstructured surfaces. Also inspired by the stick insect, researchers at the Forschungszentrum Informatik (FZI) began developing the LAURON series of robots (Berns et al 1994, Gaßmann et al 2001). Initially, LAURON's controller consisted of a two-layer neural network, with one layer consisting of modules for the control of each leg and the second layer managing leg coordination and path planning. The specific structures of the neural networks were generated using various reinforcement learning techniques (Ilg and Berns 1995). For the control of subsequent iterations of the robot (e.g. LAURON III), traditional neural networks were removed, but the distributed, neuromorphic architecture remained. LAURON III's controller used a series of local behaviors (e.g. cyclic gait generation, ground searching, collision reaction) in combination with higher-level body position control to generate foot trajectories (Gaßmann et al 2001). Stepping patterns were generated by a series of polynomial functions relating step parameters to leg phase, modulated by proprioceptive feedback.
Leg coordination was handled using a series of coordination rules based on achieving pre-calculated goal leg phases and maintaining the robot's static stability. Although this later iteration was structured in a more biologically plausible manner than the earlier neural network implementations, it did not feature reinforcement learning, meaning the walking parameters for this controller had to be manually tuned for each specific environment. Regardless, the LAURON series of robots has been a valuable demonstration of how biological principles of controller organization can be abstracted and implemented within the context of modern robotics techniques to produce adaptable robot locomotion.
HECTOR is another robot whose performance has demonstrated the value of applying biological principles to robot control (Dürr et al 2019). HECTOR's computational hardware is distributed, with each body segment (prothorax, mesothorax, and metathorax) possessing its own computing hardware and sharing information via sparse connections (Schneider et al 2014). Studies utilizing HECTOR have fallen into two general categories: demonstrating the utility of decentralized control mechanisms in leg control (Paskarbeit et al 2015, Simmering et al 2023); and exploring how hierarchical mechanisms may improve robot autonomy in challenging environments (Meyer et al 2020, Schilling et al 2021). Exploring decentralized control mechanisms does not refute the existence of CPGs and command neurons in the insect nervous system; indeed, no model could do so without experimental evidence. Instead, these decentralized control mechanisms confront robotics engineers with alternatives to commonly-used centralized models, which may be complicated to design or difficult to implement for real-time operation. For example, behavioral experiments in stick insects suggest that their posture on uneven ground is the result of independent height controllers for each pair of homologous legs (Cruse 1976b, Cruse et al 1993). It was shown that HECTOR could coordinate its posture in a similar way by modeling each leg as an independent virtual spring-damper, simulating how each would strain given the current forces on the leg, and commanding the feet to move according to these passive dynamics (Paskarbeit et al 2015). This method only worked because HECTOR's compliant joint drives enabled estimation of the force vector acting on each foot. Maintaining body height required averaging the lengths of the legs, which represents some degree of centralization.
However, this bioinspired controller demonstrates that a robot can control its posture without immediately sending all sensory readings to a central 'brain' that tells the legs how to move.
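The virtual spring-damper idea can be sketched as follows (a toy model; the gains, rest length, and loads are invented for illustration and are not HECTOR's values):

```python
def step_leg(length, force, k=200.0, b=50.0, rest=0.30, dt=0.01):
    """Advance one leg's length by one control step, letting it 'strain'
    under the measured axial force like a passive spring-damper.
    No central model of the body is consulted."""
    velocity = (force - k * (length - rest)) / b   # b*dL/dt = F - k*(L - L0)
    return length + velocity * dt

# Six independent legs settle under uneven loads; body height is then the
# one (mildly centralized) quantity computed from all legs, as in the text.
legs = [0.30] * 6
forces = [5.0, 5.0, -3.0, -3.0, 0.0, 0.0]   # axial loads in newtons (invented)
for _ in range(1000):
    legs = [step_leg(length, f) for length, f in zip(legs, forces)]
body_height = sum(legs) / len(legs)
```

Each leg settles where its spring force balances the external load, so the body conforms to uneven ground without any leg knowing the others' states.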
Each of these studies showed the potential of neuromorphic control systems with distributed architectures to control multi-legged robots. However, the complexity of the control networks was limited by the biological data and computational power available. Advances in both areas fueled the next generation of neuromorphic control systems constructed from neuromorphic computational units, i.e. dynamical neuron and synapse models.

Controllers with neuromorphic computational units: analog VLSI circuits
The computational power available during the construction of these early robots greatly limited the ability to simulate neural dynamics. As a result, analog VLSI circuits were developed as an alternative to simulation of neural dynamics on a von Neumann architecture (see section 3.1. for more discussion). These circuits enabled rapid computation of network dynamics in a highly parallelized fashion, making them theoretically capable of implementing very large neural controllers in real time with lower power consumption. Early efforts with analog VLSI circuits showed that oscillator models could be constructed in hardware to drive robots' oscillatory leg movements (Still and Tilden 1998, Brown 2000). Later, more refined versions were produced that controlled the omnidirectional stepping of a legged robot via a biological control framework in which walking direction was determined via exteroception (or internal 'volition') and communicated to the legs via descending commands (Ayers et al 2010). However, due to the difficulty of tuning network parameters, the difficulty of incorporating capacitors of the required magnitudes, and the rise of more powerful desktop computers, analog VLSI circuits have not yet superseded digital simulation as the most common way to implement neuromorphic control systems on hexapod robots (Lewis et al 2000).

Controllers with neuromorphic computational units: dynamical neurons and synapse models
Over time, increased computational power enabled the simulation and implementation of control networks that more directly encapsulated the dynamics of neurons and synapses in the nervous system. In 2008, Ayers and Rulkov implemented the structure from an earlier FSM controller (Ayers 2004) as a network of spiking neurons that controlled the directional walking of their robot, RoboLobster (Ayers and Rulkov 2008). They built the controller using a computationally efficient, two-dimensional phenomenological model of a spiking neuron, called a discrete time map-based (DTM) model, in which each neuron was tuned to produce spiking, bursting, or quiescence depending on its function within the network (Rulkov 2002). The adoption of phenomenological neural models that eschew the computational complexity of traditional models (e.g. the Hodgkin-Huxley model (Hodgkin and Huxley 1952)) facilitated the development of this and subsequent biomimetic robot controllers despite limited available computing power.
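The flavor of such phenomenological models can be seen in the chaotic Rulkov map, a close relative of the DTM family used on RoboLobster (the parameters below are illustrative choices with the nonlinearity above the bursting threshold, not the tuned values from that work):

```python
def rulkov_step(x, y, alpha=4.1, mu=0.001, sigma=-1.0):
    """One iteration of the two-dimensional chaotic Rulkov map.

    x is the fast 'membrane' variable; y is the slow variable whose
    drift switches the fast subsystem between quiescence and spiking.
    """
    x_next = alpha / (1.0 + x * x) + y
    y_next = y - mu * (x - sigma)
    return x_next, y_next

# Two state variables and a handful of arithmetic operations per step are
# enough to produce sustained spiking/bursting activity, which is why
# map-based models were attractive for real-time robot control.
x, y = -1.0, -2.9
trace = []
for _ in range(10000):
    x, y = rulkov_step(x, y)
    trace.append(x)
```

Because the whole neuron is two difference equations, thousands of such units can be updated per control cycle on modest hardware.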
Around this time, AMOS-WD06 and AMOS-WD08 were developed with RNN controllers that generated omnidirectional locomotion of 6- and 8-legged robots (Manoonpong et al 2008, Steingrube et al 2010). The controllers implemented dynamical CPG models to produce alternating stepping signals, with walking direction changing due to descending commands and carefully designed networks that altered the phasing of joint movements relative to the master CPG. These robots demonstrated how simple descending commands could drive complex behavioral changes, mimicking the function of the nervous system. These robots also demonstrated how design principles could be used to tune networks to perform specific functions without large-scale optimization. The resulting networks had more meaningful structure and overall transparency than if they had been developed using a generic machine learning method, e.g. a perceptron network.
The controller for the modular legged robot Octavio was also developed using an RNN (von Twickel et al 2011a, 2011b). Researchers tested multiple controllers on Octavio. Some used evolutionary algorithms to develop and refine modular single joint controllers that were later coupled together, while others were neurobiologically derived to mimic the earlier Ekeberg model of insect leg coordination (Ekeberg et al 2004, von Twickel et al 2011a). The modularity of Octavio's limbs allowed for rigorous experimentation with both single leg controllers and the interlimb coordination for the entire network with 4, 6, or 8 legs.
Also inspired by the stick insect, Tarry IIB was designed to be controlled by the modular neural network controller, Walknet (Schmitz et al 2008, Schilling et al 2013). Walknet was constructed from networks of perceptrons (i.e. neuronal activation functions without dynamics), whose dynamics are simpler than those of other controllers from that time. Despite this simplification, Walknet on board Tarry IIB demonstrated how local reflexes could simplify the body-level postural control of robots and animals. Many legged robots are kinematically redundant structures, meaning that for a given body position, there are infinitely many possible joint angles. Such problems are often solved by constructing a kinematic model of the entire body and then minimizing some metric to calculate sequences of joint angles (Lynch and Park 2017). In contrast to this highly centralized approach, Walknet implemented positive velocity feedback at most joints in the legs, a reflex observed in some animal behaviors (Bässler 1976). Such reflexes enabled the robot's leg joints to settle into feasible configurations without resisting one another during motion, all without centralized planning. This solution both demonstrated that complex control problems could be solved in an entirely decentralized way and provided a concrete hypothesis for the postural control of insects. It should also be noted that in contrast to many other models of insect motor control, Walknet has consistently considered the magnitude of motor output, rather than focusing on rhythm generation or multi-leg coordination alone.
Researchers at FZI continued expanding their LAURON platform with the development of LAURON V (Roennau et al 2014). This updated platform utilized a behavior-based control system in which 'behavior blocks' produced actuator commands given sensory inputs, a rating criterion, and a motivation signal (Kerscher et al 2008).
In LAURON's controller, six separate local leg behavior groups composed of swing, stance, ground contact, and collision behaviors directed the stepping of each leg. These local behavior groups were coordinated in different walking patterns (i.e. tripod, tetrapod, pentapod, and free gait) by abstractions of the Cruse Rules. The higher-level posture control group, composed of body height, inclination, and body position behaviors, then ensured the overall stability of the robot. Thus, given desired velocities and walking patterns as external inputs, the controller was able to generate stable, autonomous walking. Notably, LAURON V demonstrated its ability to cope with the dynamic, unstructured terrain of a staged search and rescue site in the EU Taranis Field Exercise, highlighting the controller's robustness and flexibility.
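A minimal abstraction of this kind of rule-based interleg coordination (not LAURON's code; the timing constants and leg numbering are invented) can be sketched as a single rule that forbids a leg from lifting while any neighbor is in swing:

```python
# Leg indices: 0/1 front L/R, 2/3 middle L/R, 4/5 hind L/R.
# Neighbors are the ipsilateral adjacent legs plus the contralateral pair,
# a simplification of the Cruse rule that suppresses lift-off in the
# neighbors of a swinging leg.
NEIGHBORS = {
    0: [1, 2], 1: [0, 3], 2: [0, 3, 4],
    3: [1, 2, 5], 4: [2, 5], 5: [3, 4],
}
STANCE_MIN, SWING_LEN = 10, 5          # timing constants (invented)
state = ["stance"] * 6
timer = [3 * leg for leg in range(6)]  # stagger the legs' cycles
history = []
for t in range(200):
    for leg in range(6):
        timer[leg] += 1
        if state[leg] == "swing" and timer[leg] >= SWING_LEN:
            state[leg], timer[leg] = "stance", 0           # touch down
        elif (state[leg] == "stance" and timer[leg] >= STANCE_MIN
              and all(state[n] != "swing" for n in NEIGHBORS[leg])):
            state[leg], timer[leg] = "swing", 0            # lift off
    history.append(list(state))
```

Inspecting the recorded states shows that no two neighboring legs ever swing simultaneously, so stable stepping patterns emerge from purely local rules, without a central gait table.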
Eventually, software tools such as Nengo (Bekolay et al 2014), AnimatLab (Cofer et al 2010), and the AnimatLab Robotics Toolkit (Szczecinski et al 2015) facilitated desktop-computer assembly and testing of dynamical neuromechanical models. These enabled users to construct dynamical neural control networks, test their performance in simulation, then port them directly for robot control (Galluppi et al 2014, DeWolf et al 2020). All these platforms demonstrated that it was practical to simulate dynamical neural networks with commercial off-the-shelf (COTS) computing and actuator hardware. Thus, it became more practical for robot controllers to mimic not only the function but also more of the morphology of animal nervous systems.

Controllers with neuromorphic organization and computational units: synthetic nervous systems
Increasingly powerful off-the-shelf computing hardware, coupled with a growing body of research in the neuroscience of invertebrate behavior, has recently facilitated the development of robots with control systems that control motion through biologically detailed, morphological mechanisms. In short, technology has enabled an increasingly 'biology-first' approach to the control of robotic locomotion and navigation, in which controllers leverage more biological detail than ever before.
As mentioned at the beginning of the previous section, RoboLobster's controller demonstrated that the anatomical structure of crustacean thoracic circuits was sufficient to produce directed underwater stepping (Ayers and Rulkov 2008). In particular, complex calculations emerged from simple neural dynamics. Networks could even be implemented on low-power LEGO robot hardware, demonstrating that dynamical neural controllers could solve complex control problems while using very limited computing power (Blustein et al 2013). Further application of their DTM neuron model and related neural architectures has resulted in more proficient robotic walking as well as swimming and flying (Ayers et al 2012). This group's close consideration of the biological model system enabled the development of controllers with capabilities greater than the sum of their parts, that is, for the cost of simulating simple neural dynamics, they obtained emergent complex neural computation for robot control.
MantisBot, a robot developed in 2015 based on the Chinese mantis, controlled leg stepping with a dynamical neural framework similar to RoboLobster's CCCPG approach (Szczecinski et al 2015). The stepping controller's architecture was heavily based on biological findings, with multiple pattern generator networks controlling the alternating motion of each joint (Büschges et al 1995). The relative phases of the CPGs were coordinated by sensory feedback from leg kinematics (Hess and Büschges 1999, Akay et al 2001, Bucher et al 2003) and leg strain (Akay et al 2004, Zill et al 2004, Akay and Büschges 2006). Similar techniques were later applied to the (simulated) walking of Drosophibot, the group's subsequent robot inspired by the fruit fly (Goldsmith et al 2020). These robots' controllers served to integrate experimental results collected over years of motor control research in insects into singular platforms.
One goal of these projects was to understand how distributed walking motor programs could be modified by simple descending commands to cause the robot to start stepping, stop stepping, and step in different directions. It was found that only two descending signals, one for start/stop and one for walking path curvature, were required to modify stepping. The same signals were distributed to each leg pair's 'ganglion' control module, but each utilized them differently to execute different changes in movement. This mechanism mimicked recordings in the cockroach in which intraleg segment phasing was altered by stimulating regions of the brain associated with curved walking (Martin et al 2015). Closely considering the details of related biological experiments resulted in a robot controller that served as a concrete hypothesis for how descending signals direct the stepping of insect legs, despite the distributed nature of the control networks. Subsequent experiments with a robotic leg have extended this work by modeling specific afferent pathways in the insect nervous system and how they may regulate its activity to change the effect of descending signals (Goldsmith et al 2021).
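The low-dimensional command idea can be sketched as two broadcast scalars that every leg module interprets locally (a schematic of the concept, not MantisBot's neural network; names, ranges, and gains are invented):

```python
def local_stride(go, curvature, side):
    """Interpret two broadcast descending signals in one leg's module.

    go in [0, 1] gates stepping on and off; curvature in [-1, 1]
    lengthens strides on the outside of the turn and shortens them on
    the inside.
    """
    sign = -1.0 if side == "left" else 1.0
    return go * max(0.0, 1.0 - 0.5 * sign * curvature)

# The same two scalars reach every leg module, but left and right legs
# respond differently, steering the body without leg-specific commands.
outer = local_stride(go=1.0, curvature=0.5, side="left")
inner = local_stride(go=1.0, curvature=0.5, side="right")
```

Setting `go` to zero silences all legs at once, while varying `curvature` redistributes stride lengths across the body, mirroring how a broadly-acting signal can produce a coordinated behavioral change.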
Studies have also begun to apply the influx of new information about the insect central complex (CX) to robotic platforms. The CX integrates many sensory modalities and appears to play a critical role in how insects localize within and navigate through their environment (for a brief review, see Heinze 2017). In work by Stone et al, a neural model of path integration and steering in the bee CX was implemented as part of a robot controller, enabling the robot to biomimetically integrate its path as it explored an area, then head straight 'home' when finished exploring (Stone et al 2017). The connectivity, receptive fields, and functions in the model were directly derived from electrophysiology and electron microscopy measurements from the bee, with a one-to-one correspondence between anatomical neural types and the neurons in the robot controller. The results provide a functional interpretation for many architectural features of the CX and capture features not present in previously proposed models. Furthermore, this demonstration clearly shows that a biology-first approach to robotics can lead to the creation of robots that behave in adaptive, animal-like ways.
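The underlying computation of path integration can be stated compactly (a schematic of the vector arithmetic only, not Stone et al's neural implementation):

```python
import math

def integrate_path(steps):
    """Accumulate a home vector from (heading, distance) odometry samples
    and return the heading and distance back to the start point.

    Each sample adds a displacement in world coordinates; the negated
    accumulated vector points home, as in path-integration models.
    """
    hx = hy = 0.0
    for heading, dist in steps:
        hx += dist * math.cos(heading)
        hy += dist * math.sin(heading)
    return math.atan2(-hy, -hx), math.hypot(hx, hy)
```

For example, walking one unit east and one unit north leaves the agent a distance of √2 from home, with the homing heading pointing southwest; the CX model performs an analogous accumulation with populations of heading-tuned neurons.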

The future of insect-inspired robot control
The studies described in the previous section show the promise of a biology-first approach to robot control (for a review, see Webb 2020). The implementation of increasingly detailed neural models for robot control has benefitted from the ever-increasing computational power available in COTS electronics. However, as transistors have been miniaturized to the atomic scale, an end to Moore's law is in sight (Shalf 2020), meaning that historical advances in raw computational power are unlikely to continue indefinitely. Simultaneously, more biological data is available than ever before, including electron microscope scans of the fruit fly brain and ventral nerve cord (Scheffer et al 2020) and optogenetic imaging and stimulation methods (Riemensperger et al 2016, Kohsaka and Nose 2021). To continue to apply the computation of the insect nervous system to real-time robot control, new software and hardware architectures are needed. We anticipate that three unfolding advances will lead to further breakthroughs in insect-inspired robotic control: neuromorphic computers for robot controllers, increasingly biomimetic robot structures, and microscopy and genetic techniques for investigating the insect nervous system.

Neuromorphic computing hardware
A major hindrance in the development of neural-based robotic control is that biological neural systems are organized in a manner almost completely opposite to traditional computer architectures; arthropod nervous systems contain millions of neurons operating asynchronously and in parallel, while traditional von Neumann computers use one or a small number of dedicated central processors which operate synchronously at high speed (von Neumann 1993). Traditional computers are more than capable of simulating neural dynamics on a smaller scale. However, simulating networks consisting of thousands to millions of dynamic neurons is out of reach of embedded systems suitable for use on-board a mobile robot. To solve this problem, specialized computer hardware is needed.
Designing computer hardware for simulating neural dynamics is not a new field, dating back to the simulation of Hodgkin-Huxley neural models on analog computers (FitzHugh 1961). This practice came to the forefront with the work of Carver Mead, who coined the term 'neuromorphic engineering' to describe the design of VLSI circuits that mimic the layout of the nervous system (Mead and Ismail 1989). This approach of using specific analog circuits to mimic neural networks continued (for its use in robot controllers, see section 2.2.2), but has not yet reached widespread adoption due to the difficulty of tuning, limitations on capacitance values, and the need to design a unique VLSI circuit for each application.
In response to this analog circuit development, other researchers have been designing general-purpose digital neuromorphic computer hardware for solving computational problems that are difficult for the von Neumann model. One of the first of these modern neuromorphic solutions was SpiNNaker (Khan et al 2008), which combined large numbers of small processing cores onto a single chip in a manner similar to the design of modern supercomputers. This was followed by chips from IBM (Merolla et al 2014, Akopyan et al 2015) and Intel (Davies et al 2018, 2021), which were dedicated to simulating arbitrary networks of leaky integrate-and-fire spiking neurons. These digital processors have proved to be easier to manufacture than their analog predecessors, and due to their improved availability have been used in a variety of robotic applications (Davies et al 2021, Cohen 2022), including CPGs for legged locomotion of robots (Gutierrez-Galan et al 2019; in simulation: Polykretis et al 2020, Angelidis et al 2021). As the second generation of these chips, which are capable of simulating up to one million simple neurons per chip, starts to become available (Yan et al 2022), we foresee practical neural control systems running on-board legged robots approaching the scale of those in arthropod nervous systems.
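The leaky integrate-and-fire unit these digital chips implement is simple enough to state in a few lines (parameter values below are generic textbook choices, not those of any particular chip):

```python
def simulate_lif(current, steps=1000, dt=1e-4, tau=0.02, r=1e7,
                 v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065):
    """Count the spikes of a leaky integrate-and-fire neuron driven by a
    constant input current (forward-Euler integration, SI units)."""
    v, spikes = v_rest, 0
    for _ in range(steps):
        # membrane equation: tau * dv/dt = -(v - v_rest) + r * current
        v += dt * (-(v - v_rest) + r * current) / tau
        if v >= v_thresh:          # threshold crossing: spike and reset
            v, spikes = v_reset, spikes + 1
    return spikes
```

With no input the cell sits at rest; a suprathreshold constant current produces regular spiking. Neuromorphic chips evaluate many such update rules in parallel, asynchronously, rather than time-multiplexing them on one processor.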
Looking further into the future, work is ongoing on designing neuromorphic systems that move away from digital processors (Christensen et al 2022). A new hybrid digital-analog neuromorphic chip uses transistors operating in their subthreshold regime to simulate spiking neurons using less than 40% of the energy used by the leading digital chip (Neckar et al 2019). Recent work has also shown the ability to manufacture circuits using memristors that can emulate the behavior of spiking neurons without any transistors (Kumar et al 2020), a discovery which may lead to extremely dense arrays of power-efficient neurons in silico. As the capacity of neurons and synaptic connections per chip increases, we foresee neural control solutions becoming more widely implemented in both legged robotics and robotics in general.

Biomimetic robot structures
Just as the mechanical structure of an organism is intricately tied to its nervous system (Chiel and Beer 1997), robot hardware also affects what computations are necessary for control. If the hardware is configured to reduce the complexity of the control system, it is said to be performing 'morphological computation' (Pfeifer et al 2006). As such, we expect that more effective robot locomotion control will arise in tandem with more biologically-inspired hardware, as the potential mismatch between mechanical structure and nervous system controller is minimized. Recent robotic modeling studies suggest that the kinematics of insect legs increases their range of motion and reduces the peak torque during walking (Billeschou et al 2020), which may motivate engineers to build robots whose legs closely mimic those of an insect. Manoonpong et al (2021) provide a recent review of several advancements made to the mechanical structures of insect-inspired robots.
Most hexapod robots are actuated by electric motors, and the increasing availability of backdrivable, high-torque, low-speed motors enables strength-to-weight ratios closer than ever to those of muscles (Seok et al 2012, 2013). These motors are able to mimic the dynamics of antagonistic muscle pairs even if the underlying physics are completely different, thus minimizing any mismatch between the structure and a highly neuromorphic controller (von Twickel et al 2011a, Goldsmith et al 2020). One drawback of using motors to actuate insect-mimetic robots is that the motors' mass on the legs increases inertia about the joints and affects the robot's center of gravity. Inertia about the joints may be balanced by elastic components, resulting in a robot that has the same 'dynamic scale' as the model animal, even if its size is different (Goldsmith et al 2020, Sutton et al 2021). However, because the robot's mass is concentrated in its legs, the center of mass (COM) is highly dependent on leg posture. An insect does not face this challenge (Hooper 2012); for example, in Drosophila, only 11% of the body's mass is in the legs (Szczecinski et al 2018), meaning that the location of the COM is largely independent of leg posture. Future robots could address this issue by housing motors within the body and actuating the legs via cables. Such a solution would also simulate muscle geometry by introducing angle-dependent moment arms for the actuators.
Several alternatives to electric motors may facilitate the construction of more biomimetic robots. In the past, braided pneumatic actuators have been used as 'artificial muscles' to actuate robot legs (Kingsley et al 2006). Such actuators contract by pumping air into an isovolumetric bladder, increasing its cross-sectional area and shortening its length. These actuators are power-dense and compliant, have linear geometry (as opposed to rotary geometry, like an electric motor), and can only actively shorten, capturing many features of a muscle-tendon complex (Zajac 1989). Another alternative to motors is to actuate leg joints with shape memory alloy (SMA) actuators (Ayers 2016). SMAs may be stretched (e.g. by the antagonistic actuator) up to 5% when at room temperature, then contract to their original length when heated. Such actuators are power-dense. However, SMAs perform best under water, where the actuation heat can be quickly convected away to drive 'relaxation' at timescales useful for legged locomotion. Future research is needed to optimize SMAs for robots that walk on land, where heat dissipates more slowly through air. A radically different approach is to 3D print actuators from biological tissue (Webster et al 2017, Won et al 2020). These actuators have the advantage of being largely comprised of muscle, not simply being muscle-like. Although some technical challenges, e.g. oxygenating and feeding the tissue, must be addressed before widespread deployment, this early work hints at future biohybrid robots controlled by biological neural circuits.
Another aspect of biological structures that could improve robotic locomotion control is compliance. Organisms have a great deal of compliance throughout their limbs, profoundly affecting the way their bodies manage external forces and energy and how their nervous systems control movement (Chiel and Beer 1997, Pfeifer et al 2006, Laschi and Mazzolai 2016). Several recent robots incorporate compliance in their structures (Dürr et al 2019, Goldsmith et al 2020), but further investigation could provide insight into precisely how such compliance affects nervous system control. Feet are one opportunity for compliance in biological organisms that is presently under-investigated in robotic systems but could produce a variety of advancements for walking robots. Because insect tarsi are chains of compliantly connected segments comprising nearly 30% of leg length (Manoonpong et al 2021), feet may be particularly important to insect-inspired robots. Compliant feet may aid in conforming to a variety of substrates, with additional foot actuation possibly aiding in propulsion by 'pushing off' at the end of stance and aiding in stability by gripping the substrate (Tran-Ngoc et al 2022). Adding appropriate feet to robots may facilitate larger increases in efficiency than control software or hardware.
Biomimetic sensing could also provide insights into how the nervous system processes sensory feedback. Traditional robotic approaches prioritize a small number of central, high-power, and high-precision sensors for internal and external sensing. In contrast, the nervous system utilizes readings from a multitude of distributed, lower precision sensory organs. Several of these sensors are range-fractionated, with unique static and dynamic ranges that provide redundant and resilient information about the body's state (for a review, see (Delcomyn et al 1996)). Taking a similar approach in robotic systems may reduce computational complexity and increase the robustness and resilience of the robot's sensory feedback. Additionally, place-coded sensory information may facilitate alternative computational approaches that sidestep the need for computationally expensive coordinate frame transformations (Guie and Szczecinski 2022).
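The range-fractionation idea can be sketched as overlapping sensors with different spans and resolutions (a toy model with invented ranges and step sizes, not any specific proprioceptor):

```python
def quantize(x, lo, hi, levels):
    """Clamp x to [lo, hi] and round it to one of `levels` uniform steps."""
    x = min(max(x, lo), hi)
    step = (hi - lo) / levels
    return lo + round((x - lo) / step) * step

def fused_angle(true_angle):
    """Estimate a joint angle (degrees) from two low-precision sensors:
    a coarse sensor covering the full range and a fine sensor covering
    only a narrow band, preferring the fine one when it is in range."""
    if 80.0 <= true_angle <= 100.0:
        return quantize(true_angle, 80.0, 100.0, 64)   # ~0.3 deg steps
    return quantize(true_angle, 0.0, 180.0, 32)        # ~5.6 deg steps
```

The result is high resolution where it matters most and graceful, lower-resolution coverage elsewhere, from two cheap sensors rather than one expensive one; losing either sensor degrades, but does not eliminate, the estimate.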

Experimental opportunities with insects
The development and increasing refinement of more advanced genetic tools such as connectomics and optogenetics have made insects, particularly Drosophila, an even more attractive model for neuromechanical investigations. Connectomics refers to the development of detailed maps of synaptic connections between neurons within defined areas of the nervous system, or connectomes (Kleinfeld et al 2011). These connectomes can provide insight into the function of certain neurons (e.g. neurons connecting to muscles being identified as motor neurons), as well as the anatomical and functional organization of the nervous system. Partial functional connectomes have been established over the past several decades for a variety of well-studied systems such as the stomatogastric ganglion in crustaceans (Bargmann and Marder 2013). However, only recently have advances in electron microscopy and image analysis made more detailed anatomical connectome generation feasible for a variety of invertebrate and vertebrate organisms. Drosophila has been a popular target for this work, with connectomes of large portions of the Drosophila brain and central nervous system being published over the last few years (Zheng et al 2018, Scheffer et al 2020). Analyzing the unprecedented amount of data supplied by these studies will allow neuroscientists to further understand the organization of the insect nervous system, as well as precisely target future experiments based on the provided connectivity. One emerging method for these detailed experiments in insects is optogenetics. Optogenetic techniques allow for the targeting and manipulation of specific neurons with light-sensitive proteins through illumination (Riemensperger et al 2016, Kohsaka and Nose 2021). This non-invasive technique allows for spatiotemporal manipulation of selected elements in neuronal circuits, helping determine causal relationships between neural activity and various animal behaviors.
The advantages of optogenetics are best exploited in genetically tractable animals with small yet robust nervous systems, such that distinct neurons can be targeted reproducibly to generate complex behaviors. To that end, Drosophila has emerged as a highly favorable model organism. Advancements within the last decade have allowed for efficient depolarization in the neurons of adult Drosophila, leading to a variety of illuminating studies utilizing optogenetics in the study of locomotion. One example is the identification of command neurons in otherwise entirely intact animals, such as the moonwalker descending neurons that drive backward walking in Drosophila (Bidaye et al 2014). Subsequent studies have identified additional neurons with similar command-like functionality (Bidaye et al 2020). Command neurons were initially identified in reduced preparations in crustaceans, but their precise function in the intact animal was unknown. Experimental work uniquely possible in insects has the potential to further refine and extend principles of nervous system organization, particularly when combined with eletrophysiological approaches such as patch-clamp recording (Liessem et al 2022). Such advanced techniques will improve our understanding of the insect nervous system, which may further increase the neuromorphic nature of hexapod robot controllers.

Final thoughts
The past several decades have seen great progress in the application of biological principles to the control of insect-like legged locomotion and we foresee this trend expanding in the future. As experimental techniques in insect neuroscience advance and biological principles are incorporated more directly into robots, robots may serve as integrative neuromechanical models of the animals under investigation. Furthermore, such a 'biology-first' approach to robotics may be a fruitful long-term investment strategy for understanding what makes animal motion so adaptable, robust, and resilient. Ongoing work to develop neuromorphic hardware and specialized actuators will encourage researchers to develop more animal-like robots in the coming years. We predict that further advances in computation due to software architectures, hardware architectures, morphology, and sensing will drive even more convergence between robotics, neuroscience, and biomechanics, emphasizing the importance of a 'biology-first' approach.

Data availability statement
No new data were created or analyzed in this study.