Locomotor adaptations: paradigms, principles and perspectives

The term ‘locomotor adaptations’ (LMA) refers to the alterations in motor commands that are automatically or volitionally generated in response to a perturbation that continuously alters the task demands of locomotion. LMAs have been widely studied using a variety of experimental paradigms and analysis techniques. The perturbation can be expected or unexpected, and can consist of a change in the movement environment, of forces actively applied to the person’s body segments, of a modification of the sensory feedback associated with the task, or of explicit task instructions. The study of LMAs has been key in widening our understanding of the principles regulating bipedal locomotion, from the overall strategies driving the short-term adjustment of motor commands down to the different neural circuits involved in the different aspects of locomotion. In this paper we will provide an in-depth review of the research field of LMAs. We will start with an analysis of the principles driving the evolution of bipedal locomotion in humans. Then we will review the different experimental paradigms that have been used to trigger LMAs. We will analyze the evidence on the neurophysiological correlates of adaptation and the behavioral drivers behind it. We will then discuss characteristics of LMA such as transfer, generalization, and savings. This will be followed by a critical analysis of how different studies point to different task-goal-related drivers of adaptation. Finally, we will conclude with a perspective on the research field of LMAs and on its ramifications in neuroscience and rehabilitation.


Introduction
Locomotion is central in the life of mammals, whether it is required for foraging, escaping predators, or migrating to more favorable environments. It is a ubiquitous task that demands a relatively short learning phase, consistent repeatability, high flexibility, and energy optimality. Humans, among mammals, are notably defined by their locomotor ability, as our near-uniqueness in being preferentially bipedal is both a product and an enabler of our evolutionary path. The control of locomotion is remarkably flexible to the demands of the different environments where humans live and prosper. This flexibility encompasses the ability to counteract or prevent sudden events that may affect our safety (e.g. a slip or a stumble) and to dynamically modify our motor plan to account for systematic changes in our gait environment (e.g. walking on uneven terrains). As such, the study of how locomotion is controlled and adapted to different scenarios has been a major point of interest of the motor control research community for several years [1]. Locomotion is a highly stereotyped task that results from the synergistic activity of supra-spinal and spinal circuitry [2], generating a volitional but semi-automatic patterning that is unique among voluntary movements. Locomotor patterns result from the optimization of several variables reflecting the goal of the locomotion task, that is, carrying the body weight safely, flexibly, and efficiently. As a result, the different ways in which locomotion is adapted depend on the perturbing condition and how it affects these parameters. The aim of this review is to provide an in-depth overview of the current state of research in the field of locomotor adaptations (LMAs) and to stimulate discussion on future research developments, both in the motor control field and in the functional use of our current knowledge of LMAs in clinical rehabilitation.
Similarly, another feature of the human lower limb, the increased length of the femur, may not be directly related to the adoption of exclusive bipedalism [28], as limb length does not appear to provide a clear energetic advantage [29,30]. Instead, longer femurs may have been the result of the necessity of optimizing thermoregulation through radiational cooling [31] and a byproduct of the increase in pelvis size that was necessary to accommodate the parturition of the human cranium [32]. On the other hand, evolutionary changes observed at the knee appear focused on reducing potential cartilage stress and optimizing muscle function after the transition to habitual bipedalism and a more mobile spine [33].
From a neuromechanical point of view, current human bipedalism is similar, in terms of muscular activations, to the patterns observed during the climbing of arboreal monkeys [34]. The arboreal origin of bipedalism has recently gained a prominent place in the literature, and it has been demonstrated that bipedal posture presents a clear foraging and stability advantage for arboreal monkeys with respect to the quadrupedal alternative [8,10]. Energy efficiency has also long been investigated as a potential evolutionary driver of bipedalism, with inconsistent results. A well-known early study showed that bipedal locomotion does not provide a substantial energetic advantage over quadrupedal locomotion in hominids [35]. However, more recent evidence has demonstrated that bipedal locomotion in hominid apes may be more energetically efficient than knuckle-walking quadrupedal behaviors and that human bipedal locomotion is, generally, significantly more efficient than both modes of locomotion in apes [24]. Thus, it appears that bipedalism represented an energetic advantage for early hominids, especially in light of the prevalent view that hominids, during the Miocene, had to cover longer terrestrial distances to forage as food became more widely dispersed [12,36].
The evidence accumulated from fossils does not, of course, allow us to speculate on the short-term adaptive behaviors exhibited by early hominids, but the way in which pre-humans evolved towards bipedalism can provide some insight into which parameters are naturally task-relevant during gait. A comprehensive analysis of the available evidence suggests that our ancestors evolved from compliant quadrupedalism, to a compliant and stable bipedalism, to a more energetically optimal inverted-pendulum gait [21]. At the same time, given how several of the physiological characteristics of our locomotor apparatus are a byproduct of our increased brain size, we should not understate how this factor may have impacted our locomotor control system. In fact, the increase in height and weight of the head escalates the dangers associated with the potential effects of a fall, making stability a critical control variable. All in all, evolutionary evidence appears to demonstrate the key role of stability and energy minimization in the locomotion task.

Locomotor adaptation paradigms
During locomotor behaviors, adaptations have been observed in response to several different kinds of perturbations, each of them pointing to one or more task-relevant dimensions of walking. LMA, as we have defined it earlier, happens when the routine walking behavior is consistently made to deviate from its self-selected, self-optimized pattern in a task-relevant way. This can be prompted by modifying the walking environment so that some characteristics of normal gait are altered (e.g. the relative kinematic relationship between the two legs in split-belt walking [37]), by using sensory feedback (e.g. visual [38] or auditory [39]) to make the person walk in a different way, or by using external agents that actively change the biomechanics of walking (e.g. cable systems acting on the foot [40], orthoses [41] and exoskeletons [42]).

Environmental paradigms
Among all LMA paradigms, the most widely studied is, by far, the split-belt adaptation paradigm. A search on PubMed for the words 'split-belt adaptation human' returns 180 results between 1994 and August 2022, including a comprehensive review by Hinton et al [43]. In the split-belt paradigm, individuals are asked to walk on a treadmill composed of two independent parallel belts. During the experiments, participants usually experience a mixture of walking conditions, including walking with the belts running at the same speed, walking with the belts running at different speeds (most commonly with speed ratios of 2:1 or 3:1, although LMAs have been observed for several other speed ratios [44]), and even walking with the belts running in different directions [45]. In most implementations of the split-belt treadmill experiment, participants, after being exposed to the split-belt scenario, present an initial step-length asymmetry that is promptly compensated for. A recent study has replicated an adaptation behavior similar to the one observed during split-belt treadmill walking using motorized shoes [46]. As we will see in detail in the following sections, split-belt walking perturbs kinematic and kinetic symmetry [47], challenges stability [48,49] and increases energy consumption [50]. Other environmental paradigms simulate walking environments that present expected or unexpected unevenness [51][52][53], compliance [54,55] or sudden destabilizing changes [56,57]. The aim of these paradigms is mostly to challenge gait stability, although they also affect other relevant dimensions such as energy efficiency [51].
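Step-length asymmetry, the outcome measure most commonly tracked in split-belt studies, is typically quantified as the difference between the two legs' step lengths normalized by their sum. The following is a minimal sketch of this computation; the function name and the numeric values are illustrative assumptions, not taken from any cited study:

```python
import numpy as np

def step_length_asymmetry(sl_fast, sl_slow):
    """Normalized step-length asymmetry, computed per stride.

    Returns 0 for perfectly symmetric steps; the sign indicates
    which leg is taking the longer step.
    """
    sl_fast = np.asarray(sl_fast, dtype=float)
    sl_slow = np.asarray(sl_slow, dtype=float)
    return (sl_fast - sl_slow) / (sl_fast + sl_slow)

# Illustrative (synthetic) values in meters: early adaptation shows a
# large asymmetry that is gradually reduced over strides.
early = step_length_asymmetry(0.70, 0.50)  # ~0.167
late = step_length_asymmetry(0.61, 0.59)   # ~0.017
```

Because the index is normalized, it allows comparison across participants walking at different speeds and with different leg lengths.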

Sensory feedback paradigms
While paradigms such as the split-belt treadmill induce adaptations by modifying the intrinsic characteristics of the walking environment, LMAs can also be triggered by altering how the movement is perceived: indeed, simply modifying the sensory feedback associated with the movement is sufficient to elicit adaptation [38,39,58]. Visuomotor adaptations have been observed during locomotion in response to implicit visual disturbances [58] or explicit visual guidance [59], and evidence has been provided that the former induces longer-lasting retention than the latter [60]. Some works have also combined visual feedback perturbations with the split-belt [38,61,62] or other environmental [56,63] or external-agent-based paradigms [64]. Changes to the perception of the orientation of the different body segments also lead to adaptation. An example of this is the podokinetic after-rotation (PKAR) paradigm [65][66][67]. In the PKAR paradigm, individuals perform a training period walking on a rotating treadmill, which re-calibrates the perception of the relative rotation of the trunk with respect to the feet. They are then made to walk overground blindfolded, and the adaptation is expressed as a curved trajectory of which the individuals are unaware. The effect of sensory feedback-based perturbations on the task goals of gait is variable. Explicit guidance is normally designed to become the task goal (e.g. 'track this trajectory with your foot') and thus overrides other adaptation drivers. The effect of implicit distortion depends on the way in which the feedback is designed. As an example, a paradigm where implicit feedback is used to give the perception of step-length asymmetry [58] likely triggers the same adaptation drivers as the split-belt treadmill.

External agent paradigms
The most common way to trigger LMAs using external agents is through the direct application of forces to individuals while they walk, which induces biomechanical changes that in most, but not all [42], cases trigger an adaptive response. Different points of application and application methods have been tested, with works showing the effects of forces applied at the waist [68,69], lower limb joints [42,70,71] and feet [72]. When the forces are applied directly to the legs, they are generally more effective in generating LMAs when applied during the swing phase, early stance and mid-stance, while only minimally effective when applied in late stance [73]. Forces can be applied as weights [74][75][76] or, most commonly, by using actuators [42,70,76], either worn as part of an orthosis or exoskeleton or non-worn and connected to the limbs through cables or rods [40,72]. Forces can be applied unilaterally [72,77] or bilaterally [41] depending on the aim of the experiment. External forces have been used to purposely alter balance [78] or stepping characteristics such as step length or step height [42], or to force pace and alter energy efficiency [41].

Lower-limb motor adaptation (LLMA) paradigms
It is worth mentioning a few additional lower-limb motor adaptation (LLMA) paradigms that are not directly based on walking but whose results can inform the way in which repetitive lower-limb control is organized. Among these are paradigms such as unilateral stepping [79] and cycling paradigms such as temporally and spatially asymmetric cycling [80], unilateral cycling [81] and leg-independent cycling [82,83]. A paradigm adjacent to our definition of LMA is the broken escalator phenomenon [84]. In this paradigm, individuals experience stepping onto a moving platform several times. They then unexpectedly step onto the same platform while it is, this time, stationary. This transition leads to an after-effect that manifests as a stumble and is washed out over several repetitions.

Reactive and anticipatory components of adaptation
LMA, as a behavioral phenomenon, can be considered the summation of multiple neurophysiological responses to a modification in the walking environment. Most commonly, it is possible to discern distinct reactive and anticipatory responses to a perturbation [1] which, summed together, constitute the adaptation. Reactive responses are driven by sensory recalibration [85] and are characterized by feedback-driven fast modulations of the muscular patterns that directly respond to the perturbing event [86], possibly to stabilize the joints and avoid falls, often through co-contraction of agonist and antagonist muscles. These responses are not learned over time and do not present an after-effect once the perturbation is removed [71]. Anticipatory responses, on the other hand, are progressive updates of the motor plan that evolve through continuous experience of the perturbing event. These responses are learned over time and present an after-effect once the perturbing event is removed [70,71]. The literature has shown that anticipatory responses appear to originate in cerebellar, cerebral and subcortical areas, while reactive responses are thought to originate in the brainstem, the vestibulo-spinal portions of the cerebellum and the spinal cord [43]. The combination of these responses yields an adaptive behavior characterized by an exponential change attributed to the anticipatory responses and an offset, or step-wise, response attributed to the reactive component [70]. Both components of adaptation, and the adaptation process in general, are affected by ageing, reflecting the effect that ageing has on the underlying neural mechanisms controlling the reactive and predictive components of locomotion (for a comprehensive review on this topic see [87]). Exponential adaptive responses have been observed in kinematics [42,72,88,89], kinetics [90], muscular activations [49,70] and energy consumption [50,91].
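The decomposition just described is often operationalized by fitting a single-exponential curve plus a constant offset to a stride-by-stride error time series. The sketch below, on synthetic data with illustrative parameter values (assumptions, not drawn from any cited study), shows how the anticipatory (exponential) and reactive (offset) components could be separated:

```python
import numpy as np
from scipy.optimize import curve_fit

def adaptation_model(stride, amplitude, rate, offset):
    """Single-exponential adaptation with a step-wise offset.

    The decaying exponential term is commonly attributed to the
    anticipatory (learned) component, the constant offset to the
    reactive one.
    """
    return amplitude * np.exp(-rate * stride) + offset

# Synthetic adaptation curve: the error decays from ~1.0 toward a
# residual offset of 0.1, with Gaussian measurement noise.
rng = np.random.default_rng(0)
strides = np.arange(200)
true_curve = adaptation_model(strides, 0.9, 0.05, 0.1)
data = true_curve + rng.normal(0, 0.02, strides.size)

# Recover the components by nonlinear least squares.
params, _ = curve_fit(adaptation_model, strides, data, p0=(1.0, 0.1, 0.0))
amplitude, rate, offset = params
```

The fitted rate gives the speed of the anticipatory process, while the offset estimates the persistent reactive contribution that does not wash out with practice.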

The role of the cerebellum and the cerebrum in locomotor adaptations
Substantial interest has been devoted to unravelling the neurophysiological substrates associated with the adaptive responses. A comprehensive recent review on this topic has analyzed the contributions of different parts of the Central Nervous System to both reactive and adaptive responses during the split-belt treadmill paradigm [43]. Here we will summarize and complement this information, also considering other adaptation paradigms. Most research on the neurophysiological correlates of LMA has so far focused on the role of the cerebellum in this process. Both the upper- and lower-limb adaptation literature has shown that the cerebellum has a central role in error-driven motor adaptation [92][93][94][95][96][97][98]. Hinton et al recently measured patterns of brain activation using positron emission tomography during split-belt walking to capture the changes happening in the activation of different brain structures during LMA [99]. They showed increased activity in the lateral part of the cerebellum, in line with the demonstrated importance of cerebellar structures in motor adaptations [94,100]. Individuals with impairments caused by cerebellar lesions present intact reactive responses but diminished or absent anticipatory adaptations [94], mostly characterized by an inability to adapt the intralimb temporal parameters of gait, while presenting an unaltered ability to adjust step length. Similar results have been observed in cerebellar patients during PKAR experiments [101] and in cerebellar patients affected by essential tremor during split-belt adaptation [102]. Jossinger and colleagues, using a tractographic analysis of the cerebellar peduncles and the corticospinal tract, found a significant positive correlation between the magnitude of adaptation achieved by healthy individuals and the diffusivity levels observed in the left inferior cerebellar peduncle during a split-belt treadmill experiment [103].
Previous transcranial magnetic stimulation (TMS) evidence showed that adaptive learning resulted in a reduction of cerebellar inhibition, while concurrent changes in M1 excitability were not related to the adaptation process [104]. However, stimulation of the cerebellum using transcranial direct current stimulation (tDCS) has presented inconsistent results, similar to what was observed in the upper limbs, with one study showing that anodal/cathodal tDCS speeds up/slows down the adaptation process [105] and another showing that the stimulation has no clear effect on the adaptation per se, but makes the washout period longer [106]. Most studies so far have investigated the role of the cerebellum in LMAs using the split-belt treadmill. A recent brain imaging study employing fMRI during robot-based lower-limb error augmentation has observed substantial activation of cerebellar regions during this adaptation training, corroborating the results observed in the split-belt paradigm [107]. The same study also found activation in subcortical and fronto-parietal regions. The previously cited work by Hinton et al [99] showed, together with cerebellar activity, increased activity in the posterior parietal cortex (PPC), which likely further assists the cerebellum in updating the locomotor plan and collates visual and proprioceptive feedback. They also found increased activity in the anterior cingulate cortex (ACC) and supplementary motor area (SMA). tDCS applied to the PPC has highlighted the supposed participation of this area in adaptation [108]. Moreover, TMS applied to the motor cortex has been shown to reduce the size of the after-effect during LMA to an elastic force resisting dorsiflexion, a finding that the authors interpreted as the TMS disrupting the storage of the adaptive behavior [109].
Further highlighting the role of the motor cortex in LMAs, a study applying tDCS over the primary motor cortex has shown that the stimulation enhances the after-effects of 'broken escalator' adaptation [110]. Taken together, these results highlight the supporting role of the cerebrum in the generation and storage of LMAs. Nevertheless, an intact cerebrum is not necessary for achieving LMA. Studies on the stroke population have shown that stroke-affected individuals can adapt to split-belt or haptic perturbations [74,88]. Stroke patients present increased initial errors and generally slower and less complete adaptation behaviors compared to control groups, but impairments of the cerebrum such as those caused by stroke do not abolish the ability to achieve LMAs. As further confirmation of this, De Kam et al recently demonstrated that stroke interferes with the execution, rather than the recalibration, of locomotor movements [111]. Recent work on the mouse model, which presents adaptation patterns to the split-belt paradigm comparable to those of humans, has shown that adaptation depends on the intermediate cerebellum, but not the cerebral cortex [112]. However, split-belt adaptation has also been shown to induce an increase in beta-band EMG-EMG coherence, considered an indirect marker of corticospinal drive, in the tibialis anterior (TA) muscles [113], more prominently at the initial stages of adaptation [114]. The level of coherence correlates with double-support asymmetry, indicating a contribution from higher brain structures to the temporal control of adaptation [113].
Studies on Parkinson's patients have shown a limited involvement of the basal ganglia in the adaptation process, with patients able to adapt similarly to healthy individuals, but with ability diminishing with increasing symptom severity and with the presence of freezing of gait [115]. It has also been shown that Parkinson's patients can adapt step asymmetry using virtual reality-based visual feedback, thus downplaying the role of the basal ganglia in explicit adaptation [116].

Spinal components of locomotor adaptations
The role of the cerebellum and the cerebrum in motor adaptations has been studied extensively using a variety of techniques, spanning from neuromodulation to imaging to the analysis of impaired populations. On the other hand, most of these techniques either cannot be used or have not been attempted when studying the contribution of spinal circuitry to motor adaptations. While there is a rich literature on the study of spinal control of locomotion and adaptation using invasive techniques in animal models [117], most of our current knowledge on spinal involvement during LMA in humans derives from indirect observations coupled with the known role of the spinal cord in encoding muscular coordination. Consider, as an example, the perspective on the spinal control of LMA obtained by comparing the results of adaptation experiments during forward and backward walking. Choi's seminal paper [45] on split-belt adaptation demonstrated that adaptations for forward and backward walking are completely independent processes, where walking in one direction does not wash out a previously acquired adaptation in the opposite direction. Nevertheless, in a recent study we showed [49] that forward and backward walking, although drastically different behaviors, have similar neuromuscular, stability and energy consumption adaptation strategies, and, on top of this, share the same time-invariant muscle synergies [118]. In the interpretation of the muscle synergies model that associates synergy modules with spinal pattern formation circuits, this result suggests that forward and backward walking share the same pattern formation circuits. Thus, it appears that the anticipatory component of LMAs during drastically different locomotor tasks such as forward and backward walking is controlled by independent supra-spinal structures mapping onto the same lower-level spinal pattern formation circuits.
Similarly to what was observed for upper limb visuomotor rotations [119,120], LMAs also appear to be well described, at the neuromuscular level, by modulations of the activations of a fixed, or barely changing, set of muscle synergies [49,70,80,121,122]. More recently, Hagerdoorn et al [123] have shown, during adaptation to mediolateral perturbations, that the stability of muscle synergies to perturbations critically depends on gait speed, and that synergistic recruitment changes during adaptation at low gait speeds. A recent study showed that the same synergies are recruited by both reactive and anticipatory strategies [70], creating a parallel with the pattern formation circuits accessible by both descending and reflexive drives observed in animal models [124]. All in all, current results appear to suggest that muscle coordination strategies arising in the spinal cord are mostly maintained during LMA, which is then obtained by modulating the recruitment of the coordinated muscles at a higher level.
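Muscle synergies of the kind discussed above are commonly extracted from EMG envelopes via non-negative matrix factorization. The following sketch, on synthetic data (the muscle weightings, dimensions and noise level are illustrative assumptions), shows the decomposition of an EMG matrix into fixed muscle weight vectors and time-varying activations; under the synergy model, adaptation would be expressed in the activations while the weights stay fixed:

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic "EMG" envelope matrix (time samples x muscles), built from
# two fixed synergy weight vectors driven by time-varying activations.
rng = np.random.default_rng(1)
weights_true = np.array([[1.0, 0.8, 0.1, 0.0],
                         [0.0, 0.2, 0.9, 1.0]])       # 2 synergies x 4 muscles
activations_true = np.abs(rng.normal(size=(500, 2)))  # time x synergies
emg = activations_true @ weights_true + 0.01 * rng.random((500, 4))

# Factorize into 2 non-negative components.
model = NMF(n_components=2, init='nndsvda', max_iter=1000)
activations = model.fit_transform(emg)  # time-varying recruitment
weights = model.components_             # muscle weight vectors
```

The number of components is usually chosen so that the reconstruction explains a preset fraction of the EMG variance; here it is fixed at two because the synthetic data are built from two synergies.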

Sensory feedback
Adaptation is a process that corrects for a real or perceived kinematic or kinetic error, or, as we will see in later paragraphs, for the effect that those errors have on higher-level movement characteristics such as stability and energy efficiency. As adaptation is driven by error [98], the sensory signals encoding the error are crucial in the generation of adaptive behaviors [125]. Adaptive behaviors can be generated by altering the proprioceptive, visual or auditory feedback of the task, without physically altering the task goal or the movement environment [38,39,[126][127][128]. Vestibular feedback, on the other hand, appears to have a limited role in adaptation, at least when considering the split-belt treadmill paradigm [43]. The LMA process, as observed in the split-belt paradigm, leads to transient changes in the sensory feedback of the walking task. It has been shown that split-belt treadmill training induces changes in the perceived leg speed [129,130] and that the cerebellum is involved in this recalibration, as cerebellar patients present a decreased amount of recalibration [131]. This recalibration appears to be a process concurrent with, but independent from, the adaptation itself [132]. A clear example of the role of sensory feedback in adaptation is the PKAR paradigm.
In PKAR experiments, in fact, adaptation is thought to be due solely to a somatosensory-input-dependent recalibration of the perceived relative rotation of the trunk with respect to the feet [65,67,101].

Laterality, transfer, and generalization
Most LMA paradigms introduce a perturbation or a disturbance that disrupts the temporal or spatial coordination of the legs by acting on only one of the two legs. However, gait is a stereotypical symmetric movement characterized by shared descending and reflexive circuitry between the two legs. As such, most LMA paradigms cannot be considered fully unilateral, and perturbations applied to one leg often trigger anticipatory or reactive responses also contralaterally [133]. Consequently, the bilateral characteristics of adaptation to unilateral disturbances have been extensively studied in the literature. Reisman first showed that split-belt adaptation is mainly driven by changes in interlimb, bilateral parameters that, differently from intralimb parameters, present an after-effect after the perturbation is removed [47]. Savin et al produced purely unilateral perturbations during treadmill walking by attaching a weight to one of the two legs through a pulley connected at the ankle [77]. The adaptations that they observed were mostly bilateral and characterized by changes in the activations of the muscles of both legs. Recently, we showed that the uni- or bilaterality of adaptation to two different unilateral force-field perturbations applied at the hips and knees using an exoskeleton is context-dependent, with one perturbation eliciting an adaptation characterized by clear neuromuscular changes in both legs and the other inducing adaptive behaviors that were mostly unilateral [70]. On the same topic, in a recent LLMA experiment we observed that non-compensable interlimb asymmetries during cycling induce only unilateral changes in muscular activations [80]. Significant attention has been dedicated to understanding whether unilateral adaptation transfers to the contralateral limb. So far, findings on the interlimb transfer of LMA are inconclusive.
Some studies show no evidence of such phenomena [37,45], while others [79,134,135] appear to show the presence of limited transfer of adaptive behaviors between legs. Prokop et al [37] first showed, using the split-belt treadmill paradigm, that adaptation does not transfer between the legs in mirrored experiments, suggesting that the limbs are controlled by separate circuits and that adaptation is driven separately by the proprioceptive feedback of each limb. Choi and Bastian further explored the independence of limb-specific circuits. In their experiments, they demonstrated that the two legs adapt independently and present independent after-effects [45]. Krishnan et al [135], on the other hand, showed that interlimb transfer is present in the lower limbs during visuomotor tracking walking, and demonstrated that, unlike in the upper limbs, the transfer is symmetrical between the two legs. Interlimb transfer has also been shown in obstacle avoidance-based paradigms [134]. Houldin et al [79], on the other hand, showed only a limited presence of interlimb transfer in a unipedal walking task specifically designed to avoid confounding factors related to interlimb coupling that may be present in split-belt treadmill experiments. All these results appear to suggest that, when excluding the contribution from bilateral reflexes, interlimb transfer of lower limb adaptation is only present when there is a cognitive component involved in the adaptation process.
Similarly, substantial attention has also been dedicated to investigating whether LMAs transfer between different tasks and contexts. Choi and Bastian, in their seminal work, demonstrated that split-belt treadmill adaptation does not transfer between forward and backward walking [45]. In their experiments, they did not observe after-effects in the opposite direction after forward and backward split-belt adaptations. The opposite is true for PKARs, which have been shown to transfer to backward walking after forward-walking training [67]. This latter result may depend on the fact that the recalibration of trunk and feet position that is believed to cause adaptation in PKAR is supposedly independent from the walking direction and from the circuits controlling the movements of the legs. Further literature has shown limited generalization between locomotor contexts and tasks. LMAs transfer only partially between different speeds during treadmill walking [136] or between treadmill and overground walking [137][138][139]. On this latter point, visual cues have been shown to play an important role. In fact, during treadmill walking individuals can see that they are not moving, and when this visual cue is removed, the generalization of adaptation to overground walking is stronger and the spatial and temporal symmetry changes wash out almost completely [138]. Similarly, altering attention, by either distracting individuals or making them more aware of the perturbation, appears to improve the generalization of split-belt treadmill adaptation to overground walking [62]. Introducing the perturbation gradually rather than abruptly facilitates the generalization of adaptation, both in healthy and stroke individuals [139,140], while walking speed does not affect the extent of generalization [141]. Limited transfer of adaptation has also been observed between different locomotor tasks, such as walking and running [142].
No study, to the authors' knowledge, has so far tested whether adaptation obtained using one experimental paradigm generalizes to another one (e.g. haptic to a split-belt treadmill or vice versa).

Savings
As with upper limb motor adaptation, LMAs also present savings. Savings is the term used to indicate the phenomenon whereby adaptation bouts after an initial adaptation are characterized by a smaller initial error and a faster adaptation speed. Savings in LMAs have been observed in the split-belt treadmill paradigm [143][144][145] and the force-field adaptation paradigm [146]. Savings depend on the size of the error and the structure of the initial training [144]. Moreover, savings appear to depend on the number of times an individual experiences the transition between the unperturbed and perturbed scenarios, rather than on the net time spent adapting [143]. Savings due to repeated adaptation over several days have been shown to manifest, although differently, in both the adaptation behavior per se and the perception of the belt speed [132]. Interestingly, savings can also be expressed over a time scale of a few months [145]. Finally, visual feedback on performance can improve the adaptation rate during split-belt treadmill walking, but these gains are not saved over multiple exposures to the perturbation [147].
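Savings of this kind is often interpreted through the two-state (dual-rate) model originally proposed for upper-limb adaptation, in which a slow, well-retained process survives washout and gives the second exposure a head start. A minimal sketch, with illustrative retention and learning rates (assumptions, not fitted to any cited dataset):

```python
import numpy as np

def two_state(perturbation, a_f=0.6, b_f=0.2, a_s=0.99, b_s=0.02):
    """Two-state (fast/slow) trial-by-trial adaptation model.

    The net adaptive state x = x_fast + x_slow tracks the perturbation;
    the observed error is p - x. The fast state learns quickly but is
    poorly retained (low a_f); the slow state learns slowly but is well
    retained (high a_s).
    """
    x_f = x_s = 0.0
    errors = []
    for p in perturbation:
        error = p - (x_f + x_s)
        errors.append(error)
        x_f = a_f * x_f + b_f * error
        x_s = a_s * x_s + b_s * error
    return np.array(errors)

# Adapt (p=1), wash out (p=0), re-adapt (p=1): the slow state is not
# fully washed out, so the second exposure starts with a smaller error.
schedule = np.concatenate([np.ones(100), np.zeros(30), np.ones(100)])
errors = two_state(schedule)
first_error = errors[0]     # initial error, first exposure
second_error = errors[130]  # initial error, second exposure (savings)
```

The model also reproduces the after-effect: at the first washout trial the error flips sign, because the persisting adaptive state now opposes an unperturbed environment.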

Implicit and explicit components of adaptations
In recent years, substantial evidence has been presented for the presence of implicit and explicit processes during upper limb motor adaptation, with implicit processes characterized as slower and involuntary and explicit processes as faster and related to active decision making [148]. Implicit adaptation is thought to be automatic and unconscious, and is often considered a recalibration of the motor plan that happens regardless of explicit decision making. Explicit adaptation, instead, is the conscious modification of the motor plan based on some form of decision making in response to the perturbation. Recently, studies have highlighted the presence of implicit and explicit adaptation during locomotion as well. Long et al showed that split-belt adaptation is mostly characterized by an implicit component that is present even when discouraged by an opposing, visual feedback-driven, explicit process [149]. On the other hand, a study on the broken escalator paradigm has shown that, in that paradigm, the explicit component plays a central role during the adaptation period [150]. Specific experimental designs using distorted visual feedback can be used to enhance explicit strategy development [38]. The presence of an explicit component of LMA positively correlates with retention in healthy individuals [61] and stroke survivors [151]. McAllister et al very recently showed, using a dual-task protocol, that the energy optimization process observed during LMAs is an implicit component of adaptation [152].

Adaptation for error minimization
Upper limb studies have shown that motor adaptation is a process driven by the minimization of a sensory prediction error [98,153,154]. A similar assumption of error-driven adaptation has also been made several times for LMAs [47,130,132,155]. However, the nature of the error to be minimized remains elusive for LMAs. Most studies on the split-belt treadmill and haptic paradigms have shown that adaptation tends to converge towards the minimization of step-length asymmetry between the two limbs [42,47,156], indicating that inter-limb asymmetry may be the error signal that drives adaptation. Symmetric gait is, in fact, habitual and inherently more stable and efficient than asymmetric gait.
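Step-length asymmetry, the measure most commonly tracked in these studies, is typically computed as the difference between the two step lengths normalized by their sum, so that 0 indicates symmetric gait. A minimal sketch, with hypothetical step-length values:

```python
def step_length_asymmetry(sl_fast, sl_slow):
    """Normalized step-length asymmetry, as commonly used in the
    split-belt literature: 0 = symmetric; sign conventions (which
    leg is the reference) vary between studies."""
    return (sl_fast - sl_slow) / (sl_fast + sl_slow)

# Early adaptation: markedly asymmetric steps (values in meters)
early = step_length_asymmetry(0.70, 0.50)
# Late adaptation: near-symmetric steps
late = step_length_asymmetry(0.61, 0.59)
```

The adaptation curves reported in split-belt experiments are essentially the stride-by-stride evolution of this quantity, from a large perturbation-induced value towards zero.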
As restoration of gait symmetry is one of the aims of gait therapy after stroke, there has been substantial interest in recent years in understanding whether symmetry is the parameter being optimized during LMAs, especially in the split-belt treadmill paradigm. Reisman and colleagues, in their previously cited seminal study [47], performed an in-depth analysis of the inter- and intra-limb changes in spatial and temporal biomechanical parameters during split-belt adaptation. They found that adaptation is observed in inter-limb spatial parameters and inferred that split-belt adaptation is error-driven, with the error related to the spatial symmetry between the kinematic parameters of the two legs. Follow-up studies have shown that LMA in split-belt walking is obtained mostly by modulating the timing and position of foot landing [155]. Adaptation for symmetry may then indicate that LMA, similarly to muscular coordination [157], converges towards habitual behaviors. Sanchez et al, however, challenged this assumption by questioning why the CNS would obtain habitual spatial symmetry through a non-habitual, temporally asymmetric walking pattern [158]. They also showed that the symmetric behavior towards which subjects converge during adaptation is not necessarily energetically optimal.
An experiment on stroke survivors has shown that individuals already walking with an asymmetric gait adapt and de-adapt less, and more slowly, to split-belt configurations that lead to exaggerated asymmetries (both during adaptation and de-adaptation) [159], suggesting, once again, a preference for symmetric gait. The adaptation to small temporal asymmetries during stationary cycling that we observed [80] can also be interpreted as a bias towards symmetry. Taken together, these results suggest that symmetry plays some meaningful role in the adaptation process or that, at least, symmetry is a feature of locomotion that tends to be promoted by the central nervous system, either because it is itself a goal of adaptation or because it reflects one or more higher-level task goals. However, further studies have shown that not all LMA processes result in a symmetrical step length or, in general, a symmetrical gait pattern.
There are, in fact, examples of LMA experiments where symmetry is achievable but is not the behavior towards which adaptation converges. An interesting work on split-belt walking on inclined surfaces has shown that step-length symmetry is not always achieved during LMA [160]. When individuals adapt to split-belt walking on an incline/decline, they tend to overshoot/undershoot step-length symmetry in the adapted state, due to the different propulsion demands of the two tasks. Using a haptic paradigm, Cajigas et al showed that longitudinal symmetry is preferentially preserved when the perturbation alters the step length at landing, while subjects maintain a spatially asymmetric inter-limb gait pattern when the perturbation alters step height during late swing [42]. The same study showed that when the perturbation has a mixed effect on step length and step height, step-length symmetry is adapted for (although not completely), while step-height deviations are still ignored and not adapted. Other haptic paradigms have shown that step-height deviations can be adapted for [40,72], but following what appears to be a greedy optimization process trading off the kinematic error against the effort required to counteract the perturbation [72]. Taken together, these results suggest that full kinematic symmetry is not the final goal of LMA and that even step-length symmetry is not always fully enforced. All in all, most studies appear to suggest that LMA encompasses an error-minimization process loosely enforcing kinematic symmetry between the two legs. However, the discrepancy in the results observed so far may also suggest that symmetry is a by-product, not always achieved, of the minimization of some other error or control parameter.

Adaptation for energy efficiency
Movement economy is not only an important driver of evolution in all species, humans included [24,35], but is also one of the goals of short-term adjustments of movement characteristics [41] and of long-term motor learning [161][162][163]. Several experiments have shown that humans, over time, refine their execution of an unfamiliar task towards more energetically optimal strategies [163]. During human locomotion, individuals' self-selected style of walking, intended as the combination of gait speed, cadence, step width and step length, is tuned in a subject-specific manner to minimize energy expenditure [164][165][166].
Selinger et al [41] demonstrated that humans continuously update their cadence to minimize metabolic expenditure, even in the presence of perturbations altering their self-selected step frequency. This result indicates that energy optimization is one of the online control objectives of gait. The literature on predictive neuromuscular modeling of locomotion confirms the importance of energy minimization. Predictive simulations are obtained by solving optimal control problems applied to physiologically sound neuromechanical models, minimizing or maximizing the elements of a cost function associated with the goal of the task. The literature in this area has consistently shown that one of the (often few) necessary elements of the cost function for predicting normative gait behavior is an energy-minimization component that forces the simulation to converge towards a solution minimizing the activation of the actuators and the overall energy exerted during the task [167][168][169][170][171].
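The structure of such cost functions can be sketched schematically. The toy example below is not a specific published formulation: the weights, signals, and the squared-activation effort proxy are illustrative assumptions, but they reflect the typical composition of an effort/energy term plus a task-goal term that predictive gait simulations minimize.

```python
import numpy as np

def simulation_cost(activations, speed, target_speed,
                    w_effort=1.0, w_task=10.0):
    """Toy cost of the kind minimized in predictive gait simulations:
    a squared muscle-activation term (a common proxy for metabolic
    effort) plus a task term penalizing deviation from the desired
    walking speed. Weights are illustrative."""
    effort = np.mean(activations ** 2)   # effort/energy component
    task = (speed - target_speed) ** 2   # task-goal tracking component
    return w_effort * effort + w_task * task

rng = np.random.default_rng(0)
a_wasteful = rng.uniform(0.4, 0.9, size=(100, 8))  # 8 muscles, 100 samples
a_economical = 0.5 * a_wasteful                    # same gait, less activation
```

For the same achieved speed, the lower-activation solution yields a lower cost, which is why the optimizer converges towards economical actuation patterns.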
As energy optimization affects human movement planning and execution at vastly different time scales (from short-term adjustments to long-term learning to evolutionary changes), it has often been identified as one of the potential drivers of the adaptive behaviors we here categorize under the umbrella term of LMAs. As such, energy optimization has been studied extensively during LMA experiments by analyzing, sometimes in a multimodal fashion [91], the estimated metabolic cost of locomotion measured with metabolimeters [41,50], the estimated mechanical cost of locomotion [42,158,160,172], perceived exertion [91], as well as changes in muscular activations [50,72]. Most works in the literature have shown that LMAs minimize energy expenditure over the course of the exposure to a perturbing event [50,72,91,160,172].
In a robotic force-field perturbation paradigm, Emken et al [72] demonstrated that adaptation can be modelled as a process of concurrent minimization of effort and kinematic error. The most compelling evidence of an energy-minimization process during LMA was provided by Finley et al using the split-belt treadmill paradigm [50]. They conducted a typical split-belt treadmill experiment while measuring lower limb EMGs and the rates of oxygen consumption and carbon dioxide production. They showed that muscle activation and energy consumption increase at the beginning of the exposure to the split-belt perturbation, with a progressive shift towards a more economical gait strategy over the course of adaptation. The energy-minimization process presented a time constant similar to that of the kinematic adaptation. The same group complemented these results [172] by showing that subjects exposed to split-belt treadmills learn to use the work done by the fast belt to overcome the perturbation while minimizing muscular activations [50,158]. In the long term, this behavior appears to converge towards an asymmetric gait that maximizes the exploitation of the work done by the split belt [132]. Confirming this point, a recent study found that energy minimization during split-belt adaptation correlates with step-timing symmetry rather than step-length symmetry and that adaptation for energy minimization may lead to an asymmetric gait [173]. A process of harnessing the work done by an external device or the environment has also been observed in the interaction between humans and assistive devices [174][175][176][177], further confirming that we can willingly or unwillingly update our motor plan to reduce energy expenditure by exploiting the properties of the environment around us.
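The error–effort trade-off reported by Emken et al can be captured by a single-state sketch: on each stride, the compensatory output is nudged downhill on a cost weighing squared kinematic error against squared effort. With an effort penalty λ > 0, such a model converges to partial rather than full cancellation of the perturbation. The parameters below are illustrative, not fitted to any dataset.

```python
def adapt_error_effort(force, n_strides=300, lam=0.25, eta=0.2):
    """Greedy stride-by-stride minimization of J = e^2 + lam * u^2,
    where e = force - u is the residual kinematic error and u the
    compensatory output. Each stride takes a gradient step on J."""
    u = 0.0
    for _ in range(n_strides):
        e = force - u
        # dJ/du = -2*e + 2*lam*u; step downhill (factor 2 folded into eta)
        u += eta * (e - lam * u)
    return u

u_final = adapt_error_effort(force=1.0)
# Converges to force / (1 + lam): compensation is deliberately
# incomplete, leaving a small residual error to save effort.
```

The steady state, force/(1 + λ), makes the behavioral prediction explicit: a persistent residual error is not a failure to adapt but the optimum of the combined cost.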
The principle of energy minimization, however, does not necessarily hold in all LMA paradigms. In robot-based experiments with longitudinal perturbations acting in the direction of gait, participants increase their effort to counteract the force rather than 'ride it' to minimize their effort during the swing phase of gait [42,70]. In these studies the forces employed were not strong enough to challenge the metabolic economy of walking, and not adapting at all would have resulted in a more energy-efficient gait, even considering the changes in energy consumption associated with a step length longer than the preferred one [178]. These results show that not all adaptation behaviors encompass an energy-minimization process and raise the question of whether the energy minimization observed during LMA experiments is a genuine driver of adaptation or a by-product of other aspects of the process. Sanchez et al [158,179] acknowledged that energy optimization usually happens at time scales longer than those observed in typical LMA experiments [180,181]; thus the energy optimization commonly observed during standard LMA experiments may be just a sub-component of a slower process. Another factor to consider is that most LMA paradigms, including the split-belt one, present an initial response to the perturbation consisting of a sudden increase in the activation [50] of several of the muscles involved in the task. In the split-belt paradigm, this increase in muscular activations and mechanical work is primarily due to increased positive work by the leg on the fast belt during the swing phase of gait [172]. Increased work at the acute exposure to a perturbation is consistent with a response to a perceived threat to stability or to muscular/joint integrity. 
This increase in muscular activations is adjusted over the course of the experiment, but whether this adjustment is the result of a specific energy-minimizing adaptive strategy, of an online optimization like the one observed for cadence [41], or of an optimization based on other biomechanical considerations (e.g. minimizing asymmetry [158]) remains to be uncovered. All in all, the results collected so far on energy optimization during locomotion and LMAs suggest that energy optimization is one of the fundamental principles behind the definition of the locomotor plan and the generation of motor adaptations, but it is not necessarily present in all adaptive behaviors.

Adaptation for stability
The necessity of maintaining a biomechanically stable gait is a plausible and fairly straightforward driver of LMA in altered walking environments. Bipedal walking poses a substantial threat to body integrity in the event of a fall, both because a fall is more likely in what is an inherently unstable locomotor behavior and because in bipedal locomotion the head falls from a greater height than in quadrupedal gait. From this perspective, the most obvious reason for LMA appears to be the necessity of maintaining a stable gait pattern.
Dynamic stability can be estimated in several ways and domains (for a comprehensive review see [182]). At the biomechanical level, dynamic stability depends on the position and velocity of the center of mass (COM) with respect to the position, velocity and shape of the base of support (BOS) [183,184]. Static balance is maintained as long as the projection of the COM falls inside the BOS, but during walking both COM and BOS move and the BOS changes in size dynamically. In this representation of dynamic balance, bipedal gait is inherently unstable, since the COM lies inside the BOS only during the double-support phase [185], and stability depends on gait speed, step length and step width [186,187]. For this reason, gait has been described as a 'continuous state of falling and recovery' [1], with the swing leg moving precisely, at each step, to catch the position of the COM [185,188]. Humans respond to balance threats by first trying to anticipate the effect of the perturbation when it is known or predictable from sensory information (e.g. a step, or an obstacle that needs to be avoided) [1]. Then, once the perturbation is experienced (or if it cannot be anticipated), fast monosynaptic reflexes [1,189], with response latencies of 30-40 ms, are recruited to counteract the balance threat. This first line of defense is not gait-phase specific and increases the stiffness of the joints. Longer-latency reflexes, with response times of 70-80 ms, can also be recruited. These reflexes are functionally relevant and present responses that depend on the phase of gait and the nature of the perturbation [190,191]. Balance maintenance is also characterized by proactive strategies that predict the effect of a perturbation based on sensory input and update the motor plan accordingly. Sudden and continuous balance threats, both sensory (e.g. visual) and mechanical, have been shown to induce anticipatory changes in parameters that relate to the risk of falling [57]. 
Walking on compliant or uneven surfaces triggers reactive and anticipatory neuromuscular compensations resulting in more cautious gait behaviors [54,192]. Anticipatory strategies for balance recovery mostly aim at controlling the landing position of the feet, thus affecting step width and step length [69,78,193,194]. Stability maintenance is often overlooked as an adaptation driver because most LMA paradigms characterized by continuous perturbations (e.g. split-belt, haptic or visual feedback perturbations) do not employ disturbances that alter balance in the short term severely enough to make the subject likely to fall. Moreover, people can be tricked into walking asymmetrically or with sub-optimal kinematics using visual feedback, without triggering the onset of compensatory strategies for fall avoidance [58,60]. Nevertheless, although humans are remarkably adept at quickly responding to sudden balance threats, it is not unrealistic to suppose that, similarly to how we constantly change our gait parameters for energy optimization [41], we may have analogous strategies for optimizing long-term dynamic balance in the presence of continuous perturbations that may not alter stability immediately but may result in a gait that is less dynamically stable than the unperturbed one.
As a result, several works in the literature have investigated changes in stability during LMAs, with some of them suggesting balance maintenance as one of the main drivers of adaptive behaviors. Prokop et al were among the first to propose that stability drives LMA in the split-belt treadmill paradigm [37]. They argued that the 50% phase shift between the two legs is kept invariant at mid-stance during split-belt adaptation as a strategy for keeping the speed of the upper body over the BOS constant during double support. As noted earlier, adaptation to split-belt walking is obtained by regulating the timing and position of the landing foot [155], consistent with a regulation of the position of the COM. Ogawa et al confirmed this by analyzing ground reaction forces, showing that predictive adaptive behaviors regulate stiffness at the ankle joint in preparation for the landing phase [195]. This adaptive behavior has been seen to result in clear improvements in stability (estimated using the margin-of-stability parameter [184]) over the course of the experiment, after an initial decrease caused by the introduction of the split-belt scenario [196]. Similar results have been obtained using a treadmill to induce small directional perturbations at mid-stance [197]. An indirect demonstration of adaptation for stability during split-belt walking comes from the observation that allowing participants to hold on to handrails, thus reducing the balance threat posed by the perturbed environment, reduces the size of the errors and, subsequently, the extent of the adaptation [198]. A recent work compared the temporal evolution of the changes in mediolateral stability with the changes in energy use estimated from oxygen uptake during split-belt walking [199]. 
The authors observed an adaptation of the mediolateral margin of stability that was paired with changes in the mediolateral foot roll-off on the fast side, interestingly not accompanied by changes in mediolateral foot placement. They found that mediolateral changes in stability appear correlated with the most common adaptation metrics, while energetic changes do not. However, the authors argued that since the changes in mediolateral stability are asymmetrical between the two feet, they are more likely a by-product of the spatiotemporal sagittal gait adaptation than an adaptive process per se.
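The margin of stability used in these analyses [184] is commonly computed from the 'extrapolated center of mass', which augments the COM position with a velocity term scaled by the inverted-pendulum eigenfrequency. A minimal sketch, assuming this standard formulation and using illustrative numbers:

```python
import math

def margin_of_stability(com_pos, com_vel, bos_edge, leg_length, g=9.81):
    """Margin of stability in one direction (e.g. mediolateral):
    distance from the extrapolated COM (XcoM = position + velocity/omega0,
    with omega0 the inverted-pendulum eigenfrequency sqrt(g/l)) to the
    boundary of the base of support. Positive = stable in that direction."""
    omega0 = math.sqrt(g / leg_length)
    xcom = com_pos + com_vel / omega0
    return bos_edge - xcom

# COM 0.05 m inside the lateral BOS edge, moving outward at 0.10 m/s
mos = margin_of_stability(com_pos=0.0, com_vel=0.10,
                          bos_edge=0.05, leg_length=0.9)
```

The velocity term is what makes this a dynamic measure: a COM that is inside the BOS but moving outward quickly can still yield a small or negative margin, capturing the 'falling and recovery' character of gait described above.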
Studies employing haptic perturbation paradigms have also pointed to stability as a primary driver of adaptation [42,71]. An interesting viewpoint on adaptation for stability can be obtained by comparing the results of different haptic-based LMA paradigms. Cajigas and colleagues, in the work cited previously, employed the Lokomat [42] to administer six different perturbations during the swing phase of gait, each differing in its effect on step length and step height. The results highlighted the presence of adaptation only for changes in step length, while changes in step height, whether increasing or decreasing foot clearance, were completely ignored by the participants. However, several earlier studies had shown that vertical perturbations that can drive a change in step height during swing do generate adaptations [40,72,200,201]. The authors interpreted this difference in results based on the hardware used in their study and in the others. Differently from the devices used in the studies showing adaptation to vertical perturbations, the version of the Lokomat used in [42] limits movement of most degrees of freedom at the pelvis, curbing the effect that a perturbation applied to the lower limbs can have on the acceleration of the COM. The authors argue that, with the pelvis blocked, a vertical perturbation, while lifting the foot, does not cause substantial changes in the position, velocity, and acceleration of the COM and does not alter the final landing position of the foot with respect to the BOS. The same perturbation, with the pelvis unlocked as in the works by Emken et al [40,72], may instead affect the position, velocity or acceleration of the COM (especially in the mediolateral direction) with respect to the size of the BOS, thus triggering an adaptive response. 
It needs to be pointed out that the comparison between these studies, although fascinating in that it represents an instance where the same perturbation does or does not engage a suspected driver of adaptation depending on movement constraints, is conceptual and not corroborated by specifically designed analyses or experiments.

Perspectives on the study and use of locomotor adaptations
LMAs, like their upper limb counterparts, tell us much about the high- and low-level organization of the sensorimotor system, intended as the strategies employed by the CNS to generate, perform, and adjust movements and the neurophysiological substrates recruited for this task. As such, continuous research effort is dedicated to the investigation of LMAs in motor control and motor learning. As we have seen in this review, several results are consistent between the upper and lower limb literature. It would be incorrect, however, to draw a direct parallel between upper limb motor adaptations and LMAs, given the differences in the task goals at stake. As we saw in previous paragraphs, stability is an implicit task goal of locomotion that drives several aspects of LMAs not shared with upper limb motor adaptation. The fact that locomotion needs to be inherently stable also means that the instabilities introduced by perturbations cannot be too critical, thus somewhat limiting the nature of the experiments that can be performed in the LMA field. Moreover, the rhythmic nature of locomotion further constrains the task. As a result, the neuroscientific literature on LMAs is less developed than its upper limb counterpart on most topics, and some concepts that have been studied in depth in upper limb adaptation, such as its implicit and explicit components, have been only marginally investigated in the lower limb literature. Nevertheless, the rich literature on upper limb motor and visuomotor adaptation can and should be used as inspiration to develop novel experiments aimed at understanding the critical characteristics of LMA. In the final analysis, the neuroscientific literature on LMAs has substantial room to grow, through both studies aiming at validating upper limb findings in the lower limbs and studies investigating the peculiar characteristics of locomotor control and adaptation.
On the other hand, the literature on the functional use of adaptations in the clinical field is more developed for the lower than for the upper limbs. In the upper limbs, adaptations for training have been mostly studied in the error-augmentation paradigm [202][203][204]. In this paradigm, kinematic and kinetic errors are haptically or visually augmented so that the resulting compensation generates a motor plan able to counteract the initial error once the augmentation is removed. This paradigm has shown interesting potential for use in stroke rehabilitation, although only in a limited number of studies [205][206][207][208]. As seen throughout this review, impaired individuals are able to generate LMAs, unless their cerebellum has been compromised. This observation has generated substantial interest in the use of LMAs for training, especially, but not exclusively, in stroke [209]. Split-belt treadmill training is of particular interest in stroke due to the step-length asymmetry that results from the condition. It has been consistently shown that if the speeds of the split belts are set so as to exaggerate the existing step-length asymmetry, the adaptation process leads to an after-effect that results in a more symmetrical gait pattern [88,210,211,212] that transfers to overground walking [137]. This process, if repeated over several exposures, can lead to clinical improvements [213][214][215]. A recent meta-analysis of the literature on split-belt-based interventions has revealed that this therapy, when implemented as a long-term training paradigm, has the potential to improve step-length symmetry, while also pointing out the need for randomized controlled trials to further confirm this result [216].
While adaptive therapy targeting high-level parameters such as temporal and spatial step-length symmetry has been shown to work, it is not clear whether a more granular approach, e.g. targeting specific joint kinematics or kinetics, is clinically feasible, possibly employing robotic or wearable systems to deploy error augmentation or targeted resistance training [217]. Adding weights or viscous resistance to the different joints in their longitudinal degrees of freedom has been shown to produce changes in muscular activations [75,218] and adaptive behaviors [76] that can be used for training neurologically impaired individuals [74,219,220]. Nevertheless, as we have seen, not all forces applied to the legs induce an adaptation. A recent work showed that using an exoskeleton to deploy positive or negative stiffness at the hip triggered kinematic changes but not an adaptive process [221]. A few studies have investigated the implementation of error-augmentation paradigms using robots and haptic systems [107,222,223,224], but no clinical studies have been conducted so far. A systematic study on whether this paradigm can be used to target different gait parameters using robots or haptic technology is long overdue [225]. A critique often leveled at the use of adaptive paradigms for training is that adaptation does not imply actual learning, since adapted behaviors are always 'discarded' once the perturbation triggering the adaptation is removed. However, this is true in healthy individuals, whose motor plan is optimized for the unperturbed environment; impaired individuals, who often present over-conservative, sub-optimal gait patterns (e.g. in terms of stability and energy consumption), may instead learn from experiencing a more optimal (in terms of stability and efficiency) adapted behavior, as corroborated by the results obtained in stroke therapy using the split-belt treadmill.

Conclusions
LMA is a complex phenomenon that is not one-to-one comparable to upper limb motor adaptation, although the literature demonstrates that several characteristics of adaptation are common between upper and lower extremity adaptation experiments. Evidence suggests, from both an evolutionary and a motor control perspective, that LMA is achieved to maximize stability while minimizing energy expenditure, both resulting, for the most part, in a minimization of the kinematic error. Our interpretation of the collated results hints at a primacy of stability over energy efficiency when considering short-term adaptations. LMA research provides a glimpse into the strategies that the CNS employs in dynamically controlling locomotion and into the functional organization of the spinal and supra-spinal networks that control this task. As the literature progresses, improving our understanding of how and under which conditions LMAs are generated and retained, new research avenues on both the motor control and clinical aspects of LMAs open up for the research community. New neuroscientific findings on LMAs, together with the growing literature on the use of split-belt treadmill training in neurorehabilitation, provide a solid stepping stone towards the use of LMA protocols in rehabilitation that needs to be further explored.