EMG-driven shared human-robot compliant control for in-hand object manipulation in hand prostheses

Objective. The limited functionality of hand prostheses remains one of the main reasons behind the lack of their wide adoption by amputees. Indeed, while commercial prostheses can perform a reasonable number of grasps, they are often inadequate for manipulating the object once in hand. This lack of dexterity drastically restricts the utility of prosthetic hands. We aim to investigate a novel shared control strategy that combines autonomous control of forces exerted by a robotic hand with electromyographic (EMG) decoding to perform robust in-hand object manipulation. Approach. We conduct a three-day longitudinal study with eight healthy subjects controlling a 16-degree-of-freedom robotic hand to insert objects in boxes of various orientations. EMG decoding from forearm muscles enables subjects to move, proportionally and simultaneously, the fingers of the robotic hand. The desired object rotation is inferred using two EMG electrodes placed on the shoulder that record the activity of muscles responsible for elevation and depression. During the object interaction phase, the autonomous controller stabilizes and rotates the object to achieve the desired pose. In this study, we compare an incremental and a proportional shoulder-decoding method in combination with two state machine interfaces offering different levels of assistance. Main results. Results indicate that robotic assistance reduces the number of failures by 41% and, when combined with an incremental shoulder EMG decoding, leads to faster task completion time (median = 16.9 s), compared to other control conditions. Training to use the assistive device is fast: after one session of practice, all subjects managed to achieve the tasks with 50% fewer failures. Significance. Shared control approaches that give some authority to an autonomous controller on-board the prosthesis are an alternative to control schemes relying on EMG decoding alone.
This may improve the dexterity and versatility of robotic prosthetic hands for people with trans-radial amputation. By delegating control of forces to the prosthesis’ on-board control, one speeds up reaction time and improves the precision of force control. Such a shared control mechanism may enable amputees to perform fine insertion tasks solely using their prosthetic hands. This may restore some of the functionality of the disabled arm.


Introduction
We rely on our hands and their fine dexterity to perform everyday tasks, from basic manipulations, such as picking up a glass of water, to fine-manipulation skills, such as writing, knitting, and playing an instrument. The human hand owes its dexterity to its individuated control of finger motion and precise sense of touch, and to dedicated brain processes for ensuring fluent control of objects once in hand [1,2].
Being deprived of this dexterity following an amputation affects one's life considerably. Even the simplest chores, such as placing the cap on a bottle or inserting a spoon in a cup, become arduous if not impossible. The lack of tactile perception and the absence of fine finger motions are the predominant factors that render reorienting or repositioning an object impossible. Restoring some of this dexterity, even a tenuous part, through the help of automation and robotics would have immediate benefits for people with an amputation.
While robotics has made vast progress in the control of human-like robotic hands [3,4], these advancements have rarely been used to control prostheses. The main reasons are that (a) robotic prosthetic hands (RPHs) remain under-actuated with simple position or speed control, and (b) we still lack robust decoding strategies to infer and translate the user's intentions into fine and individuated finger motions of an RPH. The latter is a bottleneck that reduces the incentive to develop more complex robotic hands for people with trans-radial amputation.
With a look towards the availability of better-performing hand prostheses, and while recognizing that individuated finger electromyographic (EMG) decoding will remain limited in its accuracy, we design shared control mechanisms where a robotic controller is in charge of low-level closed-loop control. This controller enables online adaptation of finger positioning and stabilization of the object in hand through forces applied by the fingers. To this end, we build on recent advances in robotic hand control to enable human-like control of the fingers in either an individual or coordinated manner [5]. Such enhancements permit the execution of robust grasps with multi-finger robotic hands.
Commercially available RPHs rely on identifying motion intention using EMG. The control of prostheses is usually achieved by placing two electrodes on two remaining antagonist muscles of the forearm. A threshold is set on the EMG amplitude acquired at a fixed frequency to control one degree of freedom (DoF) for closing or opening the fingers by a small increment [6]. This is insufficient to capture individuated finger motions and does not provide continuous control of the fingers.
Finer EMG-based control can be obtained using single-finger angle regression [7]. With this method, the RPH follows the intended motion of the user in an intuitive manner [8]. Several groups have shown successful use of multi-layer perceptrons (MLPs) for the regression of single-finger angles. In one of the first examples, in 2008, Smith et al [9] showed the possibility of regressing the metacarpophalangeal (MCP) joint angle of each finger individually and simultaneously with a neural network. Ngeo et al [10] showed the possibility of regressing the MCP angle as well as the proximal interphalangeal (PIP) and distal interphalangeal (DIP) joint angles of each finger using an MLP. Finally, Dantas et al [11] obtained similar performance between a convolutional neural network and an MLP with intramuscular EMG recorded from two amputee patients, with both outperforming polynomial Kalman filters and long short-term memory networks for single-finger angle regression.
However, continuous control of finger motion from EMG is susceptible to noise. The slightest error in EMG-based intent detection could lead a finger to inadvertently re-open, letting the object slip. Hence, ensuring a stable and robust grasp when manipulating objects is crucial to restoring the essential dexterity needed for everyday use. This requires fast and accurate control of finger-object interactions.
Decades of research in robotics have been devoted to devising algorithms for closed-loop control to enable on-the-fly stabilization of a grasp to secure an object in hand when subjected to external disturbances [12]. Typically, re-balancing the forces to stabilize the object is performed through impedance control [13], whereas repositioning of fingers on the object requires planning algorithms [14]. In this paper, we consider the task of inserting an object into another object. Such a 'peg-in-hole' task can be considered a robotic benchmark. While it has received attention already in the 70s [15], it remains a topic of research [16]. As simple as it may appear, inserting an object into another one still relies on complex algorithms to determine when the object is jammed and when to correctly adapt the object's orientation so that neither object nor the robot is damaged. Such rapid and compliant control of finger-object interaction requires estimating the force at contact and adapting fingers' motion accordingly. Contact detection is usually done through reading of tactile sensors, sometimes in combination with vision [17].
When controlling a prosthesis tasked to insert an object into another one, we must assume that the autonomous controller of the prosthesis has access solely to tactile information. Corrections driven by visual feedback are conveyed implicitly through EMG-based intention detection, driven by the amputee's visual appraisal of the situation. In this paper, we design a shared control framework that uses an autonomous compliant control of fingers to adapt fingers' orientation and exerted forces only based on tactile information.
When integrating users' motor intentions for individuated finger control, a possible solution is to distribute the control by automating some parts of the motor commands and relieving the user from precise modulation [18]. It can ease grasping through preshaping [19], grip force adjustment [18], slip detection [20], and even hand closing using underactuated systems in which spring-like mechanisms mediate grasp force [21]. The Ottobock Sensorhand Speed is a commercial example of shared control in RPHs, which automatically increases thumb flexion during grasping in response to slippage [22]. A more recent study [23] showed that an EMG-based shared control strategy could ensure the safe handling of a bottle filled with content. This work leveraged an autonomous robot control that maximized the number of contacts between the robotic hand and an object.
This work evaluates novel shared control strategies to perform EMG-driven object manipulation tasks. We show that these shared control approaches allow subjects to perform a continuum of manipulation: from grasping an object to manipulating it in air and during insertion when in contact with another object. The subject remains in control of deciding when to grasp and release the object and how to preshape the hand. Importantly, our shared control mechanism enables subjects to rotate objects in hand when controlling a dexterous 16-DoF robotic hand. This is achieved thanks to the following: (a) An EMG-based single-finger proportional decoder to infer high-level finger motor intentions from forearm muscles. (b) A second EMG-based decoder from shoulder muscles to let the user control the in-hand object rotation via elevation or depression of the shoulder. (c) A virtual object-based compliant controller for low-level robot control. Given the task objective and tactile feedback, the low-level control of the robotic hand employs an autonomous and adaptive compliant controller that regulates online the interaction forces at contact points. (d) An interface that uses feedback-based state machines to integrate high-level and low-level robot control. The interface selects which task has to be executed, provided the subject's command and robotic feedback.
In a first experimental protocol, we compare two methods for shoulder decoding, and measure how quickly and precisely healthy participants are able to rotate objects in various orientations. Then, we evaluate our shared control approach with the two shoulder decoding strategies in a longitudinal study, where participants are required to teleoperate a robotic arm to pick, rotate and place an object in boxes with different orientations. The aim of this functional assessment is to evaluate the robustness of our shared control, taking into account the noise induced in finger decoding from forearm motions and interactions between the shoulder decoder and arm motions that would also be present in the case of persons with an amputation.

Methods
The first experiment is designed to compare two EMG shoulder decoding strategies, while the robotic hand is locked in place. The first approach is a threshold-based incremental decoding. When the subject elevates or lowers the shoulder and the EMG activity recorded from one electrode exceeds a threshold, the decoder modulates a rotation value that remains constant when the shoulder rests. The second strategy consists of a continuous decoder based on a support vector regression (SVR) algorithm that directly maps the shoulder position to the rotation value (e.g. maximum shoulder movement will result in maximum rotation and rest corresponds to zero rotation). We envision that the continuous decoder would allow subjects to rotate the object faster [24] as object rotation is proportionally correlated to shoulder movements, whereas the incremental one would be more robust to noise since it does not rely on a non-linear pattern recognition model.
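The two decoding strategies can be contrasted in a minimal sketch. The function names, threshold, step size, and the normalized [-1, 1] rotation range are illustrative, not the study's actual values:

```python
import numpy as np

def incremental_decoder(emg_up, emg_down, rotation, threshold=0.5, step=0.02):
    """Threshold-based incremental decoding: each call nudges the rotation
    value when the rectified EMG amplitude of the elevation/depression
    channel exceeds the threshold; when the shoulder rests (no channel
    above threshold), the rotation value holds."""
    if emg_up > threshold:
        rotation += step
    elif emg_down > threshold:
        rotation -= step
    return float(np.clip(rotation, -1.0, 1.0))

def continuous_decoder(shoulder_position):
    """Proportional decoding: the shoulder position (here assumed already
    regressed to [-1, 1], e.g. by an SVR) maps directly to the rotation
    value; rest corresponds to zero rotation."""
    return float(np.clip(shoulder_position, -1.0, 1.0))
```

The incremental decoder integrates small steps (slower, but a brief misclassification only shifts the rotation by one step), while the continuous decoder reaches any rotation in one movement but passes decoding noise straight to the output.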
In the second experiment, we compare four different shared control schemes; see figure 1. We perform a 2 × 2 experimental study with two levels of autonomy in the state machine interface versus the two types of EMG shoulder decoding, resulting in a total of four types of shared control schemes.
The two autonomy modes of the state machine interface are (a) unassisted and (b) assisted. In the former, the controller relies only on the command and state sequence, whereas the latter also uses feedback from the robotic system. The assisted mode utilizes tactile sensing and finger kinematics for identifying the task and its status. We expect the assistance to reduce trial failures, because it would avoid the unexpected drop of objects coming from noise in the finger EMG decoder.
In order to evaluate the performance of the proposed shared controllers, we adopt the Grooved Peg Test [25], a standard dexterity assessment. The test requires fine motor skills to place grooved pegs in holes with different orientations. This test is performed routinely to quantify the development of dexterity in six-year-old children [25], and the loss of dexterity in stroke patients [26]. We adapt the Grooved Peg Test to the size of the robotic hand at our disposal and design a peg-in-hole task where subjects have to grasp a rectangular object and place it inside boxes with different orientations (figure 5). An inclined surface is used to prevent subjects from relying on gravity to insert the object in the box, when the angle of the object does not perfectly match the box's angle.

Finger and shoulder decoders calibration
In each session of each experiment, subjects are asked to follow, on a screen, a series of single and multi-finger opening and closing movements performed by a virtual hand. The sequence of movements is repeated six times, and each movement is held for 5 s before the virtual hand goes back to its original resting position for 3 s. The rationale is to dissociate three main states of the fingers: a resting state corresponding to no muscle activation, and flexed and opened positions. The total calibration time for the finger movements is 9 min and 30 s.

[Displaced figure caption: Es ∈ R is the EMG signal from the shoulder. Θ is a vector of high-level commands determined via EMG decoding (see figure 5). Si ∈ R denotes the state machine defining the hand's control mode (see figure 6), and τh ∈ R^16 is the vector of joint torques computed to control the robotic hand. qi ∈ R^4 and pi ∈ R are, respectively, the joint positions and tactile sensor feedback for the ith finger.]
Virtual hand kinematics are recorded at 60 Hz and synchronized with the EMG acquisition system. Six DoFs are recorded from the virtual hand corresponding to each finger's flexion and thumb's opposition. Hand kinematics are rescaled between 0 (opened) and 10 (closed), and the rest position is set to 3 for each DoF to mimic a resting hand pose on the virtual hand. Since there is an intrinsic delay between the movement of the virtual hand and the finger movements of the subject, a visual inspection is performed by the experimenter to time-shift the signals and synchronize virtual hand kinematics and EMG activity (maximum shift of 300 ms).
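The rescaling and delay compensation described above can be sketched as follows. The function names are illustrative; at 60 Hz, the maximum 300 ms shift corresponds to 18 samples, and the shift itself is determined by the experimenter's visual inspection rather than computed here:

```python
import numpy as np

def rescale_kinematics(angles, open_val, closed_val):
    """Rescale raw virtual-hand joint angles to the 0 (opened) .. 10
    (closed) range used as decoder targets. The rest pose is mapped to 3
    by the recording protocol itself, not by this function."""
    return 10.0 * (np.asarray(angles, dtype=float) - open_val) / (closed_val - open_val)

def shift_signal(kinematics, shift_samples):
    """Shift the virtual-hand trace earlier by `shift_samples` samples to
    compensate the subject's reaction delay (at most 18 samples, i.e.
    300 ms at 60 Hz), padding the end with the last value. Only
    non-negative shifts are supported in this sketch."""
    if shift_samples == 0:
        return np.array(kinematics, dtype=float)
    shifted = np.empty(len(kinematics), dtype=float)
    shifted[:-shift_samples] = kinematics[shift_samples:]
    shifted[-shift_samples:] = kinematics[-1]
    return shifted
```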
To calibrate the continuous shoulder controller, the same setup is used, with the virtual hand alternating between closed (target = 10), opened (target = 0) and a middle position (target = 5) to record 1 DoF. The subjects are asked to elevate their shoulder when the hand is flexed, lower their shoulder when the hand is opened, and rest when the virtual hand is in the middle position. Each movement is held for 5 s, alternating with a rest position for 3 s. The sequence is repeated six times for a total calibration time of 1 min and 35 s.

Shoulder decoders comparison
After calibrating all EMG decoders, participants are asked to grasp a cuboid placed by the experimenter in the robotic hand, and to perform a sequence of in-hand object rotation tasks. A rotating base with indicators of the desired rotation angles is placed below the grasped object; see figure 2. Markers are attached to the cuboid and the base to track their orientations. The task involves rotating the object 10 times to one of five target angles. Following instructions from an algorithm generating random integer values between 0 and 5, the experimenter rotates the base randomly between −60°, −30°, 0°, 30°, and 60°.
Before starting the experiment, the participants have a maximum of 3 min to familiarize themselves with the shoulder decoders and the direction of rotation. Then, the participants must perform the task with both shoulder decoding strategies, starting at random with either decoder. In the case of the cuboid falling from the robotic hand, the experimenter puts it back in the robotic hand, the base resets to the initial configuration without rotation, and the trial is not considered. The experiment is completed in less than an hour for all subjects.

Figure 3. Summary of the experimental protocol. In three consecutive days, a subject participates in three sessions of four experiment runs. In each run, a specific control condition among four is selected. The order of control conditions is randomized in every session and for all subjects. After setting the control condition, subjects complete five pick-and-place tasks, differing in the target positions. The task order is fixed from the top left box to the bottom right one in figure 5. During task execution, we record completion time, the number of failed attempts, the grasping time duration, and EMG signals.

Longitudinal functional assessment
The experimental protocol is summarized in figure 3. Subjects enrolled in this experiment participate in three sessions consisting of four experiment runs. In each run, a specific control condition is selected. Given the control condition, subjects perform five tasks of picking and placing a cuboid in different target boxes. Subjects are informed which shoulder decoder is activated but not which control condition is utilized. After each experiment run with a control condition, subjects are given a 2 min break before starting with the next control condition. Since acquaintance with the setup can affect performance, all subjects participate in three sessions over three consecutive days, in order to account for the expected learning effects. An experimental session takes between 60 and 100 min for all subjects.

Subjects and EMG recording
In the first experiment, four male participants aged between 27 and 32, all right-handed, are enrolled (average weight: 77 kg, average height: 177 cm). Eight subjects are recruited for the second study, all of whom are males aged between 26 and 30 (average weight: 74 kg, average height: 177 cm, two left-handed). A Noraxon DTS system wirelessly records EMG signals at 1 kHz. Six bipolar surface EMG channels are placed on the right forearm to target the extensor digitorum, flexor carpi radialis, palmaris longus, flexor digitorum superficialis, and flexor carpi ulnaris muscles, located by palpation. Two other bipolar surface EMG channels are placed on the right shoulder and the back of the subjects to target the upper and lower fibers of the trapezius, which allow, respectively, elevation and depression of the shoulder joint.

Finger and shoulder decoders training
For both decoders, a sliding window of 200 ms is used with a moving step of 30 ms (170 ms overlap). To evaluate the offline performance of the decoders, the last repetition of the sequence of movements is used as a validation set. In total, five features are extracted to serve as input for the decoders [27]: (a) mean absolute value, (b) waveform length (cumulative length of the EMG waveform over time), (c) maximum absolute value, (d) zero crossings (number of times the signal crosses zero), and (e) slope sign changes (number of times the slope of the signal changes sign). For the EMG finger decoder, an MLP regressor is used to decode the 6 DoFs simultaneously. The model is designed in Keras (https://github.com/fchollet/keras) with a TensorFlow (www.tensorflow.org) backend. It has one hidden layer with 32 nodes (ReLU activation function). The MLP is trained using gradient descent with a batch size of 16 for 50 epochs. The learning rate is set to 0.01 and divided by 2 every 10 epochs. Early stopping is triggered if the validation loss does not decrease for more than 13 epochs. A dropout rate of 0.2 is applied during training.
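The windowing and the five time-domain features can be sketched as follows (window and step are in samples, so 200 and 30 at the 1 kHz sampling rate; function names are illustrative):

```python
import numpy as np

def emg_features(window):
    """Five time-domain features per channel for a (samples, channels)
    EMG window: mean absolute value, waveform length, maximum absolute
    value, zero crossings, and slope sign changes."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)      # cumulative waveform length
    mx = np.max(np.abs(window), axis=0)
    zc = np.sum(np.diff(np.signbit(window), axis=0), axis=0)  # sign flips of the signal
    ssc = np.sum(np.diff(np.signbit(np.diff(window, axis=0)), axis=0), axis=0)  # slope sign flips
    return np.concatenate([mav, wl, mx, zc, ssc]).astype(float)

def sliding_windows(signal, win=200, step=30):
    """200-sample windows with a 30-sample step (170-sample overlap)."""
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]
```

Each window then yields a feature vector of length 5 × number of channels, which serves as one input sample for the regressor.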
The shoulder continuous decoder is an SVR algorithm with a radial basis function kernel from sklearn [28]. This model is chosen instead of an MLP to avoid overfitting, given the smaller amount of calibration data and the simpler decoding task.
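A minimal sketch of this regressor, using synthetic stand-in data: in the study, X would hold the windowed EMG features of the two shoulder channels and y the 0-10 target from the virtual hand; the shapes and data here are purely illustrative.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))   # stand-in for (windows, features) calibration data
y = 3.0 * X[:, 0] + 5.0          # stand-in for the shoulder target values

model = SVR(kernel="rbf")        # radial basis function kernel, as in the paper
model.fit(X, y)
pred = model.predict(X[:5])
```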
[Displaced figure caption fragment (task sequence): (2) The user commands the robotic hand to grasp the object by closing his fingers. Then, he lifts the object and moves the robotic arm near the target box. The robotic hand autonomously holds the object. (3) Through shoulder movement, the subject tries to align the orientation of the object and the target box. (4) The subject aligns the orientations by rotating the object within the robotic hand frame. (5) The object is placed in the target box and then released (6).]

Figure 5. Overview of the experimental setup. The subject is asked to grasp a cuboid and place it in one of the target boxes, differing in position and orientation. The order of all pick-and-place tasks, as specified from 1 to 5, is fixed throughout the entire recording. A KUKA IIWA 14 robotic arm is set to follow the displacement of the subject's wrist acquired via the OptiTrack capture system (Yw: marker tracking signals, xw ∈ R^3: wrist position, αw ∈ R^3: wrist orientation, qa ∈ R^7: joint position feedback of the robotic arm, and τa ∈ R^7: joint torque commands sent to the robotic arm). A left Allegro robotic hand mounted with BioTac tactile sensors executes grasp and manipulation commands received from the EMG decoder. The hand is commanded to open or close via processing forearm EMG signals, and to rotate the object within the hand frame through shoulder EMG activation (Eh ∈ R^200×6: forearm EMG signals, Es ∈ R^200×2: shoulder EMG signals, θf ∈ R^5: scaled finger flexion, and θs ∈ R: scaled shoulder flexion).

To maximize real-time performance, decoded values from the MLP are smoothed using a three-point median filter. Moreover, to reduce noise when subjects are performing the tasks, predicted values are set to 0 if the decoded value is below 2, and set to 10 if the decoded value is higher than 7. Similar post-processing is applied to the SVR shoulder decoder, with a median filter and value clipping if the decoded values are out of bounds.
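The thresholded median filtering for the finger decoder can be sketched as follows (function name illustrative; thresholds of 2 and 7 on the 0-10 scale are from the text):

```python
import numpy as np

def postprocess(history, low=2.0, high=7.0):
    """Post-process a stream of decoded finger values: a three-point
    median filter over the most recent predictions, then snap values
    below `low` to 0 (fully open) and above `high` to 10 (fully closed)
    to suppress decoding noise during task execution."""
    smoothed = float(np.median(history[-3:]))
    if smoothed < low:
        return 0.0
    if smoothed > high:
        return 10.0
    return smoothed
```

The median filter rejects single-sample spikes, and the snapping keeps small residual noise from partially opening or closing a finger.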

Data analysis
To compare the shoulder decoders, the task was divided into rotations between target angles. We quantified the angle difference over time between the target angle obtained from the base and the angle of the object. For each new target angle, the error was computed as the difference between the object and the base angles. We also quantified the time taken to perform each rotation.
For the functional assessment, we measured the completion time and number of failed attempts for each of the five different tasks within an experiment run. The key terms and main outcome variables (completion time and the number of failures) are defined below, following the control conditions in figure 1.

• Control condition: There are four control conditions differing in the type of employed state machine interface and EMG decoder.

For the first experiment, statistical analysis was performed to test for significant differences in performance between the two shoulder control strategies, in terms of rotation time. In the second experiment, we conducted statistical tests to assess significant changes in performance (in terms of the number of failures and completion time) across the different sessions, control conditions, and targets. To check if the variables were normally distributed, we used the Shapiro-Wilk test and chose parametric or non-parametric tests accordingly.
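This normality-gated test selection can be sketched for the paired case (function name and the alpha default are illustrative; the paper does not specify which parametric/non-parametric pairs were used beyond the Wilcoxon and Friedman tests reported in the results):

```python
import numpy as np
from scipy import stats

def compare_paired(a, b, alpha=0.05):
    """Paired comparison of two conditions: a Shapiro-Wilk test on the
    paired differences decides between a paired t-test (differences
    consistent with normality) and a Wilcoxon signed-rank test."""
    diffs = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    _, p_norm = stats.shapiro(diffs)
    if p_norm > alpha:
        stat, p = stats.ttest_rel(a, b)
        return "t-test", stat, p
    stat, p = stats.wilcoxon(a, b)
    return "wilcoxon", stat, p
```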

EMG-robot interface
The robotic hand operates in three modes: preshape, grasp, and manipulate. We model each of these modes via a set of state machines, S = {S1, S2, S3}, where S1, S2, and S3 represent preshape, grasp, and manipulate, respectively (see figure 6). Transitions from S2 to S3, and from S3 to S1, are unidirectional. However, the switch between S1 and S2 is bi-directional, meaning that if the grasp is not sound, the subject can re-open the hand and attempt a new grasp. The bi-directional transition between these two states allows the subject to open and close the fingers several times to find a satisfactory grasp.

Figure 6. Block diagram of robot hand control with state machines of the (shoulder) EMG-robot interface. S1, S2, and S3 represent the preshape, grasp, and manipulate modes, respectively. We assess four control conditions for controlling the robotic hand. Control conditions are combinations of two EMG decoders (incremental/continuous) and two interface modes (assisted/unassisted); see figures 1 and 5.
We define and evaluate two state transition functions for our interface. One takes only the EMG command as input (unassisted interface), and the second one uses robotic feedback in addition to the EMG command (assisted interface); see figure 6. More precisely, in the assisted interface, we use tactile feedback to verify contact with the object and grasp realization in the transition from state S2 to state S3. Moreover, from the finger kinematics and joint angles, we estimate the object's angular displacement. This estimate is then used to confirm the attainment of the desired motion. Once the desired manipulation is achieved, the robotic hand releases the object only if the command from the subject is received consistently over a time window of 4 s, i.e. the subject insists on opening the hand.
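The assisted transitions can be sketched as a single transition function (names and the command encoding are illustrative; the three-contact condition and the 4 s sustained-open requirement come from the text):

```python
PRESHAPE, GRASP, MANIPULATE = "S1", "S2", "S3"

def assisted_transition(state, emg_cmd, n_contacts, open_cmd_duration=0.0):
    """One step of the assisted interface's state machine.
    S1 <-> S2 is bi-directional so the user can retry an unsound grasp;
    S2 -> S3 additionally requires tactile confirmation of three finger
    contacts; from S3 the hand releases (back to S1) only after the open
    command has been sustained for at least 4 s."""
    if state == PRESHAPE and emg_cmd == "close":
        return GRASP
    if state == GRASP and emg_cmd == "open":
        return PRESHAPE                      # grasp not sound: re-open and retry
    if state == GRASP and n_contacts >= 3:
        return MANIPULATE                    # contact verified via tactile sensing
    if state == MANIPULATE and emg_cmd == "open" and open_cmd_duration >= 4.0:
        return PRESHAPE                      # sustained open command: release object
    return state
```

In the unassisted variant, the `n_contacts` and `open_cmd_duration` guards would simply be dropped, leaving the command sequence alone to drive the transitions.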

Experimental hardware
The hardware for the experiments consists of an Allegro left hand mounted on a KUKA LBR IIWA 14 robotic arm, an OptiTrack motion capture system, and BioTac tactile sensors; see figure 5. The Allegro hand has four fingers (thumb included), each with four active DoFs. Each DoF is actuated by a motor: three motors at the MCP, PIP, and DIP joints, while the fourth motor is located just under the finger base, attached to the palm, and controls the lateral rotation. The thumb has three motors at the joint connecting to the palm, controlling rotations along the three axes, and one motor at the joint connecting the two phalanges. The Allegro hand can be operated in either position or torque control mode; the latter is used in our shared control approach. A set of three BioTac sensors are used as fingertips to obtain tactile information at contact points during grasp and manipulation. The robotic hand is mounted on the end-effector (EE) of the KUKA IIWA 14 robotic arm. The KUKA IIWA 14 robot has 7 DoFs that allow its EE to be moved to a desired position and orientation smoothly and continuously. The redundancy in joint space, the extra DoF of the KUKA, is exploited to minimize the acceleration along a trajectory, resulting in smooth robot motion.
In the first experiment, 3D markers are placed on the cuboid as well as the rotating base to record the difference in orientation over time with the OptiTrack camera system; see figure 2. In the second experiment, the subject wears a set of 3D markers positioned explicitly on the wrist (see figure 5) to detect the position and orientation of the subject's hand through the OptiTrack camera system. The acquired position and orientation of the subject's wrist are then sent to the control interface and used to teleoperate the KUKA's EE. Target boxes and the object's initial place are fixed to specific positions within reach of the robotic arm. During recording, to avoid inconsistency in actuators and sensors performance due to overheating and friction drift during active times, the entire system is shut down for 30 min after each experiment run.

Autonomous robot controller
We control both the robotic arm and the robotic hand in the torque mode, maintaining joint compliance. Compliant controllers enable the robot to adapt to perception uncertainties and provide safer human-robot interaction compared to position controllers. We introduced an additional safety layer on top of the controller, a user-robot collision avoidance that enhances the safety of human-robot interaction. Our autonomous compliant controller has two sub-modules: (a) a controller to imitate the subject's wrist displacement in position and orientation with the robotic arm EE, and (b) a controller that computes the joint torques of the robotic hand to perform tasks like grasp and manipulation.

Robotic arm control
The desired position and orientation of the robotic arm EE are computed from the measured motion of the subject's wrist. From the OptiTrack system, we obtain the 3D position (x, y, and z axes) and orientation (roll, pitch, and yaw) of the subject's wrist with respect to the world frame. In our tracking strategy, the EE must mimic the subject's wrist displacement in four coordinates: the 3D position and the angle about the roll axis. Since the subject and the robot face each other in our setup, the robot EE is commanded to mirror the displacement in the pitch axis of the subject's wrist. To restrict object rotation to in-hand manipulation in our control scenario, the robot EE ignores the displacement in the yaw axis of the subject's wrist; this is achieved with Rodrigues' rotation formula [29]. We found that mirroring the pitch orientation and not tracking the yaw rotation is more intuitive for subjects and simultaneously results in more accurate tracking. Then, the desired pose of the robot EE is passed to a predefined linear dynamical system (DS) [30] to find the desired translational and angular velocities. These velocities serve as the inputs for our underlying compliant and passive controller [31,32], which outputs a set of joint torques for the robotic arm. Thanks to the DS approach, our controller is robust to perturbations and disturbances, and its compliant passivity makes it safe for human-robot interaction. The redundant DoF of the robotic arm is constantly optimized to decrease robot acceleration while traversing from one point to another.
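The linear DS step can be illustrated for the translational part. This is a sketch under assumed values: the gain matrix is taken as a scaled identity and the saturation limit is illustrative, not the values used on the real robot.

```python
import numpy as np

def linear_ds_velocity(x, x_target, gain=2.0, v_max=0.3):
    """Linear dynamical system generating the desired EE velocity:
    xdot = -A (x - x_target) with A = gain * I, saturated at v_max so the
    downstream compliant controller receives bounded commands. Because
    the velocity always points at the current target, the motion is
    robust to perturbations: any displaced state simply flows back."""
    v = -gain * (np.asarray(x, dtype=float) - np.asarray(x_target, dtype=float))
    norm = np.linalg.norm(v)
    if norm > v_max:
        v *= v_max / norm
    return v
```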

Robotic hand control
Similar to the robotic arm control, this controller also requires a set of desired fingertip positions/velocities as inputs. The computation of these inputs depends on the task state and whether or not a finger is in contact with the object. In the pre-grasp state, the desired position is computed based on the angle from the EMG decoder. In this state, each finger of the Allegro hand is separately commanded by the EMG decoder. The object's relative position with respect to the robotic hand is computed by tracking the subject's wrist motion. Once the object is within the hand frame, the subject changing hand posture from open hand to fist is equivalent to commanding the robotic hand to attempt a grasp. Based on tactile feedback, the control state changes from pre-grasp to grasp once three fingers make contact with the object. In this state, the fingers establish a force closure [33] around the object. In other words, if the subject feels a higher grip force is needed to obtain a firm grasp and lift the object, this can be achieved by clenching the fist tighter.
In the manipulation state, where the fingertips are in contact with the object, the control inputs are determined based on the object's desired position. In the grasp state, the object's desired position is fixed. When the subject intends to rotate the object within the robotic hand, in the manipulation state, the object's desired position changes given the input from the EMG shoulder decoder. This desired pose is relative to the palm frame (a frame attached to the base of the robotic hand) and is used to find the desired object velocity through a linear DS. Translating an object-centric desired velocity to the in-contact fingers' desired velocities is realized through the grasp matrix transformation [33], a grasp stability metric, and the estimated contact mechanical properties. More precisely, we obtain the desired velocity of all fingers from the desired object velocity commanded by the shoulder EMG decoder. Similar to the robotic arm control, these velocities are sent to our underlying compliant and passive controller to compute the corresponding joint torques [5].
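The core of the grasp-matrix transformation can be sketched for a point-contact model. This is a simplification of the full mapping (which also involves the grasp stability metric and estimated contact properties mentioned above); names are illustrative.

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix such that skew(p) @ v == np.cross(p, v);
    this is the building block of the grasp matrix."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def fingertip_velocities(v_obj, w_obj, contacts):
    """Map a desired object twist (linear v_obj, angular w_obj) to the
    desired velocity of each in-contact fingertip via the rigid-body
    relation v_i = v_obj + w_obj x r_i, where r_i is the contact position
    relative to the object frame (point contacts, no rolling/sliding)."""
    v_obj = np.asarray(v_obj, dtype=float)
    return [v_obj + np.cross(w_obj, r) for r in contacts]
```

In the full controller, these per-finger velocities are the references handed to the compliant, passive torque controller.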

Shoulder controllers comparison
The results of the first experiment are depicted in figure 7, where we compare the control performance of the incremental and continuous EMG decoders. From figure 7 (left), we observe that with the incremental decoder, the rotation error decreases more uniformly and reaches a lower value (0.18 rad less) than with the continuous decoder (Z = 2.07, p < 0.05). On the other hand, figure 7 (right) shows that the continuous decoder is significantly faster than the incremental one (Wilcoxon test, Z = 133.5, p < 0.01). The incremental decoder therefore yields slower but more precise control, owing to its smoother error decay.

EMG decoder
During EMG calibration, subjects were asked to follow a virtual hand performing a sequence of movements six times. The first five repetitions were used for training the model, and the last one as a validation set to evaluate the decoding quality and stop the training when the validation loss was no longer decreasing. The validation losses of the finger decoder for the three sessions of the second experiment are shown in figure 8. The three sessions were compared using Friedman's Analysis Of Variance (ANOVA) (χ2(2) = 4.75, p = 0.093) and are not statistically different at a significance level of 5%. Figure 9 shows a representative example of the EMG decoder accuracy for predicting both finger flexion and shoulder movements. In this example, the virtual hand provides the ground-truth finger flexion, while the predicted flexion values are computed from the validation dataset of one subject after performing the EMG calibration. Note that the virtual hand finger values were shifted by 0.150 s to account for the delay between the virtual hand and the subject. A mean squared error loss of 5.90 was obtained before post-processing the predicted angles. For clarity, the data shown in figure 9 were post-processed using the same methods applied during real-time decoding (smoothing and clipping). In figure 9, some movements are not completely synchronized even though the general delay between the subject and the virtual hand was taken into account. The model generally performs better on the last fingers than on the first ones. Regarding continuous shoulder decoding, all subjects obtained high offline accuracy (average loss: 3.22 ± 1.44 and average R2: 0.78 ± 0.1; see figure 9 for an example). In real time, when the subject was moving the object in space, some noise could arise from arm motion that activated the shoulder muscles.
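The real-time post-processing mentioned above (smoothing and clipping of the predicted angles) can be sketched as follows. The choice of an exponential filter, its coefficient, and the [0, 1] output range are assumptions, since the paper does not specify the exact filter:

```python
import numpy as np

def postprocess(pred, alpha=0.3, lo=0.0, hi=1.0):
    """Causal exponential smoothing followed by clipping, as a stand-in
    for the paper's real-time post-processing (parameter values are
    illustrative). pred: raw decoded flexion values over time."""
    out = np.empty(len(pred), dtype=float)
    s = float(pred[0])
    for i, x in enumerate(pred):
        s = alpha * x + (1.0 - alpha) * s   # smooth
        out[i] = min(max(s, lo), hi)        # clip to the valid range
    return out
```

A causal filter of this kind suits online decoding: it removes high-frequency prediction noise at the cost of a small lag behind fast finger movements.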

Shared control functional assessment
On average, one session for a subject took 75 min. Approximately 30 min were used at the beginning to place the EMG electrodes and calibrate the finger and shoulder decoders. Performing the five pick-and-place tasks then took, on average, 6 min and 11 s for each condition. Figure 10 provides an overview of the collected data for all participants and conditions. Figure 11 shows the mean number of failed attempts and mean completion time for the eight subjects across the three sessions. Using Friedman's ANOVA, we observe a statistically significant difference in task completion time across the three sessions, χ2(2) = 31.962, p < 0.001. Post hoc analysis with Wilcoxon signed-rank tests was conducted using a Bonferroni correction, setting the significance level at p < 0.05/3 = 0.017. For the number of failures, we also verify a statistically significant difference between the three sessions using Friedman's ANOVA, χ2(2) = 11.494, p = 0.003. Using the Wilcoxon signed-rank test, we see (also in figure 11) that only from session 1 (mean = 0.96, standard deviation (SD) = 1.5) to session 3 (mean = 0.41, SD = 0.68) does the mean number of failures decrease significantly (Z = −3.639, p < 0.001), by an average of 12.1 failures. From session 1 to session 2 (Z = −2.321, p = 0.020) and from session 2 to session 3 (Z = −1.509, p = 0.131) there are no statistically significant changes.
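This across-session analysis can be reproduced with standard SciPy routines. The data layout (one completion-time value per subject and session) and the function name are assumptions for illustration:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

def session_comparison(times):
    """times: array of shape (n_subjects, 3), one value per session.

    Friedman's ANOVA across the three sessions, then pairwise Wilcoxon
    signed-rank tests with a Bonferroni-corrected threshold of 0.05/3,
    as described in the text."""
    chi2, p = friedmanchisquare(times[:, 0], times[:, 1], times[:, 2])
    alpha = 0.05 / 3  # Bonferroni correction over three comparisons
    pairwise = {}
    for a, b in [(0, 1), (1, 2), (0, 2)]:
        _, p_ab = wilcoxon(times[:, a], times[:, b])
        pairwise[(a, b)] = (p_ab, bool(p_ab < alpha))
    return (chi2, p), pairwise
```

On synthetic data with a consistent session-to-session improvement for all eight subjects, the Friedman test and the session 1 vs. session 3 comparison both come out significant.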

Comparison between shared control conditions
We investigated the performance of four shared control schemes, which combined two state transition modes (assisted vs. unassisted) with two EMG decoding approaches (continuous vs. incremental). The mean task completion time and mean number of failures were obtained and analyzed for each control condition, for all subjects and experimental sessions (see figure 12). Each subject tried the four shared control conditions (see figure 1) in three sessions over three days; thus, for each control condition, there are 24 data points (8 subjects × 3 sessions). In each experimental run, subjects were asked to complete five sub-tasks in a specific order (figure 5), placing a cuboid in five different target boxes. We recorded the number of failed attempts and the completion time of each sub-task. In each plot, the x-axis is the mean completion time of these five sub-tasks, the y-axis is the corresponding sum of failed attempts, and the size of the marker indicates the relative variance in completion time over the five sub-tasks. Using again Friedman's ANOVA with α = 0.05, we observe a statistically significant difference in task completion time between the four experimental conditions, χ2(3) = 11.325, p = 0.010. The same holds for the number of failures, χ2(3) = 10.305, p = 0.016. Running the Wilcoxon signed-rank test with p = 0.05/4 = 0.0125, we find that the task completion time in condition U-I (median = 19.0 s, IQR = 11.9) was significantly longer than in U-C (median = 17.4 s, IQR = 10.4), Z = −2.803, p = 0.005. That is, without assistance, subjects were significantly faster, by an average of 3.36 s, using the continuous controller compared to incremental control. Of the two state transition modes using the incremental EMG decoder, A-I (median = 16.9 s, IQR = 12.2) yielded the lowest median completion time. Regarding the number of failures, we see statistically significant changes between U-I (mean 6.125) and A-I (mean 2.625), Z = −3.046, p = 0.002, and between conditions U-C (mean 6.0) and A-I (Z = −2.549, p = 0.011). In fact, the number of failed attempts for both control schemes that used the unassisted interface significantly improved with the assisted interface combined with the incremental control modality (by 3.5 and 1.5, respectively).
Overall, comparing the two state transition modes (unassisted vs. assisted), the assisted interface was significantly better than its unassisted counterpart, resulting in a lower number of failed attempts (41% fewer) (Wilcoxon test, Z = 3431.5, p < 0.05). Regarding completion time, no significant differences were observed between the two modes. The two EMG decoding approaches (incremental vs. continuous), on the other hand, performed similarly in terms of completion time, regardless of the assistance mode.
Taking a closer look at the performance for each of the five targets, figure 13 shows that the assisted mode resulted in fewer failures, especially for targets that require a large change in wrist orientation and a large in-hand rotation (tasks 1, 2, and 4, where the necessary angle is larger). The first two tasks are considered the most difficult, since the subject needs to adjust the orientation of the robot's end-effector (keeping the palm parallel to the target box inclination) and simultaneously rotate the object in hand.

Results summary
In summary, the assisted control mode significantly improved performance by lowering the number of failed attempts. The assisted state transition mode combined with the incremental EMG control condition, A-I, outperformed the others, as it had the lowest completion time and a significantly lower number of failures (higher precision). Conversely, incremental EMG decoding without assistance (U-I) proved to be the least efficient control condition. Comparing U-C, the condition with the lowest robot autonomy, against A-I, the condition with the highest robot autonomy, shows that increasing the autonomy of the robot controller improved precision (a 41% reduction in failures) more markedly than efficiency (a 2.4 s reduction in task completion time).

Discussion
We first compared two shoulder decoders in a controlled object rotation task; then, to assess the performance of subjects in a functional task, we used a robotic hand to pick, manipulate, and place an object in various target boxes. We examined four shared control conditions based on a compliant controller in conjunction with EMG decoding to teleoperate a robotic arm, while subjects maintained full authority over high-level commands. Given the experimental results, we propose the shared control strategy with the assisted state machine interface and the incremental shoulder EMG decoder.
The participants' grasping motor commands were interpreted individually, simultaneously, and proportionally for each finger. Proportional decoding is intuitive [8]; it enabled the subjects to effectively complete the task while keeping full control over the fingers when not grasping. However, complex decoding strategies inevitably increased the noise in the predicted output, reducing the system's reliability. The presence of noise in the predicted output and disparities in performance between fingers are highlighted in figure 9. The anatomy of the forearm is an essential factor, as superficial electrodes do not allow for selective recordings from deep muscles. However, multiple studies demonstrated that deep learning could improve decoding accuracy [34][35][36].
The first experiment showed that the incremental decoder was significantly slower than the continuous one when the robotic hand was locked in place. However, this strategy was more stable and precise, as reflected by its smoother and lower tracking error (0.18 rad less).
In the second experiment, subjects had to reach target boxes on an inclined surface, which required them to move their arms and rotate their wrists. Although forearm orientation is a major source of noise for real-time applications [37], the control technique developed in this study was robust enough for the subjects to complete the task. Calibrating the model in various arm and wrist orientations could improve robustness to wrist rotation and arm orientation [38], but this would imply a substantially longer calibration time. Using a virtual hand to synchronize EMG and hand kinematics was also a limitation. Indeed, unreliable samples were introduced during training because of a non-constant latency between the virtual hand and the subject's movement. Furthermore, because muscle synergies can cause finger co-flexion [39], the movements expected by the virtual hand were not always natural and, in some situations, deviated from the actual movements of the individuals. Kinematic tracking of real finger motions (on the contralateral hand in the case of unilateral amputation) using a glove [40] or cameras [7] could more precisely capture intended movements.
To accomplish in-hand object rotation with a robotic hand, precise finger motions in association with sensory input are necessary [5]. In this study, information such as object position, displacement, and grasp position is obtained directly from the subjects. Computing such variables and states is challenging when approaching complex manipulation tasks. The controller had to remain reactive to the user's commands with minimal execution delay to increase intuitiveness and the sense of agency [41]. At the same time, the controller had to be robust to inconsistencies from the EMG decoder due to the variability of the signals. Although there was no statistical difference in EMG validation loss between days, we observed that the number of failed attempts and the completion time decreased over test sessions across all tasks and subjects. This confirms that subjects learned how to improve their performance on this modified grooved peg test and did not rely on an increase in EMG decoding accuracy. In this control approach, we estimated the object's pose relative to the robotic hand from the forward kinematics and the fingers' joint positions. This estimation proved helpful in these tasks with a cuboid object; for more complex object shapes, and to obtain higher accuracy, more sophisticated pose estimation methods based on vision [42] or on tactile images [43] are required. From figure 13, target tasks that required larger in-hand rotation foreseeably took more time to complete. To reach human-level performance, reducing the delay in executing user commands and increasing the decoding precision [44] will be instrumental for future enhancement. Furthermore, the information from the robotic hand can be used to provide sensory feedback to the users. For instance, contact, force, and finger angle information gathered from the robotic hand can be delivered to users with trans-radial amputation through invasive [45] or non-invasive channels [46].
Sensory feedback can help users to send more accurate high-level commands to the robotic hands, improve embodiment [47], and reduce cognitive load [48].
Numerous strategies could have been employed to control object rotation indirectly. However, the chosen strategy must not interfere with other motor functions when used by individuals with a trans-radial amputation in a real-world scenario. Redundancy in the DoFs of the human body can be exploited: shoulder elevation and depression movements are not essential for many activities of daily living and can therefore be leveraged. A widely available, portable solution to record such movements is EMG. Wearing an RPH could alter this muscle activity; however, we hypothesize that the shoulder muscle activity would not differ much from that of a healthy subject moving their arm in space, since the weight of prosthetic hands is now similar to that of natural hands. Another limitation would appear when a patient lifts heavy objects; nevertheless, heavy objects are rarely manipulated with one hand. We therefore investigated two decoding strategies based on shoulder muscle EMG. The continuous decoder was proportional to the shoulder motion, but in the first experiment this decoder showed a noisier behavior, with slightly more oscillations in tracking the target angle. Moreover, due to cross-talk with arm motion in the functional assessment, we observed noisy behaviors in real time during task execution. For instance, when subjects had to lower their arm to reach the fifth target (placed directly on the table), they had to lift their shoulder upwards to rotate the object in the right direction, which increased the number of failed attempts with the continuous shoulder decoder. In contrast, with activity thresholds set, the incremental shoulder decoder did not show this unwanted behavior, partly because subjects could modulate their shoulder activity to rotate the object before placing it in the correct box. The increment value determined the object's desired rotation and allowed the subjects to regulate the rotation accurately.
This highlighted the robustness of the incremental decoder for such a task.
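A minimal sketch of such a threshold-based incremental decoder is given below; the threshold value, step size, and function signature are assumptions made for illustration:

```python
def incremental_rotation(elev_activity, depr_activity, angle,
                         step=0.1, threshold=0.5):
    """One update of a threshold-based incremental shoulder decoder.

    Elevation activity above the threshold adds one rotation increment,
    depression activity subtracts one; sub-threshold activity (e.g.
    cross-talk from arm motion) leaves the commanded angle unchanged.
    All numeric values are illustrative.
    """
    if elev_activity > threshold and elev_activity >= depr_activity:
        return angle + step
    if depr_activity > threshold and depr_activity > elev_activity:
        return angle - step
    return angle
```

The thresholding is what makes the decoder robust: low-level shoulder activations caused by arm motion never cross the threshold, so they produce no spurious rotation commands.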
In the first experiment (in-hand rotation only), subjects were more accurate (0.18 rad less tracking error) but slower in task completion with the incremental decoder compared to the continuous decoder. We observed the same behavior in the functional assessment experiment when both decoders were used without the assisted interface. However, when using the assisted interface, there was no statistical difference in task completion time between the two shoulder decoders. Overall, the assistance reduced the mean number of failures by 41% compared to the unassisted mode.
Indeed, the continuous shoulder decoder was expected to rotate objects faster, since it is directly linked to the shoulder angle. On the other hand, in some cases a too-fast rotation could cause the object to slip and fall, increasing the number of failed attempts compared to the incremental decoder. As a result, subjects developed a strategy to overcome this issue: they reduced the shoulder motion speed, which can explain the results obtained. This strategy was effective only with the assisted transition mode, as without assistance the number of failed attempts remained large with both decoders. Finally, assistance from the robotic controller became greatly beneficial for targets 1 and 2, where decoding the subject's intention and encoding the desired command was more difficult. Indeed, the two inclined targets had a high angle and elevation, and for these targets the assisted state transition mode significantly decreased the number of failures compared to the unassisted interface.

Conclusion
We showed that combining a shared control scheme with EMG decoding of finger motions and object rotation could be a realistic alternative for users with trans-radial amputation to improve the dexterity and versatility of RPHs. The shared control created in this study could allow users with an amputation to manipulate objects in their prosthetic hand, which is practical in numerous daily tasks. In our teleoperated system, subjects maintained complete control over the robotic hand. The condition that obtained the best results was the incremental shoulder EMG decoder with the assisted state transition mode. Of course, integration into an RPH and validation with people with trans-radial amputation would be necessary to quantify functional improvements. Another future direction of this work is the implementation of fine-scale force modulation to allow patients to grasp and manipulate fragile objects. Nonetheless, this study takes one step toward more advanced control systems, suggesting that future RPH development in the direction of sensorized hands with compliant controllers would benefit people with trans-radial amputation.

Data availability statement
The data and code that support the findings of this study are available upon reasonable request.

Funding sources
This work was supported as a part of NCCR Robotics, a National Centre of Competence in Research funded by the Swiss National Science Foundation (Grant Number 51NF40_185543), and by the Bertarelli Foundation.

Conflict of interest
S M is co-founder of SensArs Neuroprosthetics, a small company working on the commercialization of bidirectional hand prostheses.

Ethical statement
This study including the experimental procedures was approved by the cantonal ethical committee of Vaud. Informed consent was obtained from all participants in the study. The authors have confirmed that any identifiable participants in this study have given their consent for publication.