
An EOG-based wheelchair robotic arm system for assisting patients with severe spinal cord injuries


Published 12 February 2019 © 2019 IOP Publishing Ltd
Citation: Qiyun Huang et al 2019 J. Neural Eng. 16 026021. DOI: 10.1088/1741-2552/aafc88


Abstract

Objective. In this study, we combine a wheelchair and an intelligent robotic arm based on an electrooculogram (EOG) signal to help patients with spinal cord injuries (SCIs) accomplish a self-drinking task. The main challenge is to accurately control the wheelchair so that a randomly located object falls within the limited reachable space of the robotic arm (length: 0.8 m; width: 0.4 m; height: 0.6 m). This requires decimeter-level precision, which has not yet been demonstrated for EOG- or EEG-based systems. Approach. A novel high-precision EOG-based human-machine interface (HMI) is proposed which effectively translates two kinds of eye movements (i.e. blinking and eyebrow raising) into various commands. For the wheelchair, the positional precision reaches decimeter level and the minimal steering angle is $5^{\circ}$. For the intelligent robotic arm, shared control is implemented based on the EOG-based HMI, two cameras and the arm's own intelligence. Main results. After brief training, five healthy subjects and five paralyzed patients with severe SCIs successfully completed three experiments. For the healthy subjects/patients with SCIs, the system achieved an average accuracy of 99.3%/97.3%, an average response time of 1.91 s/2.02 s per command and an average stop-response time of 1.30 s/1.36 s with a false operation rate of 0. Significance. The EOG-based HMI provides sufficiently precise control to integrate a wheelchair and a robotic arm into a single system that helps patients with SCIs accomplish a self-drinking task. (ChiCTR1800019764)


Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Patients who are paralyzed from the neck down by spinal cord injuries (SCIs), amyotrophic lateral sclerosis or other disorders lose, or partially lose, control of their limbs. For these patients, external assistance is necessary to accomplish certain daily tasks, such as moving from one place to another and grasping a bottle to drink. Usually, such assistance is provided by a nurse or other caregivers, which is time consuming and labor intensive. To help paralyzed patients take care of themselves, assistive devices, such as robotic arms and wheelchairs, are widely used to realize two basic and essential limb functions: (i) the walking function of the lower limbs and (ii) the grasping function of the upper limbs.

A common solution to compensate for the walking function of the lower limbs is to use a nonmanual wheelchair. Since most paralyzed patients still maintain normal brain function and eye movements, the electroencephalogram (EEG) and the electrooculogram (EOG) can be used as input signals of nonmanual human-machine interfaces (HMIs) to control a wheelchair [1–5]. EEG-based HMIs, known as brain–computer interfaces, generate control commands by decoding the user's intention from various EEG patterns [6–9]. Compared with EEG, EOG has a higher signal-to-noise ratio [10] but requires users to maintain normal eye movements, such as winking, blinking, gazing and frowning [4, 11, 12]. In [3], the four directions of eye movement (i.e. up, down, left and right) were translated into four wheelchair-control commands (for example, moving forward, stopping, turning left and turning right), respectively. In one of our prior works [13], subjects drove the wheelchair by blinking in sync with the button flashes on screen.

For the grasping function of the upper limbs, a robotic arm is commonly used to simulate arm movements and help paralyzed patients accomplish daily tasks [14–18]. In [15], the patient's motion intentions were decoded to control a robotic arm to drink coffee from a bottle. In [16], a noninvasive EEG-based HMI was proposed for robotic arm control. In [17], Witkowski et al fused EEG and EOG to build an enhanced HMI to control a hand exoskeleton. To the best of our knowledge, existing robotic arm systems require the base of the arm to be fixed on a stationary platform (for example, a table). To ensure a successful grasp, the target object needs to be within the fixed reachable area of the robotic arm, which is impractical for daily applications, for example, when the user is far away from the reachable area.

Although prior works have developed effective EEG-/EOG-based wheelchair systems and robotic arm systems, as far as we know, these two devices have not previously been integrated and controlled by a single EEG-/EOG-based HMI. The main challenge in combining the two devices is that doing so requires high-precision control (i.e. both positional and directional precision). To ensure a successful grasp, the user needs to effectively drive and accurately stop the wheelchair such that the randomly located object is within reach of the robotic arm, which requires decimeter-level positional precision and accurate direction control. Until now, it has not been reported that EEG-/EOG-based wheelchair systems can provide such precision.

In this study, we install an intelligent robotic arm on a wheelchair to help patients with SCIs. A novel high-precision EOG-based HMI is proposed which effectively translates two kinds of eye movements (i.e. blinking and eyebrow raising) into various commands. For the wheelchair, the positional precision can reach decimeter level and the minimal steering angle is $5^{\circ}$. For the intelligent robotic arm, shared control is implemented based on the EOG-based HMI, two cameras and the arm's own intelligence. Five patients with severe SCIs and five healthy subjects participated in the experiments. After brief training, all subjects were able to complete the experiments successfully. For healthy subjects/patients with SCIs, the system achieved an average accuracy of 99.3%/97.3% and an average response time of 1.91 s/2.02 s per command with a false operation rate of 0. The experimental results demonstrated that the proposed EOG-based HMI can provide sufficiently precise control to integrate a wheelchair and a robotic arm into a system which can help patients with SCIs accomplish a self-drinking task.

The remainder of this paper is organized as follows: section 2 describes the methodology, including signal acquisition, the system outline, the EOG-based HMI, wheelchair control, and shared control for the robotic arm; section 3 presents the experimental results; section 4 provides further discussion; and section 5 concludes the paper.

2. Methods

2.1. Signal acquisition

In this study, a self-designed EOG acquisition device was used to collect and amplify the EOG signals. The sampling rate and the overall gain of the device are 250 Hz and 1000, respectively. Three electrodes, corresponding to three channels ('EOG', 'REF', and 'DRL'), are attached to the skin, as shown in figure 1. Among the three channels, 'EOG' is the data channel that records the vertical EOG signal from the forehead; 'DRL' is the feedback channel through which the common-mode signal of the differential amplifier is sent back to the subject's body; and 'REF' is the reference channel. The impedances between the skin and the three electrodes are kept below 5 k$\Omega$.


Figure 1. Electrode placements. 'EOG': the data channel that is placed above the right eyebrow to record vertical EOG signals; 'DRL': the feedback channel that is located on the left mastoid; 'REF': the reference channel that is located on the right mastoid.


2.2. System outline

The main components of the proposed system include an EOG-based HMI, a wheelchair (UL8W, 0.67 m $\times$ 0.6 m, Pihsiang Machinery Co. Ltd), an intelligent robotic arm (JACO6 DOF-S, Kinova Robotics) and two cameras (Kinect v2, Microsoft). The Kinect v2 is a commercial sensing device that provides both color-image and depth information about surrounding objects. The placement of the components is shown in figure 2. The robotic arm was installed at the front-left side of the wheelchair. One of the cameras (camera A) was fixed near the robotic arm facing forward; the other camera (camera B) was installed on a steel column and oriented towards the user's face.


Figure 2. Overall view of the system, which consists of an EOG-based HMI, a wheelchair, a robotic arm, and two cameras.


The system workflow is illustrated in figure 3. The graphic user interface (GUI) of the EOG-based HMI is presented on the screen of a laptop with several flashing buttons, each of which corresponds to a specific command. The buttons flash one by one in a predefined sequence. To select a button, the user first performs an intended blink in response to a flash of the button. The algorithm analyzes the EOG signals, detects the intended blink, and preselects a target button. Then, visual feedback of the preselected button is presented on the GUI. If the feedback is correct, the user is required to intentionally raise his/her eyebrows to verify it. Otherwise, the user ignores the visual feedback and the GUI continues to flash. If the user misses a button flash, he/she can wait for the next flashing round. A command is executed only when the corresponding button has been preselected and verified. For the wheelchair control, the user generates commands, such as turning and stopping, through the EOG-based HMI. For the robotic arm, shared control is implemented. Two cameras (i.e. cameras A and B) are used to identify the positions of the objects and the user's mouth. After the user preselects and verifies a button that represents a target object through the EOG-based HMI, the intelligent robotic arm automatically plans the path to grasp the object and bring it to the user's mouth according to the positions of the object and the mouth identified by the two cameras. Other commands for controlling the robotic arm are also provided by the EOG-based HMI, such as resetting the robotic arm to its home position, backtracking it to put the object back, and stopping and restarting it to avoid emergencies.


Figure 3. System workflow. The wheelchair is completely controlled by the EOG-based HMI, and shared control is implemented for the intelligent robotic arm.

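For concreteness, the select-verify cycle of figure 3 can be sketched as a simple loop. This is an illustrative outline only, not the authors' implementation: all of the callables (flash, record_epoch, blink_ok, eyebrow_ok, execute) are hypothetical placeholders for the GUI, EOG acquisition, detectors and actuators.

```python
# Illustrative outline of the select-verify workflow in figure 3.
# All callables are hypothetical placeholders, not the authors' API.
def control_loop(buttons, flash, record_epoch, blink_ok, eyebrow_ok, execute):
    while True:
        for button in buttons:                  # buttons flash one by one
            flash(button)
            if blink_ok(record_epoch()):        # intended blink -> preselect
                flash(button, highlight=True)   # visual feedback on the GUI
                if eyebrow_ok(record_epoch()):  # eyebrow raise -> verify
                    execute(button)             # only now is the command issued
            # if no blink is detected, the next button simply flashes
```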

2.3. EOG-based HMI

A GUI is presented on the screen of a laptop, which consists of two levels: the wheelchair control panel (figure 4(a)) and the robotic arm control panel (figure 4(b)). Each control panel contains several flashing buttons, and each button corresponds to a specific command (see the details of the button-command mappings in sections 2.5 and 2.6). Initially, the wheelchair control panel is presented and the 'on/off' button flashes at 1 Hz while the other buttons remain unchanged, indicating that the GUI is off. The user can select this button to turn on the GUI. Once the GUI is turned on, the 14 buttons of the wheelchair control panel start to flash one by one in a predefined sequence. Each flash lasts 50 ms, and the interval between the onsets of two consecutive flashes is 80 ms. Thus, the period of a flashing round (in which each button flashes once) is 1.12 s (14 $\times$ 80 ms). For the robotic arm control panel, nine buttons are presented and flash one by one sequentially. The duration of a single flash is also 50 ms, and the interval between the onsets of two consecutive flashes is 120 ms. As a result, the period of a flashing round is 1.08 s (9 $\times$ 120 ms).


Figure 4. The GUI consists of two levels: the wheelchair control panel (a) and the robotic arm control panel (b). Each panel contains several buttons that correspond to various commands. Initially, the wheelchair control panel is presented. The user can switch to the robotic arm control panel by selecting the 'arm' button on the GUI (a) and switch back by selecting the 'wheelchair' button on the GUI (b).


2.4. EOG detection algorithm

The proposed EOG detection algorithm consists of three parts: (i) blink detection; (ii) preselection; and (iii) verification.

2.4.1. Blink detection.

At the end of each button flash, a 600 ms data epoch (starting from the onset of the flash) of the vertical EOG signal is recorded and analyzed. Since the sampling rate of the EOG acquisition device is 250 Hz (i.e. the sampling period is 4 ms), a data epoch contains 150 sampling points. For each button, the data epoch is first filtered by a bandpass filter (1–10 Hz), and then the first-order difference of the filtered epoch is calculated to obtain the differential epoch. To detect the blink, a multithreshold algorithm similar to the one proposed in [7] is implemented. Several waveform features are extracted from the filtered epoch and the differential epoch: the maximum (i.e. the peak) of the filtered epoch $a_{{\rm max}}$, the peak of the differential epoch $d_{{\rm max}}$, the minimum (the valley) of the differential epoch $d_{{\rm min}}$, the positions $t_p$ and $t_v$ of the peak and valley in the differential epoch, and the duration $t_d$ (as shown in figure 5(a)). The duration is defined as the interval between $t_p$ and $t_v$:

$t_d = t_v - t_p. \qquad (1)$

Figure 5. A typical differential shape of the EOG signals for blinking (a) and raising eyebrows (b).


The algorithm detects the intended blink by checking whether these waveform features satisfy specific threshold conditions. For a successful detection, the extracted features described above should satisfy the following inequalities:

$a_{{\rm max}} > a_{th1}, \quad d_{{\rm min}} < d_{th1}, \quad d_{{\rm max}} > d_{th2}, \quad t_{th1} \leqslant t_d \leqslant t_{th2} \qquad (2)$

where $a_{th1}$ is the amplitude threshold of the filtered epoch, $d_{th1}$ and $d_{th2}$ are the minimal and maximal amplitude thresholds of the differential epoch, respectively, and [$t_{th1}$, $t_{th2}$] is the range in which the duration $t_d$ should lie.
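A minimal sketch of this detector is given below, assuming the threshold conditions as written in equation (2); the threshold values themselves come from the per-user calibration described in section 2.4.4.

```python
# Sketch of the blink detector of section 2.4.1, using scipy for the
# 1-10 Hz bandpass filter. Threshold arguments follow equation (2).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate (Hz); a 600 ms epoch holds 150 samples

def extract_features(epoch):
    """Filter a 600 ms EOG epoch, differentiate it and extract the features."""
    b, a = butter(4, [1.0, 10.0], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, epoch)
    diff = np.diff(filtered)               # first-order difference
    t_p = int(np.argmax(diff))             # peak position (samples)
    t_v = int(np.argmin(diff))             # valley position (samples)
    return {
        "a_max": float(np.max(filtered)),  # peak of the filtered epoch
        "d_max": float(np.max(diff)),      # peak of the differential epoch
        "d_min": float(np.min(diff)),      # valley of the differential epoch
        "t_p": t_p / FS,                   # peak delay in seconds
        "t_d": (t_v - t_p) / FS,           # duration, equation (1)
    }

def is_intended_blink(f, a_th1, d_th1, d_th2, t_th1, t_th2):
    """Threshold conditions of equation (2)."""
    return (f["a_max"] > a_th1
            and f["d_min"] < d_th1 and f["d_max"] > d_th2
            and t_th1 <= f["t_d"] <= t_th2)
```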

2.4.2. Preselection.

Since the generation of unintended blinks is almost a random process, we further check if the detected blink was generated within a certain range of delay after the button flash. The delay check is based on the following formula:

$|t_p - T_p| \leqslant e \qquad (3)$

where $t_p$ is the peak position in the differential epoch, which represents the delay of the blink relative to the target button flash, $T_p$ is the desired value of $t_p$, and $e$ is the error range. In this study, $e$ is set to 30 ms.

If the detected blink is generated within the delay range after the button flash, the corresponding button is preselected and highlighted as visual feedback.
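The delay check amounts to a single comparison; a sketch (with all times in seconds and $e$ = 30 ms as above):

```python
# Delay check of equation (3): the differential peak must fall within
# +/- e of the expected delay T_p after the target button's flash onset.
def passes_delay_check(t_p, T_p, e=0.030):
    return abs(t_p - T_p) <= e  # all values in seconds
```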

2.4.3. Verification.

Sometimes an unintended blink may be recognized as an intended one and result in a false preselection. Thus, we add a verification process. Specifically, when a button is preselected, the algorithm starts a 600 ms time window for verification. The user is required to evaluate the selection upon receiving the visual feedback and to raise his/her eyebrows if it is correct. Only when the eyebrow raising is detected within the 600 ms time window is the preselected button verified. A typical differential shape of the EOG signal for raised eyebrows is illustrated in figure 5(b). As in section 2.4.1, the 600 ms EOG data epoch of the verification window is filtered and differentiated to obtain the filtered epoch and the differential epoch. Then, the same waveform features are extracted for detecting the eyebrow raising. A successful detection should satisfy the following inequalities:

$a_{th3} < a_{{\rm max}} < a_{th2}, \quad d_{th4} < d_{{\rm max}} < d_{th3}, \quad t_{th3} \leqslant t_d \leqslant t_{th4} \qquad (4)$

where $a_{th2}$ and $a_{th3}$ are the upper and lower amplitude thresholds of the filtered epoch, respectively, and $d_{th3}$ and $d_{th4}$ are the upper and lower amplitude thresholds of the differential epoch, respectively. In addition, [$t_{th3}$, $t_{th4}$] is the range in which the duration $t_d$ should lie.

We chose eyebrow raising as the verification method for two reasons: (i) it is rarely performed unintentionally and (ii) it results in a vertical EOG signal that can be recorded by the electrode placed on the forehead and distinguished from that produced by an eye-blink.
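Mirroring the blink detector sketched in section 2.4.1, a sketch of the verification test, assuming the band conditions of equation (4) as written above:

```python
# Verification test of equation (4): the eyebrow-raise features must fall
# inside the calibrated bands, which separates them from ordinary blinks.
def is_eyebrow_raise(f, a_th2, a_th3, d_th3, d_th4, t_th3, t_th4):
    return (a_th3 < f["a_max"] < a_th2       # amplitude band (lower/upper)
            and d_th4 < f["d_max"] < d_th3   # differential-peak band
            and t_th3 <= f["t_d"] <= t_th4)  # duration range
```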

2.4.4. Calibration.

Before the experiments, each user completed a calibration process consisting of two intended blinking blocks, one unintended blinking block and one eyebrow raising block. In an intended blinking block, the user performed five intended eye-blinks in sync with the button flashes (1 Hz) on screen. An intended block lasted 6–7 s, and users were instructed to avoid any unintended eye-blinks during this period. In the eyebrow raising block, the user raised his/her eyebrows ten times in sync with the button flashes. In the unintended blinking block, the user simply relaxed for 3 min and blinked only unintentionally. EOG features, such as $a_{{\rm max}}$, $d_{{\rm max}}$, $d_{{\rm min}}$, $t_p$ and $t_d$, were extracted from the recorded EOG signals of intended blinks, unintended blinks and eyebrow movements. For blinking, the amplitude threshold $a_{th1}$ was set as the mean of the average EOG amplitude of intended eye-blinks and that of unintended eye-blinks to differentiate them. The average values [$\overline{d}_{{\rm min}}$, $\overline{d}_{{\rm max}}$] and $\overline{t}_p$ of intended blinks were selected as the differential thresholds [$d_{th1}$, $d_{th2}$] and the desired delay value $T_p$, respectively. The duration thresholds [$t_{th1}$, $t_{th2}$] were calculated by multiplying the average value $\overline{t}_d$ by empirical factors (for example, $t_{th1} = 0.8 \times \overline{t}_d$ and $t_{th2} = 1.2 \times \overline{t}_d$). Similarly, the thresholds for eyebrow raising were calculated using the average values of the EOG features extracted from the eyebrow movements.
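The blink-related thresholds can be derived from the calibration features as follows. This is a sketch assuming lists of feature dicts (as returned by a feature extractor like the one sketched in section 2.4.1); the 0.8/1.2 factors are the empirical examples given in the text.

```python
import numpy as np

# Sketch of the blink-threshold calibration of section 2.4.4.
def calibrate_blink_thresholds(intended, unintended):
    """intended/unintended: lists of feature dicts from the calibration blocks."""
    a_int = np.mean([f["a_max"] for f in intended])
    a_unint = np.mean([f["a_max"] for f in unintended])
    t_d_mean = np.mean([f["t_d"] for f in intended])
    return {
        "a_th1": (a_int + a_unint) / 2,                    # separates intended from unintended
        "d_th1": np.mean([f["d_min"] for f in intended]),  # valley threshold
        "d_th2": np.mean([f["d_max"] for f in intended]),  # peak threshold
        "T_p": np.mean([f["t_p"] for f in intended]),      # desired delay
        "t_th1": 0.8 * t_d_mean,                           # empirical factors
        "t_th2": 1.2 * t_d_mean,
    }
```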

2.5. Wheelchair control

The wheelchair control requires sufficient commands, such as forward and backward motion, left and right turns, acceleration and deceleration, and stopping. In the EOG-based HMI, the wheelchair control panel (as shown in figure 4(a)) provides 14 buttons, each of which corresponds to a specific command. Among these buttons, the 'on/off' button is used to turn the GUI on and off. When the wheelchair is static, the user can select (i.e. preselect and verify) the 'speed 1' button to move the wheelchair forward at 0.2 m s−1, and the 'speed 2' button to increase the speed to 0.3 m s−1. If the wheelchair is moving at 0.3 m s−1 and the user selects 'speed 1', the speed is reduced to 0.2 m s−1. For safety, 'speed 2' is available only when the wheelchair is moving forward. For backward motion, the speed is fixed at 0.2 m s−1. Directional control is achieved by eight direction buttons ('L1'–'L4', 'R1'–'R4'). The four buttons 'L1'–'L4' correspond to turning left by $5^{\circ}$, $30^{\circ}$, $45^{\circ}$, and $90^{\circ}$, respectively; similarly, 'R1'–'R4' correspond to turning right by $5^{\circ}$, $30^{\circ}$, $45^{\circ}$, and $90^{\circ}$, respectively. The wheelchair can be turned while it is either moving or static. If the user selects the 'arm' button when the wheelchair is static, the GUI switches to the robotic arm control panel; the switch buttons (i.e. 'arm' and 'on/off') are unavailable while the wheelchair is moving. To achieve quick stopping, the stop command is executed as soon as the 'stop' button is preselected, without verification. A summary of the button-command mapping is sketched below.
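The entries below restate section 2.5 as a lookup table (angles in degrees, speeds in m/s); the command names are of our own choosing, not the system's internal identifiers.

```python
# Button-to-command table for the wheelchair panel (section 2.5).
# Command names are illustrative; 'stop' skips verification for speed.
WHEELCHAIR_COMMANDS = {
    "on/off":  ("toggle_gui", None),      # unavailable while moving
    "speed 1": ("forward", 0.2),          # also decelerates from 0.3 m/s
    "speed 2": ("forward", 0.3),          # only while moving forward
    "backward": ("backward", 0.2),        # backward speed is fixed
    "L1": ("turn_left", 5),   "L2": ("turn_left", 30),
    "L3": ("turn_left", 45),  "L4": ("turn_left", 90),
    "R1": ("turn_right", 5),  "R2": ("turn_right", 30),
    "R3": ("turn_right", 45), "R4": ("turn_right", 90),
    "stop": ("stop", None),               # executed on preselection alone
    "arm":  ("switch_panel", None),       # only while static
}
```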

2.6. Shared control for the robotic arm

Shared control is implemented for the robotic arm, based on the EOG-based HMI, two cameras (i.e. cameras A and B) and the arm's own intelligence. Camera A is used to identify the position of the objects in front of the wheelchair, and camera B is used to identify the position of the user's mouth. When the user stops the wheelchair and preselects and verifies the 'arm' button, the GUI switches to the robotic arm control panel as shown in figure 4(b), and the nine buttons flash one by one in a predefined sequence.

To ensure a successful grasp, the target object should be located in an effective zone of the robotic arm, which is determined by the arm length and camera A's viewpoint, as shown in figure 6. The defined zone is a rectangular space (length: 0.8 m; width: 0.4 m; height: 0.6 m) whose coordinate axes are parallel to those of the robotic arm. All parameters of the effective zone are determined according to the motion range of the arm and the camera's viewpoint. When the user switches the GUI to the robotic arm control panel, camera A starts to calculate the 3D coordinates of the objects ahead using the region-growing algorithm [19]. Only the objects within the effective zone are retained and assigned to the three object buttons (i.e. 'object 1', 'object 2', and 'object 3'). The result of the assignment is shown on the monitor in real time as feedback. The mouth position is identified using camera B's face-detection function.


Figure 6. A rectangular effective zone is defined in the robotic arm space (length: 0.8 m; width: 0.4 m; height: 0.6 m). The closest distance between the effective zone and the vertical plane of camera A is 0.4 m.

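A containment test for the effective zone might look as follows, assuming coordinates expressed in the arm's frame with $x$ pointing ahead of camera A (all in metres); the axis conventions are our assumption, not stated in the paper.

```python
# Containment test for the effective zone of figure 6 (metres).
# Axis convention (assumed): x ahead of camera A, y lateral, z vertical.
ZONE_X = (0.4, 0.4 + 0.8)   # starts 0.4 m ahead of camera A, 0.8 m long
ZONE_Y = (-0.2, 0.2)        # 0.4 m wide, centred on the arm axis (assumed)
ZONE_Z = (0.0, 0.6)         # 0.6 m high

def in_effective_zone(x, y, z):
    return (ZONE_X[0] <= x <= ZONE_X[1]
            and ZONE_Y[0] <= y <= ZONE_Y[1]
            and ZONE_Z[0] <= z <= ZONE_Z[1])
```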

In this study, the system is used to assist patients with SCIs in accomplishing a self-drinking task. Thus, the target objects to be grasped are bottles, each containing some water and a straw. To grasp a target bottle for drinking, the user first checks whether the bottle has been detected and assigned to an object button. If so, the user selects (i.e. preselects and verifies) the corresponding button through the EOG-based HMI. Then, cameras A and B report the positions of the bottle and the user's mouth, respectively, to the robotic arm. Based on these positions, the arm plans a path, grasps the bottle and brings it to the user's mouth. After the bottle arrives near the user's mouth, he/she can select the 'rotate' button on the GUI to rotate the wrist joint of the robotic arm ($15^{\circ}$ per selection) so as to bite the straw. The 'back' button is used to make the robotic arm put the bottle back, and the 'home' button is used to reset the arm to its home position. During the arm movements, the user can stop and restart it by selecting the 'stop' and 'continue' buttons, respectively. The 'wheelchair' button switches the GUI back to the wheelchair control panel when the task is completed.

2.7. Experimental procedure

Five healthy subjects (one female and four males, numbered H1–H5, aged between 22 and 29 years) and five male paralyzed patients (numbered P1–P5, aged between 17 and 32 years) with severe SCIs participated in three experiments to demonstrate the effectiveness of the proposed system. Individual information on the patients is presented in table 1. All subjects maintained normal eye movements. The experiments were approved by the Ethics Committee of Sichuan Provincial Rehabilitation Hospital, and the study was registered in the Chinese Clinical Trial Registry (registration number ChiCTR1800019764). Written informed consent for the experiment and the publication of individual information was obtained from the patients and their legal guardians. The performance indices used in this study are listed below:

  • (i) Accuracy—the probability of selecting a button correctly;
  • (ii) False operation rate (FOR)—the number of false output commands per minute when the subject has no intention of selecting any buttons;
  • (iii) Response time (RT)—the time used to generate a command;
  • (iv) Information transfer rate (ITR)—bits of information transferred per minute;
  • (v) Operations—the number of commands generated to accomplish the mobile self-drinking task;
  • (vi) Collisions—the number of collisions during the mobile self-drinking task;
  • (vii) Unsuccessful grasps—the number of times that the robotic arm tries but fails to grasp the target object.

Table 1. Individual information on patients with SCIs.

Patient Gender Age Pathogenesis Time since onset (months) ASIA
P1 Male 26 Falling injury 54 C5-B
P2 Male 17 Swimming injury 2 C5-C
P3 Male 32 Falling injury 8 C7-B
P4 Male 23 Myelitis 40 C5-C
P5 Male 20 Car accident 2 C7-A

Note: ASIA: American Spinal Injury Association [20].

2.7.1. Experiment I.

All ten subjects participated in this experiment to evaluate the overall performance of the EOG-based HMI without involving the wheelchair or the robotic arm. The experiment contained three blocks, each of which consisted of 30 trials. In each trial, the system first presented a cue regarding a random target button on the wheelchair control panel for 4 s. Then, the 14 buttons flashed one by one in a predefined sequence. The subject was asked to preselect and verify the button associated with the cue as soon as possible. Once a button had been selected and verified, the GUI stopped flashing and highlighted the selected button for 300 ms. No command was actually executed in this experiment. Both the cue and the selected button were recorded for the accuracy calculation. Then, the system started a new trial. When a block was finished, there was a 1 min break. After the three blocks were completed, a 30 min idle period followed, during which the subject rested with no intention of selecting any buttons. Any verified selection during the idle period was recorded as a false output and used to calculate the FOR. Moreover, an important index for evaluating nonmanual HMIs is the ITR [21], which describes the efficiency of the system in transferring effective information. In this study, the ITR was calculated using the following formula:

${\rm ITR} = \left(\log_{2}N + P\log_{2}P + (1-P)\log_{2}\frac{1-P}{N-1}\right) \times \frac{60}{T} \qquad (5)$

where $N$ is the number of commands, $P$ is the average accuracy and $T$ is the average RT (in seconds).
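Equation (5) is the standard Wolpaw ITR; a direct implementation:

```python
import math

# Wolpaw ITR of equation (5): N commands, average accuracy P,
# average response time T in seconds; returns bits per minute.
def itr_bits_per_min(N, P, T):
    if P >= 1.0:            # perfect accuracy: only the log2(N) term remains
        bits = math.log2(N)
    else:
        bits = (math.log2(N) + P * math.log2(P)
                + (1 - P) * math.log2((1 - P) / (N - 1)))
    return bits * 60.0 / T

# e.g. N = 14, P = 0.993, T = 1.91 s gives about 117 bits/min, of the same
# order as the per-subject healthy-subject values reported in table 2.
```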

2.7.2. Experiment II.

This experiment was designed to simulate a common scene in daily life: moving from a random position to a table randomly located in the room, selecting a target bottle on the table and grasping it to drink water through a straw. As shown in figure 7(a), the experimental field was built in a rectangular space (9 m $\times$ 5 m) to simulate an indoor environment. Seven obstacles and a bench were randomly placed in the field. An ordinary table (0.35 m high) with three different bottles on it was located at one side of the field; each bottle contained some water and a straw. Before the experiment, the subjects were given a preparation period to become familiar with the system and the field. All subjects participated in this experiment, and each of them completed five trials. In each trial, using the proposed HMI, the subject was required to drive the wheelchair from a random position in the starting zone, bypass the obstacles, stop right in front of the table, manipulate the robotic arm to grasp a target bottle for drinking, and put the bottle back after drinking.


Figure 7. (a) General view of the experimental field (9 m $\times$ 5 m). Seven obstacles (red circles) and a bench (red rectangle) were placed randomly between the starting zone (blue area) and the table (yellow area). In an experimental trial, the subject drove the wheelchair from a random point in the starting zone to reach the table and grasp a bottle to drink. (b) Actual scene of the experimental field.


2.7.3. Experiment III.

All ten subjects participated in this experiment to evaluate the classification accuracy among the three classes, i.e. blinking, eyebrow raising and the idle state. The experiment contained six blocks, each of which consisted of 20 trials. At the beginning of a trial, the system randomly presented one of the three cues ('blink', 'eyebrows' and 'idle'). The subject was asked to perform the corresponding eye movement after a beeping sound. The system analyzed the recorded EOG signals and presented the result as feedback. When a block was completed, there was a 1 min break. Any inconsistency between a cue and the subsequent feedback was counted as a mistake and used to calculate the classification accuracy.

3. Results

The results of experiment I for the healthy subjects and the patients with SCIs are shown in tables 2 and 3, respectively. The accuracy represents the probability of preselecting a target button and further verifying it correctly. If the subject preselected a wrong button, he/she could simply not verify the wrong result and wait for the next flash of the target button; this situation was not counted as an error, but it extended the RT. For healthy subjects, the system achieved an accuracy of 99.3 $\pm$ 1.0%, an RT of 1.91 $\pm$ 0.07 s per command, and an ITR of 119.2 $\pm$ 4.2 bits min−1. For patients with SCIs, the average accuracy, RT and ITR were 97.3 $\pm$ 2.2%, 2.02 $\pm$ 0.15 s, and 110.6 $\pm$ 4.1 bits min−1, respectively. Furthermore, since the stop command is executed without the verification process, the RT for stopping was reduced to 1.30 $\pm$ 0.03 s and 1.36 $\pm$ 0.04 s for healthy subjects and patients with SCIs, respectively. For all subjects, the FOR during the idle period was 0. The false positive probability of preselections (FPP-pre) represents the probability that an unintended blink was recognized as an intended one (without the verification process) during the idle state; it was 7.7% for healthy subjects and 17.3% for patients with SCIs.

Table 2. Results for the healthy subjects in experiment I.

Subjects Accuracy (%) RT (s) Stop RT (s) FOR (events min−1) ITR (bits min−1) FPP-pre (%)
H1 100 1.95 1.31 0 117 8.7
H2 98.9 1.91 1.33 0 118 9.7
H3 97.7 1.98 1.30 0 115 2.5
H4 100 1.81 1.25 0 126 10.9
H5 100 1.89 1.30 0 120 6.5
Mean $\pm$ SD 99.3 $\pm$ 1.0 1.91 $\pm$ 0.07 1.30 $\pm$ 0.03 0 119.2 $\pm$ 4.2 7.7 $\pm$ 3.3

Table 3. Results for the patients with SCIs in experiment I.

Subjects Accuracy (%) RT (s) Stop RT (s) FOR (events min−1) ITR (bits min−1) FPP-pre (%)
P1 94.4 1.83 1.33 0 115 25.5
P2 100 2.03 1.41 0 112 17.7
P3 96.7 2.08 1.38 0 112 21.5
P4 98.9 2.04 1.33 0 110 12.2
P5 96.7 2.13 1.34 0 104  9.5
Mean $\pm$ SD 97.3 $\pm$ 2.2 2.02 $\pm$ 0.15 1.36 $\pm$ 0.04 0 110.6 $\pm$ 4.1 17.3 $\pm$ 6.6

All subjects completed experiment II without any collisions. The average time and number of operations were calculated over five trials for each subject. According to the results, to complete one trial, healthy subjects spent approximately 247 s and performed 26 operations on average, while patients with SCIs spent approximately 315 s and performed 38 operations. The number of unsuccessful grasps was 0 for all subjects, and no emergency stop was recorded during the test. To measure the workload, each patient with SCIs completed the Hart and Staveland NASA task load index (TLX) questionnaire after the task. The NASA TLX measures workload in six aspects: mental demand, physical demand, temporal demand, performance, effort, and frustration level [22]. According to the results, the average ratings of all six aspects remained below 25.

The results of experiment III are shown in tables 4 and 5. The classification accuracies between each pair of the three classes (blinking, raising eyebrows and the idle state) were calculated. For example, 'blink–eyebrows' denotes the classification accuracy between blinking and raising eyebrows, calculated by dividing the number of correct 'blinking' trials plus the number of correct 'eyebrow raising' trials by the total number of 'blinking' and 'eyebrow raising' trials. According to the results, the algorithm could properly distinguish between any two of the three classes.

Table 4. Results for the healthy subjects in experiment III.

Subjects Blink–eyebrows (%) Blink–idle (%) Eyebrows–idle (%)
H1 84.8 95.0 89.8
H2 95.1 97.6 97.5
H3 95.6 100 95.6
H4 100 100 100
H5 92.9 97.3 95.6
Mean $\pm$ SD 93.7 $\pm$ 5.6 97.9 $\pm$ 2.1 95.7 $\pm$ 3.7

Table 5. Results for the patients with SCIs in experiment III.

Subjects Blink–eyebrows (%) Blink–idle (%) Eyebrows–idle (%)
P1 85.0 85.0 100
P2 90.6 95.0 95.6
P3 75.0 87.5 87.5
P4 92.6 97.5 95.1
P5 87.3 95.0 92.3
Mean $\pm$ SD 86.1 $\pm$ 6.8 92.0 $\pm$ 5.4 94.1 $\pm$ 4.6

4. Discussion

For severely paralyzed patients, it is difficult to perform some daily tasks, such as moving from a random place to reach a table and to select and grasp a target object for self-feeding and self-drinking. In this study, we combined a wheelchair and an intelligent robotic arm with the aim of helping patients with severe SCIs to accomplish a self-drinking task.

The main challenge in combining a robotic arm with a wheelchair is that it requires high-precision control. To successfully grasp the bottle, the user needs to effectively drive and accurately stop the wheelchair such that the randomly located bottle is within the effective zone of camera A (a cuboid starting 0.4 m ahead of camera A; length: 0.8 m; width: 0.4 m; height: 0.6 m), as shown in figure 6. This task requires a positional precision of 0.4 m and a directional precision of $30^{\circ}$. To satisfy these demands, the system should have a short RT (especially for stopping), a low FOR and a high selection accuracy. As far as we know, it has not been previously reported that EEG-/EOG-based wheelchair systems can provide such precision control. For example, in [3], unintended eye movements were not distinguished from intended ones, which resulted in a high FOR and affected the selection accuracy; in [4], the system could only turn left/right by $90^{\circ}$; in [5], it took nearly 6 s to stop the wheelchair; and in our previous work [13], the RT to issue a command was 3.7 s (wheelchair speed: 0.2 m s−1). In this study, the proposed system achieves a positional precision of 0.27 m (wheelchair speed: 0.2 m s−1; stop RT: 1.36 s), which is much smaller than the required 0.4 m. The minimal steering angle of the proposed system is $5^{\circ}$, which is also sufficient to satisfy the required directional precision. Moreover, the FOR of the proposed system is nearly 0. According to the experimental results, all subjects successfully completed the self-drinking task, which demonstrates that the proposed EOG-based HMI provides sufficiently precise control to combine the wheelchair and the robotic arm.

Another challenge is to eliminate false positive commands caused by unintended blinks. Compared with intended ones, unintended blinks usually have smaller EOG amplitudes due to weaker muscle activity. In this study, false positive commands caused by unintended blinks were eliminated in two steps: (i) in the online experiments, the EOG algorithm first checked whether the data epoch contained a blink; if so, it calculated the EOG amplitude of the detected blink, and only when the amplitude exceeded the threshold $a_{th1}$ was the blink recognized as an intended one. According to the results of experiment I, for healthy subjects/patients with SCIs, the probability that an unintended blink was recognized as an intended one was 7.7%/17.3%. (ii) If an unintended blink was nevertheless recognized as an intended one, the verification process prevented it from producing a false positive command: only when the user raised his/her eyebrows to verify the preselection was the corresponding command triggered.

FOR is an important index for evaluating an EOG-based HMI, defined as the number of false output commands per minute during an idle period in which the user has no control intention. In this study, if an unintended blink was recognized as an intended one and happened to fall within the delay range after the 'stop' button flash, it would result in a false positive selection of 'stop'. The probability of such false positive stops was reduced in two ways: (i) the EOG algorithm distinguished unintended blinks from intended ones using the amplitude threshold $a_{th1}$; for healthy subjects/patients with SCIs, the average FPP-pre was 7.7%/17.3%. (ii) The probability that an unintended blink happened to fall within the delay range after the 'stop' button flash was about 7% (1/14). Thus, the overall probability of a false stop caused by unintended blinks was very small. During the 30 min idle test, no false positive selection of 'stop' was recorded.

We further compared the proposed EOG-based HMI with several state-of-the-art ones, as shown in table 6. Compared with these works, the proposed HMI provides more commands (14) with higher accuracy (97.3 $\pm$ 2.2%) and a 0 FOR. The RT of the proposed HMI (approximately 2 s per command) is shorter than in most of the listed works, but longer than that in [3]; the reason is that the authors of [3] did not implement a verification process to distinguish between intended and unintended eye movements, which might result in a high FOR. We also compared the proposed HMI with that reported in our previous paper [13]. In [13], the subjects usually needed to perform three to four blinks in sync with the button flashes to select a button, and the decision-making process occurred at the end of a flashing round; the average RT was around 3.7 s, which could not meet the requirement of positional precision (0.4 m) at the wheelchair speed of approximately 0.2 m s−1. In this study, the button preselection and verification require one blink and one eyebrow movement, occurring immediately after each button flash. Therefore, the average RT was reduced to 2 s for all button selections, the stop RT was 1.36 s, and the minimal steering angle was $5^{\circ}$. This performance meets the precision requirement for combining a wheelchair and a robotic arm, as demonstrated by the experimental results.

Table 6. Comparison with other EOG-based HMIs.

EOG-based HMIs Accuracy (%) FOR (events min−1) RT (s) No. of commands
Ma et al [4] 86.2 3.15 3.1 7
Barea et al [3] / High 1 4
Huang et al [13] 91.7 0 3.7 13
Rui et al [23] 93.6 0.1 8.1 12
The proposed HMI 97.3 0 2 14

Safety is also a major concern for practical applications. In case of an emergency in which the user has no time to react, two ultrasonic range finders (HC-SR04) are installed on the front and back of the wheelchair, and the wheelchair is stopped as soon as either returned distance falls below 0.2 m.
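The safety logic reduces to a distance comparison; a sketch in which the range-finder reads and the stop command are hypothetical callables, not the authors' driver API:

```python
# Emergency-stop sketch for the two HC-SR04 range finders (section 4).
# read_front/read_back return distances in metres; stop() halts the wheelchair.
SAFE_DISTANCE_M = 0.2

def safety_check(read_front, read_back, stop):
    if min(read_front(), read_back()) < SAFE_DISTANCE_M:
        stop()  # triggered regardless of the current HMI command
```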

5. Conclusion

In this paper, we combine a wheelchair and an intelligent robotic arm and control both through a novel EOG-based HMI to help patients with severe SCIs accomplish a self-drinking task. The wheelchair is controlled purely by the EOG-based HMI. For the robotic arm, shared control is implemented based on the EOG-based HMI, two cameras and the arm's own intelligence. Five healthy subjects and five patients with SCIs successfully completed the self-drinking task, and the workload of operating the system remained at an acceptable level. The experimental results demonstrate that the proposed EOG-based HMI provides sufficiently precise control to integrate a wheelchair and a robotic arm, which could help patients with SCIs in daily life. In future work, we will expand the application range of the proposed system and test it on more patients.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 61633010, in part by the National Natural Science Foundation of China under Grant 61703101, and in part by the Guangdong Natural Science Foundation under Grant 2014A030312005. The authors have confirmed that any identifiable participants in this study have given their consent for publication.
