
2022, 4(2): 89–114. Published: 2022-04-20. DOI: 10.1016/j.vrih.2021.10.002

Abstract

A brain-computer interface (BCI) bypasses the peripheral nervous system and enables direct communication with surrounding devices. Navigation technology using BCI has developed from exploring prototype paradigms in virtual environments (VEs) to accurately executing the locomotion intention of the operator through a powered wheelchair or mobile robot in a real environment. This paper summarizes the BCI navigation applications of the past 20 years in both real and virtual environments. Horizontal comparisons are made between the various paradigms applied to BCI and their particular signal-processing methods. The development trend of navigation applications in VEs is reviewed in light of the shift in control mode from synchronous to asynchronous. The contrast between high-level and low-level commands serves as the main thread for reviewing the two major applications of BCI navigation in real environments: mobile robots and unmanned aerial vehicles (UAVs). Finally, applications of BCI navigation to scenarios outside the laboratory; research challenges, including human factors in navigation-application interaction design; and the feasibility of hybrid BCI for navigation are discussed in detail.


1 Introduction
The brain-computer interface (BCI), an emerging technology, directly converts human intentions into control instructions without involving the peripheral nervous system, thereby effectively improving patients' lives by providing interaction tools that their impaired motor function would otherwise deny them.
Patients suffering from diseases such as amyotrophic lateral sclerosis (ALS), cerebral palsy, muscular dystrophy, brainstem stroke, and multiple sclerosis cannot use standard controls to drive assistive vehicles such as wheelchairs. BCI allows these patients to interact with assistive devices in an effective and self-paced manner. Starting from principle prototypes studied in virtual environments (VEs), different BCI paradigms have been developed to help these patients regain the ability to move freely with the support of systems offering varying degrees of automation. These BCI-based navigation applications, which combine multiple BCI paradigms with hierarchical command systems, offer distinct functions and suit particular scenarios according to the characteristics of the BCI paradigm used.
There are two types of BCI: invasive (ECoG, LFP) and noninvasive (EEG, MEG, fNIRS, fMRI, PET). The possible risk of infection from clinical surgery makes invasive BCIs, such as electrocorticography (ECoG) and local field potential (LFP) recording, infeasible for signal acquisition to control actuators outside the laboratory. Meanwhile, constrained by the requirement of relatively low cost and fast processing speed for the embedded devices of the actuator, magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) are unsuitable for controlling electric wheelchairs or robots, and the low temporal resolution of functional near-infrared spectroscopy (fNIRS) prevents a robot from performing actions that meet the task conditions. Consequently, this progress report only discusses electroencephalography-based BCI (EEG-BCI) for navigation in virtual and real environments.
We collected over one hundred articles from the Web of Science database using the keywords "brain-computer interface," "BCI," or "BMI"; "virtual reality" or "VR"; and "navigation." The contributions of this review are twofold. First, we present an in-depth review of BCI navigation systems with respect to signal acquisition devices, feature extraction and classification methods, and control modes, providing comprehensive material for BCI navigation system designers. Second, we discuss the current trends and challenges of BCI navigation systems, namely the use of hybrid BCI and the human factors affecting system configuration, thus paving the way for more intelligent and human-centered brain-controlled actuators.
The remainder of this paper is organized as follows: Section 2 briefly introduces the paradigm, acquisition equipment, feature extraction and classification method, user interface, and control mode of the BCI system used in navigation applications. Section 3 introduces three typical applications of BCI navigation: navigation in VEs, navigation of brain-actuated robots, and navigation of unmanned aerial vehicles (UAVs). Section 4 analyzes the development trend of the existing BCI navigation system and summarizes existing problems. Finally, Section 5 summarizes the study.
2 Brain-computer interface system for navigation
Figure 1 shows a block diagram of a navigation application based on a BCI. The signal-processing unit receives the user's brain signal collected by the acquisition device and outputs a control signal. The actuator in the control unit receives the control signal and, with the help of the autonomous navigation system, executes the corresponding command, whose outcome is returned to the user interface through the feedback unit to form closed-loop control. The paradigm specifies the cognitive tasks used in interacting with the BCI. This progress report briefly reviews the acquisition equipment, experimental protocol, and signal-processing method for each paradigm used in navigation applications.
2.1 Brain signal acquisition and processing
2.1.1 Acquisition devices
Because scalp EEG signals are weak and easily contaminated by noise, a conductive paste must be applied and a professional-grade amplifier adopted to ensure signal quality. Meanwhile, some applications use commercial-grade EEG acquisition equipment, such as the Emotiv EPOC. Compared with wet electrodes, dry electrodes yield a relatively low signal-to-noise ratio (SNR) owing to their high input impedance[1]. Such equipment is inexpensive; however, it is only suitable for paradigms that tolerate lower signal quality and for simple classification algorithms, such as those based on the steady-state visual evoked potential (SSVEP). According to Refs. [2] and [3], SSVEP BCI systems need only one or two channels of EEG signals for feature extraction and intention decoding (Table 1).
Table 1 Different acquisition devices used in BCI navigation applications

Acquisition device | Typical products
Wet-electrode EEG cap | BioSemi EEG cap[4], g.tec[5-14], Biopac MP150[15], ADInstruments[16], Compumedics[17-19], actiCHamp[20,21], BIOPAC[22], g.MOBIlab+[23]
Dry-electrode EEG device | Emotiv EPOC[24-27]
2.1.2 Feature extraction and classification methods
The sensorimotor rhythm (SMR)-based paradigm usually employs frequency-band energy in the frequency domain as the distinguishing feature[4,16]. The extraction of sensorimotor features mainly consists of two parts, band-pass filtering and spatial filtering, which together search for a feature space that replaces the sensor-domain data while retaining sufficient discriminative information. Typical spatial filtering methods include knowledge-driven methods based on neurophysiology[28,29] and data-driven methods[30]. To increase the resolution of the features, feature selection methods[4] have also been developed. The common spatial pattern (CSP) is a feature extraction algorithm used in the motor imagery paradigm that maximizes the difference in variance between the spatially filtered features of the two signal classes; it is widely used in navigation applications[6,31-33].
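To make the CSP computation concrete, the following minimal sketch derives the filters from two classes of band-passed trials and extracts log-variance features. It is an illustration in Python/NumPy rather than the implementation used in the cited works; the array shapes, trace normalization, and number of filter pairs are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Common spatial pattern filters for two-class, band-passed EEG.

    trials_a, trials_b: arrays of shape (n_trials, n_channels, n_samples).
    Returns a (2 * n_pairs, n_channels) spatial filter matrix.
    """
    def mean_cov(trials):
        # Trace-normalized spatial covariance, averaged over trials
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: ca w = lambda (ca + cb) w
    eigvals, eigvecs = eigh(ca, ca + cb)
    order = np.argsort(eigvals)
    # Filters at both ends of the spectrum maximize the variance of one
    # class while minimizing the variance of the other
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T

def csp_features(trials, filters):
    """Log-variance features after spatial filtering, one row per trial."""
    feats = []
    for t in trials:
        z = filters @ t                        # (2 * n_pairs, n_samples)
        var = z.var(axis=1)
        feats.append(np.log(var / var.sum()))  # normalized log-variance
    return np.array(feats)
```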
Because the features of the P300 signal exist mainly in the time domain, the P300-based paradigm usually feeds the raw signal directly into feature extraction. Like the SMR-based paradigm, the SSVEP-based paradigm uses frequency-domain features. The commonly used feature extraction algorithm is canonical correlation analysis (CCA)[18], in which the reference-signal frequency with the largest correlation coefficient with the current signal is selected as the classification output. Simple linear classifiers include linear discriminant analysis (LDA)[6,21,34] and the support vector machine (SVM)[15,31,35], whereas complex nonlinear classifiers include neural networks[16].
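The following sketch illustrates this CCA-based frequency detection for SSVEP: each candidate stimulus frequency is represented by sine/cosine reference signals (plus harmonics), and the frequency with the highest canonical correlation is chosen. The sampling rate, segment length, and candidate frequency set in the usage comment are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference signals at the stimulus frequency and harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.array(refs).T  # shape: (n_samples, 2 * n_harmonics)

def classify_ssvep(eeg, fs, stim_freqs):
    """Pick the stimulus frequency whose references correlate most with the EEG.

    eeg: (n_samples, n_channels) band-passed segment.
    """
    scores = []
    for f in stim_freqs:
        refs = ssvep_reference(f, fs, eeg.shape[0])
        u, v = CCA(n_components=1).fit_transform(eeg, refs)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])  # first canonical correlation
    return stim_freqs[int(np.argmax(scores))]

# Hypothetical usage: a 2 s, 4-channel segment sampled at 250 Hz,
# with flickers at 8, 10, 12, and 15 Hz:
# target = classify_ssvep(segment, fs=250, stim_freqs=[8, 10, 12, 15])
```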
In summary, the features of P300 are mainly represented in the time domain, whereas those of SSVEP exist mainly in the frequency domain, where the classification algorithm distinguishes signal categories by computing the correlation between the EEG and reference signals at specific frequencies.
Early applications typically used simple classifiers, which can be divided into linear and nonlinear classifiers according to their decision boundaries. The former include LDA and SVM, whereas the latter include neural networks, hidden Markov models, and Bayesian quadratic classifiers.
A major challenge for EEG classifiers is generalizing over multiple subjects. In the past decade, numerous algorithms have been developed to address intersubject variation. Transfer learning mitigates the covariate-shift problem by transforming the features of the target and source domains into an invariant feature space. Regularization restricts the complexity of the classifier to avoid the overfitting caused by minimizing only the empirical risk. An adaptive classifier[36] updates its parameters in real time using data from the target subject, addressing intersession and intersubject variation simultaneously and performing significantly better than static classifiers[37]. Meanwhile, end-to-end matrix classifiers[38] and deep learning methods[39-41] are also widely used. Matrix classifiers use Riemannian geometry to classify the stream of the signal's covariance matrices and achieve state-of-the-art performance on multiple tasks. Convolutional neural networks are likewise applied across paradigms, although their performance remains suboptimal owing to the lack of sufficient data[37].
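As an illustration of the Riemannian approach, the sketch below classifies trial covariance matrices by their distance to per-class mean matrices (a minimum-distance-to-mean scheme). For brevity the class means here are arithmetic; pipelines such as Ref. [38] use the iterative geometric mean, so this is an assumption-laden simplification rather than that system's code.

```python
import numpy as np
from scipy.linalg import inv, logm, sqrtm

def airm_distance(a, b):
    """Affine-invariant Riemannian distance between SPD covariance matrices."""
    s = inv(sqrtm(a))
    return np.linalg.norm(logm(s @ b @ s).real, "fro")

class MinimumDistanceToMean:
    """Nearest-class-mean classifier on trial covariance matrices.

    The class means are arithmetic here for simplicity; Riemannian pipelines
    replace them with the (iteratively computed) geometric mean.
    """
    def fit(self, covs, labels):
        self.classes_ = np.unique(labels)
        self.means_ = {c: np.mean([cv for cv, l in zip(covs, labels) if l == c], axis=0)
                       for c in self.classes_}
        return self

    def predict(self, covs):
        return np.array([min(self.classes_, key=lambda c: airm_distance(cv, self.means_[c]))
                         for cv in covs])
```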
2.2 Paradigms of BCI
2.2.1 Sensorimotor rhythm
Imagined movement of specific body parts modulates the amplitude of SMRs. The SMR paradigm built on this phenomenon enables subjects to output one of several predefined commands (usually four or fewer) at a self-controlled pace. The paradigm is usually divided into a training phase and an online phase. In the training phase, subjects perform a motor imagery task for a period following visual or auditory cues; the task usually involves only a certain body part without a specific action. A decoder is trained on the collected training data of the subject. In the online phase, the decoder determines, from a segment of the EEG signal, the type of motor imagery task currently being performed.
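This two-phase protocol maps naturally onto a fit/predict split. The sketch below pairs the CSP features shown earlier with an LDA decoder; the synthetic data stand in for real cued trials, and the stream generator is a hypothetical name for whatever feeds segment features online.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Training phase: cued trials with known labels (synthetic stand-ins here;
# in practice these are log-variance CSP features of band-passed EEG)
X_train = rng.normal(size=(40, 6))      # 40 trials, 6 CSP features
y_train = rng.integers(0, 2, size=40)   # 0 = left hand, 1 = right hand

decoder = LinearDiscriminantAnalysis().fit(X_train, y_train)

# Online phase: decode each incoming feature vector at the user's own pace
def online_feature_stream():
    """Hypothetical generator yielding one feature vector per EEG segment."""
    for _ in range(5):
        yield rng.normal(size=6)

for features in online_feature_stream():
    command = decoder.predict(features[None, :])[0]
    print("decoded command:", command)   # would be sent to the actuator
```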
There are two advantages to using an SMR-based BCI to control the movement of virtual characters or physical machines. First, the interaction is intuitive. Human movement in a real environment is autonomously controlled, and paradigms based on stimulus-evoked responses cannot achieve truly self-paced control because of their time-locked nature: users must respond to stimuli repeatedly whenever they need to maintain motion control, and stimulus-evoked paradigms lack a real-time control mechanism where rapid responses are needed. An SMR-based paradigm with a robust signal-processing method can compensate for these shortcomings. Moreover, because moving through an environment always involves moving the limbs, using imagined movement is intuitive and easy for the user to remember.
Second, the SMR paradigm leaves the user's visual channel free. Although the P300 and SSVEP paradigms usually achieve higher decoding accuracy[42,43], they often require the user to stare at the interface screen throughout command input, which easily causes visual fatigue and limits the user's ability to multitask while navigating.
A major challenge plaguing the SMR paradigm is suboptimal signal decoding, which limits the number of controllable dimensions. Owing to the poor spatial resolution and extremely low SNR of EEG, even the most advanced signal-processing algorithms can output fewer than five control signals with only moderate accuracy. Because a higher-SNR signal cannot currently be obtained with noninvasive equipment, more sophisticated and robust signal-analysis methods must be developed to improve the accuracy of SMR classification. Regarding the limited control dimensions, cursor movement and selection can effectively expand the number of control commands, at the cost of introducing additional intermediate steps.
Different subjects produce individual EEG features for the same motor imagery tasks, leading to a serious intersubject-variability problem[42]. Researchers have recently proposed various methods to address the resulting covariate shift. Transfer learning[44] constrains the features of all subjects to a common invariant feature space (domain generalization) or uses target-domain data to improve a classifier trained on source-domain data (domain adaptation). An adaptive classifier[45] handles the difference between the training and testing stages by repeatedly updating its parameters with the target subject's data during testing.
2.2.2 Steady-state visual evoked potential
SSVEP is a visual evoked potential elicited by a fixed-frequency visual stimulus. It contains a sinusoid-like waveform whose frequencies match the fundamental and harmonic frequencies of the stimulus. By identifying this component, the area of the screen the user is currently viewing can be decoded from the EEG signal. The user interface usually contains multiple blocks of blinking patterns, each representing a distinct option predefined by the experimental protocol. The options menu can be presented through various media, such as LCD screens, LED lights, and head-mounted displays (HMDs). Figure 2 shows an SSVEP application driven by flicker stimulation that is independent of a computer screen.
A major advantage of SSVEP-based BCI is its relatively high information transfer rate (ITR)[3] compared with spontaneous signals. According to Ref. [1], locomotion applications of BCI require a lower ITR than neuroprosthesis applications.
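The ITR figures quoted throughout this report follow the standard Wolpaw formula, which combines the number of commands, the classification accuracy, and the time per selection. The helper below computes it; the example numbers are purely illustrative.

```python
import math

def itr_bits_per_min(n_classes, accuracy, trial_seconds):
    """Wolpaw information transfer rate in bits per minute.

    n_classes: number of selectable commands; accuracy: probability p of a
    correct selection; trial_seconds: time needed to output one command.
    """
    p, n = accuracy, n_classes
    if p <= 1.0 / n:
        return 0.0                       # no information beyond chance level
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds

# Illustrative example: a 4-class SSVEP system at 90% accuracy with one
# selection every 3 s yields about 27 bits/min.
print(itr_bits_per_min(4, 0.90, 3.0))
```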
In early BCI development, LCD and even CRT screens were used in the user-interface subunit. Subsequently, LED flicker[46] was shown to evoke brain signals with larger frequency amplitudes than LCD and CRT screens, especially in the lower frequency bands.
Ref. [47] concluded that SSVEP-based BCI systems are the most practical and least resource-expensive approach for a BCI-controlled wheelchair because of their modest requirements on the SNR and channel count of the acquisition equipment, their easy-to-implement signal-processing methods, and their high information transfer rate.
A major challenge faced by SSVEP is that the stimulus patterns occupy and block the visual field. The recent proposal of miniature asymmetric visual evoked potentials[48] has alleviated this problem. Meanwhile, more advanced information coding makes it possible to improve the ITR of SSVEP: by placing tiny stimuli laterally, outside the foveal vision, and encoding the command output with a space-code-division multiple-access scheme, occlusion of the visual field is avoided, and the ITR of the entire system is improved through higher accuracy and a larger instruction set.
2.2.3 P300
Similar to the SSVEP paradigm, the P300 paradigm requires subjects to focus their visual attention on the visual representation of the option they want to choose. Since its first appearance in Ref. [50], the P300 paradigm has been widely used in BCI-based communication applications such as the P300 speller, because it provides more control commands than SMR-based paradigms and does not require additional user training. Because this paradigm was originally designed for communication, the number of commands provided by a P300-based system is adequate for locomotion applications that use high-level commands (Section 3.2).
In P300-based navigation applications, paradigms providing low-level instructions present users with specific options for controlling the movement of the actuator and output real-time control instructions by decoding the user's point of visual attention, as shown in Figure 3a. Paradigms providing high-level instructions present spatial information to the user through real-time video or a 3D map established using simultaneous localization and mapping (SLAM) and offer the user candidate 3D locations in the scene that are likely destinations. By selecting these target points, the user makes the actuator move to the target area with the help of an autonomous navigation system, as shown in Figure 3b.
A common problem with event-related-potential-based control signals is that they require subjects to allocate part of their visual attention to the visual stimuli of the user interface, which results in a heavier, more exhausting workload. The hardware presents another disadvantage: more control commands correspond to shorter stimulation intervals, and in specific hardware implementations the actual refresh rate of the interface's visual feedback may be unstable, which poses potential risks to the accuracy of the signal-decoding algorithm.
2.3 Control modes for navigation
According to the processing modality of input data, BCI systems for navigation can be classified as synchronous or asynchronous[1].
2.3.1 Synchronous control
Early navigation applications based on BCIs used a synchronous approach. In the synchronous control mode, the user performs cognitive tasks within a fixed period according to the prompts of the system, such as imagining body movements or gazing at the option to be selected.
Because the visual stimulus that induces the control signal cannot be timed by the user in a self-paced manner, BCIs based on the P300 and SSVEP paradigms are generally considered synchronous. In the application shown in Figure 4c, a simplified map of the user's surroundings is presented, and the user selects a location on the map to express a movement intention and steer the avatar toward the target area. This is achieved by treating elements at specific map locations as the visual options of the P300 paradigm and flashing them in a certain sequence. This control method has the advantage of direct and rapid movement; however, it suits known environments that change little. The application shown in Figure 4d uses a P300 interface that lets users flexibly control every movement of the virtual character through low-level instructions and is therefore applicable to unknown scenes.
The SMR paradigm is generally considered asynchronous. However, early SMR applications still followed a fixed time sequence; that is, users could only generate control signals from EEG within a time window specified by the experimenter. One such application is shown in Figure 4a, where participants use one type of motor imagery in an immersive CAVE system to control the forward and backward movement of the virtual character's viewpoint. To increase the dimensionality of control, the application shown in Figure 4b lets the user steer a virtual rotating pointer with EEG signals and thereby select among multiple movement instructions; in this way, the SMR BCI can be used effectively despite its suboptimal signal-classification accuracy.
2.3.2 Asynchronous control
The synchronous control mode allows users to output control commands only at specific instants, which may not provide an adequate response speed in complex situations, and the interval between consecutive commands is bounded by the EEG segment length required by the classification algorithm. To solve this problem, asynchronous control was proposed, in which the EEG signal is continuously decoded in real time so that the user can control navigation in a self-paced manner.
To realize asynchronous control, detection of the noncontrol (NC) and intentional control (IC) states must be implemented. In the NC state, the user's EEG signal is not translated into an output; only when the system detects the IC state does it begin estimating and outputting the user's current mental state.
The SMR paradigm is best suited to the asynchronous mode because of its endogenous origin. As mentioned earlier, the classification accuracy of SMR-based BCI is unsatisfactory, especially for multiclass problems; forcibly adding IC/NC discrimination to the classifier would likely degrade its accuracy. To solve this problem, a hierarchical structure is typically adopted. As shown in Figure 5a, one-class motor imagery control signals are combined with the cued actions of the actuator to achieve a control dimension of four control signals. The scheme shown in Figure 5b uses the joint output of classifiers for three imagined movements both as the inputs for different mental states in the IC state and to judge whether the system is currently in the IC state. The schemes shown in Figures 5c and 5d control the translation and rotation of a virtual pointer through the continuous output of a one-class motor imagery signal, separately switching the NC/IC state and the different output commands.
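The sketch below shows one minimal way such IC/NC gating can be realized: a classifier confidence threshold plus a dwell requirement separates intentional control from rest. The window length, threshold, and dwell count are illustrative assumptions that would be tuned per user.

```python
import numpy as np

WINDOW_SAMPLES = 250   # 1 s of EEG at 250 Hz (assumed)
IC_THRESHOLD = 0.7     # classifier confidence needed to leave the NC state
DWELL_WINDOWS = 3      # consecutive confident windows required (debouncing)

def asynchronous_loop(stream, classifier):
    """Continuously decode EEG; emit commands only in the intentional-control state.

    stream: iterable yielding (n_channels, WINDOW_SAMPLES) arrays.
    classifier: any object whose predict_proba(window) returns class probabilities.
    """
    confident_run = 0
    for window in stream:
        probs = classifier.predict_proba(window)
        best = int(np.argmax(probs))
        if probs[best] >= IC_THRESHOLD:
            confident_run += 1
        else:
            confident_run = 0          # drop back to the non-control state
        if confident_run >= DWELL_WINDOWS:
            yield best                 # IC detected: output the decoded command
            confident_run = 0
```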
Currently, motor imagery is the most widely used paradigm for asynchronous control because of its endogenous origin. Suboptimal classification accuracy and intersubject variability remain the main obstacles to implementing motor imagery-based asynchronous control successfully, and a variety of transfer-learning methods have been proposed to address them[56,57]. Meanwhile, the control dimension of asynchronous control is also being expanded through more complex paradigms, such as sequential motor imagery[32].
2.4 User interface
The design of a user interface for a brain-controlled navigation system depends on the specific application requirements. For instance, when designing applications such as brain-actuated electric wheelchairs, the safety of the driver must be ensured. To achieve this goal, BCI systems that provide high-level commands must integrate surrounding environmental information into the user interface or implicitly embody environmental perception in the movement options provided to users.
Visual feedback can be provided to users through real-time video streaming or reconstructed 3D environmental images. In the application shown in Figure 6a, the reconstructed environment information is combined with the movement options provided to the user and updated in real time as the user's viewpoint moves. In the application shown in Figure 6b, reachable physical locations are spatially registered onto the video stream in real time, and the user navigates the electric wheelchair to a target location by gazing at the corresponding blue point in the video.
3 Typical applications
BCI-based navigation was first applied in VEs. Early applications were limited by the real-time processing capabilities of the equipment and could only use simple signal-processing methods; consequently, they could output only a few types of control signals. The user controls the movement of the avatar via the BCI and receives feedback through changes in the VE. As BCI technology developed and the ITR of BCI systems improved, it became possible to control devices in the real world. This section first introduces BCI navigation in VEs and then reviews two major real-world applications of BCI navigation: the navigation of brain-actuated robots and of UAVs.
3.1 Navigation in a virtual environment
Feedback based on a VE provides users with more realistic visual, auditory, and tactile information, thus accelerating the learning of BCI usage. Pfurtscheller et al. used a motor imagery BCI to control a virtual character navigating a virtual apartment[13]. The experiment used a CAVE system to build the virtual reality (VR) environment. Before entering the VR environment, users received additional training according to the standard Graz-BCI paradigm and were then asked to walk to the end of a one-dimensional trajectory. The experiment also found that, compared with stimulus presentation on a PC, the VE did not significantly improve user performance; the authors attributed this to distraction caused by excessive visual stimulation. Leeb et al. designed a cue-based BCI paradigm with which ten naïve BCI users could move freely in a virtual apartment after three training sessions[60]; upon reaching a node, subjects performed the appropriate motor imagery task to determine the path they would follow.
Similar to Ref. [13], Friedman et al. added a dimension to the movement of the subject-controlled avatar in the CAVE VE[61]. Participants rotated in a virtual bar by imagining left- and right-hand movements, and in a virtual street they controlled forward and backward movement by imagining foot and hand movements. Analysis of the post-experiment questionnaire showed that the subjects' feelings of presence varied between individuals, whereas presence differed only insignificantly between tasks.
The emergence of more reliable decoding algorithms and new interaction methods can improve the capabilities of existing BCI systems. Vourvopoulos et al. proposed a BCI-VR system using multimodal interaction[10]. Participants were asked to row a boat across a series of designated locations on a lake in a VE; each time a goal was reached, the subject received points, the objective being to score as many as possible. Participants used left- and right-hand motor imagery to control the direction of the hull in a self-paced manner, and in the training phase, visual and vibrotactile feedback guided the user's imagery of the action.
Compared with BCI navigation in a real environment, a VE prevents the navigation application from taking actions that endanger user safety when subjects are unfamiliar with BCI operation or when the system misclassifies (Table 2). Moreover, using motor imagery BCI in a VE can promote the activation of the user's movement-related brain regions[62], thereby facilitating motor rehabilitation; the underlying mechanism is that motor imagination and motor execution share similar neural representation regions and dynamics[63].
Table 2 Different BCI navigation applications in the virtual environment

Ref | Paradigm | Preprocess | Feature extraction | Classifier | Feedback | Mode
[64] | SMR | 0.1-100Hz | Band power | Threshold | Avatar movement | Asynchronous
[65] | P300 | | | | Change of item state | Synchronous
[14] | SMR | 0.5-30Hz | Logarithmic band power | LDA | Avatar rotation | Synchronous
[66] | SMR | 8-12Hz | Band power | Threshold | Avatar rotation | Asynchronous
[12] | SMR | 0.5-30Hz | Logarithmic band power | LDA | Subject forward, stop, or backward | Synchronous
[61] | SMR | 0.5-30Hz | Logarithmic band power | LDA | Subject forward, stop, or backward | Synchronous
[60] | SMR | | | | | Synchronous
[11] | SMR | 8-30Hz | Band power | LDA | Single back-projected stereoscopic wall with subject wearing shutter glasses | Synchronous
| SMR | 0.1-100Hz | Logarithmic band power | LDA | | Asynchronous (classifier threshold)
[67] | SMR | 0.5-30Hz | Logarithmic band power | Threshold | Subject forward, stop, or backward | Asynchronous (classifier threshold)
[68] | SMR | | | | | Asynchronous
[58] | SMR | | Band power | LDA | | Asynchronous
[69] | SMR | 8-30Hz | CSP | SVM | Avatar movement | Synchronous
[70] | SMR | 6-36Hz | Band power with feature selection (distinction-sensitive learning vector quantization) | LDA | | Asynchronous
[53] | SMR | | Band power | LDA | | Synchronous

Other works using SMR: [4], [6], [10], [16], [17], [19], [20], [21], [25], [27], [31], [32], [71-76]
Other works using SSVEP: [18], [23], [35], [77], [78] (high frequency), [79-88]
Other works using P300: [5], [7-9], [15], [22], [26], [34], [51], [54], [89-92]
3.2 Navigation of brain-actuated robot
Assistive robots can help patients with impaired motor skills complete daily-life tasks, providing convenience to the disabled community. However, for patients with more severe motor impairment, such as those with ALS, interaction methods designed for the disabled, such as sip-and-puff systems or simple switches, cannot meet their needs, let alone ordinary keyboard and mouse input. Moreover, despite the assistance of automatic navigation systems, the user interfaces of such systems still require the user to provide high-level control instructions that are beyond the user's ability owing to their loss of muscle functionality. BCI-controlled robots can help people with severely impaired muscle function operate assistive equipment and everyday electronic devices, improving their quality of life.
Brain-controlled mobile robots can be divided into two categories. The first directly controls every movement command of the robot through brain signals, following the low-level command paradigm. The second uses shared control, combining the user's intention output with the functions of an intelligent navigation system. The main advantages of the former are low price and low technical complexity; however, it also has all the shortcomings of low-level command control mentioned above. Although the shared-control method is costly and its equipment complicated, it can effectively prevent user fatigue and ensure user safety through highly intelligent planning algorithms. An example of shared control is the double-layer control method[92], in which the subject's brain signal outputs a control intent to a virtual control layer, and the virtual control layer determines whether the subject's command conforms to safety requirements defined dynamically according to the current environment (Table 3).
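A minimal sketch of such a virtual control layer is given below: the decoded intent is vetoed or capped whenever it would violate an environment-derived safety constraint. The Command type, the obstacle-distance input, and all numeric limits are assumptions for illustration, not the design of Ref. [92].

```python
from dataclasses import dataclass

@dataclass
class Command:
    linear: float   # m/s, forward velocity requested by the BCI
    angular: float  # rad/s, turning rate requested by the BCI

def virtual_control_layer(intent: Command, min_obstacle_m: float) -> Command:
    """Second layer of a double-layer shared controller (a sketch only).

    Accepts the decoded user intent and vetoes or scales it when it violates
    safety constraints derived from the current environment.
    """
    SAFE_DISTANCE_M = 0.5  # illustrative safety margin
    if min_obstacle_m < SAFE_DISTANCE_M and intent.linear > 0:
        # Forward motion toward a nearby obstacle: veto translation, allow turning
        return Command(linear=0.0, angular=intent.angular)
    # Otherwise pass the intent through, capped to the platform's limits
    return Command(linear=min(intent.linear, 0.8),
                   angular=max(min(intent.angular, 1.0), -1.0))
```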
Table 3 Different brain-controlled mobile robot navigation applications

Ref | Paradigm | Preprocess | Feature extraction | Classifier | Navigation mode | UI | Actuator | Characteristics
[72] | Alpha activity | Raw signal | Raw signal | Linear classifier | Semi-autonomous | Predefined target images | |
[93] | Rhythmic activity | | | | Semi-autonomous | Predefined target images | Sony AIBO, ambient control | Adaptation to different levels of disability, robust to the setting
[33] | SMR | Laplacian filter | Band power, Fisher score | Intentional-activity classifier (LDA) + motor-direction classifier (QDA) | Manual control | | Humanoid robot |
[52] | P300 | 0.5-30Hz band-pass filter | r2 metric | Stepwise linear discriminant analysis (SWLDA) | Semi-autonomous | Dynamic video-based GUI | Wheeled robot | Robot navigation and camera exploration
[19] | SMR | 8-16Hz band-pass filter | CSP | LDA | Manual control | | Humanoid robot | Only one motor imagery
[94] | High-frequency SSVEP | 1-100Hz band-pass filter | Band power | Max feature | Manual control with collision detection | SSVEP flickers | Pioneer 3-DX robot with Canon VC-C4 camera |
[23] | SSVEP | 8-19Hz band-pass filter | Canonical correlation analysis (CCA) | LDA | Manual control | SSVEP flickers | Wheeled robot |
[95] | SSVEP | 5-30Hz band-pass filter | Multivariate synchronization index (MSI) | Max feature | Semi-autonomous | SSVEP flickers | Wheeled robot | Use of vSLAM to produce low-level commands
[96] | SSVEP | 0.5-30Hz band-pass filter | CCA | Max feature | Semi-autonomous | SSVEP flickers | Wheeled robot | Cooperation with a manipulator, switching between low-level and high-level control
[79] | SSVEP | 5-12Hz band-pass filter | Band power | Fuzzy feature threshold | Manual control | SSVEP flickers | Wheeled robot | One channel
[9] | P300 | 0.5-30Hz band-pass filter | Raw signal with channel selection | Stepwise linear discriminant analysis | High-level command | Dynamic video-based GUI | Wheeled robot | 3 modes
A tradeoff exists between enhancing adaptability to the environment and reducing the user's mental load. On the one hand, a navigation system with low-level control gives users more freedom of movement than one with high-level control: a high-degree-of-freedom system lets users issue movement commands appropriate to the environment, avoiding the dilemma of a rigid control paradigm that cannot traverse complex terrain. On the other hand, it is difficult for the user to maintain, for a considerable period, a mental state that produces signals the classifier can reliably identify, and promptly adjusting that mental state according to the interaction between the wheelchair and the environment is beyond the capability of some users (Table 4). This can easily make the user's mental load too high, resulting in fatigue and degraded task performance. Ref. [5] combined the two control modes, letting users switch between high-level and low-level commands through the P300 interface (Figure 7), which provides a valuable solution to this balance problem.
Table 4 Different BCI-controlled wheelchair applications

Ref | Paradigm | Preprocess | Feature extraction | Classifier | Navigation mode | Mode | Feedback | Characteristics
[3] | SMR | 0.1Hz high-pass filter, common average reference | Power spectral density + canonical variates analysis | Gaussian classifier | Low-level command with collision detection | Asynchronous | Visual observation | Virtual environment training in advance
[5] | P300 | 0.5-30Hz band-pass filter | Raw signal with channel selection | Stepwise linear discriminant analysis | High-level and low-level commands | Synchronous | 3D-reconstructed virtual environment | Hierarchy of commands, 3D reconstruction of environment
[15] | P300 | 1-35Hz band-pass filter | Raw signal | SVM | Low-level command | Synchronous | Visual observation |
[16] | SMR | 8-40Hz band-pass filter | Band power | Dynamic Elman neural network | Low-level command | Asynchronous | Visual observation |
[6] | SMR | 8-32Hz band-pass filter and common average reference | CSP | LDA | Low-level command | Asynchronous | Visual observation |
[7] | P300 | | | | Switching between low-level and high-level commands, user input only when necessary | Synchronous | Visual observation |
[17] | Mental non-motor imagery | 0.1-40Hz band-pass filter | Power spectral density | Artificial neural network | Low-level command | Asynchronous | Visual observation |
[18] | SSVEP | 6-30Hz band-pass filter | CCA | Max value | Switching between low-level and high-level commands, user input only when necessary | Synchronous | Visual observation |
[34] | P300 | 0.1-20Hz | Raw signal with segment selection | SVM | High-level command | Synchronous | 2D interface |
[20] | SMR | 5-17Hz band-pass filter | Band power | LDA | Low-level command | Asynchronous | Visual observation | Trained in virtual environment in advance
[77] | SSVEP | Common average reference, 0.1-100Hz band-pass filter | Spectral F-test | Rule-based classifier | Low-level command | Synchronous | Visual observation |
[35] | SSVEP | 8-30Hz band-pass filter | Power spectral density | SVM | Low-level command with collision detection | Synchronous | Visual observation |
[22] | P300 | 0.5-25Hz band-pass filter | Raw signal | Template matching | Switching between low-level and high-level commands by eye blinking | Asynchronous | Visual observation |
[31] | Sequential SMR | 8-12Hz band-pass filter | CSP | SVM | Low-level command with motion switch | Asynchronous | Visual observation | Motion switch
[8] | P300 | 0.1-30Hz band-pass filter | Raw signal | LDA-LASSO | Switching between low-level and high-level commands, user input only when necessary | Synchronous | |
[89] | P300 | | | | | | |
[21] | Sequential SMR | 9-15Hz band-pass filter | Band power | LDA | Low-level command | Asynchronous | Visual observation | Trained in virtual environment in advance
[32] | Sequential SMR | 4-30Hz band-pass filter | CSP | LDA | Low-level command | Asynchronous | Visual observation | Virtual environment training in advance
3.3 Navigation of unmanned aerial vehicles
UAVs can perform tasks that cannot be completed by land robots, such as rapid movement in large scenes, and overcome ground obstacles. These functions can provide extended possibilities for patients with impaired motor functions and can be used for entertainment by healthy people. However, the instability of UAVs causes great difficulties in the design of robust UAV control algorithms.
UAVs can be divided into three types according to their kinematics and dynamics: rotorcraft, airships, and fixed-wing craft[97]. Rotorcraft are widely used indoors because they are easy to handle and economical, making them the first choice for developing brain-controlled UAV prototypes.
The control system of the UAV comprises a mixture of manual and automatic controls. Owing to the low information transmission rate of BCI and the instability of UAVs, the current mainstream brain-controlled UAVs mostly adopt semi-autonomous control, that is, shared control of autonomous systems and manual instructions.
Autonomous attitude control of drones guarantees stable motion output. Research on brain-controlled drones therefore focuses on optimizing how high-level brain-derived commands are converted into low-level motion commands, balancing the tradeoff between command robustness and operating pace that arises from the low ITR of BCI and the high-speed requirements of UAV control. One solution is to improve the accuracy of the BCI classification system through training, thereby increasing the ITR of command input. The paradigm used by most brain-controlled drones is SMR[98-100], which requires considerable calibration effort before a subject-independent classifier can be built. Improving the control dimensionality also helps raise the ITR: in Ref. [101], subjects used kinesthetic motor imagery to control the upward/downward and left/right movements of a virtual helicopter, and Ref. [102] extended this two-dimensional movement to 3D. Another approach is to raise the degree of autonomous control of the UAV through the control mode, reducing the demand on BCI ITR. In contrast to control strategies that directly command the UAV's moving direction (Figure 8), Ref. [98] lets subjects select the UAV's flight trajectory for the next period, thereby making full use of the UAV's on-board computing resources.
More innovative application fields, in line with market needs, must be found for BCI-controlled drones. Compared with unstable BCI control, which is limited by poor signal quality and suboptimal algorithms, UAV control methods based on other modalities are easier to learn, more robust, and more mature, and advances in intelligent planning and control algorithms mean some UAV systems no longer rely on operators to provide motion instructions manually.
In contrast, the main advantage of using brain signals to control drones is their considerable potential. In the future, signal processing algorithms based on error-related potentials can help the UAV control system adjust the UAV's pose and actions in real time according to the user's movement intention. The hierarchical user cognition model can decode the user's high-level semantic instructions into the individual low-level motion instructions of the drone swarm such that the operator can obtain the macroscopic semantic control ability of the autonomous navigation drone swarm. Related applications can help people who temporarily or permanently lose conventional input methods to control surrounding devices and further improve the ability of healthy users to interact with electronic devices.
4 Trends and challenges
From the initial type of application where the avatar's one-dimensional motion is controlled by a single motor imagery task to systems using more control instructions where the IC/NC states of users are detected in real time by intelligent algorithms, navigation applications based on BCIs are becoming increasingly refined with the development of more robust signal classification methods and user-friendly training protocols.
4.1 Human factors in BCI navigation
Human factors must be considered when designing BCI-based navigation applications: they influence the efficiency with which users learn the BCI system and thus, indirectly, the information transfer rate of the system. Taking the brain-controlled electric wheelchair as an example, human-factor design seeks to minimize the energy the user expends while still meeting their expectations. Relevant factors include the design of the seat, pedals, and armrests[103]; the most important, however, is the design of the control mode for wheelchair movement.
In the design of a BCI-based electric wheelchair, the target users suffer from a certain degree of trauma and are prone to mental fatigue. Concentrating for too long on generating control commands that match the characteristics of the EEG recognition system may increase the user's mental burden, thereby reducing the usability of the brain-controlled assistive device.
To enhance the ergonomics of the system, the design of the brain-controlled navigation system must be optimized in both hardware and software. In terms of hardware, wireless devices providing high-speed data transmission will effectively expand the user's activity space and reduce the fatigue caused by the mental feeling of being restrained. Future software trends are more diverse. High-level commands need to cooperate with advanced autonomous navigation systems to provide users with efficient interaction paradigms and obstacle-avoidance performance comparable to that of low-level command control. Adaptive user models will allow the navigation system to switch between multiple tasks in a complex environment with minimal human instruction, reducing the user's cognitive load. Accurate detection of the user's intent to stop interacting can also reduce mental workload by transferring control authority to the autonomous navigation system whenever the user feels exhausted.
Several requirements exist for the design of a BCI for navigation applications. In Ref. [104], the design space of virtual navigation is divided into two dimensions, speed control and direction control, which imply different requirements for the BCI. In most applications in this review, the BCI for speed control does not require continuous input; it maintains a specific speed value unless the user wants to switch states. The BCI for direction control, by contrast, must output continuous values and stop steering promptly when intended. The former is better suited to goal-oriented BCI, whereas the latter calls for a process-control style of BCI implementation.
In a real environment, navigation applications designed for wheelchairs must consider user safety. Owing to the unstable characteristics of BCI[37], the recognition results of motor imagery should be confirmed multiple times to achieve stable control. Meanwhile, obstacle detection and avoidance should be incorporated into the shared-control framework to further reduce safety risks.
4.2 Hybrid BCI paradigm and brain-switch design for asynchronous control
Depending on the signals whose combination determines the system output, hybrid BCIs can be roughly divided into two categories: those that produce the final output by mixing the classification results of multiple BCI paradigms, and those that combine BCI with other biological signals such as EMG or EOG. The former requires subjects to adjust their mental state to generate multiple control signals simultaneously; the introduction of a second control paradigm enables applications beyond navigation or adds new dimensions to motion control. Representative uses of the former include controlling an additional robotic arm to complete task instructions or operating electrical facilities in the environment; newly added motion-control dimensions in the latter include changing the acceleration of motion[105] or switching system states[106].
A major motivation for introducing the hybrid control paradigm is to increase the movement speed of a brain-controlled wheelchair or robot. A typical system[107] uses the SSVEP paradigm to control the forward and backward movement of the wheelchair, while SMR controls the steering; compared with a nonhybrid system, it significantly reduces the completion time of movement tasks.
Meanwhile, hybrid BCI also enables the design of "brain switches". Patients who use low-level commands to continuously operate a brain-controlled wheelchair usually report that they cannot maintain the mental state required to output forward and backward commands, which accounts for most of the free-control process[108]. This is where the brain switch comes in: Ref. [109] defines it as a control paradigm whose output decides which of its subsequent paradigms should be activated. In a brain-controlled wheelchair, a brain switch can activate or deactivate the forward command[110], reducing the burden on users of maintaining their mental state. Ref. [111] realized a practical brain switch using the post-imagery beta event-related synchronization detected in the EEG during motor imagery, and a brain switch can also rely on SSVEPs with a high amplitude threshold[112].
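In the spirit of Refs. [111,112], the sketch below latches a forward command whenever a band-power feature stays above threshold for a dwell period. The frequency band, threshold, and dwell time are illustrative assumptions; the cited systems tune their equivalents (post-imagery beta ERS, SSVEP amplitude) per user.

```python
import numpy as np

def band_power(window, fs, lo, hi):
    """Mean power of one EEG channel in the [lo, hi] Hz band via the periodogram."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2 / len(window)
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

class BrainSwitch:
    """Toggle a latched command when band power stays above threshold.

    Band edges, threshold, and dwell count are illustrative assumptions.
    """
    def __init__(self, fs=250, lo=16.0, hi=24.0, threshold=2.0, dwell_windows=3):
        self.fs, self.lo, self.hi = fs, lo, hi
        self.threshold, self.dwell = threshold, dwell_windows
        self.run, self.on = 0, False

    def update(self, window):
        power = band_power(window, self.fs, self.lo, self.hi)
        self.run = self.run + 1 if power > self.threshold else 0
        if self.run >= self.dwell:
            self.on = not self.on  # flip the forward command on/off
            self.run = 0
        return self.on
```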
In the future, an intelligent brain switch combined with novel user mental-state recognition algorithms could let an automatic navigation system take control of the electric wheelchair as needed, so that the user can not only drive the wheelchair safely through complex environments using low-level commands but also enjoy the convenience of intelligent high-level control instructions.
5 Conclusion
BCI is used in navigation applications in both virtual and real environments to convey the locomotion intentions of users with impaired motor function. This paper summarized the BCI navigation applications of the past 20 years in both settings. The shift in control mode from synchronous to asynchronous is shared by navigation applications in virtual and real environments because the latter mode is more intuitive and offers a much higher information transfer rate. The contrast between high-level and low-level commands is critical when reviewing the two major applications of BCI navigation designed for electric wheelchairs and mobile robots. Research challenges, including suboptimal classification methods and insufficient consideration of human factors, must be addressed to provide a better user experience. Finally, novel trends, including hybrid BCI for navigation, may open broad BCI navigation applications to scenarios outside the laboratory.

References

1. Nicolas-Alonso L F, Gomez-Gil J. Brain computer interfaces, a review. Sensors (Basel, Switzerland), 2012, 12(2): 1211–1279 DOI:10.3390/s120201211
2. Beverina F, Palmas G, Silvoni S, Piccione F, Giove S. User adaptive BCIs: SSVEP and P300 based interfaces. PsychNology Journal, 2003, 1(4): 331–354
3. Wang Y J, Wang R P, Gao X R, Hong B, Gao S K. A practical VEP-based brain-computer interface. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2006, 14(2): 234–240 DOI:10.1109/tnsre.2006.875576
4. Galán F, Nuttin M, Lew E, Ferrez P W, Vanacker G, Philips J, Millán J D R. A brain-actuated wheelchair: Asynchronous and non-invasive brain-computer interfaces for continuous control of robots. Clinical Neurophysiology, 2008, 119(9): 2159–2169 DOI:10.1016/j.clinph.2008.06.001
5. Iturrate I, Antelis J M, Kubler A, Minguez J. A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation. IEEE Transactions on Robotics, 2009, 25(3): 614–627 DOI:10.1109/tro.2009.2020347
6. Tsui C S, Gan J Q, Hu H. A self-paced motor imagery based brain-computer interface for robotic wheelchair control. Clinical EEG and Neuroscience, 2011, 42(4): 225–229 DOI:10.1177/155005941104200407
7. Lopes A C, Pires G, Nunes U. RobChair: Experiments evaluating brain-computer interface to steer a semi-autonomous wheelchair. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. Vilamoura-Algarve, Portugal, IEEE, 2012, 5135–5136 DOI:10.1109/iros.2012.6386276
8. Piña-Ramirez O, Valdes-Cristerna R, Yanez-Suarez O. Scenario screen: a dynamic and context dependent P300 stimulator screen aimed at wheelchair navigation control. Computational and Mathematical Methods in Medicine, 2018, 2018: 7108906 DOI:10.1155/2018/7108906
9. Escolano C, Ramos Murguialday A, Matuz T, Birbaumer N, Minguez J. A telepresence robotic system operated with a P300-based brain-computer interface: Initial tests with ALS patients. 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, 2010, 4476–4480 DOI:10.1109/iembs.2010.5626045
10. Vourvopoulos A, Ferreira A, Badia S B I. NeuRow: an immersive VR environment for motor-imagery training with the use of brain-computer interfaces and vibrotactile feedback. In: Proceedings of the 3rd International Conference on Physiological Computing Systems. Lisbon, Portugal, SCITEPRESS—Science and Technology Publications, 2016, 43–53 DOI:10.5220/0005939400430053
11. Leeb R, Settgast V, Fellner D, Pfurtscheller G. Self-paced exploration of the Austrian National Library through thought. International Journal of Bioelectromagnetism, 2007, 9(4): 237–244
12. Leeb R, Keinrath C, Friedman D, Guger C, Scherer R, Neuper C, Garau M, Antley A, Steed A, Slater M, Pfurtscheller G. Walking by thinking: the brainwaves are crucial, not the muscles! Presence: Teleoperators and Virtual Environments, 2006, 15(5): 500–514 DOI:10.1162/pres.15.5.500
13. Pfurtscheller G, Leeb R, Keinrath C, Friedman D, Neuper C, Guger C, Slater M. Walking from thought. Brain Research, 2006, 1071(1): 145–152 DOI:10.1016/j.brainres.2005.11.083
14. Leeb R, Scherer R, Keinrath C, Guger C, Pfurtscheller G. Exploring virtual environments with an EEG-based BCI through motor imagery. Biomedizinische Technik/Biomedical Engineering, 2005, 50(4): 86–91 DOI:10.1515/BMT.2005.012
15. Shin B G, Kim T, Jo S. Non-invasive brain signal interface for a wheelchair navigation. In: ICCAS 2010. Gyeonggi-do, Korea (South), IEEE, 2010, 2257–2260 DOI:10.1109/iccas.2010.5669830
16. Hema C R, Paulraj M P. Control brain machine interface for a power wheelchair. 5th Kuala Lumpur International Conference on Biomedical Engineering 2011, 2011, 287–291 DOI:10.1007/978-3-642-21729-6_75
17. Chai R, Ling S H, Hunter G P, Nguyen H T. Mental non-motor imagery tasks classifications of brain computer interface for wheelchair commands using genetic algorithm-based neural network. In: The 2012 International Joint Conference on Neural Networks (IJCNN). Brisbane, QLD, Australia, IEEE, 2012, 1–7 DOI:10.1109/ijcnn.2012.6252499
18. Duan J D, Li Z J, Yang C G, Xu P. Shared control of a brain-actuated intelligent wheelchair. In: Proceeding of the 11th World Congress on Intelligent Control and Automation. Shenyang, China, IEEE, 2014, 341–346 DOI:10.1109/wcica.2014.7052737
19. Jiang J, Wang A, Ge Y, Zhou Z T. Brain-actuated humanoid robot control using one class motor imagery task. In: 2013 Chinese Automation Congress. Changsha, China, IEEE, 2013, 587–590 DOI:10.1109/cac.2013.6775803
20. Varona-Moya S, Velasco-Álvarez F, Sancha-Ros S, Fernández-Rodríguez Á, Blanca M J, Ron-Angevin R. Wheelchair navigation with an audio-cued, two-class motor imagery-based brain-computer interface system. 2015 7th International IEEE/EMBS Conference on Neural Engineering (NER), 2015, 174–177 DOI:10.1109/ner.2015.7146588
21. Ron-Angevin R, Fernández-Rodríguez Á, Velasco-Álvarez F. Brain-controlled wheelchair through discrimination of two mental tasks. Proceedings of SAI Intelligent Systems Conference (IntelliSys) 2016, 2018, 563–574 DOI:10.1007/978-3-319-56994-9_38
22. Puanhvuan D, Khemmachotikun S, Wechakarn P, Wijarn B, Wongsawat Y. Navigation-synchronized multimodal control wheelchair from brain to alternative assistive technologies for persons with severe disabilities. Cognitive Neurodynamics, 2017, 11(2): 117–134 DOI:10.1007/s11571-017-9424-6
23. Farmaki C, Christodoulakis G, Sakkalis V. Applicability of SSVEP-based brain-computer interfaces for robot navigation in real environments. In: 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). Orlando, FL, USA, IEEE, 2016, 2768–2771 DOI:10.1109/embc.2016.7591304
24. Kucukyildiz G, Ocak H, Karakaya S, Sayli O. Design and implementation of a multi sensor based brain computer interface for a robotic wheelchair. Journal of Intelligent & Robotic Systems, 2017, 87(2): 247–263 DOI:10.1007/s10846-017-0477-x
25. Malete T N, Moruti K, Thapelo T S, Jamisola R S. EEG-based control of a 3D game using 14-channel Emotiv EPOC+. In: 2019 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM). Bangkok, Thailand, IEEE, 2019, 463–468 DOI:10.1109/cis-ram47153.2019.9095807
26. Lamti H A, Gorce P, Ben Khelifa M M, Alimi A M. When mental fatigue maybe characterized by Event Related Potential (P300) during virtual wheelchair navigation. Computer Methods in Biomechanics and Biomedical Engineering, 2016, 19(16): 1749–1759 DOI:10.1080/10255842.2016.1183198
27. Hazrati M K, Hofmann U G. Avatar navigation in Second Life using brain signals. In: 2013 IEEE 8th International Symposium on Intelligent Signal Processing. Funchal, Portugal, IEEE, 2013, 1–7 DOI:10.1109/wisp.2013.6657473
28. Congedo M, Lotte F, Lécuyer A. Classification of movement intention by spatially filtered electromagnetic inverse solutions. Physics in Medicine and Biology, 2006, 51(8): 1971–1989 DOI:10.1088/0031-9155/51/8/002
29. Lotte F, Lecuyer A, Arnaldi B. FuRIA: an inverse solution based feature extraction algorithm using fuzzy set theory for brain-computer interfaces. IEEE Transactions on Signal Processing, 2009, 57(8): 3253–3263 DOI:10.1109/tsp.2009.2020752
30. Rivet B, Souloumiac A, Attina V, Gibert G. xDAWN algorithm to enhance evoked potentials: application to brain-computer interface. IEEE Transactions on Biomedical Engineering, 2009, 56(8): 2035–2043 DOI:10.1109/tbme.2009.2012869
31. Wang H T, Bezerianos A. Brain-controlled wheelchair controlled by sustained and brief motor imagery BCIs. Electronics Letters, 2017, 53(17): 1178–1180 DOI:10.1049/el.2017.1637
32. Yu Y, Liu Y D, Jiang J, Yin E W, Zhou Z T, Hu D W. An asynchronous control paradigm based on sequential motor imagery and its application in wheelchair navigation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2018, 26(12): 2367–2375 DOI:10.1109/tnsre.2018.2881215
33. Chae Y, Jo S, Jeong J. Brain-actuated humanoid robot navigation control using asynchronous brain-computer interface. In: 2011 5th International IEEE/EMBS Conference on Neural Engineering. Cancun, Mexico, IEEE, 2011, 519–524 DOI:10.1109/ner.2011.5910600
34. Zhang R, Li Y Q, Yan Y Y, Zhang H, Wu S Y. An intelligent wheelchair based on automated navigation and BCI techniques. 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2014, 1302–1305 DOI:10.1109/embc.2014.6943837
35. Lin Y T, Kuo C H. Development of SSVEP-based intelligent wheelchair brain computer interface assisted by reactive obstacle avoidance. In: 2016 IEEE International Conference on Industrial Technology (ICIT). Taipei, Taiwan, China, IEEE, 2016, 1572–1577 DOI:10.1109/icit.2016.7474995
36. Woehrle H, Krell M M, Straube S, Kim S K, Kirchner E A, Kirchner F. An adaptive spatial filter for user-independent single trial detection of event-related potentials. IEEE Transactions on Biomedical Engineering, 2015, 62(7): 1696–1705 DOI:10.1109/tbme.2015.2402252
37. Lotte F, Bougrain L, Cichocki A, Clerc M, Congedo M, Rakotomamonjy A, Yger F. A review of classification algorithms for EEG-based brain-computer interfaces: a 10 year update. Journal of Neural Engineering, 2018, 15(3): 031005 DOI:10.1088/1741-2552/aab2f2
38. Kalunga E K, Chevallier S, Barthélemy Q, Djouani K, Monacelli E, Hamam Y. Online SSVEP-based BCI using Riemannian geometry. Neurocomputing, 2016, 191: 55–68 DOI:10.1016/j.neucom.2016.01.007
39. Lawhern V J, Solon A J, Waytowich N R, Gordon S M, Hung C P, Lance B J. EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces. Journal of Neural Engineering, 2018, 15(5): 056013 DOI:10.1088/1741-2552/aace8c
40. Kwak N S, Müller K R, Lee S W. A convolutional neural network for steady state visual evoked potential classification under ambulatory environment. PLoS One, 2017, 12(2): e0172578 DOI:10.1371/journal.pone.0172578
41. Schirrmeister R, Gemein L, Eggensperger K, Hutter F, Ball T. Deep learning with convolutional neural networks for decoding and visualization of EEG pathology. 2017 IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2017, 1–7 DOI:10.1109/spmb.2017.8257015
42. Sannelli C, Vidaurre C, Müller K R, Blankertz B. A large scale screening study with a SMR-based BCI: Categorization of BCI users and differences in their SMR activity. PLoS One, 2019, 14(1): e0207351 DOI:10.1371/journal.pone.0207351
43. Ahn M, Jun S C. Performance variation in motor imagery brain-computer interface: a brief review. Journal of Neuroscience Methods, 2015, 243: 103–110 DOI:10.1016/j.jneumeth.2015.01.033
44. Gayraud N T H, Rakotomamonjy A, Clerc M. Optimal transport applied to transfer learning for P300 detection. BCI 2017—7th Graz Brain-Computer Interface Conference, 2017, 6
45. Hsu W Y. EEG-based motor imagery classification using enhanced active segment selection and adaptive classifier. Computers in Biology and Medicine, 2011, 41(8): 633–639 DOI:10.1016/j.compbiomed.2011.05.014
46. Wu Z H, Lai Y X, Xia Y, Wu D, Yao D Z. Stimulator selection in SSVEP-based BCI. Medical Engineering & Physics, 2008, 30(8): 1079–1088 DOI:10.1016/j.medengphy.2008.01.004
47. Stamps K, Hamam Y. Towards inexpensive BCI control for wheelchair navigation in the enabled environment—A hardware survey. Brain Informatics, 2010, 336–345 DOI:10.1007/978-3-642-15314-3_32
48. Xu M P, Xiao X L, Wang Y J, Qi H Z, Jung T P, Ming D. A brain-computer interface based on miniature-event-related potentials induced by very small lateral visual stimuli. IEEE Transactions on Biomedical Engineering, 2018, 65(5): 1166–1175 DOI:10.1109/tbme.2018.2799661
49. Diez P F, Mut V A, Laciar E, Perona E M A. Mobile robot navigation with a self-paced brain-computer interface based on high-frequency SSVEP. Robotica, 2014, 32(5): 695–709 DOI:10.1017/s0263574713001021
50. Farwell L A, Donchin E. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical Neurophysiology, 1988, 70(6): 510–523 DOI:10.1016/0013-4694(88)90149-6
51. Nawroj A I, Wang S Y, Yu Y C, Gabel L. A brain-computer interface for robot navigation. In: 2012 38th Annual Northeast Bioengineering Conference (NEBEC). Philadelphia, PA, USA, IEEE, 2012, 15–16 DOI:10.1109/nebc.2012.6206941
52. Escolano C, Antelis J M, Minguez J. A telepresence mobile robot controlled with a noninvasive brain-computer interface. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2012, 42(3): 793–804 DOI:10.1109/tsmcb.2011.2177968

53.

Ron-Angevin R, Díaz-Estrella A, Velasco-Alvarez F. A two-class brain computer interface to freely navigate through virtual worlds. Biomedizinische Technik. Biomedical Engineering, 2009, 54(3): 126–133 DOI:10.1515/bmt.2009.014

54.

Edlinger G, Holzner C, Guger C, Groenegress C, Slater M. Brain-computer interfaces for goal orientated control of a virtual smart home environment. In: 2009 4th International IEEE/EMBS Conference on Neural Engineering. Antalya, Turkey, IEEE, 2009, 463–465 DOI:10.1109/ner.2009.5109333

55.

Gentiletti G G, Gebhart J G, Acevedo R C, Yáñez-Suárez O, Medina-Bañuelos V. Command of a simulated wheelchair on a virtual environment using a brain-computer interface. IRBM, 2009, 30(5/6): 218–225 DOI:10.1016/j.irbm.2009.10.006

56.

Jayaram V, Alamgir M, Altun Y, Scholkopf B, Grosse-Wentrup M. Transfer learning in brain-computer interfaces. IEEE Computational Intelligence Magazine, 2016, 11(1): 20–31 DOI:10.1109/mci.2015.2501545

57.

Kindermans P J, Schreuder M, Schrauwen B, Müller K R, Tangermann M. True zero-training brain-computer interfacing: an online study. PLoS One, 2014, 9(7): e102504 DOI:10.1371/journal.pone.0102504

58.

Geng T, Dyson M, Tsui C S, Gan J Q. A 3-class asynchronous BCI controlling A simulated mobile robot. In: 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Lyon, France, IEEE, 2007, 2524–2527 DOI:10.1109/iembs.2007.4352842

59.

Velasco-Álvarez F, Ron-Angevin R. Asynchronous brain-computer interface to navigate in virtual environments using one motor imagery. Bio-Inspired Systems: Computational and Ambient Intelligence, 2009, 698–705 DOI:10.1007/978-3-642-02478-8_87

60.

Leeb R, Lee F, Keinrath C, Scherer R, Bischof H, Pfurtscheller G. Brain-computer communication: motivation, aim, and impact of exploring a virtual apartment. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2007, 15(4): 473–482 DOI:10.1109/tnsre.2007.906956

61.

Friedman D, Leeb R, Guger C, Steed A, Pfurtscheller G, Slater M. Navigating virtual reality by thought: what is it like? Presence: Teleoperators and Virtual Environments, 2007, 16(1): 100–110 DOI:10.1162/pres.16.1.100

62.

Zich C, Debener S, Kranczioch C, Bleichner M G, Gutberlet I, De Vos M. Real-time EEG feedback during simultaneous EEG-fMRI identifies the cortical signature of motor imagery. NeuroImage, 2015, 114, 438–447 DOI:10.1016/j.neuroimage.2015.04.020

63.

Jeannerod M. Mental imagery in the motor context. Neuropsychologia, 1995, 33(11): 1419–1432 DOI:10.1016/0028-3932(95)00073-c

64.

Pineda J A, Silverman D S, Vankov A, Hestenes J. Learning to control brain rhythms: making a brain-computer interface possible. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2003, 11(2): 181–184 DOI:10.1109/tnsre.2003.814445

65.

Bayliss J D. Use of the evoked potential P3 component for control in a virtual apartment. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2003, 11(2): 113–116 DOI:10.1109/tnsre.2003.814438

66.

Suh D, Cho H S, Goo J, Park K S, Hahn M. Virtual navigation system for the disabled by motor imagery. Advances in Computer, Information, and Systems Sciences, and Engineering. Springer, Dordrecht, 2006, 143–148 DOI:10.1007/1-4020-5261-8_24

67.

Leeb R, Friedman D, Müller-Putz G R, Scherer R, Slater M, Pfurtscheller G. Self-paced (asynchronous) BCI control of a wheelchair in virtual environments: a case study with a tetraplegic. Computational Intelligence and Neuroscience, 2007, 1–8 DOI:10.1155/2007/79642

68.

Tsui C S L, Gan J Q. Asynchronous BCI control of a robot simulator with supervised online training. Intelligent Data Engineering and Automated Learning, Springer, Berlin, Heidelberg, 2007, 125–134 DOI:10.1007/978-3-540-77226-2_14

69.

Fujisawa J, Touyama H, Hirose M. EEG-based navigation of immersing virtual environment using common spatial patterns. In: 2008 IEEE Virtual Reality Conference. Reno, NV, USA, IEEE, 2008, 251–252 DOI:10.1109/vr.2008.4480786

70.

Scherer R, Lee F, Schlogl A, Leeb R, Bischof H, Pfurtscheller G. Toward self-paced brain-computer communication: navigation through virtual worlds. IEEE Transactions on Biomedical Engineering, 2008, 55(2): 675–682 DOI:10.1109/tbme.2007.903709

71.

Lu W, Wei Y N, Yuan J X, Deng Y M, Song A G. Tractor assistant driving control method based on EEG combined with RNN-TL deep learning algorithm. IEEE Access, 2020, 8, 163269–163279 DOI:10.1109/access.2020.3021051

72.

Eleni A. Control of medical robotics and neurorobotic prosthetics by noninvasive Brain-Robot Interfaces via EEG and RFID technology. In: 2008 8th IEEE International Conference on BioInformatics and BioEngineering. Athens, Greece, IEEE, 2008, 1–4 DOI:10.1109/bibe.2008.4696838

73.

Wang F, Zhou C C, Hao X, Wang S, Yang G D. BCI control system for humanoid robot based on motor imaginary. In: 2013 25th Chinese Control and Decision Conference (CCDC). Guiyang, China, IEEE, 2013, 5140–5143 DOI:10.1109/ccdc.2013.6561868

74.

Chin Z Y, Ang K K, Wang C C, Guan C T. Navigation in a virtual environment using multiclass motor imagery Brain-Computer Interface. In: 2013 IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB). Singapore, IEEE, 2013, 152–157 DOI:10.1109/ccmb.2013.6609179

75.

Scherer R, Friedrich E C V, Allison B, Pröll M, Chung M, Cheung W, Neuper C. Non-invasive brain-computer interfaces: Enhanced gaming and robotic control. In: International Work-Conference on Artificial Neural Networks. Springer, Berlin, Heidelberg, 2011, 362–369

76.

Velasco-Álvarez F, Ron-Angevin R, da Silva-Sauer L, Sancha-Ros S, Blanca-Mena M J. Audio-cued SMR brain-computer interface to drive a virtual wheelchair. In: Advances in Computational Intelligence. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, 337–344 DOI:10.1007/978-3-642-21501-8_42

77.

Müller S M T, Diez P F, Bastos-Filho T F, Sarcinelli-Filho M, Mut V, Laciar E, Avila E. Robotic wheelchair commanded by people with disabilities using low/high-frequency SSVEP-based BCI. In: World Congress on Medical Physics and Biomedical Engineering. Toronto, Canada, 2015, 1177–1180 DOI:10.1007/978-3-319-19387-8_285

78.

Diez P F, Mut V A, Laciar E, Perona E M A. Mobile robot navigation with a self-paced brain-computer interface based on high-frequency SSVEP. Robotica, 2014, 32(5): 695–709 DOI:10.1017/s0263574713001021

79.

Chen S C, Chen Y J, Zaeni I A E, Wu C M. A single-channel SSVEP-based BCI with a fuzzy feature threshold algorithm in a maze game. International Journal of Fuzzy Systems, 2017, 19(2): 553–565 DOI:10.1007/s40815-016-0289-3

80.

Liu Y L, Li J J, Li Z J. An indoor navigation control strategy for a brain-actuated mobile robot. In: 2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM). Singapore, Singapore, IEEE, 2018, 13–18 DOI:10.1109/icarm.2018.8610705

81.

Yuan Y X, Li Z J, Liu Y L. Brain teleoperation of a mobile robot using deep learning technique. In: 2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM). Singapore, Singapore, IEEE, 2018, 54–59 DOI:10.1109/icarm.2018.8610711

82.

Liu Y, Li Z, Zhang T, Zhao S. Brain-robot interface-based navigation control of a mobile robot in corridor environments. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2018, 50(8): 3047–3058 DOI:10.1109/TSMC.2018.2833857

83.

Farmaki C, Krana M, Pediaditis M, Spanakis E, Sakkalis V. Single-channel SSVEP-based BCI for robotic car navigation in real world conditions. In: 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE). Athens, Greece, IEEE, 2019, 638–643 DOI:10.1109/bibe.2019.00120

84.

Koo B, Lee H G, Nam Y, Choi S. Immersive BCI with SSVEP in VR head-mounted display. In: 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). Milan, Italy, IEEE, 2015, 1103–1106 DOI:10.1109/embc.2015.7318558

85.

Bevilacqua V, Tattoli G, Buongiorno D, Loconsole C, Leonardis D, Barsotti M, Frisoli A, Bergamasco M. A novel BCI-SSVEP based approach for control of walking in Virtual Environment using a Convolutional Neural Network. In: 2014 International Joint Conference on Neural Networks (IJCNN). Beijing, China, IEEE, 2014, 4121–4128 DOI:10.1109/ijcnn.2014.6889955

86.

Diez P F, Mut V A, Avila Perona E M, Laciar Leber E. Asynchronous BCI control using high-frequency SSVEP. Journal of NeuroEngineering and Rehabilitation, 2011, 8(1): 1–9 DOI:10.1186/1743-0003-8-39

87.

Chung M, Cheung W, Scherer R, Rao R P N. Towards hierarchical BCIs for robotic control. In: 2011 5th International IEEE/EMBS Conference on Neural Engineering. Cancun, Mexico, IEEE, 2011, 330–333 DOI:10.1109/ner.2011.5910554

88.

Legény J, Abad R V, Lécuyer A. Navigating in virtual worlds using a self-paced SSVEP-based brain-computer interface with integrated stimulation and real-time feedback. Presence: Teleoperators and Virtual Environments, 2011, 20(6): 529–544 DOI:10.1162/pres_a_00075

89.

Annese V F, Mezzina G, De Venuto D. Wireless Brain-computer interface for wheelchair control by using fast machine learning and real-time hyper-dimensional classification. In: International Conference on Web Engineering. Springer, Cham, 2017, 61–74 DOI:10.1007/978-3-319-74433-9_5

90.

Yu Y C, Nawroj A, Wang S Y, Gabel L. Mobile robot navigation through a brain computer interface. In: 2012 IEEE Signal Processing in Medicine and Biology Symposium (SPMB). New York, NY, USA, IEEE, 2012, 1–5 DOI:10.1109/spmb.2012.6469469

91.

Curtin A, Ayaz H, Liu Y C, Shewokis P A, Onaral B. A P300-based EEG-BCI for spatial navigation control. In: 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2012, 3841–3844 DOI:10.1109/embc.2012.6346805

92.

Lopes A C, Pires G, Vaz L, Nunes U. Wheelchair navigation assisted by Human-Machine shared-control and a P300-based Brain Computer Interface. In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems. San Francisco, CA, USA, IEEE, 2011, 2438–2444 DOI:10.1109/iros.2011.6094748

93.

Cherubini A, Oriolo G, Macri F, Aloise F, Babiloni F, Cincotti F, Mattia D. Development of a multimode navigation system for an assistive robotics project. In: Proceedings 2007 IEEE International Conference on Robotics and Automation. Rome, Italy, IEEE, 2007, 2336–2342 DOI:10.1109/robot.2007.363668

94.

Diez P F, Mut V A, Laciar E, Perona E M A. Mobile robot navigation with a self-paced brain-computer interface based on high-frequency SSVEP. Robotica, 2014, 32(5): 695–709 DOI:10.1017/s0263574713001021

95.

Yuan Y X, Su W B, Li Z J, Shi G M. Brain-computer interface-based stochastic navigation and control of a semiautonomous mobile robot in indoor environments. IEEE Transactions on Cognitive and Developmental Systems, 2019, 11(1): 129–141 DOI:10.1109/tcds.2018.2885774

96.

Zhang Z W, Wang W J, Song P P, Sheng S L, Xie L Y, Duan F, GuanSoo Y, Odagaki M. Design of an SSVEP-based BCI system with vision assisted navigation module for the cooperative control of multiple robots. In: 2017 IEEE 7th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER). Honolulu, HI, USA, IEEE, 2017, 558–563 DOI:10.1109/cyber.2017.8446149

97.

Nourmohammadi A, Jafari M, Zander T O. A survey on unmanned aerial vehicle remote control using brain-computer interface. IEEE Transactions on Human-Machine Systems, 2018, 48(4), 337–348 DOI:10.1109/thms.2018.2830647

98.

Akce A, Johnson M, Bretl T. Remote teleoperation of an unmanned aircraft with a brain-machine interface: Theory and preliminary results. In: 2010 IEEE International Conference on Robotics and Automation. Anchorage, AK, USA, IEEE, 2010, 5322–5327 DOI:10.1109/robot.2010.5509671

99.

LaFleur K, Cassady K, Doud A, Shades K, Rogin E, He B. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface. Journal of Neural Engineering, 2013, 10(4): 046003 DOI:10.1088/1741-2560/10/4/046003

100.

Kim B H, Kim M, Jo S. Quadcopter flight control using a low-cost hybrid interface with EEG-based classification and eye tracking. Computers in Biology and Medicine, 2014, 51, 82–92 DOI:10.1016/j.compbiomed.2014.04.020

101.

Royer A S, Doud A J, Rose M L, He B. EEG control of a virtual helicopter in 3-dimensional space using intelligent control strategies. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2010, 18(6): 581–589 DOI:10.1109/tnsre.2010.2077654

102.

Doud A J, Lucas J P, Pisansky M T, He B. Continuous three-dimensional control of a virtual helicopter using a motor imagery based brain-computer interface. PLoS One, 2011, 6(10): e26322 DOI:10.1371/journal.pone.0026322

103.

Alrajhi W, Hosny M, Al-Wabil A, Alabdulkarim A. Human factors in the design of BCI-controlled wheelchairs. In: Human-Computer Interaction. Advanced Interaction Modalities and Techniques. Cham: Springer International Publishing, 2014, 513–522 DOI:10.1007/978-3-319-07230-2_49

104.

Al Zayer M, MacNeilage P, Folmer E. Virtual locomotion: a survey. IEEE Transactions on Visualization and Computer Graphics, 2020, 26(6): 2315–2334 DOI:10.1109/tvcg.2018.2887379

105.

Long J Y, Li Y Q, Wang H T, Yu T Y, Pan J H, Li F. A hybrid brain computer interface to control the direction and speed of a simulated or real wheelchair. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2012, 20(5): 720–729 DOI:10.1109/tnsre.2012.2197221

106.

Cao L, Li J, Ji H F, Jiang C J. A hybrid brain computer interface system based on the neurophysiological protocol and brain-actuated switch for wheelchair control. Journal of Neuroscience Methods, 2014, 229, 33–43 DOI:10.1016/j.jneumeth.2014.03.011

107.

Li J, Ji H, Cao L, Zang D, Gu R, Xia B, Wu Q. Evaluation and application of a hybrid brain computer interface for real wheelchair parallel control with multi-degree of freedom. International Journal of Neural Systems, 2014, 24(4): 1450014 DOI:10.1142/s0129065714500142

108.

Fernández-Rodríguez Á, Velasco-Álvarez F, Bonnet-Save M, Ron-Angevin R. Evaluation of switch and continuous navigation paradigms to command a brain-controlled wheelchair. Frontiers in Neuroscience, 2018, 12, 438 DOI:10.3389/fnins.2018.00438

109.

Pfurtscheller G, Allison B Z, Brunner C, Bauernfeind G, Solis-Escalante T, Scherer R, Zander T O, Mueller-Putz G, Neuper C, Birbaumer N. The hybrid BCI. Frontiers in Neuroscience, 2010, 4, 30 DOI:10.3389/fnpro.2010.00003

110.

Li Y Q, Pan J H, Wang F, Yu Z L. A hybrid BCI system combining P300 and SSVEP and its application to wheelchair control. IEEE Transactions on Biomedical Engineering, 2013, 60(11): 3156–3166 DOI:10.1109/tbme.2013.2270283

111.

Pfurtscheller G, Neuper C, Brunner C, da Silva F L. Beta rebound after different types of motor imagery in man. Neuroscience Letters, 2005, 378(3): 156–159 DOI:10.1016/j.neulet.2004.12.034

112.

Naito E, Kochiyama T, Kitada R, Nakamura S, Matsumura M, Yonekura Y, Sadato N. Internally simulated movement sensations during motor imagery activate cortical motor areas and the cerebellum. The Journal of Neuroscience, 2002, 22(9): 3683–3691