Introduction

Imagine being able to control a robot or other machine using only your thoughts. This fanciful notion has long captured the imagination of humankind and, within the past decade, the ability to actually bypass conventional channels of communication (i.e., muscles or speech) between a user's brain and a computer has become a demonstrated reality. The field of brain–computer interfaces (BCIs) has already produced several early prototypes (Nicolelis, 2001; Millán, 2002; Wolpaw et al., 2002; Wickelgren, 2003; Allison et al., 2007; Dornhege et al., 2007). A BCI monitors the user's brain activity and translates their intentions into commands without activating any muscle or peripheral nerve. BCI as a proof-of-concept has already been demonstrated in several contexts: driving a robot or wheelchair (Millán et al., 2004a,b, 2009), operating prosthetic devices (Müller-Putz et al., 2005, 2006; Pfurtscheller et al., 2000, 2003), selecting letters from a virtual keyboard (Birbaumer et al., 1999; Donchin et al., 2000; Millán, 2003; Obermaier et al., 2003; Millán et al., 2004a; Scherer et al., 2004; Müller and Blankertz, 2006; Sellers et al., 2006; Williamson et al., 2009), internet browsing (Karim et al., 2006; Bensch et al., 2007; Mugler et al., 2008), navigating in virtual realities (Bayliss, 2003; Leeb et al., 2007a,b), and playing games (Millán, 2003; Krepki et al., 2007; Nijholt et al., 2008b; Tangermann et al., 2008). Such a BCI is a natural way to augment human capabilities by providing a new interaction link with the outside world, and it is particularly relevant as an aid for disabled people.

The central tenet of a BCI is the capability to distinguish different patterns of brain activity, each associated with a particular intention or mental task. Hence adaptation is a key component of a BCI, because users must learn to modulate their brainwaves so as to generate distinct brain patterns. In some cases, user training is complemented with machine learning techniques to discover the individual brain patterns characterizing the mental tasks executed by the user.

With the field now entering a more mature phase of development, the time is ripe to focus on the development of practical BCI applications aimed at improving the lives of physically disabled individuals. Furthermore, if our goal is to offer solutions to these people, and to augment their capabilities, then BCIs must be combined with existing assistive technologies (AT), especially those they already utilize. Most BCIs for human subjects rely on non-invasive electroencephalogram (EEG) signals, i.e., the electrical brain activity recorded from electrodes placed on the scalp, because EEG is a practical modality if we want to bring BCI technology to a large population. For this reason, in this review, we focus on EEG-based BCIs and how to combine them with AT. We also identify and review some principles and research challenges that we consider fundamental to bring BCI technology out of the lab. These principles include the development of hybrid BCI (hBCI) architectures, the design of user–machine adaptation algorithms, the exploitation of users' mental states for BCI reliability and confidence measures, the incorporation of principles from human–computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology, including better EEG devices.
Note, however, that most of the principles we put forward here can also be applied to other types of BCI, either invasive (single-unit activity: Carmena et al., 2003; Hochberg et al., 2006; electrocorticogram: Leuthardt et al., 2004; Pistohl et al., 2008) or non-invasive (MEG: Kauhanen et al., 2006; Mellinger et al., 2007; fMRI: Weiskopf et al., 2004; Yoo et al., 2004; NIRS: Coyle et al., 2007; Sitaram et al., 2007). In this paper, we identify four application areas where BCI assistive technology can have a real, measurable impact for people with motor disabilities, namely "Communication and Control", "Motor Substitution", "Entertainment", and "Motor Recovery".

The remainder of the paper is organized as follows. The rest of this section is devoted to the research challenges currently faced by BCI-based assistive technology. Then, in Sections "Communication and Control", "Motor Substitution", "Entertainment", and "Motor Recovery", each application area is discussed and its current state of the art is reviewed. Finally, in Section "Summary", we summarize the main message of this review paper.

Hybrid BCI

What kind of assistance can BCI actually offer to disabled persons? Despite progress in AT, there is still a large number of people with severe motor disabilities who cannot fully benefit from AT due to their limited access to current assistive products (APs). For them, BCI offers a potential solution. However, notwithstanding the impressive demonstrations of BCI technology around the world, today's state of the art is such that BCI alone cannot enable patients to interact with and control assistive devices over long periods of time without expert assistance. This does not mean that there is no place for BCI; the solution is to use BCI as an additional channel. Such a hybrid approach, where conventional APs (operated using some residual muscular functionality) are enhanced by BCI technology, leads to what we call a hybrid BCI (hBCI).

As a general definition, a hBCI is a combination of different signals including at least one BCI channel. Thus, it could be a combination of two BCI channels but, more importantly, also a combination of a BCI and other biosignals [such as electromyographic (EMG) activity] or special AT input devices (e.g., joysticks, switches). The control channels (BCI and other modalities) can operate different parts of the assistive device, or all of them can be combined to allow users to smoothly switch from one control channel to the other depending on their preference and performance. An example of the former case is a neuroprosthesis that uses residual movements for reaching objects and BCI for grasping. In the latter case, a muscular dystrophy patient may prefer to speak in the morning and switch to BCI in the afternoon, when fatigue prevents him from speaking intelligibly. Moreover, in the case of progressive loss of muscular activity [as in muscular dystrophy, amyotrophic lateral sclerosis (ALS), and spinal muscular atrophies], early BCI training, while the user can still exploit her/his residual motor functions, will increase long-term use of APs by smoothing the transition between the hybrid assistive device and pure BCI once muscular activity is too weak to operate the APs.

An effective way for a hBCI to combine all the control channels is to merge their individual decisions (i.e., the estimations of the user's intent) by weighting the contribution of each modality.
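To make this decision-level fusion concrete, here is a minimal sketch in Python. The channel names ("eeg", "emg"), the weight values, and the probability vectors are hypothetical placeholders; how such weights could actually be estimated is discussed next.

```python
import numpy as np

def fuse_decisions(channel_probs, weights):
    """Merge per-channel decisions by weighting each modality.

    channel_probs: dict mapping channel name -> probability vector
        over the possible commands, as estimated by that channel.
    weights: dict mapping channel name -> reliability weight >= 0.
    Returns the winning command index and the fused distribution.
    """
    fused = sum(weights[ch] * np.asarray(p, dtype=float)
                for ch, p in channel_probs.items())
    fused /= sum(weights.values())  # renormalize to a distribution
    return int(np.argmax(fused)), fused

# Hypothetical snapshot: EEG and EMG channels vote over three commands;
# the EMG channel is currently judged more reliable.
command, dist = fuse_decisions(
    {"eeg": [0.5, 0.3, 0.2], "emg": [0.2, 0.7, 0.1]},
    {"eeg": 0.4, "emg": 0.6},
)
```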
These weights reflect the reliability of the channel, or the confidence/certainty the system has regarding its output. The weights can be estimated from supervision signals such as mental states [e.g., fatigue, error potentials (ErrPs)] and physiological parameters (e.g., muscular fatigue). Another way to derive the weights is to analyze the performance of the individual channels in achieving the task at hand (e.g., their stability over time).

There exist a few examples of hybrid BCIs. Some are based on multiple brain signals. One such hBCI is the combination of a motor imagery (MI)-based BCI with ErrP detection and correction of false mental commands (Ferrez and Millán, 2008b). A second example is the combination of MI with steady-state visual evoked potentials (SSVEPs), explored in some offline studies (Allison et al., 2010; Brunner et al., 2010). Other hBCIs combine brain signals with other biosignals. For instance, Scherer et al. (2007b) combined a standard SSVEP BCI with an on/off switch controlled by heart rate variation; here the focus is on giving users the ability to use the BCI only when they want or need to. Alternatively, and following the idea of enhancing people's residual capabilities with a BCI, Leeb et al. (2010b) fused EMG with EEG activity, so that subjects could achieve good control of their hBCI independently of their level of muscular fatigue. Finally, EEG signals could be combined with eye gaze (Danoczy et al., 2008). Pfurtscheller et al. (2010) have recently reviewed preliminary attempts, and feasibility studies, to develop hBCIs combining multiple brain signals alone or with other biosignals. Hybrid BCIs could also exploit several brain imaging techniques simultaneously, i.e., EEG together with MEG, fMRI, NIRS, and even TMS. As mentioned above, our focus in this review paper is on principles to develop hBCIs that, when coupled with the existing AT used by disabled people, can effectively improve their quality of life.

Adaptation

The kind of switch mentioned above offers a first level of self-adaptation, in that the user can dynamically choose the best interaction channel at any time. To the best of our knowledge, this is an aspect of BCI that has not been addressed before. A second level of self-adaptation concerns the choice of the EEG phenomena that each user best controls, which can range from evoked potentials such as the P300 (Farwell and Donchin, 1988; Nijboer et al., 2008) or SSVEP (Sutter, 1992; Gao et al., 2003; Brunner et al., 2010) to spontaneous signals such as slow cortical potentials (Birbaumer et al., 1999) and rhythmic activity (Babiloni et al., 2000; Wolpaw et al., 2000; Pfurtscheller and Neuper, 2001; Millán et al., 2002; Blankertz et al., 2007). This necessitates the development of novel training protocols to determine the optimal EEG phenomenon for each user, building upon work on psychological factors in BCI (Neumann and Kübler, 2003; Nijboer et al., 2007).

Still another aspect of self-adaptation is the need for online calibration of the decoding module (which translates EEG activity into external actions) to cope with the inherent non-stationarity of EEG signals. Recently, a number of papers have studied how EEG signals change during BCI sessions (Shenoy et al., 2006; Sugiyama et al., 2007; Vidaurre et al., 2008; von Bünau et al., 2009). This non-stationarity can be addressed in three different ways. First, by rejecting the variation of the signals and retaining the stationary part, as in Kawanabe et al. (2009) and von Bünau et al. (2009).
These works describe different methods for designing BCI systems that are robust against non-stationarities. Second, by choosing features of the EEG that carry discriminative information and, more importantly, that are stable over time (Galán et al., 2007, 2008). Third, by applying adaptation techniques. This adaptation can be carried out in different modules of the BCI: in the feature extraction [for example, using adaptive autoregressive coefficients or time-domain parameters (Schlögl, 2000; Vidaurre et al., 2009)], in the spatial filtering (Zhang et al., 2007; Vidaurre and Blankertz, 2010), or in the classifier. Adaptation of any of these modules can be done in a supervised way (when the task to perform is known beforehand) or in an unsupervised manner (no class labels are used to adapt the system). Although not very common, supervised adaptation of the classifier has been explored in several studies (Millán, 2004; Buttfield et al., 2006; Shenoy et al., 2006; Vidaurre et al., 2006; Millán et al., 2007). Recently, some groups have also performed unsupervised adaptation of the features (Schlögl, 2000; Vidaurre et al., 2009) and of the classifier. Unsupervised classifier adaptation has been applied to P300 data (Lu et al., 2009) and to MI data (Blumberg et al., 2007; Sugiyama et al., 2007; Vidaurre et al., 2008).

Regardless of whether adaptivity is applied in one or more modules of the BCI, it allows the simultaneous co-adaptation of the BCI to the user and vice versa. A recent study (Vidaurre and Blankertz, 2010) with healthy volunteers, who either had no BCI experience or had previously been unable to achieve the level of control sufficient for a communication application (70% accuracy in a two-class system), has shown the advantage of this approach: during a BCI session of approximately 2 h, some of these users were able to develop sensorimotor rhythm (SMR) control. This is a big step forward in BCI research, because at least 25–30% of all users are otherwise unable to use a BCI with a sufficient level of control (Guger et al., 2003). We hypothesize that the selection of stable discriminant features and BCI adaptation could facilitate and accelerate subject training. Indeed, these techniques increase the likelihood of providing stable feedback to the user, a necessary condition for people to learn to modulate their brain activity.
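As a concrete illustration of the unsupervised case discussed above, the following sketch adapts the bias of a two-class linear classifier by tracking the running mean of the incoming features, in the spirit of the pooled-mean adaptation studied by Vidaurre and colleagues. The learning rate and the simple exponential update rule are simplifying assumptions for illustration, not the exact published algorithms.

```python
import numpy as np

class AdaptiveLinearClassifier:
    """Two-class linear classifier with unsupervised bias adaptation.

    Only the bias is adapted, by tracking the global (class-independent)
    mean of the features; no labels are needed, so the scheme can run
    online and compensate for slow drifts in the EEG feature distribution.
    """

    def __init__(self, w, mu0, eta=0.05):
        self.w = np.asarray(w, dtype=float)     # weights from calibration
        self.mu = np.asarray(mu0, dtype=float)  # running global feature mean
        self.eta = eta                          # learning rate (placeholder)

    def classify(self, x):
        x = np.asarray(x, dtype=float)
        # Unsupervised update: pull the mean toward the unlabeled trial.
        self.mu = (1.0 - self.eta) * self.mu + self.eta * x
        # Recompute the bias so the separating hyperplane stays centered
        # on the (possibly drifting) feature distribution.
        bias = -self.w @ self.mu
        return 1 if self.w @ x + bias > 0.0 else 0
```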
Human–computer interaction

A related issue is how to improve the performance and reliability of current BCIs, which are characterized by noisy and low-bit-rate outputs. A promising possibility is the use of modern HCI principles that explicitly take into account the noisy and lagged nature of BCI control signals, adjusting the dynamics of the interaction as a function of the reliability of the user's control capabilities. Such an HCI approach can also include the ability to "degrade gracefully" as the inputs become increasingly noisy (Williamson, 2006). Human–computer interaction principles can lead to a new generation of BCI assistive devices with more suitable and comfortable interfaces that speed up interaction, as demonstrated by the recent virtual keyboard "Hex-O-Spell" (Müller and Blankertz, 2006; Williamson et al., 2009). Regarding interaction with and control of complex devices like neuroprostheses and mobile robots (or wheelchairs), it has recently been shown how shared autonomy techniques can drastically enhance the performance and robustness of a brain-controlled wheelchair (Vanacker et al., 2007; Galán et al., 2008; Millán et al., 2009).

In a shared autonomy framework, the outputs of the BCI are combined with information about the environment (obstacles perceived by the robot's sensors) and about the robot itself (position and velocities) to better estimate the user's intent. Some broader issues in human–machine interaction are discussed in Flemisch et al. (2003), where the H-Metaphor is introduced, suggesting that interaction should be more like riding a horse, with notions of "loosening the reins" to allow the system more autonomy. Shared autonomy (or shared control) is a key component of future hybrid BCIs, as it will shape the closed-loop dynamics between the user and the brain-actuated device such that tasks can be performed as easily as possible. As mentioned above, the idea is to integrate the user's mental commands with the contextual information gathered by the intelligent brain-actuated device, so as to help the user reach the target or to override the mental commands in critical situations. In other words, the actual commands sent to the device and the feedback to the user adapt to the context and the inferred goals. In this way, shared control can make target-oriented control easier, can inhibit pointless mental commands, and can help determine meaningful motion sequences (e.g., for a neuroprosthesis). Examples of shared control applications include neuroprostheses such as robots and wheelchairs (Millán et al., 2004b, 2009; Vanacker et al., 2007; Galán et al., 2008; Tonin et al., 2010), smart virtual keyboards (Müller and Blankertz, 2006; Wills and MacKay, 2006; Williamson et al., 2009), and other AT software with predictive capabilities.
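A minimal sketch of this integration might look as follows. The discrete command set, the sensor-derived feasibility scores, and the multiplicative veto rule are illustrative assumptions, not the controllers used in the systems cited above.

```python
import numpy as np

def shared_control(p_intent, feasibility):
    """Combine the BCI's intent estimate with environmental context.

    p_intent: probability vector over candidate commands (e.g., turn
        left, go forward, turn right) decoded from the user's EEG.
    feasibility: per-command scores in [0, 1] derived from the robot's
        sensors (0 = blocked by an obstacle, 1 = completely free).
    Returns the index of the command to execute, or None to stop when
    every command is vetoed (a critical situation).
    """
    combined = np.asarray(p_intent, float) * np.asarray(feasibility, float)
    if combined.sum() == 0.0:
        return None  # override the mental command: all paths blocked
    return int(np.argmax(combined))

# Hypothetical step: the user's 'forward' intent is vetoed by an
# obstacle straight ahead, so the device turns left instead.
cmd = shared_control([0.2, 0.6, 0.2], [0.9, 0.0, 0.4])  # -> 0 (left)
```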
Improving the user interface is not a new problem; indeed, issues such as error rate and the time taken to make a selection are well known in the general area of AT, where the emphasis has mainly been on improving controllability and accuracy. Applications designed for BCI should be able to use different methods of BCI control, account for individual differences, optimize the user interface, and incorporate artificial intelligence techniques. Simulation techniques can provide helpful information about the expected usability of a system. For instance, Biswas and Robinson (2007) describe a simulator that incorporates models of the application, the interface, and the user to predict the performance of assistive technology devices.

Finally, until fairly recently, the focus of software design and evaluation has been on usability and functionality, the so-called instrumental qualities. Current trends emphasize non-instrumental aspects of interface design and evaluation, which can be separated into three categories: hedonics (concerned with [un]pleasant sensations), esthetics, and pleasure/fun (Mahlke, 2005). This development might be seen as establishing the basics first before fine-tuning the details later. Nevertheless, Tractinsky et al. (2000) demonstrate that the perceived usability of a system increases with its visual esthetics, even when its actual usability remains unchanged. A valid question to ask, then, is whether BCI application design and evaluation can and should follow the same pattern of developing usable systems first before "targeting" the non-instrumental qualities. Given the current limitations of BCIs, how much of the existing knowledge of HCI design and evaluation can be applied to BCIs? It depends on the purpose of the application and how much control is required for the application to be used. Computer applications for BCI might be divided into three broad categories: programs for communication, tools for functional control, and entertainment applications. Entertainment programs can further be subdivided into games, tools for creativity, and interactive media. The focus of evaluation for communication and functional applications should be on usability and functionality, while the focus for entertainment applications should be on pleasure and entertainment.

Mental states

A fourth area where BCI assistive technology can benefit from recent research is the recognition of the user's mental states (mental workload, stress level, tiredness, attention level) and cognitive processes (awareness of errors made by the BCI), which could facilitate interaction and reduce the user's cognitive effort by making the BCI assistive device react to the user. This is again another aspect of self-adaptation: for instance, in the case of high mental workload or stress, the dynamics and complexity of the interaction can be simplified, or a switch can be triggered to suspend brain interaction in favor of muscle-based interaction (see above). As another example, upon detection of excessive fatigue, the mobile robot would take over complete control and move autonomously to its base station close to the user's bed. Pioneering work in this area deals with the recognition from EEG of mental states (such as mental workload: Kohlmorgen et al., 2007; attention levels: Hamadicharef et al., 2009; and fatigue: Trejo et al., 2005) and cognitive processes (such as error-related potentials: Blankertz et al., 2003; Ferrez and Millán, 2005, 2008a,b; and anticipation: Gangadhar et al., 2009). Regarding the latter, Ferrez and Millán (2008a,b) have shown that errors made by the BCI can be reliably recognized and corrected, yielding significant improvements in performance. Also, as mentioned before, mental states can provide useful information to estimate the reliability of the individual channels. For instance, in the case of a high attention level we could assign a large weight to the EEG channel, while this weight would be small in the case of high mental workload. Likewise, repeated error-related potentials should reduce the weight of the channels that mainly contributed to the estimation of the user's intent.
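Tying this back to the fusion weights introduced earlier, a sketch of such mental-state-driven weighting could look as follows. The "eeg" channel name, the thresholds, and the scaling factors are illustrative assumptions only.

```python
def update_channel_weights(weights, attention, workload, errp_rate):
    """Adjust channel reliability weights from monitored mental states.

    weights: dict of channel name -> weight (see the fusion sketch).
    attention, workload: scores in [0, 1] decoded from EEG.
    errp_rate: recent rate of detected error-related potentials.
    All thresholds and factors below are illustrative placeholders.
    """
    w = dict(weights)
    if attention > 0.7:   # attentive user: trust the EEG channel more
        w["eeg"] *= 1.2
    if workload > 0.7:    # overloaded user: trust the EEG channel less
        w["eeg"] *= 0.8
    # Repeated ErrPs indicate the EEG channel is driving wrong decisions,
    # so shrink its weight in proportion to the recent error rate.
    w["eeg"] *= 1.0 - min(errp_rate, 0.5)
    total = sum(w.values())
    return {ch: v / total for ch, v in w.items()}  # renormalize
```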
New EEG devices

The fifth and final area of necessary progress concerns the development of a new class of BCI devices based on easy-to-use and esthetic EEG equipment. So far, laboratory experimentation has not required attention to issues such as portability, esthetic design, conformity, and certification. Many current BCI applications exist in the form of software running on a personal computer, but many users will not accept the burden of a desktop PC and its screen in order to use a BCI. In addition, there is a need for a common implementation architecture to facilitate commercial take-up, and the field is taking steps toward standardization in the design of BCIs (Cincotti et al., 2010). The merger of esthetic and engineering design is a key issue that any practical BCI for disabled people must address: users do not want to look unusual, so social acceptability is a key concern for them. For this reason, we expect new EEG technology based on dry electrodes and esthetic wireless helmets.

Different teams have recently developed prototypes of dry electrodes that overcome the need for gel, one of the main limitations of current EEG technology (Popescu et al., 2007). Moreover, companies like Quasar Inc. (San Diego, USA) (Sellers et al., 2009), Emotiv Systems Inc. (San Francisco, USA), NeuroSky Inc. (San Jose, USA) (Sullivan et al., 2008), and Starlab (Barcelona, Spain) (Ruffini et al., 2007) are now commercializing dry electrodes, mainly for gaming. Although some doubts exist about the kind of physiological signals these systems actually exploit for control, they are definitely pushing the field forward.

Discussion

In summary, the time is ripe to develop a new generation of hybrid BCI assistive technology for people with physical disabilities that will advance the state of the art in a number of ways:

- Conventional APs will be enhanced by BCI technology: the incorporation of a brain channel can provide an additional degree of freedom, enhance the robustness of the control signals by combining EEG with other AT, or be the only means of interaction.
- BCI will expand the range of opportunities available to AT teams worldwide for building flexible and personalized solutions for their clients' needs.
- BCI assistive devices will be endowed with novel self-adaptive capabilities: this will be achieved through the incorporation of fusion techniques for combining EEG with other signals, automatic choice of EEG phenomena, online adaptation to changing EEG signals, the use of modern HCI principles for shaping the interaction, and recognition of the user's mental states and cognitive processes.
- Brain–computer interaction will become more robust: the combination of EEG with other signals allows users to become more autonomous and to interact over long periods of time.
- Brain–computer interaction will increase its performance and reliability significantly: the use of modern HCI and shared autonomy principles will make this possible.
- Brain–computer interaction will reduce the user's cognitive effort: this will be possible because of the use of modern HCI principles as well as the recognition of the user's mental states and cognitive processes.
- Brain–computer interaction will become easier: the design of efficient training protocols will accelerate and improve users' mastery of BCI assistive technology and make it more intuitive; furthermore, the development of new electrodes and esthetic helmets will facilitate the operation of BCIs by laypeople.
- Novel BCI designs will promote shared standards for BCI assistive technology: the past lack of coordination in BCI research has thus far impeded the creation of a shared model and standards among BCI groups.