Assistive mobility

A second area where BCI technology can support motor substitution is in assisting users' mobility, either directly through brain-controlled wheelchairs (e.g., Millán et al., 2009) or by mentally driving a telepresence mobile robot – equipped with sensors for obstacle detection as well as with a camera and a screen – to join relatives and friends located elsewhere and participate in their activities (Tonin et al., 2010). Several commercial platforms already allow this kind of interaction: e.g., PeopleBot (Mobile Robots Inc., Amherst, USA), iRobot (iRobot Corp., Bedford, USA), and Robotino (Festo AG, Dietikon, Switzerland).

Underlying all assistive mobility scenarios is the issue of shared autonomy. The crucial design question for a shared control system is: who – the user, the machine, or both – gets control over the system, when, and to what extent? Several approaches have been developed, in particular for intelligent wheelchairs. Common to all of them is the presence of different assistance modes, which can be either different levels of autonomy or different algorithms for different maneuvers. Based on these modes, existing approaches fall into two categories. In the first, mode changes are triggered by a user's action, such as the operation of an extra switch or button. Smart wheelchairs of this category include SENARIO (Katevas et al., 1997), OMNI (Hoyer, 1995), MAid (Prassler et al., 2001), Wheelesley (Yanco, 1998), VAHM (Bourhis and Agostini, 1998), and SmartChair (Parikh et al., 2004). However, such explicit interventions can be difficult and tiring for users who already have problems operating a conventional interface; adding buttons or functionality for mode selection only makes the interface more complex to operate and less user-friendly. In the second category, mode changes are implicit: the shared control system automatically switches from one mode to another without any manual user intervention. The NavChair (Levine et al., 1999; Simpson and Levine, 1999) and the Bremen Autonomous Wheelchair (Röfer and Lankenau, 2000) are examples of this category. The problem with all these approaches, however, is that the switching is hard-coded and independent of the individual user and their specific impairment. An extensive literature overview of intelligent wheelchair projects can be found in Simpson (2005).

In the case of brain-controlled robots and wheelchairs, Millán's group has led the development of a shared autonomy approach, in the framework of the European MAIA project, that solves the two problems mentioned above. This approach estimates the user's mental intent asynchronously and provides appropriate assistance for navigating the wheelchair, and it has been shown to drastically improve BCI driving performance (Vanacker et al., 2007; Galán et al., 2008; Millán et al., 2009; Tonin et al., 2010).
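To make the shared control idea concrete, the following is a minimal sketch of how a controller might blend a decoded user command with an obstacle-avoidance command, shifting authority implicitly rather than through an extra switch. The function names, the confidence-based blending rule, and all thresholds are illustrative assumptions for this sketch, not the actual MAIA implementation.

```python
# Minimal sketch of implicit shared control for a BCI-driven wheelchair.
# All names and thresholds here are illustrative assumptions: the
# controller blends the decoded user command with an obstacle-avoidance
# command, shifting authority toward the machine when the BCI decoder
# is uncertain or an obstacle is close (implicit mode switching).

from dataclasses import dataclass

@dataclass
class Command:
    linear: float   # forward velocity (m/s)
    angular: float  # turn rate (rad/s)

def blend(user: Command, avoid: Command,
          bci_confidence: float, obstacle_dist: float,
          safe_dist: float = 1.0) -> Command:
    """Blend user and assistance commands.

    The machine's share of authority grows as decoder confidence drops
    or as the nearest obstacle gets closer, so no extra button press is
    required from the user.
    """
    # Authority given to the machine, in [0, 1].
    danger = max(0.0, 1.0 - obstacle_dist / safe_dist)
    alpha = max(1.0 - bci_confidence, danger)
    return Command(
        linear=(1 - alpha) * user.linear + alpha * avoid.linear,
        angular=(1 - alpha) * user.angular + alpha * avoid.angular,
    )

# Example: a "turn left" decoded at 60% confidence near a wall 0.4 m
# away is dominated by the avoidance command.
user_cmd = Command(linear=0.5, angular=0.8)
avoid_cmd = Command(linear=0.1, angular=-0.5)
print(blend(user_cmd, avoid_cmd, bci_confidence=0.6, obstacle_dist=0.4))
```

A continuous blend like this is only one way to realize implicit mode changes; discrete mode-selection rules serve the same purpose.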
Although asynchronous spontaneous BCIs seem to be the most natural and suitable alternative, there are a few examples of evoked BCIs for the control of wheelchairs (Rebsamen et al., 2007; Iturrate et al., 2009). Both systems are based on the P300, a potential evoked by an infrequent, awaited stimulus. To evoke the P300, the system flashes the possible predefined target destinations several times in random order; the subject's choice is the stimulus that elicits the largest P300. The intelligent wheelchair then reaches the selected target autonomously. Once there, it stops and the subject can select another destination – a process that takes around 10 s. A similar P300 approach has been followed to control a humanoid robot (Bell et al., 2008).
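For illustration, the selection rule just described can be written as a short procedure: average the EEG epochs recorded after each destination's flashes and choose the destination whose average response is largest in the P300 latency range. The array shapes, the amplitude-based scoring, and the latency window below are assumptions for the example, not the classifiers actually used in the cited systems.

```python
# Toy sketch of P300-based destination selection. Epoching, scoring,
# and shapes are illustrative assumptions, not the published pipelines.

import numpy as np

def select_destination(epochs: np.ndarray, labels: np.ndarray,
                       n_targets: int, p300_window: slice) -> int:
    """Pick the flashed destination that elicits the largest P300.

    epochs: (n_flashes, n_samples) single-channel EEG segments, each
            time-locked to one flash of one candidate destination.
    labels: (n_flashes,) index of the destination flashed in each epoch.
    """
    scores = np.empty(n_targets)
    for t in range(n_targets):
        # Average all epochs in which destination t was flashed; random
        # repeated flashing makes the P300 stand out only for the
        # destination the user is attending to.
        avg = epochs[labels == t].mean(axis=0)
        # Score by mean amplitude in the expected P300 latency window
        # (roughly 250-500 ms after the stimulus).
        scores[t] = avg[p300_window].mean()
    return int(np.argmax(scores))

# Usage with synthetic data: 4 destinations, 10 flashes each, 256 Hz
# sampling, and a simulated P300 injected for destination 2.
rng = np.random.default_rng(0)
fs, n_targets, reps = 256, 4, 10
labels = np.tile(np.arange(n_targets), reps)
epochs = rng.normal(0.0, 1.0, (labels.size, fs))
epochs[labels == 2, int(0.25 * fs):int(0.5 * fs)] += 2.0
window = slice(int(0.25 * fs), int(0.5 * fs))
print(select_destination(epochs, labels, n_targets, window))  # prints 2
```

Averaging over repeated flashes is what makes this synchronous paradigm reliable, but it is also why each selection takes on the order of 10 s.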