Underlying all assistive mobility scenarios is the issue of shared autonomy. The crucial design question for a shared control system is: who gets control over the system (the human, the machine, or both), when, and to what extent? Several approaches have been developed, in particular for intelligent wheelchairs. A common aspect of all these approaches is the presence of different assistance modes, which can be either different levels of autonomy or different algorithms for different maneuvers. Based on these modes, existing approaches can be classified into two categories. In the first category, mode changes are triggered explicitly by the user, through the operation of an extra switch or button. Examples of smart wheelchairs in this category are SENARIO (Katevas et al., 1997), OMNI (Hoyer, 1995), MAid (Prassler et al., 2001), Wheelesley (Yanco, 1998), VAHM (Bourhis and Agostini, 1998), and SmartChair (Parikh et al., 2004). Such explicit interventions, however, can be difficult and tiring for users who already have problems operating a conventional interface; adding buttons or extra functionality for mode selection only makes the interface more complex to operate and less user-friendly. In the second category, mode changes are implicit: the shared control system automatically switches from one mode to another without requiring manual user intervention. The NavChair (Levine et al., 1999; Simpson and Levine, 1999) and the Bremen Autonomous Wheelchair (Röfer and Lankenau, 2000) are examples of this second category. The problem with all of these approaches, however, is that the switching is hard-coded and independent of the individual user and his or her specific handicap. An extensive literature overview of intelligent wheelchair projects can also be found in Simpson (2005).
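To make the distinction between the two categories concrete, the sketch below contrasts explicit, user-triggered mode selection with implicit, sensor-triggered switching. It is a hypothetical illustration only: the mode names, sensor fields, and the fixed 0.5 m distance threshold are assumptions of this example and are not taken from any of the systems cited above; the hard-coded threshold also illustrates why such rules cannot adapt to an individual user.

```python
# Minimal sketch of mode-based shared control with explicit vs. implicit
# mode switching. All names and thresholds are illustrative assumptions,
# not taken from any of the cited wheelchair systems.
from dataclasses import dataclass
from enum import Enum, auto


class AssistanceMode(Enum):
    MANUAL = auto()               # user commands passed through unmodified
    OBSTACLE_AVOIDANCE = auto()   # commands corrected to avoid collisions
    DOOR_PASSAGE = auto()         # dedicated maneuver algorithm


@dataclass
class SensorState:
    min_obstacle_distance_m: float  # closest obstacle reported by range sensors
    doorway_detected: bool          # e.g., from a gap-detection routine


class ExplicitModeSwitcher:
    """Category 1: the user selects the mode via an extra switch or button."""

    def __init__(self) -> None:
        self.mode = AssistanceMode.MANUAL

    def on_button_press(self, selected: AssistanceMode) -> None:
        # The mode changes only on an explicit user action.
        self.mode = selected

    def current_mode(self, _state: SensorState) -> AssistanceMode:
        return self.mode


class ImplicitModeSwitcher:
    """Category 2: the system switches modes from sensor context,
    using hard-coded rules that ignore the individual user."""

    def current_mode(self, state: SensorState) -> AssistanceMode:
        if state.doorway_detected:
            return AssistanceMode.DOOR_PASSAGE
        if state.min_obstacle_distance_m < 0.5:  # fixed threshold for every user
            return AssistanceMode.OBSTACLE_AVOIDANCE
        return AssistanceMode.MANUAL


if __name__ == "__main__":
    state = SensorState(min_obstacle_distance_m=0.3, doorway_detected=False)

    explicit = ExplicitModeSwitcher()
    print(explicit.current_mode(state))   # stays MANUAL until a button is pressed

    implicit = ImplicitModeSwitcher()
    print(implicit.current_mode(state))   # switches to OBSTACLE_AVOIDANCE automatically
```

In this sketch, the first class demands the extra user intervention criticized above, while the second removes that burden but encodes a fixed, user-independent switching rule, which is precisely the limitation noted for the implicit approaches.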