Abstract
In automated driving, the human driver and an automation form a joint human-machine system. In this system, each partner has its own individual cognitive and perceptual processes, which enable it to perform the complex task of driving. On the different layers of the driving task, both drivers and automation systems assess the situation and derive action decisions. Although the processes can be divided between human and machine, and are sometimes very elaborate, the outcome should be a joint one because it affects the entire driver-vehicle system. In this paper, the individual processes for decision-making are defined and a framework for joint decision-making is proposed. Joint decision-making relies on common goals and norms of the two subsystems, human and automation, and evolves with experience.
1 Introduction
In recent years, automated driving has grown from a research topic into a reality in today’s traffic. With rapidly developing information and communication technology, state-of-the-art and next generation vehicles gain in automated driving capabilities. Nonetheless, the human driver with her skills and knowledge will remain the only entity able to solve critical driving situations. Therefore, both the human driver and the automation share the task of guiding the vehicle. For this purpose, feasible concepts for human-machine interaction are required. One solution to this is cooperative vehicle guidance and control [22] with suitable interfaces and an appropriate system design [21].
Cooperative driving is characterized by its highly dynamic nature and by the efforts of two partners striving to achieve a common goal, which can be as basic as the goal of reaching the destination safely. These two aspects, dynamics and joint effort, bring about some very specific processes and requirements. Dynamic means that new stimuli and information are constantly available and need to be processed and evaluated, often within strict time constraints. At any given moment, the current behavior and preceding decisions need to be re-examined in light of the new information. While the human driver and the automation are completely different entities, the outcome is always a joint one because the vehicle is moving with the human driver and the technical systems on board.
Before reaching a decision, the process leading to this joint outcome needs to be triggered. Usually, this happens as a response to a relevant change in the environment, such as an upcoming obstacle on the road ahead. Since driving is a dynamic task, this takes place continuously. The human driver visually perceives the obstacle and the technical sensors detect it; as soon as this information is obtained and processed, a new mental (or technical) representation of the current situation is formed. When one of the common goals is not to crash the vehicle into an obstacle and the object gets closer, further assessment of the situation and the use of experience from similar situations should lead to a set of possible and feasible actions (such as braking or evading). Once these actions are determined, a decision needs to be taken on which action is best suited to the goals and norms of the driver-vehicle system. The system then implements the action and subsequently receives feedback from its effects. In a later decision situation, this experience can be used to better evaluate the action in the given circumstances.
The paper is structured as follows. Section 2 gives an overview of cooperative vehicle guidance and control, including definitions of the different levels of automation and the dynamic driving task. Section 3 describes the individual processes of situation awareness and decision making that are part of the dynamic driving task. It thereby lays the foundation for Section 4, where the joint decision-making process is developed. In Section 5, an implementation of such a system is described. Section 6 discusses the findings and gives a practical example. Section 7 concludes and gives an outlook on further research.
2 Cooperative Vehicle Guidance and Control
In a cooperative control setup [17], the human driver and an automation system work together by sharing control over the dynamic driving task. Hereby, control can be distributed between the human driver and the cooperative automation in various ways. Therefore, different levels of automated driving can be defined, which might have diverse implications for operational, technical, legal, and even social factors.
2.1 The Dynamic Driving Task
The dynamic driving task is defined by the sum of processes, which are required for operating a vehicle in on-road traffic. These activities include the detection, recognition and classification of objects and events, appropriate responses to these objects and events, maneuver planning, control of the longitudinal and lateral vehicle dynamics, and enhancing conspicuity [42].
DONGES [12] describes the driving task as a target-oriented sensorimotor activity in a three-layered model using the layers navigation, guidance, and control. Hereby, the layers relate to different subtasks, which in turn can be categorized into different levels of human behavior [11]. These can be explained in terms of the model by RASMUSSEN [41]. He distinguishes skill-based behavior, which happens automatically and enables the execution of a secondary activity, from rule-based behavior, which is required for frequently occurring situations. Knowledge-based behavior is required for complex situations that have not been experienced very often.
The first layer of this cognitive model is the navigational level, which can be regarded as knowledge-based behavior and is interpreted not as a dynamic task but as a planning process [11]. The dynamic driving task consists of the next two levels, guidance and control. The guidance level describes the selection of maneuvers during the driving task, whereas the control – or stabilization – level encompasses control of longitudinal and lateral vehicle dynamics. The current state of the vehicle is perceived by the driver, so that appropriate maneuvers can be selected and applied via motoric actions. Depending on the driver’s experience, her state, and the driving situation, both levels of the driving task can require rule-based or skill-based behavior.
MICHON [33] proposes a similar model, arranging the levels along strategic, tactical and operational driver efforts. The strategic effort includes planning the destination and routes, whereas the tactical effort involves e. g. overtaking and lane-changing maneuvers. The operational effort comprises braking and steering reactions for stabilizing the vehicle.
LÖPER et al. [31] expand these models to a four-layered model, which divides the guidance level into a maneuver planning and a trajectory planning level. This model provides a very suitable basis for the design of a cognitive automation, as described in Section 2.4. According to FASTENMEIER & GSTALTER [16], navigational, guidance and control tasks can be summed up in the category “basic driving tasks”, which occur in a wide range of traffic situations. Additional, continuously performed tasks during driving are the supervision of car conditions as well as introspection and the control of selective observation (e. g. ignoring distraction, paying attention to traffic rules).
GROEGER [26] describes driver behavior as goal-directed, whereby drivers may have multiple goals (safety, speed, “green”-driving etc.). These goals can be in conflict at any time, thus drivers have to evaluate the situations and plan their driving adequately [54]. Various studies in the past investigated the possibilities of supporting certain goal-oriented driving behavior, e. g. green driving [44, 49, 54]. Based on these theories, an integrated model of the dynamic driving task was constructed (see Figure 1). This model encompasses the four levels proposed by LÖPER et al. [31], which are assigned to the different behaviors identified by [41]. Furthermore, the different processes leading to an action at different levels are sketched and will be elaborated on in more detail in later sections. This inclusion of processing steps makes it possible to capture the influence of goals.
2.2 Levels of Assistance and Automation
The concept of defining levels of automation as a spectrum or continuum is well established [45, 47], has been successfully expanded by integrating assistant systems, and was applied to the automotive domain (e. g. [20, 21]). It is important to keep in mind that such a spectrum is a simplification of a more complex reality: For example, the distribution of the actual control over the driving dynamics between the human driver and the automation might not be equal to the distribution of the responsibility or authority for the vehicle guidance during the same driving situation [19]. Offering another perspective on the different dimensions of automation, WICKENS et al. [52] define a degree of automation which combines the automation level and the stages of human information processing [39] and thereby add a process- and time-related component [32, 36]. FLEMISCH et al. [19] structure the relationship between control, ability, authority and responsibility toward a spectrum of assistance and automation (cf. [20, 21]). In summary and without any claim of completeness, the following dimensions of automation in human-machine systems for automated driving or motion in general can be identified: automation level, responsibility, authority for transitions, capability for autonomy, and stages of information processing. In many cases, these largely distinct dimensions are aggregated into unidimensional representations for the purpose of reducing system and analysis complexity.
Thus, current definitions, such as the one by SAE [42] or the definition by the German Federal Highway Research Institute (BASt) [23], combine the execution of the dynamic driving task and the responsibility mostly into a single spectrum. Therefore, in medium automation levels, such as partial (or level 2) automation, according to the SAE taxonomy, the automated system controls both the lateral and the longitudinal dynamics. At the same time, the human driver is still responsible in the sense of monitoring the environment and being able to take over control immediately in case of an automation error or failure.
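The distribution of execution, monitoring and fallback responsibility across the SAE levels described above can be sketched as a small data structure. This is an illustrative encoding only: the field names are our own, and the values paraphrase the SAE J3016 role assignments rather than quoting the standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    sae_level: int
    name: str
    dynamics_control: str        # who executes lateral/longitudinal control
    environment_monitoring: str  # who monitors the driving environment
    fallback: str                # who is the fallback performer

# Paraphrased role assignments per SAE J3016 level (illustrative encoding)
SAE_LEVELS = [
    AutomationLevel(0, "no automation",          "driver",            "driver", "driver"),
    AutomationLevel(1, "driver assistance",      "driver and system", "driver", "driver"),
    AutomationLevel(2, "partial automation",     "system",            "driver", "driver"),
    AutomationLevel(3, "conditional automation", "system",            "system", "driver"),
    AutomationLevel(4, "high automation",        "system",            "system", "system"),
    AutomationLevel(5, "full automation",        "system",            "system", "system"),
]
```

Such an encoding makes the point of the preceding paragraph explicit: at level 2 the system executes both control dimensions while the monitoring and fallback roles still rest with the human driver.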
In recent years and with increasing technological capability, development has shifted toward higher levels of automation. The SAE J3016, being one of the most commonly used classification schemes, explicitly focuses on level 3 and above, while mentioning the lower automation levels mainly for the reason of “taxonomical completeness” [42]. Figure 2 proposes a comparison of the different automation levels as defined by SAE and BASt, which differ from one another.
In this paper, the scope ranges from assisted to conditionally / highly automated driving. This means that the human driver as well as the automation participate in the driving task at all times.
2.3 Joint, Shared and Cooperative Control
In human-machine systems, cooperativeness represents the conformity of the partners’ behavior and interactions, implying that the behavior and design of the machine/automation should be complementary to the requirements of the human [8]. Therefore, a certain degree of interaction between the partners must be enabled, e. g. for arbitration in cases of conflict. Additionally, the intents and abilities of the partners need to be predictable and traceable [22], which requires complementary perception of the environment and compatible representations of information and action. Moreover, an important part of cooperation is that the partners share a common understanding of the current situation, since they cannot cooperate properly without the same perception of the present and future situation [14, 43]. PACAUX-LEMOINE [38] defines know-how (to operate) and know-how-to-cooperate via a common work space as a model of cooperation [37] in the context of safety-critical situations. The know-how is the agent’s capability to control the process, while the know-how-to-cooperate is the agent’s ability to cooperate with the other agents involved in the process control. The know-how-to-cooperate is subdivided into an external and an internal part. The internal part enables an agent to establish a model of other agents, while the external part describes the ability of an agent to obtain information about other agents and to provide information to them.
Related to the concept of cooperativeness is the concept of shared control. Shared control describes the operational actions which have a direct impact on the common task of the partners: e. g. the lateral dynamics of a vehicle can be controlled jointly by the driver and the automation, with the two partners connected to each other by haptic interfaces (e. g. the steering wheel). Thus, shared control has no influence on tasks on the tactical and strategic/navigational level (e. g. [1, 33]).
The concepts of cooperative and shared control are overlapping. Shared control can be the “sharp end” of a human-machine cooperation on the control level. Cooperation might also happen on the “blunt end” on a guidance or navigational level, and is possible without explicit shared control [17].
2.4 Cooperative Automation
For the relation of human-machine systems, the term cooperation was already used by RASMUSSEN [41], HOLLNAGEL & WOODS [28] and SHERIDAN [46]. Moreover, the term was applied to vehicle control by, e. g., FLEMISCH et al. [21], HOC [27], BIESTER [7], HOLZMANN [29], and FLEMISCH et al. [20]. Additional definitions of joint, shared or cooperative control can, for example, be found in MULDER et al. [34].
Depending on the current mode, the level of automation and therefore the corresponding responsibility as well as the authority given to each individual partner might vary. BAINBRIDGE [4] describes how the automation of systems leads to ironies which might engender an overall decrease of performance or even safety, especially if the human operator stays responsible only for the tasks which cannot be automated or for monitoring in general. As a consequence, to reduce such effects, human-machine systems in automated driving are more likely to increase their performance and safety if the automation is fully capable of conducting the driving task by itself. Thus, even in situations when a lower cooperative automation mode is in use, the technical system should in principle be able to execute lateral and longitudinal control all by itself and therefore not rely on the human driver for certain complicated tasks. At the same time, the human driver should, in general, be fully capable of conducting the driving task herself, without the support of any technical system. Such a system design guarantees – at least in most scenarios – that the automation can work as a reliable partner and that both cooperation partners can form a powerful joint human-machine system. To achieve high compatibility between the partners, an automation solution for cooperative guidance and control can be modeled on the basis of human cognition using the models described in Section 2.1. The human driver as well as the technical system each have their own perception, cognition, and access to the actuators for the driving task. The stages from information acquisition to action implementation [38] might differ between the layers of vehicle guidance and control [12, 41] and can be executed in parallel.
Thus, both partners can individually and simultaneously monitor and update navigation information; plan, evaluate, and execute driving maneuvers; calculate and adapt trajectories; and derive information about actual steering and acceleration inputs. An implementation of a cooperative automation is described in Section 5.
2.5 Arbitration and Cooperative Interaction Mediation
When different actors are required to take a common action but have a different understanding of the situation, conflicts are sometimes unavoidable. The acquisition of situation awareness is discussed in detail in the next section (Section 3.1). Generally, in human-machine cooperation the underlying value systems that lead to the respective decisions might differ, as might the perception and situation assessment. Nonetheless, driving requires a very fast decision in certain situations, e. g. when an obstacle is detected or a construction site compromising the functionality of the automation is approached. In such situations, human-machine arbitration can be applied. Unlike a switch, arbitration is not a hard decision in favor of the human’s or the automation’s action plan, but more like a fast negotiation between human and automation. Cooperative interaction mediation incorporates human-machine arbitration [5]. Interaction mediation can be understood as a common roof connecting the human’s and the automation’s interfaces, facilitating the decision-making process and enhancing the decision quality. It incorporates several arbitration units for maneuver selection, trajectory adaption and control arbitration. Furthermore, the assistance and automation mode is arbitrated depending on availability and current control contribution.
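The idea that arbitration is a weighted negotiation rather than a hard switch can be illustrated with a minimal sketch. The function name, the confidence-based weighting rule and the idea of encoding the mode’s control distribution as a single weight are all assumptions made for illustration, not part of the cited arbitration concepts.

```python
# Hypothetical sketch of a maneuver arbitration unit: instead of switching
# hard between the partners' plans, the arbiter weighs each proposal by its
# estimated confidence and by the current mode's control distribution.

def arbitrate(human_proposal, automation_proposal, human_weight=0.5):
    """Return the maneuver with the higher weighted confidence.

    Each proposal is a (maneuver, confidence) tuple, confidence in [0, 1].
    ``human_weight`` encodes the control distribution of the current
    assistance/automation mode (0.5 means equal weighting).
    """
    h_maneuver, h_conf = human_proposal
    a_maneuver, a_conf = automation_proposal
    if h_maneuver == a_maneuver:
        return h_maneuver  # no conflict: the decision is already joint
    # "fast negotiation" reduced to a weighted confidence comparison
    if human_weight * h_conf >= (1.0 - human_weight) * a_conf:
        return h_maneuver
    return a_maneuver

arbitrate(("brake", 0.9), ("evade", 0.6))  # -> "brake"
```

A real arbitration unit would of course also respect safety envelopes and mode authority; the sketch only captures the soft, graded character of the decision.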
3 The Joint Decision Making Process
In this section, the first two steps (signal processing and perception) of the layers of the dynamic driving task model introduced above (see Figure 1) will be investigated, while the following subsections elaborate on the last two steps (i. e. decision and response selection and response implementation). It is important to note that, in accordance with PARASURAMAN et al. [39], the step of perception also includes the cognitive evaluation of the perceived stimuli.
3.1 Situation Assessment of the Human-machine System
While the automation has different sensors for assessing the situation, the human driver accomplishes this task with her senses. WICKENS [50, 51] describes how sensory modalities (e. g. auditory, visual) can be triggered by stimuli using different codes (e. g. spatial, verbal). These codes can be used for the stages of encoding and central processing. Analogously, the last stage – responding – can be executed manually or vocally. This system enables a person to act on her perception of the environment and therefore, the model is suitable for application to the driving task. Since the haptic channel is already an important source of information during the driving task and is becoming increasingly important as a human-machine interface through the use of active actuators, such as pedals or the steering wheel, the haptic modality can also be successfully used for cooperative automation.
The model of the dynamic driving task introduced above (see Figure 1) has been inspired by the multiple resources model by WICKENS [50, 51]. The steps within each layer correspond to the stages of the multiple resources model; the two steps perception and decision & response selection can be seen as summarizing the encoding stage.
Based on this fundamental compatibility, influential factors for each stage in the dynamic driving task model can be identified. Wickens’ theory predicts that during multi-tasking, interference is likely to occur when different tasks share stages, sensory modalities and codes [50, 51]. As the complex task of driving is composed of different subtasks, interference is likely to occur, especially during the signal processing and perception stages, when different layers of the driving task draw on processing capacities of the same modality (e. g. visual) simultaneously. During signal processing, it could for example be impossible to look at the navigation system while also checking the blind spot. During perception, on the other hand, even though there might be enough time to monitor both, the visual working memory capacity is limited and therefore not all information from the visual modality can always be processed (see Section 6 for an example of such a situation).
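The prediction that interference grows with the number of shared processing dimensions can be made concrete with a rough sketch. This is not Wickens’ published conflict matrix; the task descriptions and the simple shared-dimension count are illustrative assumptions.

```python
# Rough illustration of multiple-resources interference: two concurrent
# subtasks interfere more the more processing dimensions (stage, sensory
# modality, code) they share. Values and task names are made up.

def interference(task_a, task_b):
    """Count the resource dimensions shared by two task descriptions."""
    return sum(task_a[d] == task_b[d] for d in ("stage", "modality", "code"))

check_mirror = {"stage": "perception", "modality": "visual",   "code": "spatial"}
read_navi    = {"stage": "perception", "modality": "visual",   "code": "verbal"}
listen_radio = {"stage": "perception", "modality": "auditory", "code": "verbal"}

interference(check_mirror, read_navi)     # shares stage and modality -> 2
interference(check_mirror, listen_radio)  # shares only the stage -> 1
```

The score mirrors the example in the text: checking a mirror and reading the navigation display both claim the visual modality and therefore conflict more strongly than a visual and an auditory task.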
While Wickens’ model emphasizes the role of different sensory modalities, the model by Endsley has a focus on the interaction of a person with the environment [14, 15]. Specifically, her model on situation awareness (SA) focuses on how an understanding of a situation and a projection of its future development is formed. Hereby, SA is categorized into three levels: level 1 SA, which encompasses the perception of relevant elements in the environment; level 2 SA, which denotes an understanding of the current situation; and level 3 SA, which includes the projection of the situation’s development in the near future. While the three levels of SA describe the state of the knowledge (thus the product), situation assessment describes the process of acquiring SA. This process incorporates features of top-down processing, in which e. g. goals and expectations play a role, as well as bottom-up processing, where stimuli in the environment direct attention [14]. When applying Endsley’s model to the dynamic driving task, situation awareness can be seen as corresponding to the steps of signal processing and perception (as perception includes the cognitive evaluation of acquired information). This in turn implies that situation assessment is the process which covers the first two stages of the dynamic driving task model.
Research on situation awareness by BAUMANN & KREMS [6] elaborates on Endsley’s model and gives further insights into factors that might influence the first two stages of the dynamic driving task. They identify cognitive mechanisms during situation assessment by applying Kintsch’s [30] construction-integration theory to the context of driving. In particular, they argue that as working memory capacity is limited, cues in the driving environment activate knowledge schemes in the long-term memory. These activated schemes help the driver to form an unambiguous representation of the situation and help predict its future development. Several predictions can be drawn from this explanation and one aspect the authors focus on is the influence of driving experience. They assume that more experienced drivers have more knowledge about driving situations and can therefore encode driving-related information better than inexperienced drivers. Confirmatory empirical evidence is provided for this and other hypotheses derived from their model [6].
Accordingly, in the signal processing and perception stages of the dynamic driving task model, during which situation awareness is acquired, influential factors (such as driving experience) can be deduced from the model by BAUMANN & KREMS. In conclusion, in view of the two described models and the additional research presented, the human driver, with her different sensory modalities and multiple resources, which she can direct in a top-down fashion while also being open to bottom-up influences, seems well equipped to assess the driving situation correctly in most instances and to reach good situation awareness. Factors which can improve or hinder the fluent execution of all stages identified in the dynamic driving task model can be derived by further investigating the two models described above and the associated research.
The automation on the other hand uses its sensors to obtain information about the environment, which is comparable to the signal processing stage. Sensor data are then compared to a database of schemes in order to assess the situation. This process is parallel to the activation of information stored in long-term memory by the human driver as described by BAUMANN & KREMS [6]. Furthermore, the system executes sensor data fusion. Combined, these processes are comparable to the perception stage in the dynamic driving task model.
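The scheme-matching step described above can be sketched as a nearest-prototype lookup: sensor-derived features of the current scene are compared against stored situation schemes, analogously to the activation of long-term-memory knowledge described by BAUMANN & KREMS. The feature names, scheme database and distance measure are assumptions made for illustration.

```python
# Illustrative sketch of scheme matching in the automation's perception
# stage: the stored scheme whose prototype is closest (in squared feature
# distance) to the current sensor reading is activated.

def match_scheme(features, scheme_db):
    """Return the scheme whose prototype best matches the given features."""
    def distance(prototype):
        return sum((features[k] - prototype[k]) ** 2 for k in prototype)
    return min(scheme_db, key=lambda s: distance(s["prototype"]))

# Hypothetical scheme database with two situation prototypes
scheme_db = [
    {"name": "free_highway", "prototype": {"lead_distance": 200.0, "rel_speed": 0.0}},
    {"name": "jam_end",      "prototype": {"lead_distance": 60.0,  "rel_speed": -25.0}},
]

# A scene with a close, rapidly approaching lead vehicle activates "jam_end"
scene = {"lead_distance": 70.0, "rel_speed": -22.0}
match_scheme(scene, scheme_db)["name"]  # -> "jam_end"
```

A production system would match far richer feature vectors from fused sensor data; the sketch only shows the structural parallel to schema activation in human long-term memory.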
These similar processes of situation assessment of the automation and the driver are carried out in parallel. However, the product of situation assessment (i. e. situation awareness) is only valid for very short time intervals, and constant monitoring and updating is needed because of the highly dynamic nature of driving situations. Therefore, it is important that the automation is compatible with the human and that the interaction and arbitration system is well designed. This enables fast and correct communication between the two partners, which is needed to integrate the results of their situation assessment. Accordingly, they can reach a common decision for an integrated response of the driver-automation system. The processes are depicted in Figure 3.
Because of the fundamental similarity of processes and the integration of the results of these processes into a single decision of vehicle control, a definition of situation assessment for the human-machine system in cooperative driving can be presented:
Situation assessment of the human-machine system in automated driving denotes the process of acquiring a representation of the driving situation, which is accomplished by signal processing and the perception of signals from the environment and from the other partner. Furthermore, it includes communicating the acquired information. The goal of this process is to prepare an integrated decision about the control of the vehicle.
3.2 Elements of the Decision Making Process
In this and the following subsection, the last two steps of the layers of the dynamic driving task model (i. e. decision and response selection and response implementation) will be elaborated on. According to DIEDERICH [10], a decision-making process can be subdivided into the stages of information selection, evaluation, integration, and choice. The information processing stages (selection, evaluation and integration) are, in the case of the dynamic driving task model (see Figure 1), covered by the situation assessment process, which, as has been mentioned above, prepares the actual decision making and thereby the choice. The process is thus triggered by an external or internal stimulus that requires a response. Each level of the dynamic driving task is interpreted as its own decision-making process, where time horizons become narrower and decisions need to be reassessed with increasing frequency from the upper to the lower levels of the dynamic driving task.
When considering the complexity of the driving task, it quickly becomes clear that a large number of factors creates the necessity for renewed decisions and that an even larger number of underlying preferences influences the decision’s outcome. On the lower levels of trajectory planning or stabilization and control, decisions need to be made much more often and within an increasingly small period of time. In fact, the timeframe becomes so narrow that, with experience, these are not conscious decisions anymore but automated reactions. An example of this is that every driver, without actively thinking about it, slows down and starts braking when she approaches a red traffic light. The decision-making process is thus a dynamic one, much like the entire driving task.
The idea of a dynamic decision-making process lies at the foundation of decision field theory [9]. It is especially suited for explaining decision-making behavior under time pressure, as it models the decision (or rather the preference state) as a function of time. This means that for different cut-off points, the preference state takes a different value. In its original form, the theory applies to unidimensional decision problems. An extension to this is presented by DIEDERICH [10] with her multiattribute dynamic decision model. Hereby, the attributes are assumed to serve as memory retrieval cues, used to anticipate and evaluate possible outcomes of the decision process. This is similar to the case-based decision theory by GILBOA & SCHMEIDLER [24], where utility levels of past decisions or decision strategies are compared to the current situation. The latter already reveals its roots in classical expected utility theory, as proposed, e. g., by von NEUMANN & MORGENSTERN [35]. The difference between the two former and the two latter theories is that the latter do not consider time. They could thus be described as static decision theories.
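The core idea of decision field theory, a preference state that evolves over time and may be cut off at different points, can be sketched as a small simulation. The update rule below is a simplified scalar version with made-up parameter values; it is not taken from the cited formulations.

```python
import random

# Minimal sketch of a dynamic preference state: P accumulates noisy valence
# samples over time, P(t+1) = (1 - decay) * P(t) + sample. The sign of P at
# a given cut-off point indicates which option would be chosen then.

def preference_trajectory(mean_valence, noise, steps, decay=0.05, seed=0):
    """Simulate the preference state over a number of time steps."""
    rng = random.Random(seed)
    p, trajectory = 0.0, []
    for _ in range(steps):
        p = (1.0 - decay) * p + rng.gauss(mean_valence, noise)
        trajectory.append(p)
    return trajectory

# Under time pressure (an early cut-off) the noisy state may still favor the
# momentarily wrong option; with more time it converges toward the option
# indicated by the mean valence.
traj = preference_trajectory(mean_valence=0.2, noise=1.0, steps=200)
```

In the noise-free case the state converges to mean_valence / decay, which makes the role of the cut-off point visible: early read-outs are far from the asymptotic preference, late ones close to it.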
An important factor in both the dynamic decision models and the case-based decision theory is experience. For DIEDERICH [10], it contributes to forecasting the possible outcomes of a decision. This means that the more experience a decision maker has gathered, the larger the set of information she can draw on and the more detailed and educated her forecast becomes. GONZALEZ et al. [25] go into further detail on the underlying learning processes and present an instance-based learning theory. They assume that knowledge is accumulated and refined with every new decision and with the feedback gained after a decision has been made. They validate their model in an experiment with human decision makers and compare the results with those from an implementation in a cognitive system. The similarity of the results shows that there are algorithms that enable technical systems to “learn” with experience. Applied to human-machine interaction, this also means that the human-machine system can improve its cooperation over time and make better decisions with more driving hours.
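The learning mechanism described above can be sketched in the spirit of instance-based learning: each decision and its observed feedback is stored, and the expected utility of an action in a new situation is estimated from similar past instances. The class, the similarity measure and the one-dimensional situation encoding are simplifications of our own, not the cited model.

```python
# Hypothetical sketch of instance-based learning: stored (situation, action,
# utility) triples are blended, weighted by similarity to the current
# situation, to forecast how well an action is likely to work now.

class InstanceMemory:
    def __init__(self):
        self.instances = []  # list of (situation, action, utility)

    def record(self, situation, action, utility):
        """Store one decision together with the feedback it received."""
        self.instances.append((situation, action, utility))

    def expected_utility(self, situation, action, default=0.0):
        """Similarity-weighted average utility of past uses of ``action``."""
        def similarity(s):
            return 1.0 / (1.0 + abs(situation - s))
        pairs = [(similarity(s), u) for s, a, u in self.instances if a == action]
        if not pairs:
            return default  # no experience with this action yet
        total = sum(w for w, _ in pairs)
        return sum(w * u for w, u in pairs) / total

memory = InstanceMemory()
memory.record(situation=0.9, action="brake", utility=1.0)   # worked well
memory.record(situation=0.1, action="brake", utility=-1.0)  # worked badly
memory.expected_utility(0.8, "brake")  # dominated by the similar, successful instance
```

With every recorded instance the forecasts become more differentiated, which is the sense in which the joint human-machine system can improve its decisions with accumulating driving hours.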
3.3 Task Division and Arbitration During the Process
As situation assessment and decision making rely on different processes and capabilities in humans and machines, both partners may come to completely different, even opposing, decisions in certain situations (see Figure 3). Signal processing might be much faster with the automation’s sensors, and sometimes only they have the capability to pick up important information, e. g. via car-to-car or car-to-infrastructure communication. Furthermore, the automation’s definition and interpretation of certain signals rely on predefined algorithms and rules, whereas a human makes use of experience and reasoning that have objective as well as subjective elements. In these two phases, the interpretation of the perceived data should already be mediated between human and automation using reasonable interaction methods. If the automation’s sensors pick up obstacles or critical situations, such as the end of a traffic jam after a corner, which the human driver is not able to see, this information needs to be transferred to the human driver. The early exchange of information can improve situation awareness and is the basis for the upcoming decision-making process. Thereby, the quality of the decision can be enhanced and facilitated in terms of a joint, harmonized decision of human and automation, e. g. not to accelerate, not to overtake, or to already take the next exit to avoid encountering the traffic jam.
To improve a cooperative action, the other actor’s plan should be observable. The more an actor knows about the reasons for a certain decision, the easier it becomes to mediate the process. If the automation shows the trajectory of its planned route as well as markings for other available trajectories, the human can adapt or choose. Furthermore, the human can start reasoning about why a certain maneuver is not proposed by the automation and be alerted, e. g., to an obstacle or a car in the blind spot she did not detect before.
In some situations, the timeframe for the whole process of situation assessment and decision-making might be too short to mediate between driver and automation. In such situations, a fast arbitration of the decision needs to take place, e. g. collision mitigation when approaching a stopping vehicle at high speed. In some cases, the driver might not be able to react fast enough herself, and the arbitrator therefore decides to activate an emergency brake maneuver or an evasive maneuver if an adjacent lane is free. In other situations, e. g. when driving with an adaptive cruise control and the automation does not react to a stopping car that the human detected, a fast intervention must be possible. In these cases, there is not enough time to make the other actor aware of the reason for a certain action, so the arbitrator ensures that an action is still engaged. When approaching a junction where turning right and turning left are both valid options and the human and the automation each choose a different direction, the decision needs to be arbitrated to either “right” or “left”. The maneuver “ahead” needs to be prevented in any case.
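The time-critical branch of this process can be sketched with a simple time-to-collision criterion: when the remaining time falls below the budget needed to involve the driver, the arbitrator commits to an emergency action directly. The function name, the reaction budget and the action labels are assumed values for illustration only.

```python
# Illustrative sketch of fast arbitration under time pressure: if the
# time-to-collision (TTC) drops below the budget needed to mediate with the
# driver, the arbitrator selects an emergency action on its own.

def fast_arbitration(gap_m, closing_speed_mps, reaction_budget_s=1.5,
                     adjacent_lane_free=False):
    """Select an action based on the time-to-collision with an obstacle."""
    if closing_speed_mps <= 0:
        return "no_action"              # not closing in on the obstacle
    ttc = gap_m / closing_speed_mps     # time-to-collision in seconds
    if ttc >= reaction_budget_s:
        return "mediate"                # enough time to involve the driver
    return "evade" if adjacent_lane_free else "emergency_brake"

fast_arbitration(30.0, 25.0)                           # ttc = 1.2 s -> "emergency_brake"
fast_arbitration(30.0, 25.0, adjacent_lane_free=True)  # -> "evade"
```

Real systems would use richer criticality measures than raw TTC, but the sketch captures the key point: below the mediation threshold, the arbitrator still guarantees that some safe action is engaged.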
As research on mediation has shown (e. g. [13]), an increasing difference between the two actors makes it harder to decide in both sides’ interests. In order to reduce the conflict potential in the ongoing mediation process, it makes sense to mediate as early as possible to improve the efficiency of the decision-making process and the quality of the final decision. To ensure that a safe decision will be made, the implemented arbitration units need the ability and authority to make final decisions, incorporating both the automation’s and the human’s intended actions.
4 Joint Decision Making in Cooperative and Automated Driving
In the previous sections, we have described the general elements of a decision-making process for both the human driver and the automation. The results of these processes are planned actions, which are supposed to affect the dynamic driving task. In this section, we focus on the joint decision-making process derived from the findings of the previous section, as well as on the role of common norms, goals, and experience.
Without further compatibility of the two partners in a cooperative human-machine system, their decisions would not depend on each other and could be seen as mostly individual processes. Nonetheless, both, the human driver and her technical partner, form a joint system, and therefore an implemented decision affects both of them. Thus, even without any further influence on the individual decision, the conducted action ultimately represents the final result of the decision-making for the whole system. In the case of a conflicting outcome of the individual decision-making processes, the concept of arbitration can be applied for reaching solutions in time-critical driving situations. However, if both cooperative partners conclude their decision-making processes with the same ideas, a conflict can be avoided. To ensure reasonable actions, especially on the side of the automation, the decision-making process should be compatible with the human’s side and lead to the same results – at least in cases where both have sufficient and congruent information about the situation. The next step is to understand how and where the joint decision is reached and how the corresponding processes can be designed.
In principle, there are two types of information about the reached decisions, namely the ones that are explicitly communicated and the ones that are implicitly communicated. The latter are transmitted via the actuators and can become factual decisions, while the former can be of informational nature only.
In order to find a joint decision, both sides should focus on the same goal in a given driving situation. Besides common goals, this requires norms and experience. While goals and norms can gradually change over time, e. g. when increasing fuel costs lead to a more economical way of driving, experience is gathered and increases with every driving instance. Typical goals are reaching one’s destination and staying safe while doing so. A norm might be not to harm others or to comply with traffic regulations. Moreover, specific regulations that apply to a situation are norms relevant to the corresponding decision-making process. Experience includes, among other things, knowledge of system limits, for instance the vehicle’s stability limits, the capabilities of the automation’s sensors, or the skills of the human driver. All these dynamic components of the joint decision-making process illustrate not only the complexity of the processes and their influences, but also the variance of possible outcomes.
When a common goal is defined, a common representation of the situation is the next prerequisite for a joint solution. The processing of external signals and information has already been discussed for each individual partner. Also in a joint human-machine system, both partners sense and perceive the situation independently of each other. Nonetheless, they can inform their cooperative counterpart of certain aspects of the situation. A well-established example of such a passive assistance system is a blind spot warning system that informs the driver of objects in the corresponding areas of the vehicle’s surroundings. According to the presented model, this would result in a new perception cycle and still lead to individual situation assessment. In this case, a shared interpretation of the current situation is more likely, since the partners have explicitly communicated about a result of their perception processes and have therefore lowered the probability of misperception. When deriving a decision from the assessed situation, not only are norms applied and goals pursued, but the knowledge about different options in a particular situation is also taken into account. Thus, for a complete representation of the situation, the internal information, i. e. the common representation of the system’s capabilities, is paramount. Only when both, the human driver and the automation, have a common understanding of what the system can do and where the system’s boundaries lie can they find the same solution to a situation. Reaching this common representation influences the decision-making behavior and uncouples the joint decision from purely individual decision-making.
Practice has demonstrated that this approach can yield good results. An example is the anti-lock braking system, which was introduced in the 1970s. Before this system was available, drivers had to apply cadence braking in emergency situations in order to keep control of the vehicle while coming to a stop. Nowadays, full braking without pumping the pedal has become so self-evident that it is naturally taught in driving schools. This shift in behavior is a result of a different task distribution on the lowest level, stabilization and control.
In conclusion, in the simplest and best case, a decision is reached by consensus and can then be regarded as a real joint decision. In many cases, however, the process is prone to conflict. As explained above, such conflicts can be resolved by arbitration [5] or by negotiation [48].
5 Implementation
With the described approach, a system for cooperative vehicle control was implemented using the programming language C++ and the Robot Operating System framework (ROS, cf. [40]). Thus, the automation and interaction are modeled and programmed as independent agents (cf. Section 3). Some of the necessary mathematical operations, such as vector or quaternion operations, are needed in more than a single program. For this reason, in addition to the actual programs, several static and dynamic libraries have been developed. On top of this, further libraries were created for connecting HMI components, such as active sidesticks or steering wheels. The communication between the different applications is realized using ROS. While some of the deployed messages are already covered by the ROS standard messages, others are defined specifically for this application. The framework’s structure is mainly decentralized and, therefore, the individual programs can be allocated to different machines, e. g. for performance reasons. Since ROS is mainly developed and supported for Linux operating systems, the developed programs are designed for use in a Linux environment. Nonetheless, Windows is supported for certain applications. To achieve the desired cross-platform compatibility, the tool CMake is used. CMake is platform-independent and allows building the developed software on various platforms (cf. www.cmake.org). The projects have been tested on Linux platforms running Ubuntu as well as in Microsoft Visual Studio with the compiler Visual C++ 2010, in order to be compatible with the currently available versions of ROS for Windows operating systems.
The driving simulator at the Institute of Industrial Engineering and Ergonomics (IAW) at RWTH Aachen University uses the professional simulation software Silab, which is currently restricted to Microsoft Windows operating systems. Thus, using ROS for Windows operating systems (win_ros) is mandatory for connecting this particular simulator to the automation and interaction software. As a consequence, at least two computers are used for each simulation, one running the simulation under Windows and the other running the software package for cooperative vehicle guidance and control, even in scenarios with low requirements regarding computational performance. To link the simulation with the other programs, a dynamic ROS library for Windows has been created and is included in a custom-made extension of the simulation software.
The automation has been implemented according to the presented multi-layer model. The layers navigation, maneuver planning, trajectory planning, and control are represented within the framework. For the HMI, an interaction solution is implemented based on the specification in [5] and split into the interaction mediation, which also includes the arbitration, and the immediate HMI components, i. e. the haptic, visual, and acoustic interfaces.
At runtime, the control flow principally follows the described top-down order, starting with the assessment of possible maneuvers, followed by the planning of corresponding trajectories. Additionally, bottom-up feedback loops are included as well, e. g. for providing information about short-term aspects of the current driving situation, such as the current time to collision (TTC). The time to collision is the driving equivalent to the time of closest approach in aviation and is calculated from the intersection of the current velocity vectors (or more complex trajectories) of two vehicles. If a collision would occur based on the current data, a positive TTC exists. Usually, this is the case when a faster vehicle approaches a slower vehicle in the same lane from behind. Even more critical are possible collisions as a result of lane changes. Here, the TTC is usually comparatively low, indicating a highly critical driving situation. The automation receives two different types of inputs, one being information about internal system states from the interaction programs or other automation programs, and the other being aggregated sensor data. When a driving simulator is used, the simulation provides these data. Typically, the data contain information about the own (ego) vehicle, the roads or tracks, other objects such as cars or pedestrians, and the environment. Based on the information about the surrounding roads, active traffic regulations, and other traffic participants, the maneuver level assesses a probability for the different driving maneuvers. For this purpose, the current system state and preferences of the driver, such as risk aversion, are taken into account [2]. The assessed maneuvers are published as a ROS topic for further use in the HMI and other automation levels. Based on the current automation mode (cf.
Section 2.2) and possible ongoing arbitration processes, the interaction mediation determines the maneuver which in total receives the most support from both, the human driver and the automation, and passes this information back to the automation. The trajectory level uses the selected maneuver in combination with the determined set of possible maneuvers and plans a detailed driving trajectory for that particular maneuver. For this, among others, the sensor data regarding the surrounding road, the road network, and other objects are used. In addition, active traffic rules, such as speed limits, driving directions, and possible lane changes, are taken into account. The trajectory is computed with respect to the constraints of the vehicle dynamics regarding stability and comfort. This planned trajectory is used by the automation’s control or stabilization layer as a guide. Within this level, several controllers for the vehicle’s longitudinal and lateral dynamics are integrated. For the lateral control, i. e. steering, a model predictive control (MPC) algorithm is used. For this purpose, the relevant parameters of the used driving model are estimated online and fed into a dynamic single-track model. A numerical optimization algorithm is used for determining the optimal controls, which in this case are the steering angle and boundaries for the velocity. The objective function is calculated by comparing the predicted vehicle states, i. e. the position and orientation vectors and their derivatives, with the planned trajectory, calculating a joint measure for the angular and lateral offset. The solver has been implemented using Newton’s method with a line search approach. When a satisfactory solution is found, this control is published together with the acceleration and braking values as a ROS topic for further use either in the interaction or directly for controlling the vehicle.
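Under the simplifying assumption of straight-line, constant-velocity motion in the same lane, the TTC described above reduces to the gap divided by the closing speed. The following sketch illustrates this reduced case only; the function name is an assumption and the full implementation intersects trajectories rather than scalar speeds:

```cpp
#include <cassert>
#include <cmath>
#include <limits>

// Minimal TTC sketch for two vehicles in the same lane, assuming
// constant velocities. A finite positive TTC means a collision would
// occur on the current data; infinity means the gap is not closing.
double timeToCollision(double gap_m, double ego_speed_mps,
                       double lead_speed_mps) {
    const double closing_speed = ego_speed_mps - lead_speed_mps;
    if (closing_speed <= 0.0) {
        // Gap is constant or growing: no collision on current data.
        return std::numeric_limits<double>::infinity();
    }
    return gap_m / closing_speed;  // seconds until the gap closes
}
```

For example, a 30 m gap closed at 10 m/s relative speed yields a TTC of 3 s, which the maneuver level could treat as a trigger for reassessing the follow maneuver.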
The longitudinal control is realized by an adaptive cruise control (ACC) algorithm as described e. g. by WINNER & SCHOPPER [53]. The controller includes two different control modes. In the cruise control mode, the controller adjusts to the desired target velocity, which can be set by the user; this mode is active whenever the time gap to the vehicle ahead is larger than the set value. In the follow mode, the controller keeps a constant time gap to the vehicle ahead on the track, whereas the desired time gap can be set by the user; the target velocity is then dictated by the followed vehicle, so the controller holds the time gap constant by adjusting the ego-velocity to the velocity of the vehicle ahead. The controller remains in the follow mode as long as the desired velocity set by the user is not lower than the velocity of the vehicle ahead. Otherwise, the controller switches to the cruise control mode and keeps the desired velocity as the set value.
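The mode-selection logic just described can be condensed into a small decision function. This is a sketch of the switching rule only, with illustrative names; the actual controller in [53] additionally regulates acceleration, which is omitted here:

```cpp
#include <cassert>

// Sketch of the ACC mode selection described above. Names are
// illustrative assumptions; the real controller also computes the
// acceleration command, not just the regulated target speed.
enum class AccMode { CruiseControl, Follow };

struct AccState {
    AccMode mode;
    double target_speed_mps;  // velocity the controller regulates to
};

AccState selectAccMode(double time_gap_s, double desired_time_gap_s,
                       double desired_speed_mps, double lead_speed_mps) {
    // Follow mode: the lead vehicle is within the desired time gap and
    // the user's desired speed is not lower than the lead's speed.
    if (time_gap_s <= desired_time_gap_s &&
        desired_speed_mps >= lead_speed_mps) {
        // Match the lead vehicle's speed to hold the time gap constant.
        return {AccMode::Follow, lead_speed_mps};
    }
    // Otherwise regulate to the user's desired target velocity.
    return {AccMode::CruiseControl, desired_speed_mps};
}
```

Note the second condition: if the user’s desired velocity is lower than the lead vehicle’s, the controller stays in cruise control even at a small time gap, because the gap will open by itself.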
6 Practical Example and Further Discussion
A very typical traffic situation is driving on a multi-lane road, e. g. a highway such as the German Autobahn. On both sides of the road, the driving direction is the same for two or more lanes. Thus, vehicles traveling in the same direction can change to another lane with the same direction for overtaking slower traffic. If a faster vehicle is approaching a slower vehicle, different driving maneuvers are possible: changing the current lane or slowing down. When referring to the presented model, several layers and stages are involved simultaneously for both partners, the human driver and the automation. Figure 6 shows such a driving situation with the ego-vehicle as the second car in the right lane (blue vehicle, lower left part). The possible maneuvers, or rather their corresponding trajectories, are illustrated as arrows (dotted arrow for the lane-changing maneuver). The ongoing traffic on the other lane might hinder a lane change, depending on the relative velocities and longitudinal distances between the vehicles. At least two vehicles are driving in the right lane, and the ego-vehicle has a higher speed than the car it follows. Thus, a typical overtaking situation emerges for the ego-vehicle, in which both, the human driver and the automation, have to manage the situation by assessing it, deciding on actions, and implementing these actions. When applying the model described in Section 3 (see also Figure 3), overall processes running in parallel on both sides can be identified. The perceived visual information (and, on the technical side, other redundant sensor data such as e. g. radar) is used on several layers of the dynamic driving task. The distance to the leading vehicle, or to be more precise its rate of change, is used as an indication of the relative velocity, and therefore the time to collision can be assessed. While human drivers usually do this without explicit calculation, they need more time for their assessment and for adapting to changes in the relative velocity.
The retrieved information is used on the guidance level for assessing and planning possible maneuvers, since a sufficiently small TTC (cf. Section 5) leads to a situation in which either an adaptation of the velocity or a change of the lane is required. At the same time, other visual information is used for steering the vehicle, in particular the perception of the road markings.
Both partners might assess the situation with the same maneuver as a result, or they might prefer different maneuvers. For example, the human driver might prefer changing the lane, while a conservatively calibrated automation evaluates this maneuver as less likely because of other traffic blocking the target lane. In such a situation, the conducted maneuver has to be arbitrated between the human and the technical system, with one distinct maneuver as the result for the whole human-machine system. While the technical system provides a numeric representation of its preference for each assessed maneuver, the human driver communicates via input devices such as the steering wheel.
Within the arbitration process, the human’s inputs are interpreted and compared to the results of the automation’s situation evaluation. For each maneuver, the corresponding trajectories can be mapped to thresholds of the respective control inputs. When using an interface for bi-directional communication, such as an active steering wheel, the human driver and the automation can communicate by applying forces or torques to the shared actuators. The arbitration unit compares the direction and intensity (e. g. forces, torques, grip, input speed) of the human’s actions to the automation’s calculated controls. Thus, a significant difference in the steering angle or torque between the human driver and the automation can indicate that the human driver might prefer a different maneuver. If a maneuver compatible with the human’s inputs is also assessed by the automation, and the technical system does not see a reason to block it, this driving maneuver is set active and passed from the interaction mediation to the automation (see also Figure 4).
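The comparison step inside the arbitration unit can be sketched as a simple threshold test on the torque difference. The function name and the threshold value are hypothetical; the described system additionally evaluates forces, grip, and input speed:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of the comparison in the arbitration unit: a
// significant difference between the human's steering torque and the
// automation's computed torque is read as a preference for a different
// maneuver. The 1.5 Nm threshold is an illustrative assumption.
bool driverPrefersOtherManeuver(double human_torque_nm,
                                double automation_torque_nm,
                                double threshold_nm = 1.5) {
    return std::fabs(human_torque_nm - automation_torque_nm) > threshold_nm;
}
```

In practice, such a test would be combined with the mapping of trajectories to control-input thresholds described above, so that the detected preference can be resolved to a concrete maneuver.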
When starting the lane-changing maneuver, the driving actions take place on both lanes simultaneously. Therefore, the driving environment and the relevant objects, such as other traffic participants, need to be monitored on two lanes, the individual situations need to be assessed, and decisions regarding the overall situation need to be derived. This involves different layers of the dynamic driving task: in particular, the guidance layer for planning and executing the lane-changing maneuver including its trajectory, and the control layer for the actual driving action. While the ego-vehicle is still (partly) on the initial lane, the controls have to stay within the boundaries of both possible maneuvers in parallel, since a collision with the leading vehicle on the starting lane is still possible at this time. If this vehicle decelerates rapidly by performing a sudden brake maneuver (see Figure 7), the described driving situation evolves from a typical overtaking maneuver into a highly critical situation. The visual modality of the human driver has limited capacity for observing the vehicle in front of her (which is braking) and the vehicle approaching from behind on the adjacent lane. As has been presented in Section 3 (see also Figure 3), this is also due to an interference of the modality across several layers of the driving task. Specifically, on the control layer the ego-vehicle system needs to avoid a collision with the vehicle in front, while the approaching vehicle on the adjacent lane rather impacts the guidance layer. This restriction in modalities only refers to the human driver; the automation observes both vehicles simultaneously without the risk of a capacity conflict. It is therefore important that it assists the human driver in this critical situation, either by sharing the information via a different modality or by taking the limitations of the human driver into account and advising her toward a suitable maneuver.
Whereas previously a decision had been made to change the lane, it now becomes unclear which maneuver – continuing to change the lane, or remaining on the initial lane and performing a braking maneuver – should be preferred in light of the informational update (see Figure 8). This also depends on the exact circumstances, i. e. the system state, relative velocities, environment, and position of objects. In addition, human preferences (e. g. comfort, risk aversion) influence the human maneuver assessment, and the technical parametrization influences the automation’s maneuver level assessment (cf. Section 5). Bringing these two sides of a joint system together requires experience: the partners in the human-machine system can learn each other’s preferences and “ideas”. By analyzing the behavior and feedback in comparable situations, data such as TTC or headway can be used for parametrizing the decision-making. This could be achieved by using an online approach and building up a model of the human driver, or by using existing data samples for the learning process. On the human side, a mental model of the automation’s capabilities and boundaries can be built and refined.
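One minimal form of the online approach mentioned above would be to smooth the driver’s accepted time gaps into an estimate of her preferred headway, which could then parametrize the maneuver assessment. The class, the exponential-smoothing scheme, and the smoothing factor are illustrative assumptions, not the paper’s method:

```cpp
#include <cassert>
#include <cmath>

// Sketch of an online parametrization step: the automation
// incrementally estimates the driver's preferred time gap from
// headways she accepted in comparable situations. Exponential
// smoothing and its factor are illustrative assumptions.
class DriverPreferenceModel {
public:
    explicit DriverPreferenceModel(double initial_gap_s, double alpha = 0.1)
        : preferred_gap_s_(initial_gap_s), alpha_(alpha) {}

    // Blend each newly observed, driver-accepted time gap into the
    // running estimate (higher alpha -> faster adaptation).
    void observe(double accepted_gap_s) {
        preferred_gap_s_ = (1.0 - alpha_) * preferred_gap_s_
                         + alpha_ * accepted_gap_s;
    }

    double preferredGap() const { return preferred_gap_s_; }

private:
    double preferred_gap_s_;
    double alpha_;
};
```

A learned estimate of this kind could bias the automation’s maneuver probabilities toward the driver’s risk preferences, complementing the mental model the driver builds of the automation.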
7 Conclusion and Outlook
This paper examined the joint decision-making process of a human-machine system in automated driving. The driving task can be divided into four levels, which correspond to actions with different time horizons and frequencies. On all of these levels, decisions need to be taken as a reaction to an internal or external stimulus and must be updated whenever new, relevant information is obtained.
The chain of information processing and actions on each level leads to a situation assessment and a decision-making process. The situation assessment process produces situation awareness, which is a prerequisite for the decision-making. The situation assessment can be hindered by the limited capacity of the working memory and of the human sensory modalities. This is one of the reasons why the automation and the human driver need to communicate their respective results of the situation assessment to one another. Communication is supported by a design of the automation that is compatible with human cognition and by considering the resources of the individual modalities.
The common representation of the situation as well as knowledge of the system boundaries of the driver-vehicle system lay the foundation for a successful joint decision. However, a joint decision cannot be found without common goals and norms. The decision-making process extends the situation assessment in that the outcomes of several feasible alternatives are forecast and evaluated using knowledge retrieved from experience. That means that with more experience, the forecast and evaluation become more precise, leading to a better decision. This is also true for the joint decision, not only because the individual elements improve, but also because both subsystems learn to better coordinate with one another and gain more precise knowledge of the system itself. If a joint decision cannot be found, arbitration or negotiation is needed to bring the two partners together.
Based on this theoretical background, a system for cooperative vehicle guidance and control has been implemented in a simulator. Here, the automation is structured along the four levels identified in the driving task in order to correspond to the human as closely as possible. The implemented interaction mediator serves as an instance for arbitration between the automation and the human driver in the simulator.
Currently, experiments are being conducted to validate the described joint decision-making process. Future research should focus on how a human driver can communicate goals and norms to the automation to enhance compatibility and harmonized decision-making. Also, the role of experience and learning, both on the human and on the automation side, should be further analyzed. Experimental investigations or simulation studies of learning algorithms in automated driving might provide further insights into self-sustaining and self-improving systems.
About the authors
Eugen Altendorf holds a diploma (master’s degree) in mechanical engineering with a major in system dynamics and additionally a bachelor’s degree in communication science, both from RWTH Aachen University. He works as a group leader at the Institute of Industrial Engineering and Ergonomics at RWTH Aachen University. His research topics include automation behavior in cooperative human-machine systems and system dynamics in the field of partially and highly automated driving.
Gina Weßel studied psychology at Maastricht University, the Netherlands, specializing in work and organizational psychology in her master’s program. Since 2016, she has been working as a research associate at the Institute of Industrial Engineering and Ergonomics at RWTH Aachen University. Her research focuses on cooperative human-machine systems in the domain of automated driving.
Marcel Baltzer studied business administration & mechanical engineering at RWTH Aachen University. From 2012 to 2015, he worked in the area of balanced human systems integration at the Institute of Industrial Engineering and Ergonomics at RWTH Aachen University and specialized in the subject of interaction mediation, i. e. how interaction between a human and a cooperative automation can be optimized in terms of usability, energy efficiency, comfort, safety, and joy of use. Since 2015, he has been a project and team leader at Fraunhofer FKIE in Wachtberg.
Yigiterkut Canpolat studied mechanical engineering with a major in mechatronics at the University of Duisburg-Essen, where he received a master’s degree. Since 2016, he has been working as a research associate at the Institute of Industrial Engineering and Ergonomics at RWTH Aachen University. His research topics include control engineering in cooperative human-machine systems and human factors in the field of partially and highly automated driving.
Frank Flemisch studied aerospace engineering at the University of the Federal Armed Forces Munich, where he received a Ph. D. on the interplay of assistance systems and visual behavior. As an NRC associate at NASA Langley, he worked on cooperative control of highly automated air vehicles with the goal to make “flying as easy as driving a car”. From 2004 to 2011, he led a research team on system ergonomics and design of automation for cars and trucks at DLR-ITS Braunschweig (Brunswick). He was a member of the BASt group “Legal consequences of vehicle automation” and is a technical expert in ISO WG 204 (intelligent transport systems). Since 2011, he has been working as a branch head at Fraunhofer FKIE in Wachtberg and is professor for Human Systems Integration at RWTH Aachen University.
Acknowledgement
The research conducted was partly funded by the Deutsche Forschungsgemeinschaft (DFG) within the projects “Arbitration of cooperative movement for highly-automated human machine systems” and “Systemergonomics for cooperative interacting vehicles: Transparency of automation behavior and intervention possibilities of the human during normal operation, at system limits and during system failure”, and partly by RWTH Aachen University.
References
[1] ABBINK, David Alexander: Neuromuscular analysis of haptic gas pedal feedback during car following. doctoral dissertation. [S. l.]: [s. n.], 2006.Search in Google Scholar
[2] ALTENDORF, E.; FLEMISCH, F.: Prediction of driving behavior in cooperative guidance and control: a first game-theoretic approach. In: Kognitive Systeme 2 (2015), Nr. 1. URL http://duepublico.uni-duisburg-essen.de/servlets/DocumentServlet?id=37693.Search in Google Scholar
[3] ALTENDORF, Eugen; BALTZER, Marcel; HEESEN, Matthias; KIENLE, Martin; WEISSGERBER, Thomas; FLEMISCH, Frank: H-Mode. In: WINNER, Hermann; HAKULI, Stephan; LOTZ, Felix; SINGER, Christina (Hrsg.): Handbook of Driver Assistance Systems: Basic Information, Components and Systems for Active Safety and Comfort. Cham: Springer International Publishing, 2016, S. 1499–1518.10.1007/978-3-319-12352-3_60Search in Google Scholar
[4] BAINBRIDGE, Lisanne: Ironies of automation. In: Automatica 19 (1983), Nr. 6, S. 775–779.10.1016/0005-1098(83)90046-8Search in Google Scholar
[5] BALTZER, Marcel; FLEMISCH, F.; ALTENDORF, E.; MEIER, S.: Mediating the Interaction between Human and Automation during the Arbitration Processes in Cooperative Guidance and Control of Highly Automated Vehicles. In: Proceedings of the 5th International Conference on Applied Human Factors and Ergonomics AHFE, 2014.Search in Google Scholar
[6] BAUMANN, Martin R. K.; KREMS, Josef F.: A comprehension based cognitive model of situation awareness. In: International Conference on Digital Human Modeling, 2009, S. 192–201.10.1007/978-3-642-02809-0_21Search in Google Scholar
[7] BIESTER, Dipl-Psych Lars: CooperativeAutomation in Automobiles. Humboldt-Universität zu Berlin. 2008-01-01.Search in Google Scholar
[8] BUBB, Heiner: Ergonomie. In: 3. Aufl., Kap. 5.3 Systemergonomische Gestaltung, S. 390–420). München: Carl Hanser Verlag (1993).Search in Google Scholar
[9] BUSEMEYER, Jerome R.; TOWNSEND, James T.: Decision field theory: a dynamic-cognitive approach to decision making in an uncertain environment. In: Psychological review 100 (1993), Nr. 3, S. 432.10.1037/0033-295X.100.3.432Search in Google Scholar
[10] DIEDERICH, Adele: Dynamic stochastic models for decision making under time constraints. In: Journal of Mathematical Psychology 41 (1997), Nr. 3, S. 260–274.10.1006/jmps.1997.1167Search in Google Scholar PubMed
[11] DONGES, E.: A conceptual framework for active safety in road traffic, 1999.10.1076/vesd.32.2.113.2089Search in Google Scholar
[12] DONGES, Edmund: Aspekte der aktiven Sicherheit bei der Führung von Personenkraftwagen. In: AUTOMOB-IND 27 (1982), Nr. 2.Search in Google Scholar
[13] EMERY, Robert E.; SBARRA, David; GROVER, Tara: Divorce mediation: Research and reflections. In: Family Court Review 43 (2005), Nr. 1, S. 22–37.10.1111/j.1744-1617.2005.00005.xSearch in Google Scholar
[14] ENDSLEY, Mica R.: Toward a theory of situation awareness in dynamic systems. In: Human Factors: The Journal of the Human Factors and Ergonomics Society 37 (1995), Nr. 1, S. 32–64.10.1518/001872095779049543Search in Google Scholar
[15] ENDSLEY, Mica R.: Situation awareness misconceptions and misunderstandings. In: Journal of Cognitive Engineering and Decision Making 9 (2015), Nr. 1, S. 4–32.10.1177/1555343415572631Search in Google Scholar
[16] FASTENMEIER, Wolfgang; GSTALTER, Herbert: Driving task analysis as a tool in traffic safety research and practice. In: Safety Science 45 (2007), Nr. 9, S. 952–979.10.1016/j.ssci.2006.08.023Search in Google Scholar
[17] Flemisch, Frank; Abbink, David; Itoh, Makoto; Pacaux-Lemoine, Marie-Pierre; Weßel, Gina: Shared Control Is the Sharp End of Cooperation: Towards a Common Framework of Joint Action, Shared Control and Human Machine Cooperation. In: 13th IFAC / IFIP / IFORS / IEA Symposium on Analysis, Design, and Evaluation of Human-Machine Systems. Kyoto, Japan, 2016.10.1016/j.ifacol.2016.10.464Search in Google Scholar
[18] FLEMISCH, Frank; ALTENDORF, Eugen; CANPOLAT, Yigiterkut; WESSEL, Gina; BALTZER, Marcel; LÓPEZ, Daniel; HERZBERGER, Nicolas; VOß, Gudrun; SCHWALM, Maximilian; SCHUTTE, Paul: Uncanny and unsafe valley of assistance and automation: First sketch and application to vehicle automation. In: Schlick C. M., Duckwitz S., Flemisch F., Frenz M., Kuz S., Mertens A., Mütze-Niewöhner S. (Hrsg.): Advances in Ergonomic Design of Systems, Products and Processes. Berlin Heidelberg: Springer, 2016 (Proceedings of the Annual Meeting of GfA).10.1007/978-3-662-53305-5_23Search in Google Scholar
[19] FLEMISCH, Frank; HEESEN, Matthias; HESSE, Tobias; KELSCH, Johann; SCHIEBEN, Anna; BELLER, Johannes: Towards a dynamic balance between humans and automation: authority, ability, responsibility and control in shared and cooperative control situations. In: Cognition, Technology & Work 14 (2012a), Nr. 1, S. 3–18.10.1007/s10111-011-0191-6Search in Google Scholar
[20] FLEMISCH, Frank; KELSCH, Johann; LÖPER, Christian; SCHIEBEN, Anna; SCHINDLER, Julian; HEESEN, Matthias: Cooperative Control and Active Interfaces for Vehicle Assistance and Automation (2008).
[21] FLEMISCH, Frank O.; ADAMS, Catherine A.; CONWAY, Sheila R.; GOODRICH, Ken H.; PALMER, Michael T.; SCHUTTE, Paul C.: The H-Metaphor as a guideline for vehicle automation and interaction (2003).
[22] FLEMISCH, Frank Ole; BENGLER, Klaus; BUBB, Heiner; WINNER, Hermann; BRUDER, Ralph: Towards cooperative guidance and control of highly automated vehicles: H-Mode and Conduct-by-Wire. In: Ergonomics 57 (2014), Nr. 3, S. 343–360. DOI: 10.1080/00140139.2013.869355
[23] GASSER, Tom M.; ARZT, Clemens; AYOUBI, Mihiar; BARTELS, Arne; BÜRKLE, Lutz; EIER, Jana; FLEMISCH, Frank; HÄCKER, Dirk; HESSE, Tobias; HUBER, Werner; et al.: Rechtsfolgen zunehmender Fahrzeugautomatisierung [Legal consequences of increasing vehicle automation]. In: Berichte der Bundesanstalt für Straßenwesen, Unterreihe Fahrzeugtechnik (2012), Nr. 83.
[24] GILBOA, Itzhak; SCHMEIDLER, David: Case-based decision theory. In: The Quarterly Journal of Economics (1995), S. 605–639. DOI: 10.2307/2946694
[25] GONZALEZ, Cleotilde; LERCH, Javier F.; LEBIERE, Christian: Instance-based learning in dynamic decision making. In: Cognitive Science 27 (2003), Nr. 4, S. 591–635. DOI: 10.1207/s15516709cog2704_2
[26] GROEGER, John A.: Understanding driving: Applying cognitive psychology to a complex everyday task. Psychology Press, 2000.
[27] HOC, Jean-Michel: From human-machine interaction to human-machine cooperation. In: Ergonomics 43 (2000), Nr. 7, S. 833–843. DOI: 10.1080/001401300409044
[28] HOLLNAGEL, Erik; WOODS, David D.: Cognitive systems engineering: New wine in new bottles. In: International Journal of Man-Machine Studies 18 (1983), Nr. 6, S. 583–600. DOI: 10.1016/S0020-7373(83)80034-0
[29] HOLZMANN, Frédéric: Adaptive cooperation between driver and assistant system. In: Adaptive Cooperation between Driver and Assistant System. Springer, 2008, S. 11–19. DOI: 10.1007/978-3-540-74474-0_2
[30] KINTSCH, Walter: Comprehension: A paradigm for cognition. Cambridge University Press, 1998.
[31] LÖPER, Christian; SCHOMERUS, J.; BRANDT, T.; FLEMISCH, F.; SATTEL, T.: Bahnplanung, Bahnführung und haptische Interaktion für ein Fahrerassistenzsystem zur Querführung / Path planning, vehicle guidance and haptic interaction for a driver assistance system for lateral guidance. In: VDI-Berichte (2006), Nr. 1960.
[32] MANZEY, Dietrich; REICHENBACH, Juliane; ONNASCH, Linda: Human performance consequences of automated decision aids: The impact of degree of automation and system experience. In: Journal of Cognitive Engineering and Decision Making (2012), 1555343411433844. DOI: 10.1177/1555343411433844
[33] MICHON, J. A.: A critical view of driver behavior models: what do we know, what should we do? (1985). DOI: 10.1007/978-1-4613-2173-6_19
[34] MULDER, Mark; ABBINK, David A.; BOER, Erwin R.: Sharing Control With Haptics: Seamless Driver Support From Manual to Automatic Control. In: Human Factors: The Journal of the Human Factors and Ergonomics Society 54 (2012), Nr. 5, S. 786–798. DOI: 10.1177/0018720812443984
[35] NEUMANN, John von; MORGENSTERN, Oskar: Theory of games and economic behavior. Princeton: Princeton University Press, 1944.
[36] ONNASCH, Linda; WICKENS, Christopher D.; LI, Huiyang; MANZEY, Dietrich: Human Performance Consequences of Stages and Levels of Automation: An Integrated Meta-Analysis. In: Human Factors: The Journal of the Human Factors and Ergonomics Society (2013), 0018720813501549. DOI: 10.1177/0018720813501549
[37] PACAUX-LEMOINE, M. P.; DEBERNARD, S.: Common work space for human–machine cooperation in air traffic control. In: Control Engineering Practice 10 (2002), Nr. 5, S. 571–576. DOI: 10.1016/S0967-0661(01)00060-0
[38] PACAUX-LEMOINE, Marie-Pierre: Human-Machine Cooperation Principles to Support Life-Critical Systems Management. In: MILLOT, Patrick (Hrsg.): Risk Management in Life-Critical Systems. Hoboken, NJ: John Wiley & Sons, 2015, S. 253–277. DOI: 10.1002/9781118639351.ch12
[39] PARASURAMAN, Raja; SHERIDAN, Thomas B.; WICKENS, Christopher D.: A model for types and levels of human interaction with automation. In: IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans 30 (2000), Nr. 3, S. 286–297. DOI: 10.1109/3468.844354
[40] QUIGLEY, Morgan; CONLEY, Ken; GERKEY, Brian; FAUST, Josh; FOOTE, Tully; LEIBS, Jeremy; WHEELER, Rob; NG, Andrew Y.: ROS: an open-source Robot Operating System. In: ICRA Workshop on Open Source Software, Bd. 3, 2009, S. 5.
[41] RASMUSSEN, Jens: Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. In: IEEE Transactions on Systems, Man, and Cybernetics SMC-13 (1983), Nr. 3, S. 257–266. DOI: 10.1109/TSMC.1983.6313160
[42] SAE INTERNATIONAL: Taxonomy and definitions for terms related to on-road motor vehicle automated driving systems. Warrendale, PA: SAE International, Jan. 2014 (Surface vehicle information report).
[43] SALAS, Eduardo; PRINCE, Carolyn; BAKER, David P.; SHRESTHA, Lisa: Situation awareness in team performance: Implications for measurement and training. In: Human Factors: The Journal of the Human Factors and Ergonomics Society 37 (1995), Nr. 1, S. 123–136. DOI: 10.1518/001872095779049525
[44] SERVIN, Oscar; BORIBOONSOMSIN, Kanok; BARTH, Matthew: An energy and emissions impact evaluation of intelligent speed adaptation. In: 2006 IEEE Intelligent Transportation Systems Conference, 2006, S. 1257–1262. DOI: 10.1109/ITSC.2006.1707395
[45] SHERIDAN, T. B.; PARASURAMAN, R.: Human versus automation in responding to failures: an expected-value analysis. In: Human Factors 42 (2000), Nr. 3, S. 403–407. DOI: 10.1518/001872000779698123
[46] SHERIDAN, Thomas B.: Humans and automation: System design and research issues. John Wiley & Sons, 2002.
[47] SHERIDAN, Thomas B.; VERPLANK, William Lawrence: Human and computer control of undersea teleoperators. Cambridge, MA: Massachusetts Institute of Technology, Man-Machine Systems Laboratory, 1978. DOI: 10.21236/ADA057655
[48] STERMAN, John D.: Modeling managerial behavior: Misperceptions of feedback in a dynamic decision making experiment. In: Management Science 35 (1989), Nr. 3, S. 321–339. DOI: 10.1287/mnsc.35.3.321
[49] VAN DER VOORT, Mascha; DOUGHERTY, Mark S.; VAN MAARSEVEEN, Martin: A prototype fuel-efficiency support tool. In: Transportation Research Part C: Emerging Technologies 9 (2001), Nr. 4, S. 279–296. DOI: 10.1016/S0968-090X(00)00038-3
[50] WICKENS, Christopher D.: The structure of attentional resources. In: Attention and Performance VIII (1980).
[51] WICKENS, Christopher D.: Multiple resources and performance prediction. In: Theoretical Issues in Ergonomics Science 3 (2002), Nr. 2, S. 159–177. DOI: 10.1080/14639220210123806
[52] WICKENS, Christopher D.; LI, Huiyang; SANTAMARIA, Amy; SEBOK, Angelia; SARTER, Nadine B.: Stages and levels of automation: An integrated meta-analysis. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Bd. 54, 2010, S. 389–393. DOI: 10.1177/154193121005400425
[53] WINNER, Hermann; SCHOPPER, Michael: Adaptive Cruise Control. In: WINNER, Hermann; HAKULI, Stephan; LOTZ, Felix; SINGER, Christina (Hrsg.): Handbook of Driver Assistance Systems: Basic Information, Components and Systems for Active Safety and Comfort. Cham: Springer International Publishing, 2016, S. 1093–1148. DOI: 10.1007/978-3-319-12352-3_46
[54] YOUNG, Mark S.; BIRRELL, Stewart A.; STANTON, Neville A.: Safe driving in a green world: A review of driver performance benchmarks and technologies to support ‘smart’ driving. In: Applied Ergonomics 42 (2011), Nr. 4, S. 533–539. DOI: 10.1016/j.apergo.2010.08.012
© 2016 Walter de Gruyter GmbH, Berlin/Boston