
Camera-based driver assistance systems

Michael Grimm

Abstract

In recent years, camera-based driver assistance systems have taken an important step: from laboratory setup to series production. This tutorial gives a brief overview of the technology behind driver assistance systems, presents the most significant functionalities and focuses on the processes of developing camera-based systems for series production. We highlight the critical points which need to be addressed when camera-based driver assistance systems are sold in their thousands worldwide – and the benefits in terms of safety that result from it.

1 Introduction

‘After a decade of radar, we are entering the century of the camera’ is how one expert described the fundamental change of technology we are currently experiencing in the field of advanced driver assistance systems (ADAS) (J. Seekircher, Daimler AG, personal communication, 2009). For more than 10 years, electronic support for the driver was mostly provided by radar systems, such as automatic cruise control (ACC) or collision mitigation systems (CMS). In recent years, however, the current model lines of all major car manufacturers have come to include assistance systems based on camera technology. Driver assistance systems used to be exclusive, but today almost everybody shows an interest in them, as the increasing number of articles and comparison tests on this subject indicates [1]. For sound economic reasons, automotive suppliers have made ADAS part of their core business. Market studies indicate that the market volume of all driver assistance systems will increase from €1 billion in 2008 to €6–7 billion in 2020 [2].

Such a change in technology brings with it a number of questions: What kind of support can be provided using optical sensors, and in which situations? How should the desired functionality be specified appropriately? How can we cope with the system limits? What are the challenges of testing and releasing such systems? These questions are addressed here.

Although we give many examples of how driver assistance systems are developed at Mercedes-Benz, the system descriptions and development processes hold similarly for other manufacturers. The systems are mentioned as examples because the focus of this article is a tutorial view, where a consecutive timeline of one original equipment manufacturer (OEM) matters more than a complete list of all current market offers. A number of studies comparing the assistance systems of different OEMs can be found in Refs. [3–5].

The motivation behind driver assistance systems is twofold. First, keeping the number of fatalities and injuries caused by collisions and accidents to a minimum is the most desirable development goal, bearing in mind that there were approximately 4000 fatalities and approximately 390 000 injuries in 2011 on Germany’s roads alone [6]. Second, camera-based driver assistance systems may also increase comfort while driving and reduce the cognitive load of the driving task to a minimum, allowing the driver to focus on more important tasks – if desired.

In section 2, the most significant camera-based driver assistance systems are briefly described. Section 3 focuses on the series development of such systems, covering the major steps from specification through to validation and system release. Section 4 closes with a summary and an outlook for the future.

2 Overview of camera-based driver assistance systems

Driver assistance systems can be roughly grouped into the following categories: light functions, parking assistance, lane keeping support, collision prevention including collision mitigation, and traffic sign recognition systems.

2.1 Night vision and light control functions

Camera-based support to the driver is very helpful when it comes to night vision and light control. Having the maximum possible illumination at night is an obvious safety benefit: the earlier you see an animal, for example, the more time you have to react. Camera-based night vision support has therefore been improved continuously in a number of steps.

2.1.1 Night View Assist

Using an infrared camera, the system shows the driver a continuously updated image of the scene. Owing to the infrared technique, the driver can recognize dangerous situations, such as obstacles, persons or animals on the road, much more easily. Both far infrared (FIR) and near infrared (NIR) systems are common, and both technologies have their pros and cons (see [7] for a detailed study). Image processing is applied to provide a sharp image with high contrast. High update rates and extremely short latencies are crucial, as the driver should be able to drive just by viewing the night view image. Night view systems have been on the market since the launch of the Mercedes-Benz S-Class in 2005 [8].
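
The production image pipeline is not disclosed in the press material. Purely as a hedged illustration of the kind of contrast enhancement described above, the sketch below applies contrast-limited adaptive histogram equalization (CLAHE) and a mild unsharp mask to a grayscale infrared frame using OpenCV; all parameter values are assumptions.

```python
import cv2
import numpy as np

def enhance_night_frame(frame_gray: np.ndarray) -> np.ndarray:
    """Illustrative contrast enhancement for a low-contrast IR frame."""
    # CLAHE spreads local contrast without amplifying noise as much as
    # a global histogram equalization would.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(frame_gray)
    # Mild unsharp masking to emphasize edges (obstacles, persons, animals).
    blurred = cv2.GaussianBlur(enhanced, (0, 0), sigmaX=2.0)
    return cv2.addWeighted(enhanced, 1.5, blurred, -0.5, 0)

# Synthetic stand-in frame; a real system would read from the IR imager.
frame = (np.random.rand(480, 640) * 80).astype(np.uint8)
out = enhance_night_frame(frame)
```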

2.1.2 Night View Assist including pedestrian detection

The night view system described above was improved so that pedestrians are detected in the night view image and highlighted to the driver. Although pedestrian recognition has been a research topic for many years, this system – launched in 2009 as ‘Night View Assist PLUS’ – was the first to bring pedestrian recognition into the car [9]. Because pedestrians are, compared to a vehicle, ‘weak’ participants in traffic, ‘Intelligent Night View’ is a major step in active safety.

2.1.3 Spotlight function

As a consequence of recognizing pedestrians, Mercedes-Benz developed a system which informs not only the driver but also the pedestrian: ‘Night View Assist PLUS’ with spotlight function. Using a special LED light module within the headlights, the pedestrian can be flashed to make him or her aware of the dangerous situation. The spotlight function became available with the Mercedes-Benz CLS-Class in 2011 [10].

2.1.4 Night View Assist including animal detection

The third generation of night view systems, arriving in 2013, includes several significant improvements [11]. It detects not only pedestrians but also animals in potentially hazardous situations. It also automatically directs the driver’s focus of attention: when an object is highlighted, the night view image is shown in the instrument cluster regardless of whether it was on display before, as can be seen in Figure 1. From a sensor point of view, it is remarkable that both NIR and FIR cameras are now used at the same time.

Figure 1 Mercedes-Benz S-Class. Night View Assist PLUS: alerting to pedestrians and animals [11].

2.1.5 Adaptive Highbeam Assist

The system automatically turns the high beam on and off depending on other vehicles in front. A standard CMOS camera behind the windscreen is used for this purpose; it may be shared with other functions such as lane keeping support or traffic sign assist. Again by means of image processing, the system detects oncoming and preceding vehicles and regulates the light accordingly to allow for optimal vision without disturbing others. With the 2009 Mercedes-Benz system [9] and others, the light range is adjusted seamlessly, with a very smooth transition between low beam and high beam, whereas other systems on the market switch in a binary way between the two.
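
A plausible – but here purely illustrative – first stage of such a system is to extract bright image blobs as candidates for headlights and tail lights; a production system would add color, motion, tracking and classification cues. The threshold and size values below are assumptions.

```python
import cv2
import numpy as np

def detect_light_blobs(frame_gray: np.ndarray, min_area: int = 4):
    """Return bounding boxes of bright blobs as vehicle-light candidates."""
    # At night, (near-)saturated pixels are candidate light sources.
    _, mask = cv2.threshold(frame_gray, 230, 255, cv2.THRESH_BINARY)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((int(x), int(y), int(w), int(h)))
    return boxes
```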

2.1.6 Adaptive Highbeam Assist with cut-out function

The next generation of Highbeam Assist adds a cut-out function: ‘Adaptive Highbeam Assist PLUS’ (2013) [11]. It makes use of the LED headlight module and activates only those LEDs that provide high beam without interfering with other vehicles. As a consequence, as can be seen in Figure 2, the driver has high beam in at least some areas all the time, which significantly improves sight at night.

Figure 2 Mercedes-Benz S-Class. Adaptive Highbeam Assist PLUS: permanent main beam with no dazzling [11].

2.2 Parking assistance

Parking assistance denotes a number of systems which help the driver get into and out of a parking space with ease and without causing any damage. Although parking assistance has traditionally been a domain of sensors other than cameras, such as ultrasonic sensors, more and more vehicles are being equipped with parking assist cameras.

2.2.1 Reverse camera

A wide-angle camera mounted on the trunk shows the driver the area behind the car when reverse gear is engaged. Usually a CMOS camera is used for this purpose. The most important property is image quality under various light conditions. Additionally, distance cues and the trajectory of the most probable driving path are shown in the image as an overlay. Basic reverse camera systems have been on the market for at least 5 years.

2.2.2 360° view

The next step in providing the driver with a virtual outside view of the situation are 360° view systems – so-called ‘surround view’ systems. These make use of several wide-angle cameras distributed around the outside of the vehicle. The Mercedes-Benz 360° view launched in the 2011 GL-Class uses the reverse camera plus one camera in the front and two mounted on the outside mirrors [12]. By means of image processing, the images of all cameras are merged into one image, usually with the virtual viewpoint directly above the middle of the car (‘top view’), see Figure 3. Additional views, such as side views or the short-range front view, can help the driver in particular situations such as narrow parking spots.
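
The stitching algorithm itself is not published. A common textbook approach is to map each camera’s view of the ground plane into a shared top-view canvas with a homography; the sketch below warps one camera image given four ground-plane correspondences. The point coordinates and canvas size are made-up assumptions.

```python
import cv2
import numpy as np

# Four ground-plane points as seen in the camera image (pixels) ...
img_pts = np.float32([[220, 480], [420, 480], [395, 300], [245, 300]])
# ... and their target locations in the common top-view canvas (pixels).
top_pts = np.float32([[280, 600], [360, 600], [360, 400], [280, 400]])

H = cv2.getPerspectiveTransform(img_pts, top_pts)

def to_top_view(frame: np.ndarray) -> np.ndarray:
    """Warp one camera's image onto the shared bird's-eye canvas."""
    return cv2.warpPerspective(frame, H, (640, 640))

# A full surround view repeats this per camera and blends the overlapping
# regions of all warped images into one composite.
```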

Figure 3 Mercedes-Benz GL-Class. The current state of the art is embodied by extended Active Parking Assist and the 360° camera on board the GL-Class. The surrounding area is visualized from different perspectives on the COMAND display, from a complete bird’s eye panorama to various detailed views which enable precise maneuvering or reveal crossing traffic hidden in blind spots when moving out of narrow exits [12].

2.3 Lane keeping support

Lane keeping support includes all systems that help the driver stay in lane or, in the case of unintended drifting out of the lane, make him or her aware of the situation to prevent damage. This is called lateral support. Generally, such systems use a (CMOS) camera mounted behind the windscreen. The upcoming 2013 generation of these systems uses a stereo camera, where distance information calculated from the stereo image pairs further enhances system performance [11].

2.3.1 Lane Departure Warning

This system scans the environment in front of the vehicle by means of image processing and finds the lane boundary markings. It continuously estimates the position of the vehicle within the lane and, in case of drifting over a lane marking, issues a warning. The system launched by Mercedes-Benz in 2009 is now available in almost all model lines from A-Class to S-Class [9]. In one of the system modes that the driver can configure in the settings, the system allows him or her to slightly cut curves without getting nuisance warnings. Warnings are also suppressed if the driver is very active in terms of accelerating, steering or braking. Whereas the Mercedes-Benz system, and some others, make use of a small vibration motor in the steering wheel, other manufacturers prefer acoustic warnings.
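
The warning logic itself is not published. A widely used criterion in the literature is the time to line crossing (TLC): the lateral distance to the marking divided by the lateral drift velocity. The sketch below combines TLC with the suppression conditions mentioned above; the threshold is an assumed value, not a production calibration.

```python
def lane_departure_warning(dist_to_marking_m: float,
                           lateral_speed_mps: float,
                           turn_signal_on: bool,
                           driver_active: bool,
                           tlc_threshold_s: float = 0.8) -> bool:
    """Warn when the time to line crossing (TLC) falls below a threshold."""
    # Suppress warnings for intended lane changes or an evidently active driver.
    if turn_signal_on or driver_active:
        return False
    if lateral_speed_mps <= 0.0:  # not drifting towards the marking
        return False
    return dist_to_marking_m / lateral_speed_mps < tlc_threshold_s

assert lane_departure_warning(0.2, 0.5, False, False)      # crossing in 0.4 s
assert not lane_departure_warning(1.0, 0.3, False, False)  # crossing in 3.3 s
```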

2.3.2 Lane Departure Protection

Making use of the electronic stability control (ESC) system, an unintended lane departure can actively be prevented by applying brake torque to a single wheel. In doing so, the vehicle changes its yaw angle back towards the center of the lane. The Mercedes-Benz system ‘Active Lane Keeping Assist’ was launched in 2010. Recent cars use electric power steering (EPS) as an alternative to single wheel braking. This has the same effect with respect to generating a yaw moment but does not simultaneously decelerate the vehicle in the way a single wheel brake torque does. The Mercedes-Benz system intervenes only at solid continuous lines. The 2013 version also intervenes at dashed lines if the radar sensors detect a vehicle in the adjacent lane [11].

2.3.3 Lane Keeping Assist

Detecting the lane markings in the image leads to another system, which provides continuous lane keeping support by taking control of the electric power steering (EPS), as indicated in Figure 4. Such a system automatically keeps the vehicle in the center of the lane as long as lane markings can be recognized in the camera image. Whereas the primary purpose of lane departure warning and lane departure protection is increased safety, lane keeping assist aims at increasing comfort and making driving less stressful. Mercedes-Benz will introduce ‘Steering Assist’ as part of DISTRONIC PLUS in 2013 [11]. It is designed as one integral system operating in the speed range from 0 to 200 km/h. Lane markings are the basis for steering assist at speeds above 60 km/h; lateral steering support at lower speeds is primarily based on vehicles ahead that are recognized and measured in the camera image.
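
The control law of Steering Assist is not published. As a schematic illustration only, a minimal lane-centering controller could map the lateral offset and heading error from the camera’s lane model to a bounded steering torque request; the gains and the torque limit below are assumptions.

```python
def steering_torque_request(lateral_offset_m: float,
                            heading_error_rad: float,
                            k_offset: float = 2.0,
                            k_heading: float = 8.0,
                            torque_limit_nm: float = 3.0) -> float:
    """Map lane model errors to a bounded EPS torque request (illustrative).

    A production controller would additionally handle speed-dependent gain
    scheduling, driver override detection and fault states.
    """
    torque = -(k_offset * lateral_offset_m + k_heading * heading_error_rad)
    # Keep the assist torque low enough for the driver to override easily.
    return max(-torque_limit_nm, min(torque_limit_nm, torque))

print(steering_torque_request(0.3, 0.02))  # drifting right: negative = steer left
```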

Figure 4 Mercedes-Benz S-Class. DISTRONIC PLUS with Steering Assist: comfort-enhancing assistance with lateral lane guidance [11].

2.4 ACC and collision prevention

To a certain extent, camera-based driver assistance systems can also provide longitudinal support although, historically speaking, this has always been a classical domain of radar sensors. However, stereo cameras, or smart combinations of one mono camera and one additional sensor such as a LIDAR or PMD sensor, can do the same job: they can estimate the distance to preceding vehicles or pedestrians and use it to automatically keep a safe distance or prevent collisions. Since active safety and pedestrian safety are becoming increasingly important, different technologies are still competing. Stereo camera sensors seem best prepared to combine distance measurements with appearance-based classifiers to yield the best possible recognition performance at extremely low false positive rates. However, this performance comes at the cost of high computational power: image rectification, disparity image calculation and a complex stereo calibration algorithm necessitate high-performance DSPs or FPGA and ASIC parts in the camera ECU. Mono camera systems, on the other hand, provide the same basic functionality with less computational effort, but their performance is inferior to stereo systems in ambiguous situations due to the lack of measured distance information.
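
As a rough illustration of the stereo processing chain mentioned above (not the production algorithm), the following OpenCV sketch computes a disparity image from an already rectified grayscale pair and converts it to metric depth via Z = f·B/d. The focal length and baseline are assumed values.

```python
import cv2
import numpy as np

FOCAL_PX = 1200.0   # focal length in pixels (assumption)
BASELINE_M = 0.20   # stereo baseline in meters (assumption)

def depth_from_stereo(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    """Dense depth map in meters from a rectified grayscale stereo pair."""
    # Semi-global block matching; parameters are illustrative, not tuned.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    # OpenCV returns fixed-point disparities scaled by 16.
    disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan  # invalid or occluded pixels
    return FOCAL_PX * BASELINE_M / disparity  # Z = f * B / d
```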

Note that each individual technology is still being pushed forward continuously, and some of the system limits can be overcome by algorithmic improvements. Nevertheless, the best performance is achieved by combining technologies with different characteristics, in particular camera plus radar. Sensor fusion compensates for individual sensor uncertainties, often by means of Kalman filtering.
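
A minimal one-dimensional sketch of such a fusion filter, assuming a constant-velocity model and made-up noise figures: a Kalman filter that fuses a radar range and a camera-derived range into one distance estimate.

```python
import numpy as np

dt = 0.05                              # 20 Hz sensor cycle (assumption)
F = np.array([[1.0, dt], [0.0, 1.0]])  # state: [distance, closing speed]
H = np.array([[1.0, 0.0]])             # both sensors measure distance only
Q = np.diag([0.01, 0.1])               # process noise (assumption)

x = np.array([[30.0], [0.0]])          # initial guess: 30 m, 0 m/s
P = np.eye(2) * 10.0                   # initial uncertainty

def kf_step(z: float, r: float) -> None:
    """One predict/update cycle with a distance measurement z of variance r."""
    global x, P
    x = F @ x                          # predict
    P = F @ P @ F.T + Q
    y = z - (H @ x)[0, 0]              # innovation
    s = (H @ P @ H.T)[0, 0] + r
    k = P @ H.T / s                    # Kalman gain
    x = x + k * y
    P = (np.eye(2) - k @ H) @ P

kf_step(29.6, r=0.25)  # radar: accurate range (assumed variance)
kf_step(28.0, r=4.0)   # camera: coarser range estimate (assumed variance)
print(float(x[0, 0]))  # fused distance estimate
```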

2.4.1 Automatic cruise control

This system measures the distance to the vehicle ahead and controls the driver’s vehicle to keep a constant safe distance. Usually such systems are complemented by warnings when the distance falls below a limit. Camera-only ACC has its strength at low speed, whereas the classical radar-based systems show their strength at high speed and long distances to preceding vehicles.

2.4.2 Collision prevention

At low speeds – up to city cruising speeds – collision prevention systems can fully prevent collisions by automatic braking. Such systems are mostly based on either a stereo camera or a mono camera plus one additional sensor, because distance estimation is crucial and false alarm rates must be as low as possible.

2.4.3 Brake assist

Camera-based recognition of vehicles on a potential collision course can also be used to assist braking once the driver has decided to touch the brake pedal. Such a system boosts the driver’s touch of the brake pedal to a full brake as soon as it detects an imminent collision. The major contribution of camera-based technology is the detection of crossing traffic and the recognition of pedestrians in hazardous areas in front of the vehicle. The stereo camera-based system ‘BAS PLUS with Cross-Traffic Assist’ will be launched by Mercedes-Benz in 2013, following the previous radar-based systems.
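
A common trigger criterion in the literature is the time to collision (TTC), i.e., distance divided by closing speed. The sketch below boosts the driver’s braking when the TTC falls below a threshold; the threshold value is an assumption, not the production calibration.

```python
def brake_assist(distance_m: float,
                 closing_speed_mps: float,
                 driver_braking: bool,
                 ttc_threshold_s: float = 1.5) -> bool:
    """Request full braking force when the driver brakes and a collision is imminent."""
    if not driver_braking or closing_speed_mps <= 0.0:
        return False
    return distance_m / closing_speed_mps < ttc_threshold_s

assert brake_assist(10.0, 8.0, driver_braking=True)        # TTC = 1.25 s: boost
assert not brake_assist(10.0, 8.0, driver_braking=False)   # driver not braking
```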

2.4.4 Collision mitigation system

Collision prevention systems may similarly act as collision mitigation systems: if the system recognizes that a collision can no longer be prevented, it may at least reduce the severity of the collision by applying a full brake.

2.5 Sign recognition

In addition to actively intervening driver assistance systems, there are also systems that only inform the driver about signs beside the road. They help the driver keep track of important information and avoid unintentionally missing a sign.

2.5.1 Speed Limit Assist

This system detects speed limit signs and displays them to the driver in the instrument cluster or the central display. End-of-limit signs are also recognized. Camera-based recognition is supported by map data provided by the navigation system. Mercedes-Benz introduced such a system in 2009 using the Multi Purpose Camera (a joint device which also serves lane departure warning and adaptive headlight control) [9].

2.5.2 Traffic Sign Assist

Recent systems also recognize and display a number of additional road signs: no-overtaking signs, wrong-way signs, and compulsory driving direction signs, as shown in Figure 5 (Mercedes-Benz 2013) [11]. Such recognition is extremely challenging as traffic signs look very different all over the world.
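
The classifiers themselves are proprietary. As the feature list in section 3.2 suggests, a classic first stage for round signs such as speed limits is a circle detector over the image; below is a hedged sketch using the Hough circle transform, with all parameters chosen as illustrative assumptions.

```python
import cv2
import numpy as np

def circular_sign_candidates(frame_gray: np.ndarray):
    """Return (x, y, radius) circle candidates for a subsequent sign classifier."""
    blurred = cv2.medianBlur(frame_gray, 5)  # suppress sensor noise
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=20, param1=120, param2=40,
                               minRadius=8, maxRadius=60)
    return [] if circles is None else np.round(circles[0]).astype(int).tolist()
```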

Figure 5 Mercedes-Benz S-Class. Traffic Sign Assist: now recognizes no-overtaking zones and access restrictions, too [11].

3 Series development of camera-based driver assistance systems

The development process of driver assistance systems generally follows the quality-management-driven automotive process described in ISO/TS 16949 [13]. In 2011, ISO 26262 became relevant, which increased the focus on automotive functional safety considerably [14]. The process can be described as a V-model: starting at a high-level specification (top left edge of the ‘V’), the specification is detailed more and more until it finds its implementation (base of the ‘V’). Testing and verification, by contrast, proceed from low-level testing to high-level system testing and release (top right edge of the ‘V’). This process is described at some level of abstraction in the following. A more detailed description of some aspects of the previous-generation systems lane departure warning, adaptive headlight control and speed limit assist can be found in [15].

3.1 Specification

The specification of driver assistance systems starts at the very top, which could be called the car level. It is followed by the system level, which includes the component view, and the software level. Several sublevels can be thought of either as independent levels or as parts of these three main levels: ‘car level’, ‘system level’, and ‘software level’.

3.1.1 Car level specification

The management team of a vehicle model line decides which features a car under development is to have. Such features may be mechanical properties, design characteristics or, as considered here, the driver assistance systems to be offered to the customer in a particular model. The team decides whether a system should become standard in each and every car or be offered as an optional extra. The sensors are defined, including cost and weight requirements, and the project timeline for the car development is set.

To give an example: it is decided at car level whether a car will be offered with a lane departure warning system, with lane keeping support, or with both.

3.1.2 System level specification

At this level of specification the system requirements are formulated. Using requirements management tools, these are broken down to the components that are part of the system. Safety requirements result from a safety plan and hazard analysis. An FMEA (failure mode and effects analysis) yields further requirements to provide a failsafe mode for the system. Performance requirements such as system availability and false alarm rates (if applicable) are among the most important measures to be defined.

Additionally, the interfaces between the involved components must be defined. This includes the type of network (FlexRay, CAN, etc.), signal names and bit definitions, as well as repetition cycles and timeout restrictions – not to forget communication failsafe mechanisms such as cyclic redundancy checks (CRC) and sequence counters.
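
As an illustration of these failsafe mechanisms, the receiver-side check below validates a CRC-8 and a 4-bit alive counter on a periodic message. The frame layout, counter width and polynomial (SAE J1850 here) are assumptions for the example, not a specific OEM protocol.

```python
def crc8(payload: bytes, poly: int = 0x1D, init: int = 0xFF) -> int:
    """Bitwise CRC-8 (SAE J1850 polynomial, chosen purely as an example)."""
    crc = init
    for byte in payload:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def check_frame(payload: bytes, rx_crc: int, rx_counter: int, last_counter: int) -> bool:
    """Accept a frame only if the CRC matches and the alive counter advanced by one."""
    if crc8(payload) != rx_crc:
        return False                              # corrupted on the bus
    return rx_counter == (last_counter + 1) % 16  # otherwise stale or repeated

msg = bytes([0x12, 0x34, 0x56])
assert check_frame(msg, crc8(msg), rx_counter=5, last_counter=4)
```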

One of the important steps is to check for interference with other systems, as there might be contradictory requirements. Just think of parameters such as exposure time and aperture, which different applications using the same camera may want to set very differently. Often a compromise is necessary to save cost and still provide acceptable performance for all systems.

3.2 Implementation

When the system decomposition into components is finished, the software algorithm architecture design follows. Finally, the individual software modules are implemented. For camera-based driver assistance functions, such modules handle the following steps (a schematic sketch of such a pipeline follows the list):

  • Image capturing and temporary storage, including exposure time control, aperture control, white balancing and other top-level image preprocessing steps.

  • Region-of-interest selection, camera-to-world calibration and function specific requirements. Subsampling or interpolation may be applied if necessary.

The next implementation steps may differ between displaying systems and control systems.

  • Image enhancement: adjustment of brightness and contrast, color adjustment, dewarping (essential for the night view system and rear camera).

  • Merging: mosaicing and point-of-view generation (essential for 360° view).

  • Image preprocessing, such as rectification and disparity image calculation (with stereo cameras), optical flow calculation, gradient or integral image generation, and color space transforms (saturation image). The so-called ‘6D-Vision’ method may be used as a framework for object and pedestrian recognition [16].

  • Feature generation: identifying relevant features that may be indicative of the objects of interest. To give an example: for lane keeping support these features may be measurements of lane marking boundaries, for headlight control potential vehicle lights, and for traffic sign recognition candidates for circular geometry matches. Features for pedestrian recognition may be both 3D shape and appearance model characteristics. Feature reduction, feature decorrelation or feature space transformation may follow.

  • Classification: reasoning over the detected features results in a decision as to whether an object is in the image or not. The notion of ‘object’, again, depends on the system function and may be a lane boundary, a traffic sign, a vehicle/bicycle or a pedestrian/animal. There are different and very specialized classifiers for each task. In general, pattern recognition by means of trained sample representations is highly applicable. Usually a huge amount of data is used to train a classifier, whereas separate release data are used for verification.

  • Tracking: as single-frame recognition would be unstable, classification results (or even the features themselves) are tracked over time. In general, the vehicle dynamics are input parameters to such tracking. Kalman filtering is commonly used to account for the different variances of the characteristics. Some signals must be low-pass filtered to reduce noise.

  • Output generation: providing the signals to other components so that they can be transmitted or displayed, respectively.
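
The module list above can be read as one processing chain. The skeleton below wires such stages together in order; every stage body is a deliberately simplistic stand-in (assumption) meant only to show the data flow from captured frame to output signal, not any production algorithm.

```python
import numpy as np

def capture() -> np.ndarray:
    # Stand-in for the imager interface: one 8-bit grayscale frame.
    return (np.random.rand(480, 640) * 255).astype(np.uint8)

def preprocess(frame: np.ndarray) -> np.ndarray:
    # Gradient magnitude as a trivial stand-in for the preprocessing images.
    gy, gx = np.gradient(frame.astype(np.float32))
    return np.hypot(gx, gy)

def extract_features(img: np.ndarray) -> float:
    # One scalar "feature": mean edge strength in a lower region of interest.
    return float(img[240:, :].mean())

def classify(feature: float, threshold: float = 10.0) -> bool:
    # Toy decision: "object present" if the feature exceeds a threshold.
    return feature > threshold

def track(history: list, detection: bool, n: int = 5) -> bool:
    # Debounce over time: confirm only if a majority of recent frames agree.
    history.append(detection)
    recent = history[-n:]
    return sum(recent) > len(recent) // 2

history: list = []
for _ in range(10):
    confirmed = track(history, classify(extract_features(preprocess(capture()))))
print("output signal:", confirmed)  # would be transmitted on the vehicle network
```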

3.3 Validation

Validation reverses the process direction, from low level to high level. The implementation is first verified by software tests. Component tests and integration tests follow. System tests and ‘high-level’ car validation are the last steps before a driver assistance system is released.

3.3.1 Software level verification

The software modules contributing to the driver assistance system are tested by code review and unit tests first. For each module, input and output checks are applied. It is verified that the software fulfills the software design requirements.

To give an example: for the Mercedes-Benz lane departure protection system, two different algorithms for lane detection are used and verified by comparing their results (‘verification module’) [17]. Only if the comparison finds the two algorithms in accordance will the system actually carry out electronic stability program (ESP) braking. The testing of the verification module is a good example of a module that is tested exhaustively over the full permutation of input and output signals to assure that it calculates the comparison in the desired way.
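
The published description does not give the comparison rule. A minimal sketch of what such a verification module might look like, under the assumed criterion that braking is enabled only when two independent lane estimates agree within a fixed tolerance:

```python
def verification_module(lane_a_offset_m: float,
                        lane_b_offset_m: float,
                        tolerance_m: float = 0.15) -> bool:
    """Enable ESP braking only if both lane detection algorithms report a
    consistent lateral offset (tolerance value is an assumption)."""
    return abs(lane_a_offset_m - lane_b_offset_m) <= tolerance_m

# Unit-test style checks in the spirit of the exhaustive module test above.
assert verification_module(0.30, 0.40)      # 0.10 m apart: consistent
assert not verification_module(0.30, 0.60)  # 0.30 m apart: inhibit braking
```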

3.3.2 Database verification

Database tests use recorded video files for offline simulation of the algorithms in the camera. Batch lists help verify availability and precision on a large scale of recorded scenes. The video data are annotated (‘labeled’) by experts who attach the information about what is actually seen in the video; this so-called ‘ground truth’ is stored in separate label files that can be processed automatically in time alignment with the video data. For lane keeping assistance, for example, the labeled ground truth contains:

  • situation labels: country, weather condition, day/night;

  • road marking characteristics: road class, lane marking type (solid, dashed), lane marking color;

  • lane boundary positions: x, y co-ordinates in the image, which can be transformed into 3D real-world co-ordinates using the camera position.

The lane boundary estimated by the camera is compared with the labeled position data, yielding a precision assessment of the estimated road model. Availability is calculated independently for each road characteristic cluster and for each situation label. Such an evaluation reveals strengths and weaknesses of the algorithms under all major conditions.
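
A toy version of such an evaluation, with an assumed data layout: compare per-frame lane position estimates against labeled ground truth, then report availability and mean absolute error per situation label.

```python
from collections import defaultdict

# Each record: (situation label, ground-truth offset in m, estimate in m or None).
frames = [
    ("day/solid",    0.42, 0.45),
    ("day/solid",    0.40, 0.38),
    ("night/dashed", 0.55, None),  # the algorithm reported no lane here
    ("night/dashed", 0.50, 0.62),
]

stats = defaultdict(lambda: {"frames": 0, "detected": 0, "abs_err": 0.0})
for label, truth, est in frames:
    s = stats[label]
    s["frames"] += 1
    if est is not None:
        s["detected"] += 1
        s["abs_err"] += abs(est - truth)

for label, s in stats.items():
    availability = s["detected"] / s["frames"]
    mae = s["abs_err"] / s["detected"] if s["detected"] else float("nan")
    print(f"{label}: availability={availability:.0%}, MAE={mae:.3f} m")
```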

Similar database verification is carried out for all camera-based systems. Obviously, the amount of ground truth data to be labeled as well as the key figures derived from such database tests may vary according to specific needs.

3.3.3 System level verification

System level verification starts with component verification: does the hardware component fulfill all requirements? For instance, temperature tests, shock tests and durability tests are essential basic tests. The next step is usually HIL (hardware in the loop) testing. For driver assistance systems, the HIL setup can include several interacting components. These tests aim at verifying that the system shows correct behavior in controlled situations. Random tests may be added if numerous input variables or configuration variants are allowed.

System verification also includes testing that all software modules in the camera run together without disturbing one another. Inter-processor communication, shared memory and processor multitasking make such tests essential.

Of course, with driver assistance systems the major part of verification is done by driving. Such testing is separated from implementation and carried out by independent test managers. Both the supplier and the OEM run long-distance drives with several cars and all hardware configurations to make sure the system adheres to the specifications. Test drives cover all major markets where the driver assistance system is to be sold. All weather conditions must be met. Special situations are tested systematically; for lane keeping support, for example, these might be construction sites, tunnels or tar stripes on the road. Different car configurations must also be tested systematically. For instance, different headlight technologies (halogen, xenon, LED) may significantly impact recognition results for all camera-based systems at night.

Although this is not required by ISO 26262 in this form, Mercedes-Benz runs close-to-customer drives as a final confirmation of the driver assistance systems. This means that drivers who are not familiar with the systems drive such a car and report on the positive or negative experience of driving it. For lane departure protection, for example, this validation comprised a total of 1 000 000 km [17].

Usually online and offline measurement equipment is used to allow for evaluation of the system. All issues found during testing are reported and tracked. In the end, the system is formally released, including hardware and software releases of all contributing components.

3.3.4 Car level verification

Although the transition between component testing, system testing and car testing is fluid, this last step of car level verification is mentioned separately to highlight the fact that all driver assistance systems must also be validated together in a single car, since the correct interaction of all systems in the car is just as important. Therefore, manufacturers run durability test drives for each vehicle line. Winter and summer test drives focus on weather conditions.

The vehicle is finally released by approval of the management team after all testing has been completed successfully.

4 Conclusion

Camera-based driver assistance systems have become available to almost everybody today. The first systems were expensive, exclusive and mostly singular in their function, for example, night vision. The second generation introduced excellent basic camera-based functions such as adaptive headlight control or lane departure warning. Today, we are already about to enter the third generation, which can be characterized by several tendencies:

  • Increasing functionality can be found for existing systems, for example, more and different traffic signs are recognized; different views for parking assistance are available.

  • The number of applications for camera-based assistance systems is increasing enormously, and all relevant aspects of driving are addressed, for example, lane keeping support and crossing traffic alert.

  • The stereo camera is proving to be a reliable 3D sensor due to its robust object detection. Yet alternative sensor principles and sensor combinations are also being developed, for example, mono camera plus LIDAR [18], and different hardware solutions may evolve for different performance requirements of the same systems.

Taking this into account, it can also be concluded that system specification and validation are the essential basis for the development of driver assistance systems. These efforts will pay off with the fast rollout of driver assistance systems that is already on the horizon.

In the same way, public standardization and reproducible, objective testing are being advanced. The US National Highway Traffic Safety Administration (NHTSA), for example, has already set up testing programs for driver assistance systems [19]. Its European counterpart is considering granting rating stars for driver assistance systems to underline their importance in vehicle safety [20].

Finally, all the systems mentioned can be seen as individual steps towards a common goal: avoiding injuries and fatalities on our roads and making driving a comfortable and relaxed experience, even in traffic jams or in hazardous situations.


Corresponding author: Michael Grimm, Daimler AG, Benzstr. 1, 71069 Sindelfingen, Germany

References

[1] J. Horn and D. Gau, ‘Wer spurt am besten? – Assistenzsysteme im Test’, Autobild Nr. 32 (2011) pp. 36–46.

[2] G. Matthies and O. Bendig, ‘Wo ein Wille ist, ist auch ein Markt’, Automotive Agenda Nr. 5 (2010) pp. 30–35.

[3] ADAC, ‘Den Fahrer unterstützen – Das leisten aktuelle Assistenzsysteme’, Available at: http://www.adac.de/infotestrat/tests/assistenzsysteme/default.aspx.

[4] auto motor und sport, ‘Fahrer-Assistenzsysteme im Test: Unfallvorsorge mit moderner Technik’, 19.02.2011, Available at: http://www.auto-motor-und-sport.de/testbericht/fahrer-assistenzsysteme-im-test-1325728.html.

[5] Autobild, ‘Assistenzsysteme im Test – Wie gut sind elektronische Schutzengel?’, 20.09.2012, Available at: http://www.autobild.de/artikel/ass4istenzsysteme-im-test-3607502.html.

[6] Statistisches Bundesamt, Press Information Nr. 065, 24.02.2012.

[7] J.-E. Källhammer, ‘Night vision: requirements and possible roadmap for FIR and NIR systems’, Proc. SPIE 6198, Photonics in the Automobile II, 61980F (2006).

[8] Daimler AG, ‘The New Mercedes-Benz S-Class: Be Ahead’, Press Information, 05.08.2005, Available at: http://media.daimler.com/dcmedia/0-921-614307-1-815841-1.html.

[9] Daimler AG, ‘The New Mercedes-Benz E-Class’, Press Information, 03.03.2009, Available at: http://media.daimler.com/dcmedia/0-921-1159728-1-1182145-1.html.

[10] Daimler AG, ‘New Spotlight Function for Active Night View Assist Plus’, Press Information, 08.12.2010, Available at: http://media.daimler.com/dcmedia/0-921-658892-1-1354042-1.html.

[11] Daimler AG, ‘Mercedes-Benz “Intelligent Drive” TecDay: Networked with all Senses’, Press Information, 19.11.2012, Available at: http://media.daimler.com/dcmedia/0-921-1549267-1.html.

[12] Daimler AG, ‘Visibility and Safety: The GL-Class: Complete Awareness’, Press Information, 26.07.2012, Available at: http://media.daimler.com/dcmedia/0-921-1476982-1-1514668-1.html.

[13] ISO/TS 16949:2009, ‘Qualitätsmanagementsysteme – Besondere Anforderungen bei Anwendung von ISO 9001:2008 für die Serien- und Ersatzteil-Produktion in der Automobilindustrie’, 3rd edition (2009).

[14] ISO 26262:2011, ‘Road Vehicles – Functional Safety’, 1st edition (2011).

[15] J. Seekircher, B. Woltermann, A. Gern, R. Janssen, D. Mehren, et al., ‘Das Auto lernt sehen: Kamerabasierte Assistenzsysteme’, in ‘Die neue E-Klasse von Mercedes-Benz’, ATZ Nr. 1 (2009) pp. 64–70.

[16] Daimler AG, ‘Daimler’s “6D Vision” Innovation is Awarded the Karl Heinz Beckurts Prize’, Press Information, 22.11.2012, Available at: http://media.daimler.com/dcmedia/0-921-656548-1-1552158-1.html.

[17] M. Schopper, T. Kandemir, R. Fröming, H. Messner, M. Raab, et al., ‘Fahrerassistenzsysteme’, in ‘Der neue SL von Mercedes-Benz’, ATZ Nr. 2 (2012) pp. 172–179.

[18] Continental AG, ‘Continental Integrates the Camera and Infrared Functions into a Single Compact Unit’, Press Information, 17.10.2012, Available at: http://www.conti-online.com/generator/www/com/en/continental/pressportal/themes/press_releases/categoryNavigation_overview_press_en.html?ovMode=showDocM&ssanum=&DocId=6708332.

[19] NHTSA, National Highway Traffic Safety Administration, Available at: http://www.nhtsa.gov/, accessed 04.12.2012.

[20] EuroNCAP, ‘Standardisiertes Testprogramm’, European New Car Assessment Programme, Available at: http://www.euroncap.com, accessed 04.12.2012.

Received: 2012-12-13
Accepted: 2013-3-3
Published Online: 2013-04-03
Published in Print: 2013-04-01

©2013 by THOSS Media & De Gruyter, Berlin/Boston
