SAE International. Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. SAE J3016, 2016.
Shinko Y. Cheng and Mohan M. Trivedi. Vision-based infotainment user determination by hand recognition for driver assistance. IEEE Transactions on Intelligent Transportation Systems, 11(3):759–764, 2010.
Daniel Damböck, Mehdi Farid, Lars Tönert, and Klaus Bengler. Übernahmezeiten beim hochautomatisierten Fahren. In Tagung Fahrerassistenz, München, 15:16, 2012.
Aaron Enes and Wayne Book. Blended shared control of Zermelo’s navigation problem. In American Control Conference (ACC), pages 4307–4312. IEEE, 2010.
Jacob Engwerda. LQ Dynamic Optimization and Differential Games. John Wiley & Sons, 2005.
Michael Flad, Jonas Otten, Stefan Schwab, and Sören Hohmann. Steering driver assistance system: A systematic cooperative shared control design approach. In Proceedings of the International Conference on Systems, Man and Cybernetics (SMC), pages 3585–3592, 2014.
Christian Gold, Daniel Damböck, Lutz Lorenz, and Klaus Bengler. “Take over!” How long does it take to get the driver back into the loop? In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 57, pages 1938–1942, 2013.
Albert Haque, Boya Peng, Zelun Luo, Alexandre Alahi, Serena Yeung, and Li Fei-Fei. Towards viewpoint invariant 3D human pose estimation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 160–177, 2016.
Nikolas Hesse, Gregor Stachowiak, Timo Breuer, and Michael Arens. Estimating body pose of infants in depth images using random ferns. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 35–43, 2015.
Pedro Jiménez, Luis M. Bergasa, Jesús Nuevo, Noelia Hernández, and Ivan G. Daza. Gaze fixation system for the evaluation of driver distractions induced by IVIS. IEEE Transactions on Intelligent Transportation Systems, 13(3):1167–1178, 2012.
Uwe Kiencke and Lars Nielsen. Automotive Control Systems: For Engine, Driveline, and Vehicle. Springer, 2005.
Charles C. Liu, Simon G. Hosking, and Michael G. Lenné. Predicting driver drowsiness using vehicle measures: Recent insights and future challenges. Journal of Safety Research, 40(4):239–245, 2009.
Tianchi Liu, Yan Yang, Guang-Bin Huang, Yong Kiang Yeo, and Zhiping Lin. Driver distraction detection using semi-supervised machine learning. IEEE Transactions on Intelligent Transportation Systems, 17(4):1108–1120, 2016.
Julian Ludwig, Christoph Gote, Michael Flad, and Sören Hohmann. Cooperative dynamic vehicle control allocation using time-variant differential games. In Proceedings of the International Conference on Systems, Man and Cybernetics (SMC), pages 117–122, 2017.
Manuel Martin, Frederik Diederichs, Kangxiong Li, Michael Voit, Vivien Melcher, Harald Wildroither, and Rainer Stiefelhagen. Klassifikation von Fahrerzuständen und Nebentätigkeiten über Körperposen bei automatisierter Fahrt. In VDI/VW-Gemeinschaftstagung Fahrerassistenzsysteme und automatisiertes Fahren, 2016.
Manuel Martin, Stephan Stuehmer, Michael Voit, and Rainer Stiefelhagen. Real time driver body pose estimation for novel assistance systems. In International Conference on Intelligent Transportation Systems (ITSC), pages 1738–1744. IEEE, 2017.
Harshal Maske, Girish Chowdhary, and Prabhakar Pagilla. Intent aware shared control in off-nominal situations. In Proceedings of the Conference on Decision and Control (CDC), pages 5171–5176. IEEE, 2016.
Ralph Oyini Mbouna, Seong G. Kong, and Myung-Geun Chun. Visual analysis of eye state and head pose for driver alertness monitoring. IEEE Transactions on Intelligent Transportation Systems, 14(3):1462–1469, 2013.
Natasha Merat, A. Hamish Jamson, Frank C. H. Lai, and Oliver Carsten. Highly automated driving, secondary task performance, and driver state. Human Factors, 54(5):762–771, 2012.
Natasha Merat, A. Hamish Jamson, Frank C. H. Lai, Michael Daly, and Oliver M. J. Carsten. Transition to manual: Driver behaviour when resuming control from a highly automated vehicle. Transportation Research Part F: Traffic Psychology and Behaviour, 27:274–282, 2014.
Pavlo Molchanov, Shalini Gupta, Kihwan Kim, and Kari Pulli. Multi-sensor system for driver’s hand-gesture recognition. In Proceedings of the Conference and Workshops on Automatic Face and Gesture Recognition (FG), volume 1, pages 1–8, 2015.
Xiaoxiang Na and David J. Cole. Game-theoretic modeling of the steering interaction between a human driver and a vehicle collision avoidance controller. IEEE Transactions on Human-Machine Systems, 45(1):25–38, 2015.
S. Ozgur Oguz, Ayse Kucukyilmaz, Tevfik Metin Sezgin, and Cagatay Basdogan. Haptic negotiation and role exchange for collaboration in virtual environments. In Haptics Symposium, pages 371–378, 2010.
Eshed Ohn-Bar, Sujitha Martin, Ashish Tawari, and Mohan M. Trivedi. Head, eye, and hand patterns for driver activity recognition. In Proceedings of the International Conference on Pattern Recognition, pages 660–665, 2014.
Ina Petermann-Stock, Linn Hackenberg, Tobias Muhr, and Christian Mergl. Wie lange braucht der Fahrer? Eine Analyse zu Übernahmezeiten aus verschiedenen Nebentätigkeiten während einer hochautomatisierten Staufahrt. In 6. Tagung Fahrerassistenzsysteme: Der Weg zum automatischen Fahren, 2013.
Jonas Radlmayr, Christian Gold, Lutz Lorenz, Mehdi Farid, and Klaus Bengler. How traffic situations and non-driving related tasks affect the take-over quality in highly automated driving. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 58, pages 2063–2067, 2014.
Jamie Shotton, Toby Sharp, Alex Kipman, Andrew Fitzgibbon, Mark Finocchio, Andrew Blake, Mat Cook, and Richard Moore. Real-time human pose recognition in parts from single depth images. Communications of the ACM, 56(1):116–124, 2013.
Seyed Hossein Tamaddoni, Saied Taheri, and Mehdi Ahmadian. Optimal preview game theory approach to vehicle stability controller design. Vehicle System Dynamics, 49(12):1967–1979, 2011.
Ashish Tawari and Mohan M. Trivedi. Robust and continuous estimation of driver gaze zone by dynamic analysis of multiple face videos. In Proceedings of the Intelligent Vehicles Symposium, pages 344–349. IEEE, 2014.
Francisco Vicente, Zehua Huang, Xuehan Xiong, Fernando De la Torre, Wende Zhang, and Dan Levi. Driver gaze tracking and eyes off the road detection system. IEEE Transactions on Intelligent Transportation Systems, 16(4):2014–2027, 2015.
Hongsong Wang and Liang Wang. Modeling temporal dynamics and spatial configurations of actions using two-stream recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh. Convolutional pose machines. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 4724–4732, 2016.
Martin Wöllmer, Christoph Blaschke, Thomas Schindl, Björn Schuller, Berthold Färber, Stefan Mayer, and Benjamin Trefflich. Online driver distraction detection using long short-term memory. IEEE Transactions on Intelligent Transportation Systems, 12(2):574–582, 2011.
Ho Yub Jung, Soochahn Lee, Yong Seok Heo, and Il Dong Yun. Random tree walk toward instantaneous 3D human pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2467–2474, 2015.
About the article
Julian Ludwig studied electrical engineering and information technology at the Karlsruhe Institute of Technology (KIT). He received the bachelor’s degree in 2011 and the master’s degree in 2013. Since February 2014, he has been a member of the scientific staff of the IRS. His research focuses on the modeling and optimization of transitions of the driving task between driver and assistance system.
Manuel Martin studied computer science at the Karlsruhe Institute of Technology (KIT). He received his diploma in 2013. In his diploma thesis at the Fraunhofer IOSB, he worked on head pose estimation based on depth cameras. Since March 2014, he has been a member of the scientific staff of the Fraunhofer IOSB in the group Perceptual User Interfaces. His research focuses on body pose estimation and activity recognition of drivers in (automated) vehicles.
Matthias Horne studied computer science at the Karlsruhe Institute of Technology and received the master’s degree in 2016. He currently works at the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation (IOSB) in Karlsruhe on image analysis algorithms with a focus on the interior of cars.
Michael Flad received the Diploma degree from Ravensburg University of Cooperative Education, Ravensburg, Germany, in 2008, and the M.Sc. degree in electrical engineering and the Ph.D. degree in control engineering from the Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany, in 2011 and 2016, respectively. Since 2016, he has been the leader of the research group Cooperative Systems at the Institute of Control Systems, KIT. His research interests include cooperative control structures between human and machine and the application of these concepts to driver assistance systems.
Dr. Michael Voit studied computer science at the Karlsruhe Institute of Technology (KIT). After receiving his diploma in 2005, he began his research at KIT on estimating people’s visual focus of attention for perceptual user interfaces, using camera-based perception of head orientations and context observations. In 2008, he co-initiated and joined the newly founded Perceptual User Interfaces group at the Fraunhofer IOSB in order to research and develop smart and attentive workplace environments. After finishing his Ph.D. in 2011, he took over management of the group and established the developed methodologies in numerous domains that benefit from proactive assistance systems, such as (semi-)autonomous driving, manual manufacturing, and medical surgery.
Rainer Stiefelhagen received his Diplom (Dipl.-Inform) and Doctoral degree (Dr.-Ing.) from the Universität Karlsruhe (TH) in 1996 and 2002, respectively. He is currently a full professor for “Information technology systems for visually impaired students” at the Karlsruhe Institute of Technology (KIT), where he directs the Computer Vision for Human-Computer Interaction Lab at the Institute for Anthropomatics and Robotics as well as KIT’s Study Center for Visually Impaired Students. His research interests include computer vision methods for visual perception of humans and their activities, in order to facilitate perceptive multimodal interfaces, humanoid robots, smart environments, multimedia analysis and assistive technology for persons with visual impairments.
Sören Hohmann studied electrical engineering at the Technische Universität Braunschweig, the University of Karlsruhe, and the École Nationale Supérieure d’Électricité et de Mécanique in Nancy. He received the diploma degree (1997) and the Ph.D. degree (2002) from the University of Karlsruhe. Afterwards, he worked in industry at BMW in Munich until 2010, where his last position was head of the pre-development and series development of active safety systems. Today he is the head of the Institute of Control Systems at the Karlsruhe Institute of Technology (KIT) and a member of the directors’ board of the FZI Research Center for Information Technology, Karlsruhe. His research interests are cooperative control, alternative energies, and system guarantees by design.
Published Online: 2018-02-10
Published in Print: 2018-02-23
Funding Source: Bundesministerium für Bildung und Forschung
Award identifier / Grant number: 16SV7675K
This work was supported by the Federal Ministry of Education and Research of Germany (Bundesministerium für Bildung und Forschung) in the project “Personalized adaptive cooperative systems for automated vehicles” (PAKoS), grant number 16SV7675K.