Publicly available. Published by Oldenbourg Wissenschaftsverlag, August 16, 2016

Smartglasses for the Triage of Casualties and the Identification of Hazardous Materials

How Smartglasses Can Help Emergency Medical Services Managing Challenging Rescue Missions

  • Henrik Berndt

    Henrik Berndt is a research assistant at the Institute for Multimedia and Interactive Systems (IMIS) of the University of Luebeck. He holds a B.Sc. and M.Sc. in Informatics, specializing in Digital Media, and is currently working on his dissertation. His main current research interests include human-computer interaction in safety-critical contexts and interaction design for mobile devices.

    Tilo Mentler

    Tilo Mentler is a research assistant at the Institute for Multimedia and Interactive Systems (IMIS) of the University of Luebeck. He holds a diploma in Informatics, specializing in Digital Media. Recently, he finished his dissertation about the usability of mobile interactive systems in regular and extraordinary missions of Emergency Medical Services. His main current research interests include human-computer interaction in safety-critical contexts (e. g. medicine), usability engineering and interaction design of mobile devices. He is a founding member and vice-chairman of the sub-group “Human-Computer Interaction in Safety-Critical Systems” within the special interest group “Human-Computer Interaction” of the German Informatics Society (GI).

    and Michael Herczeg

    Prof. Dr. rer. nat. Michael Herczeg is professor of practical computer science and media informatics and director of the Institute for Multimedia and Interactive Systems (IMIS) of the University of Luebeck. His main areas of interest are human-computer interaction, software ergonomics, interaction design, multimedia and interactive systems, computer-aided teaching and learning as well as safety-critical human-machine systems. He is a co-founder and chair of the German ACM SIGCHI and Human-Computer-Interaction section of the German Informatics Society (GI). Prof. Herczeg is a member of ACM and GI and served as an organizer, reviewer, chair and keynote speaker for more than 100 conferences and workshops. He is an author and editor of more than 200 publications and is an editor for books and journals in interactive media. He works as a consultant for industry and government in the area of human-computer-interaction, human factors, software-ergonomics, usability engineering, eLearning and safety-critical human-machine systems.

From the journal i-com

Abstract

Emergency Medical Services (EMS) can be confronted with complex and challenging situations with many casualties that require special procedures and organizational structures. In order to keep control and records, incident commanders use paper-based notes, lists and forms. The increasing availability of smartglasses raises the research question of whether they can support members of EMS and improve processes and efficiency. In this contribution, we describe use cases for smartglasses in emergency medicine, such as triage in incidents with many casualties and the recognition of hazardous materials in accident contexts. We describe results from interviews with 10 members of EMS and civil protection units in Germany and from prototypical applications that have been developed and evaluated together with domain experts. The prototypical applications described in this contribution have shown promising results with respect to usability and acceptance.

1 Introduction

Emergency Medical Services (EMS) are faced with the challenge of managing rare incidents. About half of the missions of German EMS are patient transports; the other half are emergencies ([19], p. 50). The vast majority of emergency responses concern injuries or diseases of one or two persons. Operations with complex and challenging factors (e. g. accidents with hazardous materials or incidents with many casualties) are even rarer [7]. Accordingly, members of EMS cannot gain much routine and experience for such situations. An incident with many patients that cannot be managed by locally available resources using routine procedures is called a Mass Casualty Incident (MCI) ([21], p. 9). MCIs require special procedures (e. g. deciding on the priority for treating casualties) which deviate from the routine procedures for missions with only one or a few casualties [20].

Currently, EMS mostly use paper-based tools for information management or documentation purposes (see Figure 1). However, computer-based systems might not only be more efficient, but could also improve information sharing and support members of EMS by helping them to perform their tasks or to gain better situation awareness (SA) [8, 9, 16]. The collection, usage or transfer of digital data could yield substantial additional benefits such as automatic reports to leaders, control centers or hospitals. While some other application fields for safety-critical human-computer systems (e. g. aviation) have been well studied for a long time [3], this is not the case for EMS. A main reason is the need for mobile computers, since most of the work of EMS takes place outside of buildings or vehicles.

Figure 1
Paper-based tools that are used by members of German EMS in mass casualty incidents (names blackened). Shown are two lists with information about triaged casualties and a registration card for a casualty. On the lists, the numbers are stickers, while most of the information is handwritten.

During the last ten years, various projects have examined the use of tablet computers, PDAs and similar devices for supporting the tasks of EMS [1, 13, 17]. Some products have since come to market, and the first EMS are testing or using them in regular operation, e. g. to replace paper-based documentation [14]. Currently, documentation is a time-consuming process with potential for optimization, since members of EMS in Germany need about 21 minutes on average to document a rescue mission [15].

With the availability of lightweight smartglasses (e. g. Google Glass), the question arises whether they might be even more suitable than tablet computers for supporting the work of EMS members. An apparent advantage is hands-free operation using speech control. Since studies by Carenzo et al. [5] and Cicero et al. [6] showed some potential, we worked out application fields and potential advantages together with 10 members of German EMS and civil protection units. To this end, we conducted individual interviews with the participants, each lasting about one hour. In order to obtain qualitative feedback and unbiased ideas from the participants, the interviews were semi-structured: a guideline provided the structure and basic questions, allowing a general but variable course of conversation [2].

In the following sections, we describe potential benefits of smartglasses (section 2) and characteristics of different types of devices (section 3) as a basis for introducing the realized prototypical applications (section 4). Then, further potential application fields for smartglasses are explained (section 5), and limitations and results of the applications are outlined (section 6).

2 Potential Benefits of Smartglasses

All interview participants saw potential to replace some of today’s paper-based tools with mobile devices. There is a wide range of paper-based materials and tools in EMS. Depending on the task, these include, for example, algorithms, checklists, lists for documentation or for getting an overview of the dimension of an incident, and registration cards for casualties (see Figure 1). Often, several of these tools are needed to handle a single task.

In general, the interviewees saw spatial flexibility as one main advantage of mobile devices, while reliability and the need for electric power were mentioned as challenges or disadvantages. However, these problems do not seem unsolvable; present-day radio devices used by EMS need electric power as well. Several interviewees proposed that mobile devices such as smartglasses or tablet computers should automatically collect and transfer information resulting from tasks performed by the user. For example, in MCIs they could count casualties and transfer numbers and locations to leaders or to the control center. Currently, this information is collected in paper lists during triage and then transmitted to leaders directly (e. g. by handing over the list) or via radio.

The interview participants broadly agreed that hands-free usage could be one of the main advantages of smartglasses compared to paper-based artifacts as well as to other mobile devices. Nevertheless, opinions on this potential were divided. While some interviewees expected that standard procedures could be performed while viewing information on smartglasses, others argued that hands-free operation would not necessarily save time because of the delays of human information processing, especially for decision making. One interview participant hoped that smartglasses might lead to a better focus on the patient. Another mentioned as a positive effect of hands-free wear that smartglasses cannot be dropped unintentionally. A special condition of medical application fields in general is that personnel often wear medical gloves and disinfect their hands in order to avoid the danger of contamination and infection. For that reason, several interview partners mentioned that a device with hands-free operation could save time, because the user would not have to disinfect his or her hands or change gloves for each interaction.

It can be concluded that even if hands-free operation brings some advantages, it will not automatically provide a larger benefit compared to handheld devices. Most of the interviewees said that displaying information in the field of view could be an advantage of smartglasses compared to other devices, since other persons would not notice it. The interviewees mentioned two application cases:

  1. Patients should not be able to see information about their treatment. Examples are the probability of survival and the treatment priority in MCIs, or indications of diseases and injuries and treatment instructions.

  2. Leaders should be informed about important changes throughout the whole process, even when in meetings or standing close to other members of EMS or persons affected by the incident. While voice radio can be overheard by people nearby and would disrupt meetings, displaying information with smartglasses in the form of a “silent alarm”, as one interview participant expressed it, would not have these disadvantages. Mission-critical information in particular should not even be audible to other members of EMS.

Individual interview participants also mentioned disadvantages of smartglasses. As one of them said, tablet computers have a bigger display, and the input of text information is easier on them. Such limitations must be considered when designing applications for smartglasses.

3 Devices

With regard to usage, smartglasses (excluding glasses for virtual reality) can be assigned to two basic concepts. For this classification, it is irrelevant whether the glasses contain displays for one or for both eyes. Either the display is placed directly inside the field of vision of the wearer, or at the edge of or outside it (see Figure 2). Smartglasses which show information in the middle of the field of vision (e. g. Epson Moverio) are appropriate for augmenting real objects with additional information such as labels and objects, or for highlighting important objects. On the other hand, additional information without direct references to real objects could disturb users and hinder them from seeing the whole scene. Smartglasses with the display at the edge of or outside the field of vision (e. g. Google Glass) are constructed the opposite way. With these devices, the user is not hindered in seeing the real scene, but he or she must consciously look at the display in order to perceive the displayed information.

Figure 2 
          Two concepts for smartglasses: On the left side, the display of the smartglasses is placed at the upper edge of the normal field of vision of the viewer; he or she must look upwards in order to get the shown information. On the right side, the display is placed in the center of the field of view.

For first studies on smartglasses in EMS, it seemed neither necessary nor useful to implement complex use cases with an augmentation of objects. For use cases which only need to display information, devices of the latter type seemed better suited. Moreover, they would probably be more acceptable for members of EMS because they do not hamper the user’s view. Nevertheless, subsequent studies could examine use cases for devices with the screen in the middle of the field of vision. Since use cases are conceivable for both concepts, future devices could comply with both by providing a display covering the whole field of vision of the wearer, like normal glasses do.

4 Human-centered Design Process of Prototypical Applications

Two prototypical applications for EMS using smartglasses were realized. The first application supports the prioritization of casualties for further treatment (triage); the second helps to identify and assess hazardous materials. The use cases for both applications resulted from the interviews with 10 members of EMS and civil protection units.

Because of the safety-critical application area, users should never be disturbed and should be distracted as little as possible. We have chosen Google Glass, which has its display at the edge of and outside the field of vision (see section 3). Both applications share the same basic structure of the user interface, which is split into a large center area and two smaller areas above and below it with clear separation lines. The actual application is shown in the center area, while the area above holds information about the application as well as identification numbers and, if necessary, the progress of a task. The area below largely complies with the design guidelines for Google Glass and contains the voice command “ok glass” and a time stamp [10].

4.1 Application for the Prioritization of Casualties

During incidents with many casualties and (at least initially) finite resources of EMS, the urgency of treatment must be determined for each casualty in order to handle the casualties in a reasonable order. This process, named triage, is the main task of one of the first arriving units of EMS. It is challenging since MCIs are rare events and since they require leaving casualties initially untreated, contrary to normal approaches. In order to support the triage process and achieve better results, algorithms like START (Simple Triage and Rapid Treatment) and others have been established since the 1980s [11]. According to conversations with members of German EMS, algorithms have increasing importance in their training and work. The mSTaRT algorithm was known to most of them; thus it was implemented in the realized prototypical application. mSTaRT is a modified and extended version of START, though nearly identical to START with regard to the part that is relevant for the first triage by EMS members [12]. Currently, mSTaRT is distributed along with other algorithms for different scenarios and injury patterns in the form of pocket-size booklets. The algorithm consists of ten questions and instructions, visualized as a flowchart. The questions can be affirmed or negated. Depending on the given answers, the required questions vary. A complete pass comprises between one and seven questions and results in one of three categories for the urgency of treatment of the casualty (Category III / Minor, Category II / Delayed, Category I / Immediate) or in the confirmation of death.
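To illustrate how such a flowchart algorithm can be represented in software, the following Python sketch encodes a simplified START-style decision flow as a data structure. The node names, questions and branching are our own illustrative simplification, not the exact mSTaRT criteria:

```python
# Each node maps to (question, {answer: next node id or final category}).
# Questions and order are illustrative examples, NOT the real mSTaRT content.
FLOWCHART = {
    "walk":      ("Can the casualty walk?",            {"yes": "MINOR",     "no": "breathing"}),
    "breathing": ("Is the casualty breathing?",        {"yes": "rate",      "no": "airway"}),
    "airway":    ("Breathing after opening airway?",   {"yes": "IMMEDIATE", "no": "DECEASED"}),
    "rate":      ("Respiratory rate above threshold?", {"yes": "IMMEDIATE", "no": "obeys"}),
    "obeys":     ("Does the casualty obey commands?",  {"yes": "DELAYED",   "no": "IMMEDIATE"}),
}
CATEGORIES = {"MINOR", "DELAYED", "IMMEDIATE", "DECEASED"}

def triage(answers):
    """Walk the flowchart with a dict of answers; return the category."""
    node = "walk"
    while node not in CATEGORIES:
        _question, branches = FLOWCHART[node]
        node = branches[answers[node]]
    return node
```

As in the booklet version, the number of questions per pass varies with the answers: a walking casualty is categorized after a single question, while a non-walking one passes through further nodes.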

For implementing mSTaRT, we decided to divide the algorithm into its parts by showing one question or instruction per screen. This is no major limitation, as the user shall handle the questions and instructions consecutively. Although the limited display size of Google Glass would not allow showing everything at once anyway, there are some additional advantages. In contrast to paper versions, the user is confronted only with the currently relevant information and keeps the overview even when disrupted (e. g. while following the instructions of the algorithm). Furthermore, the reference for interaction is clear, since only the current question or instruction is displayed at a time.

The user interface for the questions and instructions was designed in an iterative process. One of our requirements was the possibility to operate the complete application hands-free via speech control as well as via the touchpad (in the side frame of Google Glass) in order to test and evaluate both interaction forms. Touchpad operation was tested with medical gloves without encountering any problems. While touchpad interaction should follow the interaction guidelines for Google Glass, we tried out a deviating form of speech control. The guidelines propose a menu that is opened by saying “ok glass” and which shows the voice commands [10]. In order to skip this step and directly show all options, the first two iterations displayed the speech commands directly on the screen. The answers were positioned below the question and marked by additional speech bubbles. Since showing the questions and the answers on one screen made the display unclear and confusing and reduced the space for the questions, we changed to the described interaction paradigms for Google Glass (see Figure 3). Opening the menu via the voice command “ok glass” requires some initial training; however, this interaction form is consistent with other apps. Furthermore, it must be known anyway, since it is needed to start applications on Google Glass. In order to make it easier to distinguish questions and instructions, the latter are marked with the symbol of a hand, symbolizing an action. At the end of the algorithm, when showing the category for a casualty, the application reminds the user to use a registration card (such cards are used in Germany as triage tags). The background color of the text complies with the color of the category on the cards.
Maintaining registration cards has some advantages: they allow the assignment of the category resulting from the algorithm to a casualty and make it visible to all members of EMS. Furthermore, they can serve as a backup in case of system failures.

Figure 3 
            The application for the prioritization of casualties shows the questions and instructions of a modified START algorithm in large letters. The user can also see the given answer for the previous question in order to be able to correct possible errors. The interaction follows the principles of the Google Glass: Saying “ok glass” opens a menu with speech commands; tapping on the touchpad opens the menu for touch interaction.

Early user tests with the speech control of Google Glass revealed some problems in terms of reliability. Thus, we implemented an undo function. Furthermore, we decided to show the last given answer in the display to enable the user to check it. This not only allows detecting problems with the speech recognition, but can also help if the resulting question seems illogical in relation to the last answer. For example, in the evaluation a user misread one of the questions and gave a wrong answer, but recognized and solved the issue with the given information.
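The one-question-per-screen pass, the undo function and the display of the last given answer can be sketched together as a small session object. The flowchart content below is an illustrative stand-in, not the actual mSTaRT questions:

```python
class TriageSession:
    """One-question-at-a-time pass through a flowchart, with undo.

    Mirrors the behavior described above: only the current question is
    shown, the previously given answer stays available for display, and a
    misrecognized input can be reverted. The flowchart format is
    {node: (question, {answer: next node id or category})}.
    """
    def __init__(self, flowchart, start, categories):
        self.flowchart = flowchart
        self.categories = categories
        self.path = [start]   # visited node ids, newest last
        self.answers = []     # given answers, parallel to path[:-1]

    def current_question(self):
        node = self.path[-1]
        return None if node in self.categories else self.flowchart[node][0]

    def last_answer(self):
        # shown in the display so the user can spot recognition errors
        return self.answers[-1] if self.answers else None

    def answer(self, value):
        _question, branches = self.flowchart[self.path[-1]]
        self.path.append(branches[value])
        self.answers.append(value)

    def undo(self):
        if self.answers:
            self.path.pop()
            self.answers.pop()

# Illustrative two-question flowchart (NOT the real mSTaRT content):
FLOW = {
    "walk": ("Can the casualty walk?", {"yes": "Minor", "no": "breathing"}),
    "breathing": ("Is the casualty breathing?",
                  {"yes": "Delayed", "no": "Deceased"}),
}
```

A misheard command can then be reverted with `undo()` after the user notices the wrong value shown by `last_answer()`, and the session continues from the previous question.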

The information resulting from the triage has to be transferred to incident commanders, the dispatch center or other leaders, since they need it for decision making. This was also desired in the interviews with members of EMS (see section 2). The application for the prioritization of casualties can transfer the needed information automatically and without major delay to other mobile or stationary devices. Provided that failures and corrective measures are considered in the design, this can help to optimize processes. For that, the triage results must be assignable to the casualty. This is solved by adding a QR code to the registration cards, which must be scanned with the Google Glass before the algorithm starts and which is transmitted and saved along with the triage results.
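A minimal sketch of such a transfer, assuming illustrative field names and JSON encoding (the prototype's actual wire format is not specified here). The casualty identifier is the content of the scanned QR code, and received reports can be aggregated into the counts that leaders and the control center asked for in the interviews:

```python
import json
import time
from collections import Counter

def triage_report(casualty_id, category, unit_id, position=None):
    """Build a transfer payload linking a triage result to the
    QR-scanned registration-card id. Field names are assumptions."""
    return json.dumps({
        "casualty_id": casualty_id,   # content of the scanned QR code
        "category": category,         # "I", "II", "III" or "deceased"
        "unit": unit_id,
        "position": position,         # optional GPS fix (lat, lon)
        "timestamp": int(time.time()),
    })

def tally(reports):
    """Aggregate received reports into counts per category,
    e.g. for a leader's overview display."""
    return Counter(json.loads(r)["category"] for r in reports)
```

With such payloads, the paper lists currently handed over or read out via radio could be replaced by automatically transmitted records, while the registration cards remain as a backup.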

4.2 Application for the Identification of Hazardous Materials

This application can help to recognize hazardous materials during incident management. These materials are a significant problem when they spread because of leaking containers or tanks. The rare occurrence of such incidents and their diversity in types of risk and handling make it difficult to evaluate the danger and to keep the necessary procedures in mind. We found that members of EMS basically know about the importance of hazardous materials and warning signs, but generally are not able to associate different warning signs or hazard classes with specific dangers (e. g. the sign for class 4.3, “Dangerous when Wet”, see Figure 4). Some mentioned that they could look up the needed information in a book or a smartphone app. Even if the book or app is quickly available, the process of searching for the right hazardous material would take a while. All consulted members of EMS estimated that a function to detect hazardous materials with smartglasses could be useful if it were simple and fast.

Figure 4
The application for the identification of hazardous materials asks the user to verify the detection of warning signs before showing information about the material and recommended behavior.

The prototypical application recognizes signs of hazard classes as well as signs for specific materials. It can be controlled via speech or the touchpad of the Google Glass. The application consists of three interfaces. The first shows the camera stream of the Google Glass and instructs the user to look at the warning sign. Once a sign is detected, a second interface shows an image and asks the user whether the detection is correct. This step is necessary independently of the error rate of detection, because signs can be dirty, bent or incomplete. The user can, for example, recognize that the number “8” has been detected as “3” because the sign is dirty or occluded. If the recognition is confirmed, the application shows some basic information about the danger and proposals for behavior and handling. If it is negated, a new recognition is started. This process is not optimal, since the recognition of dirty or incomplete signs may fail repeatedly. Although not yet implemented, we suggest that an additional manual input or correction should be possible.
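The three-interface flow can be condensed into a small detect-and-confirm loop. In this sketch, `detect`, `confirm` and `lookup` are hypothetical stand-ins for the prototype's camera-based recognition, the confirmation dialog and the hazard database:

```python
def identify_sign(detect, confirm, lookup, max_attempts=3):
    """Detect-and-confirm loop for warning signs.

    `detect` returns a candidate sign, `confirm` asks the user to verify
    it (True/False), and `lookup` maps a confirmed sign to danger
    information. All three callables are assumptions, not the prototype's
    actual API. After `max_attempts` rejections the function gives up,
    where the suggested (not yet implemented) manual input would take over.
    """
    for _ in range(max_attempts):
        candidate = detect()          # e.g. "class 4.3"
        if confirm(candidate):        # user verifies the displayed sign
            return lookup(candidate)  # danger info and handling advice
    return None                       # fall back to manual input
```

Bounding the number of attempts avoids the loop described above, in which recognition of a dirty or incomplete sign fails repeatedly with no way out.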

5 Further Application Fields and Use Cases

In the interviews with members of EMS and civil protection units, we identified not only the prioritization of casualties and the identification of hazardous materials as ways of supporting members of EMS with smartglasses, but also further potential application fields. Some of them are specific to organizational structures in MCIs; others could be useful for incidents with one or two patients as well.

  1. Communication: Smartglasses have the potential to replace contemporary devices such as radios and mobile phones, which are used in incidents with much rescue personnel. They could improve communication by displaying text messages, calls and the receipt of voice messages. Predefined automated status reports (e. g. when arriving at the scene) can be collected and shown on the display of the leaders in an aggregated and minimally disturbing form.

  2. Images from the accident site: According to 9 of 10 interviewees, taking photos from the view of the members of EMS and transferring them to leaders or dispatchers could be useful in order to improve their understanding of the situation, especially when off-site. Some interviewees believed that every member of EMS should be able to send images to leaders; others argued that this could cause an information overload and that leaders should request images. One participant saw no benefit in this feature at all.

  3. Algorithms and checklists: In German EMS, algorithms and checklists have become increasingly important in the last ten years [18]. All interview partners mentioned that smartglasses could possibly be better suited for showing algorithms or checklists than paper books or tablet computers, since they do not have to be held in the hands and since the algorithm or checklist can be viewed without averting one’s gaze from the task.

  4. Tracking members of EMS: While most interviewed leaders do not want to know the exact location of each member of EMS even in extensive MCIs, a useful benefit of tracking could be finding the nearest personnel or seeing places with many or few personnel.

  5. Navigation: One proposal was to support members of EMS in vehicles but also on foot with navigation features, if the incident locations as well as organizational structures (e. g. treatment stations) are registered on a map.

  6. Telemedicine: Most of the interview participants believed that telemedicine approaches, like communicating with an emergency physician, will be adopted for standard incidents in the future, while they saw no use in MCIs because of the lack of time for individual treatment. There was a consensus that smartglasses are very suitable for telemedicine approaches, because they could transmit video streams from the view of the rescue personnel and because they offer a voice channel as well as the display of instructions in the field of view. In the interviews, voluntary members of civil protection units saw more benefit in telemedicine than professional members of EMS.

  7. Teleleading: Two interviewees said that communication with leaders in remote places (e. g. dispatch centers) could replace leaders on-site. Three others strictly rejected this idea.

  8. Overlays for areas: With augmented reality approaches, danger zones for hazardous materials as well as the sections of individual leaders could be shown as overlays.

In summary, and in addition to these suggestions from the interviews, we see potential in the automation of processes and in the support of members of EMS in rare situations, and we will continue research in these areas.

6 Limitations and Results

The interviews and evaluations were conducted with German members of EMS and civil protection units. The results are transferable to EMS in other countries, since MCIs can occur in any EMS system and require similar tasks and measures. With the use of the Google Glass, several technical problems occurred. Very limited battery life and overheating problems make Google Glass inapplicable for real missions. For this reason, our applications must be seen as prototypes. For real use in EMS, rugged versions of smartglasses without the mentioned problems must become available.

The Google Glass Explorer Edition used in our studies stores the detection data for the commands used in the application on the device; therefore, no internet connection is needed for speech recognition. Unfortunately, speech control is unreliable in current versions of Google Glass. The Explorer Edition supports a short list of commands that are recognized relatively well; other words often have bad recognition rates. The supported commands were inappropriate for the described applications. Since these only need a few input commands, it should be possible to optimize the recognition for them. The described problems result in incorrect inputs and corresponding interruptions, if for example a command for confirmation is recognized as a negation or vice versa.
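One way to exploit such a small, predefined command set is to match the recognizer's transcript against it with a tolerance, so that slightly garbled output still maps to the intended command instead of triggering a wrong action. The command words and threshold in this sketch are our assumptions, not part of the prototype:

```python
import difflib

# The per-screen command set of the triage application is small and known
# in advance; the command words here are illustrative.
COMMANDS = ["yes", "no", "back"]

def interpret(transcript, commands=COMMANDS, cutoff=0.6):
    """Map a possibly garbled speech transcript onto the predefined
    command set; return None when nothing is close enough, so the
    application can ask again instead of guessing."""
    matches = difflib.get_close_matches(transcript.lower(), commands,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

Rejecting unmatched transcripts rather than picking the nearest command is a deliberate safety choice: a repeated prompt is less harmful than a confirmation silently turned into a negation.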

Both applications were evaluated with 13 members of EMS and civil protection units. One further person could not participate, as he was not able to see the information undistorted and readable, either with or without additional normal glasses. All evaluation participants agreed for at least one of the applications that it could usefully support rescue missions. In terms of the application for the triage process (see Figure 5), 8 of 13 participants said that it would be useful and helpful in MCIs; one disagreed. The other four considered it partly useful, for example for voluntary or new personnel. It is generally accepted that algorithms should not only be known but also be followed by members of EMS ([4], p. 5). All evaluation participants stated that the application could help to perform all steps of the algorithm, but only four of them judged that this would necessarily change the triage process, since they said that not all steps would be considered without the application. Three believed that all steps would already be performed without it; the others stated that they would be considered partly, or had no opinion on that question. The possibility of sharing the information gathered during triage with leaders was seen as a benefit by 12 of the participants; one saw partial advantage in the use of the information.

Figure 5 
          Answers for questions on the application for the prioritization of casualties at the end of the evaluation (n = 13).

The application for hazardous materials was rated positively by 11 of 13 evaluation participants; the other two were satisfied. All rated it as useful in real missions. The improvement suggestions related to the amount of displayed information on the hazardous materials, since some of the evaluation participants proposed adding further information (e. g. instructions for treatment).

Speech control was seen as the most promising interaction form, if its reliability were high enough. Since algorithms with prescribed answers result in a predefined command set, this demand seems attainable for the prototypical applications. However, it is unclear whether the limitation of commands would restrict further applications. At the same time, a second interaction form as fallback (e. g. for noisy environments), as implemented in the prototype, was seen as a necessity, even in the case of better voice recognition. Some participants thought that a touchpad worn on the arm would be better than the touchpad on the frame of the Google Glass. The display of Google Glass was rated positively by most participants, while a few had problems seeing all information sharply and one participant was not able to see anything correctly. Hence, it can be concluded from the evaluation with members of EMS that smartglasses in general have potential to be used in EMS. Before that, research on more application fields must be done, and the devices must become more mature as well as rugged enough.

7 Conclusions

In our studies, prototypical systems for the classification of casualties based on injury patterns (triage) and for the identification of hazardous materials have been developed and evaluated. These applications are not yet usable in real EMS missions, both because of their prototypical state and because of the limitations of the device itself, a Google Glass. Nevertheless, the evaluation with EMS members shows promising results overall. Future, more sophisticated smartglasses seem to have some potential in EMS, but further research on interaction design is necessary first. With characteristics such as displaying information directly in the user's field of view, hands-free operation and speech control, smartglasses could not only replace today's paper-based tools and artifacts, but might even be more suitable than other devices, such as tablet computers, for some use cases. Nevertheless, smartglasses also have disadvantages, such as the lack of text entry with a pen or keyboard. This might be overcome by additional devices and cross-device interaction, for example with smartwatches. For a final assessment of the real potential as well as the limitations of smartglasses in EMS, further studies under real-world conditions and with more use cases would be necessary. In this context, we have identified some interesting application fields for further research.

About the authors

Henrik Berndt

Henrik Berndt is a research assistant at the Institute for Multimedia and Interactive Systems (IMIS) of the University of Luebeck. He holds a B.Sc. and M.Sc. in Informatics, specializing in Digital Media, and is currently working on his dissertation. His main current research interests include human-computer interaction in safety-critical contexts and interaction design for mobile devices.

Tilo Mentler

Tilo Mentler is a research assistant at the Institute for Multimedia and Interactive Systems (IMIS) of the University of Luebeck. He holds a diploma in Informatics, specializing in Digital Media. Recently, he finished his dissertation about the usability of mobile interactive systems in regular and extraordinary missions of Emergency Medical Services. His main current research interests include human-computer interaction in safety-critical contexts (e. g. medicine), usability engineering and interaction design of mobile devices. He is a founding member and vice-chairman of the sub-group “Human-Computer Interaction in Safety-Critical Systems” within the special interest group “Human-Computer Interaction” of the German Informatics Society (GI).

Michael Herczeg

Prof. Dr. rer. nat. Michael Herczeg is professor of practical computer science and media informatics and director of the Institute for Multimedia and Interactive Systems (IMIS) of the University of Luebeck. His main areas of interest are human-computer interaction, software ergonomics, interaction design, multimedia and interactive systems, computer-aided teaching and learning as well as safety-critical human-machine systems. He is a co-founder and chair of the German ACM SIGCHI and Human-Computer-Interaction section of the German Informatics Society (GI). Prof. Herczeg is a member of ACM and GI and served as an organizer, reviewer, chair and keynote speaker for more than 100 conferences and workshops. He is an author and editor of more than 200 publications and is an editor for books and journals in interactive media. He works as a consultant for industry and government in the area of human-computer-interaction, human factors, software-ergonomics, usability engineering, eLearning and safety-critical human-machine systems.

References

[1] Adler, C., Krüsmann, M., Greiner-Mai, T., Donner, A., Chaves, J. M. & Via Estrem, À. (2011). IT-Supported Management of Mass Casualty Incidents: The e-Triage Project. In Proceedings of the 8th International ISCRAM Conference. Lisbon.

[2] Berndt, H., Mentler, T. & Herczeg, M. (2015). Optical Head-Mounted Displays in Mass Casualty Incidents: Keeping an Eye on Patients and Hazardous Materials. International Journal of Information Systems for Crisis Response and Management (IJISCRAM), 7(3), 1–15. doi:10.4018/IJISCRAM.2015070101

[3] Billings, C. E. (1997). Aviation Automation: The Search for A Human-Centered Approach. Mahwah, New Jersey: Lawrence Erlbaum.

[4] Bundesamt für Bevölkerungsschutz und Katastrophenhilfe. (2015). 6. Sichtungs-Konsensus-Konferenz. Retrieved from http://www.bbk.bund.de/SharedDocs/Downloads/BBK/DE/Downloads/GesBevS/6_Konsensus-Konferenz_Protokoll.pdf?__blob=publicationFile.

[5] Carenzo, L., Barra, F. L., Ingrassia, P. L., Colombo, D., Costa, A. & Della Corte, F. (2015). Disaster medicine through Google Glass. European Journal of Emergency Medicine, 22(3), 222–225. doi:10.1097/MEJ.0000000000000229

[6] Cicero, M. X., Walsh, B., Solad, Y., Whitfill, T., Paesano, G., Kim, K., Baum, C. R. & Cone, D. C. (2015). Do You See What I See? Insights from Using Google Glass for Disaster Telemedicine Triage. Prehospital and Disaster Medicine, 30, 4–8. doi:10.1017/S1049023X1400140X

[7] Ellebrecht, N. (2013). Die Realität der Sichtung. Ergebnisse einer Befragung zur Sichtungsausbildung und MANV-Erfahrung von Notärzten und Rettungsassistenten. Notfall + Rettungsmedizin, 16(5), 369–376. doi:10.1007/s10049-013-1726-6

[8] Endsley, M. R., Bolté, B. & Jones, D. G. (2003). Designing for Situation Awareness. London: Taylor & Francis. doi:10.1201/9780203485088

[9] Endsley, M. R. & Garland, D. J. (Eds.). (2000). Situation Awareness – Analysis and Measurement. Mahwah, New Jersey: Lawrence Erlbaum. doi:10.1201/b12461

[10] Google Inc. (2015). Design for Glass. Retrieved from https://developers.google.com/glass/design/.

[11] Jenkins, J. L., McCarthy, M. L., Sauer, L. M., Green, G. B., Stuart, S., Thomas, T. L. & Hsu, E. B. (2008). Mass-casualty triage: time for an evidence-based approach. Prehospital and Disaster Medicine, 23(1), 3–8. doi:10.1017/S1049023X00005471

[12] Kanz, K., Hornburger, P., Kay, M., Mutschler, W. & Schäuble, W. (2006). mSTaRT-Algorithmus für Sichtung, Behandlung und Transport bei einem Massenanfall von Verletzten. Notfall + Rettungsmedizin, 9(3), 264–270. doi:10.1007/s10049-006-0821-3

[13] Killeen, J. P., Chan, T. C., Buono, C. J., Griswold, W. G. & Lenert, L. A. (2006). A wireless first responder handheld device for rapid triage, patient assessment and documentation during mass casualty incidents. In AMIA Annual Symposium Proceedings, 429–433.

[14] Krüger-Brand, H. E. (2014). Telemedizin in Bayern: Mobile Lösung für den Rettungsdienst. Deutsches Ärzteblatt, 111(45), 13.

[15] Luiz, T., Zurek, B., Rauen, C., Jugenheimer, K. & Ullrich, C. (2013). Einsatzdokumentation im Rettungsdienst: Papier oder Tablet? Rettungsdienst, 36(7), 668–670.

[16] Mentler, T. & Herczeg, M. (2014). Interactive Cognitive Artifacts for Enhancing Situation Awareness of Incident Commanders in Mass Casualty Incidents. In Proceedings of the 2014 European Conference on Cognitive Ergonomics (ECCE ’14), Article 24. New York: ACM. doi:10.1145/2637248.2637254

[17] Mentler, T., Herczeg, M., Jent, S., Stoislow, M., Kindsmüller, M. C. & Rumland, T. (2012). Routine mobile applications for emergency medical services in mass casualty incidents. Biomedical Engineering / Biomedizinische Technik, 57(SI-1 Track-N), 784–787. doi:10.1515/bmt-2012-4457

[18] Peters, O., Runggaldier, K. & Schlechtriemen, T. (2007). Algorithmen im Rettungsdienst. Ein System zur Effizienzsteigerung im Rettungsdienst. Notfall + Rettungsmedizin, 10(3), 229–236. doi:10.1007/s10049-006-0886-z

[19] Schmiedel, R. & Behrendt, H. (2015). Leistungen des Rettungsdienstes 2012/13. Analyse des Leistungsniveaus im Rettungsdienst für die Jahre 2012 und 2013. (Berichte der Bundesanstalt für Straßenwesen, M 260). Bremen: Carl Schünemann Verlag.

[20] Sefrin, P. (2010). Der Massenanfall von Verletzten. Notfallvorsorge, 41(4), 13–16.

[21] World Health Organization. (2007). Mass Casualty Management Systems. Strategies and guidelines for building health sector capacity. WHO Document Production Services. Retrieved from http://www.who.int/hac/techguidance/MCM_guidelines_inside_final.pdf.

Published Online: 2016-08-16
Published in Print: 2016-08-01

© 2016 Walter de Gruyter GmbH, Berlin/Boston
