Publicly Available Published by Oldenbourg Wissenschaftsverlag January 14, 2020

User-Centered Development of Smart Glasses Support for Skills Training in Nursing Education

  • Jan Patrick Kopetz

    Jan Patrick Kopetz (PhD student, M.Sc. in Media Informatics) is a researcher at the Institute for Multimedia and Interactive Systems (IMIS) of the University of Lübeck. His research interests include mobile media, new interaction devices, augmented reality, and digital healthcare.

  • Daniel Wessel

    Daniel Wessel (PhD in Psychology) is a postdoctoral researcher at the Institute for Multimedia and Interactive Systems (IMIS) of the University of Lübeck. His research interests include mobile media, evaluation, and especially the interaction between psychology and computer technology.

  • Nicole Jochems

    Nicole Jochems (PhD in Engineering) is a Professor for Media Informatics and Head of the Media Informatics Programme at the Institute for Multimedia and Interactive Systems (IMIS) of the University of Lübeck. Her research interests include methods in the area of Human-Centered Design, age-specific design, virtual and augmented reality, and digital healthcare.

From the journal i-com


An ageing society creates an increasing need for well-trained nursing staff. In particular, physically demanding motion sequences must be learned correctly to preserve carers’ long-term health. Support in practical skills training must therefore leave the carers’ hands free so that they can perform the motion sequences unencumbered. Wearables might provide the necessary information “hands-free” and thus support skills training. In this paper, we present and discuss a User-Centered Design approach conducted with nursing students to determine the suitability of smart glasses support for skills training in nursing education. This User-Centered Design process consisted of a survey, two design thinking workshops, and a summative evaluation of a high-fidelity prototype. The developed smart glasses application was evaluated positively and is usable for training purposes.

1 Introduction

The demographic change with an increasingly ageing society requires a growing number of trained nurses and carers. In Germany alone, the number of needed carers is estimated to rise from 2.4 million in 2010 to 3.4 million in 2030 [25]. At the same time, there has been a shortage of carers since 2009 [4].

This development makes protecting the carers’ health crucial, not only from an ethical but also from an economic perspective: carers should be able to participate in life as well as in the workforce for as long as possible. Given the carers’ physically demanding tasks, important dangers to carers’ (and patients’) health are incorrectly learned motion sequences and poor posture during their initial training. Small mistakes can entrench themselves and — with many repetitions over time — lead to cumulative damage to the carer’s back and joints. Furthermore, correcting incorrectly learned motion sequences is not only difficult, it is also unlikely that these habits will be corrected later, given the high levels of stress during work. Thus, the focus must be on the skills training during which carers initially learn these tasks, ensuring that they learn the motion sequences correctly. Carers’ confidence and efficiency could also be increased if they had access to patient-specific information (e. g., whether symptoms require special treatment) and clear instructions. Both carers and patients might benefit from this.

Information and communication technology can provide the necessary support to facilitate learning. However, in this practical setting, relevant information must be provided while the carers are performing complex and demanding physical actions, usually with their hands. The provided support must not encumber the carer or interrupt the motion sequences.

While different technological solutions might be able to handle these challenges, wearable devices, in particular head-mounted displays like smart glasses, could fulfil this requirement very well: they can provide situationally needed information while also being “hands-free”. But are smart glasses really suitable for this setting, and, if so, which functionality should they provide? Can they actually lead to improvements in training? Will they be accepted by the nurses in training?

A User-Centered Design (UCD) process is well suited to analyze the use context, to determine the requirements, to support the development and formative evaluation of possible designs, and to evaluate the usability of the solution.

In this article, the results of a User-Centered Design process to answer the following three research questions are presented and discussed:

Q1) What are the needed features of smart glasses support for skills training in nursing education?

Q2) Does the use of smart glasses lead to improved performance, i. e., fewer mistakes in performing these actions?

Q3) Are smart glasses accepted by carers in training and/or work settings?

2 State of the Art

To answer these questions, we first look at the technical state of the art and discuss existing research.

Head-mounted displays (HMDs), visual displays worn on the head of the user, have been in use since the 1970s. Originally used in military contexts for training and as an additional display, they also became of interest for the manufacturing industry. In recent years, the development of consumer-grade smart glasses — a subcategory of HMDs — has attracted considerable interest [23]. Typical features of smart glasses are the projection of a display into the visual field of the wearer, wireless connectivity, a camera, position and movement sensors, a microphone, and an integrated battery. A key feature of smart glasses is that they allow for hands-free interaction, e. g., via voice commands.

There are different types of smart glasses differing in their design, e. g. whether they display the virtual screen on one or both eyes, whether the information is displayed directly in front of the wearer or slightly above the usual line of sight, and whether the glasses cover the eyes or allow for direct eye contact. While smart glasses usually focus on visual information, audio information can be used as well. Designs differ regarding audio technology, ranging from normal headsets/earplugs to bone conduction (leaving the ears uncovered). The design influences not only the wearer but also the social setting. Especially in a setting like a professional care environment, in which interpersonal trust is crucial, direct eye contact and clear communication unencumbered by headsets/earplugs are important.

Thus, to determine the suitability of smart glasses in the professional care context, currently used devices (Google Glass, Vuzix M300, and Epson Moverio BT-300) were examined regarding these two criteria. The Epson Moverio BT-300 [17] limits the sight of the wearer due to its design based on binocular displays. The Vuzix M300 [31] had been announced for summer 2016 but has only recently become available. Google Glass [11] (introduced in 2013) obstructs the visual field only minimally, thus allowing for almost unimpeded eye contact between carer and patient. It is lightweight and, despite some limitations like battery life and processing power, seems well suited for this context. Additionally, it is the most unobtrusive of the examined devices.

The potential of smart glasses (esp. regarding augmented reality, AR) in healthcare-related scenarios has been examined in many studies (e. g., Vorraber et al. [30], Chang et al. [7], Benninger [2], Tully et al. [27], Sultan [26], Berndt et al. [3], Vaughn et al. [29]), and was covered by review articles (e. g., Dougherty and Badawy [8]). In most cases, Google Glass was used as an AR device. Examined medical applications include monitoring of patients, reading of laboratory reports, and live transmissions during surgery [10], as well as hands-free documentation via photos or videos, and hands-free calls [18]. The potential of learning with AR in healthcare is widely recognized (e. g., Bartlett-Bragg [1], Kamphuis et al. [13], Nosta [20], Ponce et al. [21], Sultan [26]). While many studies examine the usefulness of AR as a supportive tool in clinical education, they primarily focus on documentation or realistic simulations (e. g., Benninger [2], Chaballout et al. [6], Nakhla et al. [19], Russell et al. [24], Tully et al. [27], Vallurupalli et al. [28]).

Risling [22] discussed technology trends of the next decade and their possible influence on the education of the nurses of 2025. After reviewing the literature on wearable technology, he concluded that future nurses must be prepared for these upcoming technological advancements, e. g., via improved computer skills training. In their pilot study, Byrne and Senk [5] examined the potential of Google Glass to support decision making during professional care activities and to support communication with care providers. They also examined user perceptions of Google Glass. Most participants completed the given tasks (scanning a QR code and completing a phone call hands-free) successfully, and both communication and safety were improved. However, some participants reported that Google Glass distracted them. The authors concluded that the gap between education and practice could be narrowed if the value of computer science were emphasized within the curriculum.

Currently, however, the application of smart glasses in nursing is still in its infancy. Wüller et al. [32] systematically reviewed the existing literature and could not identify any systems beyond prototype status. They concluded that AR is far from being used in domestic care because a comprehensive examination of the possibilities and limitations of AR to support nurses is still missing. Especially in a complex and demanding task like nursing, the possible negative effects of smart glasses must not be ignored. For example, reading text displayed on smart glasses can result in a higher perceived cognitive workload (e. g., Young et al. [33], using a simulated lane-keeping task), thus adding further demands to an already demanding work environment.

In general, the studies so far have focused on the usefulness of AR, and in particular on the potential of learning via AR in healthcare. However, the potential of AR as a supportive tool for learning nursing skills in practical training has been insufficiently examined.

3 Method

Figure 1

User-Centered Design process with the stages of the project.

A User-Centered Design process [12] was conducted to examine the potential of smart glasses in nursing skills education and to determine how usable applications for smart glasses should be designed. Figure 1 shows the different methods chosen as part of the UCD process. In the present paper, the process is described linearly, while in practice, multiple (sub-)loops are usually required (e. g., going back to analysis during the conception or realization phase to answer new questions or deal with formative evaluation results). In the initial analysis phase, requirements and general acceptance were examined. The target group – both academic and vocational nursing students – filled in a questionnaire regarding their use of information and communication technology. In the conception phase, the questionnaire results were used to develop low-fidelity prototypes in two design thinking workshops with potential users. Design ideas were developed and visualized with paper prototypes. During the realization phase, the results of the workshops were combined with expert feedback and used as the basis for the development of a working high-fidelity prototype (Google Glass app). During the evaluation phase, this prototype was tested by nursing students and evaluated by students and educators.

In order to ground the UCD process in the envisioned usage, a prototypical use case (task scenario) for skills training with smart glasses was developed. Based on discussions with nursing students and a qualified teacher, the transfer of a patient from a bed to a wheelchair was identified as a prototypical and important use case. This scenario requires carers to use their hands and — while ostensibly simple — requires many steps that are easy to forget. These mistakes are likely to become reinforced, as they decrease the time needed for the task, which is positive in the short run. However, over many repetitions, these mistakes likely produce cumulative damages. Thus, this scenario is well suited as it requires information and reminders during training.

Since the steps of the UCD process build upon each other, they are presented sequentially (each with its results and discussion).

4 Questionnaire

An online questionnaire was developed in order to assess the attitudes of the target group towards using technology (in general and for educational purposes), their satisfaction with skills training, their first impression of using smart glasses for skills training, the suitability of the prototypical task scenario (repositioning a person from the bed into a wheelchair), and possible visualization formats. The questionnaire was already published [15], so only the main information and findings are presented here.

4.1 Sample

Thirty universities and 53 nursing schools were asked to provide the survey link to their students. Of the 231 students who started the survey, 115 finished it, yielding 107 usable responses. The students’ age ranged between 17 and 49 years (M=23.22, SD=6.1), 81.3 % were female, and 44.9 % studied at a university, 55.1 % at a nursing school (in Germany, two educational tracks — via university or via nursing school — are available to become a nurse). Regarding their training progress, 44.8 % were in their first, 37.9 % in their second, and 17.2 % in their third and final year of training.

4.2 Results

All respondents owned a PC or a laptop, and all but two owned a smartphone or a tablet. In skills training, the results of an ANOVA showed differences in how frequently instructions were presented in different formats: verbal instruction and practical demonstrations are used more frequently than feedback from other students, printed scripts, or presentations (slides), while books, videos, and apps are used least frequently. Except for presentations (slides), students would like to receive more information via the above-mentioned presentation formats during skills training (differences were significant; see Kopetz et al. [15] for details).

The task scenario was considered authentic (M=3.96, SD=1.13, scale of 1–5, higher values indicating higher realism). Furthermore, students who had already encountered the task of repositioning a patient from bed to wheelchair regarded it as more authentic than those who had not. Free responses in text fields also mentioned the lack of time for skills training and an insufficient number of educators.

Respondents’ impressions of smart glasses were mixed, with free-text answers ranging from enthusiasm (“great for learning”) to scepticism (“we don’t need it”). Critical aspects were hygiene concerns and the potential for misuse, while the potential for providing information was recognized. Regarding the desired visualization formats, single visualization options (bullet points, images, videos, or running text alone), as well as combinations of them (e. g., bullet points/images), were assessed. Statistically significant differences were found for the preferences between visualization options. Pairwise comparisons revealed that videos alone, bullet points/images combined, and images alone were seen as most useful. Bullet points alone and images/videos combined were seen as less useful. Running text (alone and in any combination) was seen as least useful.

4.3 Discussion

The results of the questionnaire indicate a high potential of smart glasses for skills training. Respondents show a high media affinity and request additional support during skills training. While an online questionnaire automatically incurs selection biases, especially regarding media usage and media affinity, the target group is likely not overburdened by using smart glasses and would prefer videos or images with bullet points. Still images are probably more advantageous, given that users’ attention has to be split between the information on the glasses and the environment. A video would continue playing even while the user’s attention is directed at the environment, leading to missed information. Also, with images and bullet points, existing material can be reused more easily.

The next step of the User-Centered Design process deals with the question of how to design smart glasses support with these elements.

5 Design Thinking Workshops

Based on the questionnaire results, the next phase in the UCD process was to generate ideas and design concepts. Thus, two design thinking workshops were conducted with potential users. The focus was on determining the app structure/information flow with the goal of creating low-fidelity paper prototypes of the application.

5.1 Participants

Four third-year nursing students and one professional carer participated in two design thinking workshops. Participants were between 22 and 26 years old (M=23.6, SD=1.52), four of the participants were female. Group 1 consisted of two students (male and female) and the professional carer (female), while group 2 consisted of two female students.

5.2 Material

Sample content for the application (videos/images) was taken from a kinaesthetic workshop. Material for paper prototyping included paper, scissors, pens, and a display template with a suitable screen ratio.

5.3 Procedure

Participants were made familiar with Google Glass via a short presentation and a hands-on test of the device. They were also introduced to the task scenario and the sample content. The main part of the workshop was an open discussion of potential uses of smart glasses in nursing skills training and the development of a paper prototype, with a final review and discussion of the results.

5.4 Results

Each group created a low-fidelity prototype. Group 1 created a prototype based on bullet points and video, following a hierarchical tree structure (see Figure 2, left side). The application consists of three parts: (1) access to the patient information taken from the hospital information system (a digital file with patient information), (2) a checklist of the necessary tasks prior to transferring the patient into the wheelchair, and (3) a video showing the motion sequence and providing additional textual information when required. Navigation is done via a menu. However, touch interaction was seen as problematic, as it would not be compatible with the hospital’s hygiene regulations. Thus, navigation via voice control or eye tracking was seen as a necessary requirement.

Group 2 developed a more linear prototype (see Figure 2, right side) consisting of a sequential application of ten cards with step-by-step instructions. They used bullet points, images and video — with images to show static elements like the position of the feet and videos to show dynamic motion sequences — as content. Navigation is envisioned via voice commands, with each card displaying the possible voice commands (e. g., “start video”, “back”, “advance”).
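Group 2’s linear design maps naturally onto a small state machine over a sequence of cards. The following Python sketch illustrates this structure; the class names, fields, and command strings are our own illustrative assumptions, not taken from the workshop prototype.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Card:
    """One step-by-step instruction card (hypothetical model of group 2's design)."""
    title: str
    bullets: list[str] = field(default_factory=list)  # checklist-style text
    image: Optional[str] = None   # static element, e.g. position of the feet
    video: Optional[str] = None   # dynamic motion sequence

class CardDeck:
    """Linear sequence of cards navigated by simple commands."""
    def __init__(self, cards: list[Card]):
        self.cards = cards
        self.index = 0

    @property
    def current(self) -> Card:
        return self.cards[self.index]

    def handle(self, command: str) -> Card:
        # Commands mirror the voice commands displayed on each card;
        # navigation is clamped at the first and last card.
        if command == "advance" and self.index < len(self.cards) - 1:
            self.index += 1
        elif command == "back" and self.index > 0:
            self.index -= 1
        return self.current
```

In the actual prototype, such commands would be triggered by voice recognition (or, in the later high-fidelity version, touch gestures); clamping at the sequence boundaries avoids invalid states when a command is repeated.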

Figure 2

Abstracted sketch of the information architectures of the two workshop prototypes.

5.5 Discussion

Similarities between the two design thinking workshops include general design elements, esp. the use of checklists. It is important to note that checklists and bullet points were used synonymously by the participants. In nursing, checklists are frequently used, and the participating students were highly familiar with them. The desired content was also similar in both groups, which is not surprising given the same task and the same material provided. However, the students also mentioned that each patient is different and that specific requirements have to be considered, ideally via access to the patient information system. Differences between the prototypes existed regarding the number of cards. While more cards can provide more detailed information, they also require more interaction. Given that students would have to switch attention frequently, they might be inclined to advance multiple cards at once to reduce switching costs. Once the card and the current action step are out of sync, mistakes would likely follow.

Overall, the two workshops led to two promising low-fidelity prototypes, which form the basis for the development of a high-fidelity prototype.

6 Development of a High-Fidelity Prototype

Based on the results of the design thinking workshops and under consideration of design principles (e. g., Google’s recommendation for Glass) and expert feedback, a high-fidelity prototype was developed (Figure 3).

In development, more emphasis was put on the ideas of group 2, as group 1’s suggestions would require a connection to the hospital’s patient information system. This connection would incur a plethora of organizational and legal issues. While using patient-specific information is a very interesting use case, the first question to be answered is whether smart glasses have potential for training purposes at all. In this context, specific patient information was not seen as crucial. However, it might prove useful in actual care settings or for advanced training (e. g., simulating patients with specific conditions that require deviation from the basic process). Thus, in the first iteration, the design supports the general process without using patient-specific information.

Figure 3

High-fidelity prototype design (Note: black and white are inverted for the text/background for improved print readability. On several smart glasses, white text on a black background is used for better screen readability.)

Given the current limitations of Google Glass, voice control was not implemented. Our own formative tests had shown that voice recognition is currently not dependable and leads to frustration when voice commands do not work as expected. For the purpose of evaluating training support with this high-fidelity prototype, touch interaction was implemented. While touch interaction requires the use of hands — and thus is not “hands-free” — no device has to be held in hand while performing the actions, and the touch interaction can be done quickly with one hand. As checklists require an extra interaction to check off an item, bullet points were chosen instead of checklists.

The working high-fidelity prototype provides the information nursing students need to know to conduct the task. The question is whether this app is useful during actual skills training.

7 Evaluation of the High-Fidelity Prototype

In the last step of the User-Centered Design process, the prototype was evaluated with users from the target group.

7.1 Design

The high-fidelity prototype was evaluated as part of a skills training in nursing education in a within-subjects design with two conditions (without and with the support of smart glasses). Given that only a small number of participants were initially expected, a fixed order was used, first performing the task without assistance, then assisted. Doing the task without the app provided a baseline of what the participant remembered without assistance. It was compared to providing the instructions on the smart glasses (see Discussion regarding possible training effects).

Figure 4

The evaluation situation: a participant supporting the patient during the transfer.

7.2 Sample

Twenty-nine nursing students participated in the study, aged from 18 to 31 (M=22.07, SD=3.28); 79.3 % were female. First-year students made up 44.8 % of the sample, second-year 37.9 %, and third-year 17.2 %. Most students (79.3 %) were enrolled at a university, the rest (20.7 %) at nursing schools. Given that the task involved wearing smart glasses, the need for corrective glasses was assessed: 55.2 % did not require glasses, 6.9 % used contact lenses, another 6.9 % required reading glasses, and 31 % regularly wore glasses.

7.3 Setting and Instruments

The high-fidelity prototype was used during skills training with the task of repositioning a patient from bed to wheelchair (see Figure 4). A hospital bed and a wheelchair were provided. The first author acted as a patient. The evaluation was recorded on video for analysis. Our main dependent variables in the two conditions were the number of errors and time on task, as well as the participants’ self-evaluation of their performance, confidence and comfort. We also assessed participants’ familiarity with the task and how well-suited they found the smart glasses support in training and on the job. For a usability assessment of the app, an adapted PSSUQ-questionnaire [16] was used.

7.4 Procedure

The participants were instructed about the procedure, performed the task (repositioning a patient from bed to wheelchair) without smart glasses, and answered a questionnaire covering demographic information and a self-assessment of their performance (how self-confident they felt; whether they noticed something they would do differently during the next attempt). Afterwards, they were introduced to Google Glass, performed the same task again with the support of the Google Glass app, and filled in another questionnaire about their performance (whether the app changed their self-confidence; whether they did something differently than during the first attempt; whether differences were prompted by the app) and the app itself (usage of smart glasses in training and on the job; rating; possible improvements).

7.5 Results

As described in section 7.3, data of the evaluation included self-reports of the participants (evaluation of the app and the quality of their work), video observation of the task (expert ratings), and time measurement. The video data was used to determine the error rate and time on task with and without smart glasses support.

7.5.1 Participants’ Familiarity with the Task

Participants were mostly familiar with the task, but to varying degrees: 24.1 % had done the task during the last week, 37.9 % during the last month, 27.6 % during the last six months, 6.9 % during the last twelve months, and one person (3.4 %) had not done the task before.

7.5.2 Participants’ Number of Errors

Figure 5

Box plots showing the distributions of error index values (consisting of the number and severity of mistakes identified by the two raters). Higher values indicate more/more severe errors.

Two nursing experts — one from a university and one from a vocational school — examined the videos and rated participants’ performance according to eight criteria covering communication, preparation, and patient safety. An error index was used, with each criterion rated as fulfilled (0 points), partly fulfilled (1 point), or missing (2 points); higher scores thus indicate quantitatively more and/or more severe errors. All criteria — as well as the degree of completion — were given the same weight. Errors were assessed separately for the baseline and smart glasses conditions. As the raters represent different vocational tracks and the correlation between the ratings was only moderate (r=.43, n=55, p=.001), the two ratings were analyzed separately (see Figure 5). For the academic rater, a descriptive improvement occurred from the condition without smart glasses to the condition with smart glasses support (error index: M(no glass)=5.25, SD(no glass)=3.76, M(glass)=4.54, SD(glass)=3.05, t(27)=1.473, p=.152, η²=0.07). For the vocational school rater, performance improved statistically significantly (error index: M(no glass)=5.25, SD(no glass)=3.24, M(glass)=3.07, SD(glass)=3.22, t(27)=5.512, p<.0001, η²=0.52).
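The scoring scheme and the within-subjects comparison can be sketched in a few lines of Python; the rating labels and function names below are illustrative assumptions, and a statistics package would normally also supply the p-value.

```python
from math import sqrt
from statistics import mean, stdev

# Penalty points per criterion, as described above:
# fulfilled = 0, partly fulfilled = 1, missing = 2.
POINTS = {"fulfilled": 0, "partly fulfilled": 1, "missing": 2}

def error_index(ratings):
    """Sum the penalty points over the eight criteria (higher = more/more severe errors)."""
    return sum(POINTS[r] for r in ratings)

def paired_t(baseline, assisted):
    """Paired t statistic for the baseline vs. smart glasses comparison (df = n - 1)."""
    diffs = [b - a for b, a in zip(baseline, assisted)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
```

For example, eight fulfilled criteria yield an index of 0, while one missing plus one partly fulfilled criterion yield 3; a positive t value then indicates fewer errors in the assisted condition.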

7.5.3 Participants’ Self-Evaluation of Their Performance

After performing the task unassisted, 51.7 % of the participants responded that they realized they had forgotten something. Most frequently (in 6 of 16 cases), it was activating the resources of the patient. After the second trial (this time supported by smart glasses), 89.7 % said they noticed something they had missed. Most frequently (in 15 of 25 answers), it was related to carer–patient communication. Furthermore, 86.2 % agreed that the information provided by the app reminded them to perform a certain action, most frequently (8 of 25 answers) to encourage the patient to hold onto the backrest of the wheelchair.

7.5.4 Participants’ Confidence and Comfort

Asked whether using the app changed how confident they felt, 3.4 % (one participant) stated they felt less confident, 37.9 % did not notice any change, 51.7 % felt more confident, and 6.9 % felt much more confident. As for comfort, none of the participants regarded wearing smart glasses as very annoying, 20.7 % found it annoying, 51.7 % neutral, 24.1 % natural, and 3.4 % very natural. In terms of usability, the average PSSUQ values across all items were mostly positive (M=4.96, SD=0.46, scale from 1 to 6, with higher values indicating better ratings).

7.5.5 Usage of Smart Glasses in Training and on the Job

Figure 6

Acceptance of smart glasses in skills training without and with actual patients (not all participants answered both questions).

Asked whether they would use the app in practical skills training, about one third of the students were undecided (28.6 %), while about two thirds were in favor of using it (35.7 % “rather yes”, another 35.7 % “yes”; see Figure 6). In 11 of 28 answers, increased confidence was given as a reason for usage, while the most frequently mentioned reason against it was distraction (5 of 20 answers). However, if the training is done with actual patients (Figure 6), 3.7 % of the students would not want to use the smart glasses support, 59.3 % would rather not use it, 11.1 % were undecided, 22.2 % would rather use it, and 3.7 % would use it. In this scenario, increased confidence was also mentioned most frequently (7 of 16 answers). Negative aspects included distraction and interference with carer–patient interaction (8 of 27 answers each), as well as the interaction being too time-consuming (4 of 27 answers).

7.5.6 Time on Task

The time for the task was measured and average durations were calculated. The average duration without smart glasses was one minute and 41 seconds (SD=24 seconds), and with smart glasses one minute and 52 seconds (SD=31 seconds). Thus, on average, using the app increased the duration by 11 seconds.

7.6 Discussion

The high-fidelity prototype was evaluated by actual nursing students in the targeted setting — a simulated practical skills training.

The usability of the smart glasses app was evaluated positively. Overall, participants reported increased confidence, stated that the app reminded them of several important steps, and the overall acceptance of Google Glass in skills training was rather positive. This is especially noteworthy given the brief time the participants had to get used to the device. However, outside of skills training, smart glasses are seen more negatively.

According to the expert rater with a vocational school background, participants made statistically significantly fewer errors when supported by smart glasses. However, this difference was not found by the university rater. Given the only moderate inter-rater agreement, it remains an open question whether the raters focused on different aspects based on their professional backgrounds, or whether there are other reasons for this difference. An in-depth discussion of the differences and an improvement of the classification scheme to account for these possibly different viewpoints would have been beneficial. However, this was beyond the scope of this UCD development process and should be considered in future work.

Due to the design of the study, training effects cannot be ruled out: first establishing a baseline without smart glasses and then performing the same task with smart glasses might have improved performance in the second run regardless of the app. A randomized design would have controlled for such training effects. However, given the expected limited sample size, establishing a baseline to determine what the students remember on their own was considered more important than a balanced design of the conditions. To check for a training effect, students were asked whether the app reminded them to do a certain step when they performed the task the second time. Most of the students agreed with this statement. Thus, although they performed the task a second time when using the smart glasses, many participants would still have missed steps without being reminded by the app.

While the time on task increased by an average of eleven seconds when using the app, this difference is rather small and not surprising, given that many participants would otherwise likely have skipped a step. Not performing the procedure correctly and omitting steps shortens the time on task – that is one way bad habits become established – but this time saving comes with the mentioned possible negative consequences for the carers’ own (and their patients’) health.

The prototype used hand gestures to compensate for the currently unreliable speech recognition in commercial devices. These gestures were sufficient to evaluate the device, and quick and easy to perform with one hand. They are likely also suitable in practical training sessions with simulated patients. However, they are unsuitable in an actual hospital context with its hygiene requirements. For situations with actual patients, improved technology is needed (see future work). The use of voice commands, especially in situations with patients, will likely pose additional design challenges.

Selective influences on the conditions without and with smart glasses support cannot be ruled out. Randomizing the two conditions and preventing the simulated patient from seeing whether the participant wears smart glasses (e. g., via an eye mask) would be a possible improvement. However, this would also change the nature of the carer-patient interaction, as the simulated patient would essentially be blinded. Some participants with consecutive IDs also showed high similarity in their free-text replies, possibly indicating that they talked about the study while filling in the questionnaires. However, this did not affect their actual performance without and with smart glasses support.

In general, the high-fidelity prototype evaluation shows evidence for the potential of smart glasses support during skills training in nursing education. The app did provide structured information in the situation itself without distracting its users, reminding them of steps they would have otherwise omitted. For training with actual patients, concerns regarding an impeded interaction with the patient must be addressed.

8 General Discussion

We conducted a User-Centered Design process to examine the potential of smart glasses for skills training in nursing education. Questionnaires during the analysis phase provided useful initial information on attitudes, media usage and potential visualizations on smart glasses. The results highlighted the need for additional support during skills training and lend credibility to the selected prototypical task scenario. The reported media usage also provided confidence in the target groups’ ability to use smart glasses. Furthermore, information about the preferred visualization methods — images, videos and bullet points — proved useful for the design thinking workshops. These workshops, conducted during the conception phase, acted upon the initial information and resulted in two low-fidelity paper prototypes. These prototypes were used in the realization phase — combined with design guidelines and expert feedback — to develop a high-fidelity prototype. This prototype was then evaluated in skills training with the target group. The results of this evaluation indicate the potential of smart glasses for skills training in nursing education: When using the smart glasses app, the complex task scenario was accomplished with increased confidence and — according to one of the two raters — with statistically significantly fewer mistakes.

Thus, regarding the first research question — the needed features of smart glasses support for skills training in nursing education — the User-Centered Design process succeeded in determining these features.

Regarding the second research question — whether smart glasses support leads to improved performance — the use of smart glasses did remind students of steps they would otherwise have missed, thus potentially reducing the risk of incorrectly learning motion sequences that have negative long-term consequences for their health.

Regarding the third research question — whether smart glasses are accepted by carers in training and/or work settings — smart glasses support is accepted at least as long as it is used with simulated patients in training. On-the-job use would likely require additional benefits, e. g., direct access to patient information or on-the-fly intervention if movement mistakes are made.

However, some limitations have to be addressed. While satisfactory for training with simulated patients, reliable voice- or gesture-based interaction has to be developed for use in a hospital due to hygiene requirements (see below). Regarding the evaluation, while the risk was ameliorated by an additional check, an experimental design would have allowed us to clearly differentiate the effects of smart glasses support from possible training effects, for example by having half of the participants start with support and the other half without, or by adding a different task and varying support and task order. Smart glasses were also compared only to a condition without any support (as would be the case on the job, or during unsupervised training), but not to different kinds of support. Additional studies should compare this kind of support to other solutions, e. g., on different devices.
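As a sketch, the counterbalanced assignment suggested above could look as follows. The participant IDs, condition labels, and helper function are hypothetical and not taken from the study; the point is simply that each condition order is assigned to exactly half of the participants.

```python
import random

def assign_condition_orders(participant_ids, seed=42):
    """Randomly assign half of the participants to start with smart
    glasses support and the other half to start without, so that
    training effects are counterbalanced across conditions."""
    ids = list(participant_ids)
    rng = random.Random(seed)  # fixed seed for a reproducible assignment
    rng.shuffle(ids)
    half = len(ids) // 2
    orders = {}
    for pid in ids[:half]:
        orders[pid] = ("with_support", "without_support")
    for pid in ids[half:]:
        orders[pid] = ("without_support", "with_support")
    return orders

# 28 participants, as in the evaluation reported above
orders = assign_condition_orders(range(1, 29))
```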

Regarding future work, it should also be possible for nursing professionals to create content themselves, in order to reduce the initial effort for a user study. To this end, a client-server architecture and an early prototype of a system were developed, allowing nursing professionals to create tutorials on their own by using a web interface that can be accessed via the Google Glass app [14].
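To make such a client-server setup concrete, the sketch below shows one possible way server-authored tutorials might be represented and serialized for delivery to the glasses client. All class and field names here are illustrative assumptions, not taken from the actual system described in [14].

```python
import json
from dataclasses import asdict, dataclass, field
from typing import List

# Hypothetical data model for server-authored tutorials;
# names are illustrative, not from the system in [14].
@dataclass
class TutorialStep:
    title: str
    instruction: str
    image_url: str = ""  # optional visual aid shown on the glasses

@dataclass
class Tutorial:
    name: str
    steps: List[TutorialStep] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize for delivery from the web backend to the client app."""
        return json.dumps(asdict(self), ensure_ascii=False)

tutorial = Tutorial(
    name="Repositioning a patient in bed",
    steps=[
        TutorialStep("Prepare", "Adjust the bed to working height."),
        TutorialStep("Position", "Stand close to the bed, back straight."),
    ],
)
payload = tutorial.to_json()  # JSON sent to the smart glasses client
```

A simple step-per-screen model like this matches the card-based presentation typical of smart glasses displays.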

Additionally, access to the patient information system would be useful. It would provide individual data about the patient and allow for adapting the particular task to the individual patient. It would also further justify the use of an information tool on the job — from both the carer and the patient perspective. Smart glasses would then be less about doing a task correctly in general and more about doing the task correctly for this particular patient. However, legal questions have to be resolved first.

Regarding hygiene requirements, wearable devices specialized for input, like the Myo gesture control armband, or improved voice recognition could be used. However, voice commands might also negatively affect the patient-carer interaction, as the interaction with the device becomes highly noticeable to patients, who might think they are being addressed. A system able to recognize the situation and the particular step in the motion sequence (e. g., via the camera) might be more useful. While such a system is not yet technically feasible (and was therefore unlikely to be mentioned by the target audience during the analysis phase), a Wizard-of-Oz study could assess the potential of this kind of support in actual interactions with patients. In that case, privacy concerns regarding the use of cameras in hospitals during interactions with vulnerable patients must be adequately addressed in future work.

Input from other sources would be beneficial as well, especially about the current posture of the carer. Sensors, e. g., in the nursing uniform, could provide this information and allow for reminders triggered by the movement and position of the carer. These reminders would also help to change entrenched, incorrectly learned habits. However, on-the-job training and intervention would include more heterogeneous users in terms of age and (likely) affinity for technology [9], which has to be taken into account.

Finally, the feedback could be adapted to the learning situation. Augmenting context-specific information into the user’s field of view using AR technology could provide useful feedback.

However, the focus should first be on nurse training. For this purpose, smart glasses seem well suited to providing support during the task without encumbering the carers: they remind carers of steps they would otherwise have omitted and can thus prevent the development of habits detrimental to their own and their patients’ health.

About the authors

Jan Patrick Kopetz

Jan Patrick Kopetz (PhD student, M.Sc. in Media Informatics) is a researcher at the Institute for Multimedia and Interactive Systems (IMIS) of the University of Lübeck. His research interests include mobile media, new interaction devices, augmented reality, and digital healthcare.

Daniel Wessel

Daniel Wessel (PhD in Psychology) is a postdoctoral researcher at the Institute for Multimedia and Interactive Systems (IMIS) of the University of Lübeck. His research interests include mobile media, evaluation, and especially the interaction between psychology and computer technology.

Nicole Jochems

Nicole Jochems (PhD in Engineering) is a Professor for Media Informatics and Head of the Media Informatics Programme at the Institute for Multimedia and Interactive Systems (IMIS) of the University of Lübeck. Her research interests include methods in the area of Human-Centered Design, age-specific design, virtual- and augmented reality, and digital healthcare.


The authors thank all participants who took part in the survey, the design thinking workshops, and the evaluation of the prototype. We also thank the anonymous reviewers for their helpful feedback.


[1] Anne Bartlett-Bragg. Wearable technologies: Shaping the future of learning. Training & Development, 41(3):13, June 2014.

[2] Brion Benninger. Google Glass, ultrasound and palpation: the anatomy teacher of the future? Clinical Anatomy (New York, N.Y.), 28(2):152–155, March 2015. ISSN 1098-2353. 10.1002/ca.22480.

[3] Henrik Berndt, Tilo Mentler, and Michael Herczeg. Optical Head-Mounted Displays in Mass Casualty Incidents: Keeping an Eye on Patients and Hazardous Materials. International Journal of Information Systems for Crisis Response and Management (IJISCRAM), 7(3):1–15, July 2015. 10.4018/IJISCRAM.2015070101.

[4] Holger Bonin, Angelika Ganserer, and Grit Braeseke. Internationale Fachkräfterekrutierung in der deutschen Pflegebranche. Zentrum für Europäische Wirtschaftsforschung im Auftrag der Bertelsmann Stiftung, 2015.

[5] Paula J. Byrne and Patricia A. Senk. Google Glass in Nursing Education: Accessing Knowledge at the Point of Care. Computers, Informatics, Nursing: CIN, 35(3):117–120, March 2017. ISSN 1538-9774. 10.1097/CIN.0000000000000339.

[6] Basil Chaballout, Margory Molloy, Jacqueline Vaughn, Raymond Brisson III, and Ryan Shaw. Feasibility of Augmented Reality in Clinical Simulations: Using Google Glass With Manikins. JMIR Medical Education, 2(1):e2, 2016. ISSN 2369-3762. 10.2196/mededu.5159.

[7] Johnny Yau Cheung Chang, Lok Yee Tsui, Keith Siu Kay Yeung, Stefanie Wai Ying Yip, and Gilberto Ka Kit Leung. Surgical Vision: Google Glass and Surgery. Surgical Innovation, 23(4):422–426, August 2016. ISSN 1553-3506. 10.1177/1553350616646477.

[8] B. Dougherty and S. M. Badawy. Using Google Glass in Nonsurgical Medical Settings: Systematic Review. JMIR mHealth and uHealth, 5(10):e159, October 2017. ISSN 2291-5222. 10.2196/mhealth.8671.

[9] Thomas Franke, Christiane Attig, and Daniel Wessel. A personal resource for technology interaction: Development and validation of the affinity for technology interaction (ATI) scale. International Journal of Human–Computer Interaction, 0(0):1–12, 2018. 10.1080/10447318.2018.1456150.

[10] Wendy Glauser. Doctors among early adopters of Google Glass. Canadian Medical Association Journal, 185(16):1385, 2013. 10.1503/cmaj.109-4607.

[11] Google Glass Technical Specifications, 2013.

[12] ISO 9241-210:2010. Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems. International Organization for Standardization (ISO), Switzerland.

[13] Carolien Kamphuis, Esther Barsom, Marlies Schijven, and Noor Christoph. Augmented reality in medical education? Perspectives on Medical Education, 3(4):300–311, September 2014. ISSN 2212-2761, 2212-277X. 10.1007/s40037-013-0107-7.

[14] Jan Patrick Kopetz, Daniel Wessel, Katrin Balzer, and Nicole Jochems. Smart glasses as supportive tool in nursing skills training. In Susanne Boll, Andreas Hein, Wilko Heuten, and Karin Wolf-Ostermann, editors, Zukunft der Pflege: Tagungsband der 1. Clusterkonferenz 2018 – Innovative Technologien für die Pflege, pages 137–141, Oldenburg, 2018a. BIS-Verlag der Carl von Ossietzky Universität Oldenburg. ISBN 978-3-8142-2367-4.

[15] Jan Patrick Kopetz, Daniel Wessel, and Nicole Jochems. Suitability of interactive smart glasses to support nurses in training. Zeitschrift für Arbeitswissenschaft, 2018b. ISSN 0340-2444, 2366-4681. 10.1007/s41449-017-0072-9.

[16] J. R. Lewis. User satisfaction questionnaires for usability studies: 1991 manual of directions for the ASQ and PSSUQ. Tech. Rep. No. 54.609. International Business Machines Corporation, Boca Raton, 1991.

[17] Moverio BT-300 – Epson, 2016.

[18] Oliver J. Muensterer, Martin Lacher, Christoph Zoeller, Matthew Bronstein, and Joachim Kübler. Google Glass in pediatric surgery: an exploratory study. International Journal of Surgery, 12(4):281–289, 2014. 10.1016/j.ijsu.2014.02.003.

[19] Jonathan Nakhla, Andrew Kobets, Rafael De la Garza Ramos, Neil Haranhalli, Yaroslav Gelfand, Adam Ammar, Murray Echt, Aleka Scoco, Merritt Kinon, and Reza Yassari. Use of Google Glass to Enhance Surgical Education of Neurosurgery Residents: “Proof-of-Concept” Study. World Neurosurgery, 98, February 2017. 10.1016/j.wneu.2016.11.122.

[20] John Nosta. How Google Glass Is Changing Medical Education, 2013.

[21] Brent A. Ponce, Mariano E. Menendez, Lasun O. Oladeji, Charles T. Fryberger, and Phani K. Dantuluri. Emerging Technology in Surgical Education: Combining Real-Time Augmented Reality and Wearable Computing Devices. Orthopedics, 37(11):751–757, November 2014. ISSN 0147-7447, 1938-2367. 10.3928/01477447-20141023-05.

[22] Tracie Risling. Educating the nurses of 2025: Technology trends of the next decade. Nurse Education in Practice, 22:89–92, January 2017. ISSN 1471-5953. 10.1016/j.nepr.2016.12.007.

[23] Christoph Runde. Head Mounted Displays und Datenbrillen: Einsatz und Systeme. Virtual Dimension Center Fellbach – Kompetenzzentrum für virtuelle Realität und Kooperatives Engineering w. V., 2014.

[24] Patrick M. Russell, Michael Mallin, Scott T. Youngquist, Jennifer Cotton, Nael Aboul-Hosn, and Matt Dawson. First “glass” education: telementored cardiac ultrasonography using Google Glass – a pilot study. Academic Emergency Medicine: Official Journal of the Society for Academic Emergency Medicine, 21(11):1297–1299, November 2014. ISSN 1553-2712. 10.1111/acem.12504.

[25] Entwicklung der Anzahl von Pflegebedürftigen in Deutschland nach Geschlecht in den Jahren von 2005 bis 2030, 2010.

[26] Nabil Sultan. Reflective thoughts on the potential and challenges of wearable technology for healthcare provision and medical education. International Journal of Information Management, 35(5):521–526, October 2015. ISSN 0268-4012. 10.1016/j.ijinfomgt.2015.04.010.

[27] Jeffrey Tully, Christian Dameff, Susan Kaib, and Maricela Moffitt. Recording medical students’ encounters with standardized patients using Google Glass: providing end-of-life clinical education. Academic Medicine: Journal of the Association of American Medical Colleges, 90(3):314–316, March 2015. ISSN 1938-808X. 10.1097/ACM.0000000000000620.

[28] S. Vallurupalli, H. Paydak, S. K. Agarwal, M. Agrawal, and C. Assad-Kottner. Wearable technology to improve education and patient outcomes in a cardiology fellowship program – a feasibility study. Health and Technology, 3(4):267–270, December 2013. ISSN 2190-7188, 2190-7196. 10.1007/s12553-013-0065-4.

[29] Jacqueline Vaughn, Michael Lister, and Ryan J. Shaw. Piloting Augmented Reality Technology to Enhance Realism in Clinical Simulation. CIN: Computers, Informatics, Nursing, 34(9):402–405, September 2016. ISSN 1538-2931. 10.1097/CIN.0000000000000251.

[30] Wolfgang Vorraber, Siegfried Voessner, Gerhard Stark, Dietmar Neubacher, Steven DeMello, and Aaron Bair. Medical applications of near-eye display devices: An exploratory study. International Journal of Surgery, 12(12), December 2014. ISSN 1743-9191. 10.1016/j.ijsu.2014.09.014.

[31] Vuzix M300 Smart Glasses, 2016.

[32] Hanna Wüller, Jonathan Behrens, Marcus Garthaus, and Hartmut Remmers. Technologiebasierte Unterstützungssysteme für die Pflege – Eine Übersicht zu Augmented Reality und Implikationen für die künftige Pflegearbeit. In Vorabprogramm ENI 2017, Hall, 2017.

[33] Kristie L. Young, Amanda N. Stephens, Karen L. Stephan, and Geoffrey Stuart. An Examination of the Effect of Google Glass on Simulated Lane Keeping Performance. Procedia Manufacturing, 3:3184–3191, January 2015. 10.1016/j.promfg.2015.07.868.

Published Online: 2020-01-14
Published in Print: 2019-11-18

© 2019 Walter de Gruyter GmbH, Berlin/Boston
