Summary
It is challenging to design a sufficiently complex user interface that is universally usable. Striking differences between younger and older users, based on age and cohort effects, demand suitable design compromises with an effective combination of user interface and instructional design. This paper describes such a design compromise with a focus on video instruction for an AAL application designed to maintain and expand cross-generational social support networks. To estimate its effectiveness, 30 younger (M = 26 years) and 31 older (M = 68 years) participants were split into two groups: both solved the same 16 tasks with the same AAL application, yet the experimental group received a short video instruction before the tasks and the control group did not. Results show that both age groups rated the video instruction as useful and benefited from it – older users’ effectiveness even improved to the level of younger users. It can be concluded that the effective combination of user interface and instructional design played a central role towards universal usability and that early integration of instructional design into the human centered design process improved its efficiency.
1 Introduction
Within research on Design for Aging and particularly on Ambient Assisted Living (AAL), many technological assistance applications have been developed in recent years, in particular to support the challenges of daily living that become more pronounced with higher age. Central among the questions regarding the implementation of such support functions is the choice of target user group and of the level of complexity, which can vary greatly and have direct consequences for the usability engineering process. Regarding the level of complexity, it is possible to develop large applications that integrate control mechanisms for a multitude of purposes. At the other end, electronic household devices or small apps with low complexity can be designed for a single purpose or device. Clearly, both approaches, separation versus integration of functions, have pros and cons for the user (as well as the developer). If the system is controlled with a single central user interface (UI), the user only needs to learn interaction knowledge for one complex UI; otherwise many more, but less complex, UIs need to be learned. Designing UIs equally suited for diverse users is very challenging – it requires knowledge of the diverse user characteristics relevant for successful human technology interaction.
This paper describes the development of an application to maintain and expand cross-generational social support networks. Thus the main aim for the design of this application was to be equally suited for younger and older users while developing a UI that improves social communication options in particular for older people. This paper also describes fundamental challenges in the design of such UIs with a focus on three goal aspects: (1) developing instructions to compensate for usability issues resulting from the integration of many functions (high complexity), (2) developing instructions equally suited for younger and older users, and (3) deriving general strategies and methods for developing such instructions.
2 Universal Usability in AAL
2.1 The SMILEY Research Project
The findings described in this article are part of a larger research project funded by the German Federal Ministry of Education and Research (BMBF) called SMILEY (Smart and Independent Living for the Elderly). The goal of SMILEY was to aid independent living with socio-technical assistance helping (a) directly with everyday challenges and (b) indirectly by fostering social integration of older people, including the private (community) and professional (service providers, medical institution) environment. For this purpose, one part of the project developed a User Interface (UI) for the SMILEY system according to the iterative approach of Human-Centered Design [9]. Based on research indicating that interaction with touch screens was particularly suitable for older adults [5, 13, 24, 25, 26, 30], tablet-PCs were chosen as hardware platform.
Integrating diverse assistance applications into one central UI suitable for younger and older users alike was a central design goal of the project. The decision for an integrative approach was to ensure that only interaction knowledge for a single (if more complex) UI needs to be learned for successful use, which should benefit older users in particular. It was also to increase the perceived and actual usefulness of the system for the target user group, which is vital for its acceptance and regular use, especially for older adults [10].
Functions were integrated on the system and the module level – Modular design was to allow adaptation to the actual need for assistance for individuals and user groups. Thus, the SMILEY system comprises five core modules derived from a comprehensive requirements analysis: (1) CONTACT, to maintain and expand social contacts, (2) LIVING, to control home appliances and automation, (3) HEALTH, to monitor health, (4) ENVIRONMENT, to access and exchange information and (5) REMINDER to keep calendars, task lists and reminders (e. g. for medication).
2.2 User Interface Design and Instructional Design
Designing UIs equally suited for younger and older users is challenging, as it requires knowledge of the diverse user characteristics relevant for successful human technology interaction. User interface design (UID) and instructional design (ID) are core approaches to achieve usable user interfaces (UIs). UID aims to adapt the technology to the user, while ID aims to adapt the user to the technology through knowledge transfer. Both approaches aim to improve the quality of human technology interaction (usability) and complement one another. In fact, they are often difficult to distinguish, merging in labels, tool tips, prompts, warnings and other elements containing both interactional and instructive elements. This paper focuses on the design of explicit instructions in manuals, tutorials and trainings. But is there still a need for those in a time when search engines and video platforms (e. g. YouTube) provide ubiquitous information on the use of information and communication technology (ICT), or will the legally required declaration of conformity suffice?
2.3 Design for Aging
In design for aging, age and cohort effects influence usability, and for practical purposes the question is how relevant user characteristics can be adequately considered. UID aimed at improving usability for older users can in turn lead to negative effects for younger users; as Hawthorne ([12], p. 523f) points out, “it is not just a matter of whether our interface designs allow older users access to our software’s services but also whether we can design for older users without crippling the power of our interfaces to serve younger users.” Possible immediate negative effects for younger users could be decreased attractiveness and familiarity, resulting in less acceptance. Thus, the challenge in design for aging seems to be finding a compromise that benefits most users, and careful consideration of user characteristics seems critical to finding such compromises (see [29], a guideline for the generation-friendly design and assessment of products).
Younger and older adults differ in a variety of characteristics relevant for successful ICT (information and communication technology) use and the great differences between user groups call for an effective interplay of UID and ID, as illustrated in figure 1. Computer and Internet use differs greatly in quality and quantity between younger and older people in Germany [23] and other European countries [22], with more variance within the older group. This difference in mean and variance is depicted in the hypothetical knowledge distribution curves for both age groups in figure 1. UID and ID result in two usability thresholds intersecting these curves and separating those who can use the technology successfully with the given UI from those who cannot. Thus, the blue area marks those who can use the given UI without further instructions, the yellow area marks those who can use the given UI with instructions provided and the red area marks those for whom the UI is unusable. More usability thresholds would be conceivable, e. g. for adaptive systems following the training wheel approach [6], yet these thresholds would then apply only to part of the system and thus remain the same for any given system.
To achieve a predefined usability goal, UID and ID can be combined into a total design effort, in which the effort placed on UID and the effort placed on ID are interdependent: if UID is inadequate, more ID is necessary to achieve a usable UI. And yet, UID and ID may lead to different results for different user groups. When designing for an older user group, it seems reasonable to design for less prior computer knowledge, which devalues the computer knowledge available to (younger) users. Thus, when designing for universal usability, large inter- and intragroup differences have to be considered for an adequate design solution.
Alternative designs for different user groups can provide the needed flexibility to achieve an adequate design compromise. However, they often require substantial design effort. In contrast, instructions can be delivered tailored to specific user needs and in the context of use with relatively little effort. The instructional design process should begin early to allow feedback to UID, e. g. early detection of usability problems can inform decisions on whether they can be overcome more efficiently by UID or ID. The analysis of usability problems plays a central role as signpost for design decisions. A theoretical model can be very helpful to structure and classify usability problems found within the iterative human centered design process. The model used within the SMILEY project differentiates four sources of usability problems: Age and cohort differences as well as interaction and task knowledge, fostering a systematic approach to design based on the identified usability problems.
Usability problems based on age effects were mainly addressed with UID (e. g. button sizes and distances) and those based on cohort effects (e. g. less exploration and different problem solving behavior of older ICT users) were addressed with UID or ID based on feasibility and cost-benefit analysis. Differentiating usability problems based on task domain versus interaction knowledge provided a valuable guidepost for instruction, e. g. unknown swipe gestures needed for basic tablet interaction were shown in the introduction, while domain specific functions of the system were conveyed in later video instructions. The following section describes the UID process for the CONTACT app within the SMILEY project.
3 User Interface Design for the CONTACT App
The UID process of the whole SMILEY system was aimed at older users and followed norms (e. g. [8]) and guidelines considering human perception and cognition, that were continuously adapted to the design process at hand. For instance, guidelines established that screen transitions should be smooth rather than abrupt to avoid confusion, buttons should contain meaningful descriptions and colors should be subtle and rich in contrast. Inspection methods (heuristic evaluation, cognitive walkthrough) and usability tests revealed usability problems and provided further insights for the design.
Since one core design aspect was to integrate many support functions into one application, formative evaluation also focused on information integration and simplification. Consistent design across the five modules was achieved through shared elements: e. g. the top navigation bar always contained a back button in the top left that goes back one screen, a “home button” next to it that returns to the start screen, a current position indicator, and a search box that provides results separately for the CONTACT app, the SMILEY system and the Internet.
To integrate the diverse support functions, the CONTACT app was developed in an iterative process, progressing from simple paper prototypes through mock-ups to functional (software) prototypes. Figure 2 illustrates steps in the development process of the start screen.
Usability tests were conducted with 16 scenario based tasks created to cover most of the implemented functions (see table 1). For the summative evaluation of the final development stage representing a functional software system with minor limitations (i. e. search function and sensors were simulated or not fully implemented), instructions were developed to complement the UI.
# | Task |
---|---|
1. | Reply to an incoming message. |
2. | Create a new contact in the address book. |
3. | Find someone who wants to go for a walk. |
4. | Write and send a new message. |
5. | Call a person from the address book. |
6. | Change the contact information of an existing contact. |
7. | Find an opera event. |
8. | Share opera event data with friends. |
9. | Show the number of missed calls. |
10. | Let friends know that you would like to go to the zoo. |
11. | Find out who of the closest friends has tried to contact you. |
12. | Let all contacts know that you do not want to be disturbed. |
13. | Contact the person you have not talked to for the longest time. |
14. | Set your own preferences for automated event suggestions. |
15. | Add a contact to the circle of close friends (favorites). |
16. | Check if a contact has proposed an activity and schedule the appointment. |
4 Instructional Design for the CONTACT App
4.1 Need for Instruction
Since the CONTACT app UI design aimed to integrate many functions, it was expected that instructions would be necessary for many (particularly older) users, even though their scope, format and content were still unclear. The assumed need for instruction was confirmed in a first study evaluating standard iPad applications for older adults, which showed that younger and older users differed significantly in their task performance: while younger users were able to solve almost all tasks (98 %), older users solved only 57 % (see [2]). Results from our further studies expand these findings by showing that even seemingly simple tasks required interaction knowledge not necessarily available to older users; e. g. 18 of the 31 older participants (58 %) already required help to unlock the iPad screen. This illustrates the importance of an introduction to new technology, especially for older adults.
Thus the experimenter introduced all participants to basic iPad operation (touch screen, interaction elements such as buttons and sliders, etc.) to ensure a minimal level of necessary interaction knowledge before the usability test. In the introduction, the experimenter gave participants a task, such as “Please turn on the device”. If the participant solved the task, the next task followed. If not, the experimenter provided increasing levels of help until the participant solved the task.
The set of tasks was repeated until the participant solved all tasks without help. For practical purposes, such a personal introduction could be delivered by the technician installing the AAL system, yet an interactive video based training might also be viable. With further CONTACT app development and formative evaluation, the increasing complexity revealed a need for instruction that informed the development of the help function.
4.2 Systems Approach
Instructional design for the help function followed the systems approach by Rogers et al. [17], which has proven practical for the design of diverse instructions (e. g. tutorials, manuals, trainings), particularly for older adults (e. g. [15, 27]). The systems approach provides a frame of reference for the systematic identification and integration of requirements for instructions. It considers requirements derived from user (abilities, skills and knowledge), task, technology and context characteristics, and evaluates the instructions against them for shortcomings; identified shortcomings prompt correction and further development, fostering an iterative process that helps to detect major problems in early development phases. The following description of the instructional design process for the CONTACT app focuses on the concrete implementation and its relation to design for aging.
4.3 Requirements for CONTACT App Instructions
After the need for instruction for potential users had been confirmed, requirements for the instructions of the CONTACT app were collected. This took into account that the design should consider age and cohort differences and also be accepted by young and old users alike, to foster cross-generational social integration and support. Large differences in computer literacy and experience between potential users, especially among older users, were identified as central challenges, and contextual help was deemed viable. Colloquially, the central question could be summarized as “What can be done with the app, and where do I find its functions and information about it?”
Video instruction was identified as an adequate form of instruction because audio-visual presentation (see [14] on the underlying Cognitive Theory of Multimedia Learning) allows the purpose and the location of a function to be communicated simultaneously (“What can I do and where can I find it?”), on demand and in the context of use, and because it had been used successfully in prior research on instructions for the use of a ticket vending machine [19–21]. Two kinds of design decisions had to be made: those important for the implementation into the final system and those that ensured valid evaluation of video instruction effectiveness.
4.4 Designing the Video Instruction
A male speaker voice was chosen for the instructional text, because age-related hearing loss (presbyacusis) is characterized by a decreased ability to hear very high frequencies. A low rate of speaking, averaging 58 words per minute (well below the recommended upper limit of 140 words per minute, [10]), was chosen to avoid attention overload, since attention span, cognitive speed and working memory decline in old age [10]. Instructions were expressed in simple, common language with short sentences and an average difficulty of FI = 54 on the Flesch index ([11]; for comparison, this article has a difficulty of FI = 35, with higher numbers indicating easier text). The complexity of the video instructions was reduced by splitting the content into ten small lessons with an average duration of less than one minute, reducing intrinsic cognitive load [7] and working memory demands [16]. The lesson contents (see table 2) were informed by a hierarchical task analysis following the GOMS approach (Goals, Operators, Methods and Selection rules, [4]) and by usability problems identified in usability testing and inspection (see [1]).
# | Lesson content |
---|---|
0. | Introduction: description of CONTACT-app functionality |
1. | Find, read and answer incoming messages, write and send new messages |
2. | Back-button navigation |
3. | Create and save a new address book entry |
4. | Edit and save an existing address book entry |
5. | Find recommendations for events and forward them to friends |
6. | Identify number of new messages |
7. | Explain the concept of the user’s own status display |
8. | Show contacts for the inner circle of friends on the start screen |
9. | See at a glance when the last contact with a friend from the inner circle took place |
10. | Adjust preferences for automated event suggestions |
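The Flesch readability constraint from section 4.4 can be checked programmatically. The sketch below uses the classic English Flesch Reading Ease formula with a deliberately naive vowel-group syllable counter (the original German scripts were presumably scored with a German adaptation such as Amstad’s formula, so treat this only as an illustration of the metric, where higher scores indicate easier text):

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Classic English Flesch Reading Ease; higher = easier to read."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    if not sentences or not words:
        return 0.0
    asl = len(words) / len(sentences)                           # avg sentence length
    asw = sum(count_syllables(w) for w in words) / len(words)   # syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw

simple = "The cat sat. The dog ran. We like short words."
complex_ = ("Notwithstanding considerable methodological heterogeneity, "
            "the investigation demonstrated substantial improvements.")
print(flesch_reading_ease(simple) > flesch_reading_ease(complex_))  # True
```

Short sentences with few syllables per word drive the score up, which is exactly the design rule applied to the instructional texts.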
Video instructions were shown on the same iPad running the CONTACT app, an advantage that should be implemented with all similar technology featuring screens capable of video playback. Thus, all instructions could be displayed in original size, reducing the need for additional age specific design measures and for additional cognitive transfer regarding position and proportion of interaction elements in the learning process. Instructions contained specific illustrative pointing and interaction triggering gestures executed with “natural” (instead of animated) hands to increase the probability of association with the spoken words and to compensate for the limited model appearance [27]. Pointing gestures showed the palm of the hand moving in circles about the target, while interaction gestures showed the back of the hand and fingers touching the screen (see figure 3). Video recording took place in a professional recording studio at Humboldt University Berlin and followed written scripts containing the spoken text, stage directions for pointing and interaction gestures, as well as screenshots used for illustration.
4.5 Design Decisions for Evaluation
To evaluate instruction effectiveness, five lessons each were combined into one video, maintaining the testing order. The first video, which also included the short introduction to the CONTACT app, was shown prior to the first eight tasks (total duration 5:00 minutes); the second was shown prior to the remaining eight tasks (total duration 4:17 minutes). The lessons were separated by an acoustic signal.
4.6 Formative Evaluation and Redesign
The resulting video instructions were evaluated by experts and older users. Mainly open, qualitative questions helped identify design flaws and aesthetic deficits, e. g. concerning text comprehensibility and gesture distinctness: in one lesson, the “back button” had been explained incorrectly, and some pointing gestures had to be improved. Some users complained about fast speech or the unpleasant sound separating the lessons. These findings were used to improve the video instructions, resulting in more precise lessons, better distinguishable gestures, a slower rate of speech, more pleasant notification sounds, longer breaks between lessons and improved audio recording quality. Finally, a usability test was devised to evaluate the effectiveness of the redesign.
5 Summative Evaluation of the Video Instruction
A total of 30 younger (M = 26.33 years, SD = 3.81; 14 female, 16 male) and 31 older adults (M = 68.16 years, SD = 5.65; 19 female, 12 male) took part in a 2 × 2 factorial cross-sectional usability study, solving 16 scenario-based tasks with vs. without video instruction. Testing was done individually with the CONTACT app prototype installed on Apple iPad 2 devices in a usability laboratory furnished as a living room to increase ecological validity (see figure 4). The procedure followed a common experimental protocol, lasted between 60 and more than 90 minutes (younger participants were faster than older ones) and was recorded for later analysis. Participants received 10 € compensation.
5.1 Procedure and Control Variables
Before the usability test, demographic data (age, gender, education, etc.) and control variables were collected, of which only the most important shall be briefly described. To test computer literacy, participants rated their computer, Internet and mobile device experience on a three point Likert scale (1 = low, 2 = medium, 3 = high) and answered the computer literacy scale (CLS, [18]). Fluid intelligence was measured with the WAIS DSC ([31, 32]; German: HAWIE ZST, HAWIE-R; [28]).
Then all participants received a general iPad introduction to ensure a common minimum of iPad interaction knowledge prior to the 16 scenario based usability tasks. All tasks began with a short naturalistic scenario followed by a corresponding assignment. For example, the 10th task read: “You would like to go to the zoo – but not alone. Let all of your friends know that you would like to go to the zoo!”
While participants in the experimental group received video instructions before the first and the ninth task, participants in the control group did not. After every task, participants rated their satisfaction with the system interaction while solving the task on a 10 point Likert scale and their subjective effort with the Rating Scale Mental Effort (RSME, [33]). The experimenter documented whether participants had solved the task correctly (effectiveness) and how long they needed (efficiency). After the tasks, the experimental group rated the video instruction on usefulness with a 10 point Likert scale and on other qualitative properties on a 9 point Likert scale.
5.2 Measuring Usability
The video instruction was evaluated against the usability criteria effectiveness, efficiency (measured in time and in interaction steps) and satisfaction. To allow comparisons across tasks, all measures were transformed to range from 0 to 100 %, with 100 % marking the perfect score. Effectiveness was measured as the number of solved tasks divided by the total number of tasks, resulting in 50 % effectiveness if half of the tasks had been solved correctly. Efficiency (time) was measured as the fastest possible time divided by the time needed to solve the task, resulting in 50 % if twice the fastest time was needed. Efficiency (steps) was measured as the minimum of necessary touch interactions (steps) divided by the number of steps needed to solve the task (50 % if twice the necessary steps were needed). The minimum of necessary steps had been determined in a prior task analysis. Satisfaction was measured for every task on a 10 point Likert scale (1 = not at all; 10 = very satisfying).
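These normalizations can be stated compactly. A minimal sketch (the function names are ours, for illustration only):

```python
def effectiveness(solved: int, total: int) -> float:
    """Share of correctly solved tasks, in percent."""
    return 100.0 * solved / total

def efficiency(optimum: float, actual: float) -> float:
    """Optimal value divided by the achieved value, in percent.
    Covers both time (fastest possible time / time needed) and
    steps (minimum necessary steps / steps needed)."""
    return 100.0 * optimum / actual

print(effectiveness(8, 16))    # 50.0: half of the 16 tasks solved
print(efficiency(30.0, 60.0))  # 50.0: twice the fastest time needed
print(efficiency(5, 10))       # 50.0: twice the minimum steps needed
```

Both efficiency measures share the same form, which is why a single helper covers time as well as interaction steps.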
6 Results
6.1 Impact of Control Variables
There were significant differences in the control variables introduced above (computer literacy and experience as well as fluid intelligence) between age groups, but not within age groups and between experimental conditions (see [1] for a detailed description of results). Thus effects found in dependent variables can be attributed to the effectiveness of the video instruction.
6.2 Video Instruction Effectiveness
Results show that both age groups benefited from the video instruction (see figure 5). For the younger group, effectiveness increased from 81 % to 91 % (z = –2.440, p = .015), efficiency (time) from 48 % to 60 % (T = –4.266, p < .001) and efficiency (steps) from 69 % to 78 % (T = –3.587, p = .001). For the older group, effectiveness increased from 49 % to 86 % (T = –6.664, p < .001), reaching the level of the younger group, and efficiency (time) increased from 22 % to 30 % (z = –2.332, p = .020). Efficiency (steps), however, did not increase significantly for the older group, and satisfaction did not increase significantly for either age group.
6.3 Video Instruction Evaluation
The video instruction design aimed to be suitable for young and old users, and both age groups rated its usefulness very positively with scores above 8 out of 10; the young group (M = 8.33, SD = 1.397) did not differ significantly from the old group (M = 8.81, SD = 1.601; z = –1.228, p = .22, r = .22; see figure 6). When testing for the null hypothesis, the alpha error level should be adjusted to α = .20 (two-sided) to avoid a type II error ([3], p. 201ff).
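The reported effect sizes are consistent with the common convention for z-based nonparametric tests, r = |z| / √N. A minimal sketch (the formula choice and N = 31 as the size of the experimental group are our assumptions, not stated explicitly in the text):

```python
from math import sqrt

def effect_size_r(z: float, n: int) -> float:
    """Effect size r for a z-based nonparametric test: r = |z| / sqrt(N)."""
    return abs(z) / sqrt(n)

# Usefulness comparison between age groups: z = -1.228,
# assuming N = 31 participants rated the videos.
print(round(effect_size_r(-1.228, 31), 2))  # 0.22
```

The same formula also reproduces, e. g., r = .55 for the rate-of-speaking comparison (z = –3.057) in table 3.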
The evaluation of the video instruction properties is documented in figures 7 and 8 along with tables 3 and 4. For figure 7 and table 3, ratings around the scale middle (5) are desirable, and all but one rating lie between 4 and 6 with no significant differences between age groups. Only the rate of speaking was rated significantly differently by the old (just right, M = 4.44, SD = .892) and young (too slow, M = 3.33, SD = .816) age groups. Thus, the video instruction can be considered suitable for both age groups.
# | Item | Young M (SD) | Old M (SD) | z | Sig. | Effect size |
---|---|---|---|---|---|---|
1. | the amount of information presented in the entire video is too little vs. too much | 4.87 (1.125) | 5.00 (.632) | –.775 | .439 | r = .14 |
2. | the amount of information presented in the sections is too little vs. too much | 4.93 (.594) | 4.75 (.683) | –.646 | .518 | r = .12 |
3. | the spans between the sections are too short vs. too long | 5.80 (1.014) | 5.31 (1.078) | –1.197 | .231 | r = .21 |
4. | the rate of speaking is too slow vs. too fast | 3.33 (.816) | 4.44 (.892) | –3.057 | .002* | r = .55 |
5. | the speech volume is too low vs. too high | 5.13 (.640) | 5.00 (.730) | –.695 | .487 | r = .12 |
For figure 8 and table 4, higher ratings are desirable and again, they were mostly positive, with 4 out of 5 items scoring above 7 on a 9 point scale. Even though both groups rated the instruction wording as comprehensible, the young group (M = 8.93, SD = .258) did so significantly more strongly (z = –3.192, p = .001, r = .57) than the old group (M = 8.31, SD = .602). The appropriateness of the lessons for contextual assistance was confirmed by both the young (M = 7.40, SD = 2.03) and the old (M = 7.56, SD = 1.32) group. These results indicate a suitable design compromise for both age groups.
# | Item | Young M (SD) | Old M (SD) | z | Sig. | Effect size |
---|---|---|---|---|---|---|
6. | wording is incomprehensible vs. comprehensible | 8.93 (.258) | 8.31 (.602) | –3.192 | .001* | r = .57 |
7. | gestures are ambiguous vs. clear | 8.40 (1.595) | 8.25 (1.238) | –1.379 | .168 | r = .25 |
8. | gestures are imprecise vs. precise | 8.27 (1.100) | 8.13 (1.500) | –.285 | .775 | r = .05 |
9. | pointing gestures and touch events are poorly vs. well discriminable | 6.73 (2.492) | 6.69 (2.522) | –.406 | .684 | r = .07 |
10. | for contextual assistance lessons are inappropriate vs. appropriate | 7.40 (2.028) | 7.56 (1.315) | –.245 | .806 | r = .04 |
7 Discussion
Even using a relatively simple user interface designed in several iterations according to the Human Centered Design approach can be challenging for (older) users, and this problem is exacerbated if more functionality is integrated into the system, increasing UI complexity. This study shows that such usability problems can be tackled with comparatively little effort by developing short video instructions (each < 1 min). Results are consistent with prior research [19, 20, 21, 27] and carefully designed multimedia learning systems seem well suited for older users (see [10]).
Video instructions substantially improved effectiveness and had positive yet small effects on the other usability criteria. Interestingly, video instructions improved efficiency (steps) only for the younger group, perhaps because they succeeded in developing an adequate mental model from the instructions, while the older group did not.
Evaluation also shows that both age groups rate the video instructions as very useful and the lessons as suitable for contextual assistance. Thus they could be integrated into the CONTACT app to provide on demand information about the purpose and location of functions in the context of use and tailored to the user needs.
Despite the positive evaluations, the video instructions could be further improved. Carefully animated highlighting could enhance pointing and interaction gestures. Lessons could be combined with the training wheel approach [6] to reduce attentional demands and help with the construction of an adequate mental model. Also, as some (younger) users perceived the rate of speaking as too slow, it could perhaps be made adjustable to the novelty and complexity of the information.
Finally, it can be concluded that the study succeeded in designing and evaluating a video instruction suitable for younger and older users. It shows that early integration of instructional design into the human centered design process can benefit the usability of the product by eliciting insights for the user interface design process, improving perspective taking in the design team and reducing additional instructional design effort. Video instruction effectiveness and usability characteristics were verified, even though some design aspects promoting learning (see [8]), such as smooth screen transitions or stepwise exploration, are difficult to evaluate in a cross-sectional study.
Further research could be directed at usability testing in longitudinal studies and realistic field use situations. For example, using real data (i. e. names, photos and addresses of actual family and friends) in the CONTACT app could improve usability testing results, because the fictitious people used in this study could not be recognized by participants, which further added to task complexity.
If user interfaces are to be equally suited for young and old users, their design needs to address a wide spectrum of user characteristics and prior experiences. This study has shown that a universal usability approach can be successful by directing attention to those who need to learn more for successful interaction and by integrating instructional design into the user interface design process.
About the authors
Roman Benz graduated in psychology with a focus on human factors and engineering psychology at the Humboldt-University Berlin (HU Berlin). After working as a research assistant at the Institute for Educational Quality Improvement (IQB, Berlin) and in the department of engineering psychology at HU Berlin, he currently works as a freelance user experience consultant for companies in the service sector and is involved as an expert for test evaluation and documentation in the project UseTree (Berlin Institute of Technology). His research interests lie in the design of cross-generational user interfaces and in the interdependence between user interface design and instructional design in the usability engineering lifecycle.
Martin Brucks studied architecture and psychology at the Berlin University of the Arts (UdK Berlin) and at the Berlin Institute of Technology (TU Berlin). His interdisciplinary research is situated between these two academic fields and centers on environmental psychology, engineering psychology and architectural psychology. In particular, his research focuses on environmental perception and evaluation, the conflict between normative and empirical design theories, as well as on the different perspectives of planners and users. In 2011 he completed his interdisciplinary Ph. D. project entitled “Perception and Affective Judgment of Building Density by Architects and Laypersons.” Since 2012 Martin Brucks has been working as a research associate in the Department of Psychology at the Humboldt-University Berlin (HU Berlin), and since 2013 he has additionally held a post-doc position in the Department of Architecture at the Dresden Institute of Technology (TU Dresden).
Michael Sengpiel is visiting professor of engineering psychology and cognitive ergonomics at Humboldt-University Berlin (HU Berlin). He received his PhD investigating “User characteristics and the effectiveness of inclusive design for older users of public access systems” in the research project “ALISA” and coordinated the AAL project “SMILEY” at HU Berlin. His research interests include sociotechnical systems and human machine systems, smart home and Ambient Assisted Living, public access technology, media and computer literacy, universal usability, and design for aging. He studied psychology with a focus on clinical as well as industrial organizational and engineering psychology at Humboldt-University Berlin and the University of Illinois at Urbana-Champaign (Illinois, USA).
Acknowledgements
The authors wish to thank all study participants and members of the SMILEY research project team at Humboldt-University Berlin as well as the project partners at the Fraunhofer-ISST, the Allianz Managed Operations and Services SE, and the scemtec automation GmbH. The SMILEY research project was funded by the German Federal Ministry of Education and Research (BMBF grant 01FC10002) from 10/2010 to 09/2012 and managed by the DLR (German Aerospace Center).
References
[1] Benz, R. (2014). Entwicklung und Evaluation einer generationengerechten AAL-Applikation mit Videoinstruktion für Tablet-PCs. Unpublished diploma thesis, Humboldt-Universität zu Berlin.
[2] Brucks, M., & Reckin, R. (2012). Ist das iPad fit für Ältere? In H. Reiterer & O. Deussen (Eds.), Mensch & Computer 2012 – Workshopband: interaktiv informiert – allgegenwärtig und allumfassend!? (pp. 45–51). München: Oldenbourg Verlag.
[3] Bühner, M., & Ziegler, M. (2008). Statistik für Psychologen und Sozialwissenschaftler. München: Pearson.
[4] Card, S. K., Moran, T. P., & Newell, A. (1983). The Psychology of Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum.
[5] Caprani, N., O’Connor, N. E., & Gurrin, C. (2012). Touch screens for the older user. In F. Auat Cheein (Ed.), Assistive Technologies (pp. 96–118). Rijeka: InTech. doi:10.5772/38302
[6] Carroll, J. M., & Carrithers, C. (1984). Training wheels in a user interface. Communications of the ACM, 27(8), 800–806. doi:10.1145/358198.358218
[7] Chandler, P., & Sweller, J. (1991). Cognitive Load Theory and the Format of Instruction. Cognition and Instruction, 8(4), 293–332. doi:10.1207/s1532690xci0804_2
[8] DIN ISO. (2006). ISO 9241-110:2006 – Ergonomics of human-system interaction – Part 110: Dialogue principles. ISO.org.
[9] DIN ISO. (2010). ISO 9241-210:2010 – Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems. ISO.org.
[10] Fisk, A. D., Rogers, W. A., Charness, N., Czaja, S. J., & Sharit, J. (2009). Designing for older adults: principles and creative human factors approaches (2nd ed.). Boca Raton: CRC Press.
[11] Flesch, R. (1948). A new readability yardstick. Journal of Applied Psychology, 32(3), 221–233. doi:10.1037/h0057532
[12] Hawthorn, D. (2000). Possible implications of aging for interface designers. Interacting with Computers, 12(5), 507–528. doi:10.1016/S0953-5438(99)00021-1
[13] Holzinger, A. (2003). Finger instead of mouse: touch screens as a means of enhancing universal access. In N. Carbonell & C. Stephanidis (Eds.), Universal access: theoretical perspectives, practice, and experience (pp. 387–397). Berlin: Springer. doi:10.1007/3-540-36572-9_30
[14] Mayer, R. E. (2005). The Cambridge Handbook of Multimedia Learning. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511816819
[15] Mayhorn, C. B., Stronge, A. J., McLaughlin, A. C., & Rogers, W. A. (2004). Older Adults, Computer Training, and the Systems Approach: A Formula for Success. Educational Gerontology, 30(3), 185–203. doi:10.1080/03601270490272124
[16] Niegemann, H. M. (2008). Kompendium multimediales Lernen. Berlin, Heidelberg: Springer.
[17] Rogers, W. A., Campbell, R. H., & Pak, R. (2001). A systems approach for training older adults to use technology. In N. Charness, D. Park, & B. Sabel (Eds.), Communication, technology, and aging: Opportunities and challenges for the future (pp. 187–208). New York: Springer Publishing Company.
[18] Sengpiel, M., & Dittberner, D. (2008). The computer literacy scale (CLS) for older adults – development and validation. In M. Herczeg & M. C. Kindsmüller (Eds.), Mensch & Computer 2008: Viel Mehr Interaktion (pp. 7–16). München: Oldenbourg Verlag. doi:10.1524/9783486598650.7
[19] Sengpiel, M., Sönksen, M., & Wandke, H. (2013). Integrating Training, Instruction and Design into Universal User Interfaces. In C. M. Schlick, E. Frieling, & J. Wegge (Eds.), Age-Differentiated Work Systems (pp. 319–345). Berlin, Heidelberg: Springer. doi:10.1007/978-3-642-35057-3_14
[20] Sengpiel, M., Struve, D., Dittberner, D., & Wandke, H. (2008). Entwicklung von Trainingsprogrammen für ältere Benutzer von IT-Systemen unter Berücksichtigung des Computerwissens. Wirtschaftspsychologie, Alter und Arbeit (3), 94–105.
[21] Sengpiel, M., & Wandke, H. (2010). Compensating the effects of age differences in computer literacy on the use of ticket vending machines through minimal video instruction. Occupational Ergonomics, 9(2). doi:10.3233/OER-2010-0174
[22] Seniorwatch 2. (2008). Final report of the Seniorwatch 2 study – assessment of the senior market for ICT progress and developments (report). Bonn: European Commission – Information Society and Media Directorate General 6.
[23] Statistisches Bundesamt. (2013). Datenreport 2013: ein Sozialbericht für die Bundesrepublik Deutschland. Bonn: Bundeszentrale für politische Bildung.
[24] Stößel, C., Wandke, H., & Blessing, L. (2009). An evaluation of finger-gesture interaction on mobile devices for elderly users. Prospektive Gestaltung von Mensch-Technik-Interaktion, 8, 470–475.
[25] Stößel, C., Wandke, H., & Blessing, L. (2010). Gestural interfaces for elderly users: help or hindrance? In S. Kopp & I. Wachsmuth (Eds.), Gesture in Embodied Communication and Human-Computer Interaction (pp. 269–280). Berlin: Springer. doi:10.1007/978-3-642-12553-9_24
[26] Stone, R. G. (2008). Mobile touch interfaces for the elderly. In G. Bradley (Ed.), Proceedings of ICT, Society and Human Beings 2008 (pp. 230–234). Amsterdam: International Association for Development of the Information Society (IADIS).
[27] Struve, D. (2010). Instruktionsdesign für ältere Nutzer interaktiver Systeme: Gestaltungsaspekte modellbasierter Lernvideos in multimedialen Bedientrainings. Berlin: Logos-Verlag.
[28] Tewes, U. (1991). Hamburg-Wechsler-Intelligenztest für Erwachsene – Revision 1991 (HAWIE-R). Bern, Stuttgart, Toronto: Huber.
[29] VDI/GGT 2236. (2013). Generationengerechte Gestaltung und Bewertung technischer Produkte – Gerontotechnik. In VDI-Handbuch Produktentwicklung und Konstruktion. Berlin: Beuth Verlag.
[30] Wandke, H., Sengpiel, M., & Soenksen, M. (2012). Myths about older people’s use of information and communication technology. Gerontology, 58(6), 564–570. doi:10.1159/000339104
[31] Wechsler, D. (1955). Manual for the Wechsler Adult Intelligence Scale. Oxford: Psychological Corporation.
[32] Wechsler, D. (1981). WAIS-R manual: Wechsler Adult Intelligence Scale – Revised. Oxford: Psychological Corporation.
[33] Zijlstra, F. R. H. (1993). Efficiency in work behaviour: a design approach for modern tools. Delft: Delft University Press.
Supplemental Material
The online version of this article (DOI https://doi.org/10.1515/icom-2016-0004) offers supplementary material, available to authorized users.
© 2016 Walter de Gruyter GmbH, Berlin/Boston