
it - Information Technology

Methods and Applications of Informatics and Information Technology

Editor-in-Chief: Conrad, Stefan

Volume 61, Issue 5-6


A design and evaluation framework for digital health interventions

Tobias Kowatsch (ORCID: https://orcid.org/0000-0001-5939-4145), corresponding author
  • Center for Digital Health Interventions, Institute of Technology Management, University of St. Gallen (ITEM-HSG), Dufourstrasse 40a, CH-9000 St. Gallen, Switzerland
  • Department of Management, Technology, and Economics, ETH Zurich, Weinbergstrasse 56/58, 8092 Zurich, Switzerland
/ Lena Otto
  • Chair of Wirtschaftsinformatik, esp. Systems Development, Technische Universität Dresden, D-01062 Dresden, Germany
/ Samira Harperink
  • Center for Digital Health Interventions, Institute of Technology Management, University of St. Gallen (ITEM-HSG), Dufourstrasse 40a, CH-9000 St. Gallen, Switzerland
/ Amanda Cotti
  • Center for Digital Health Interventions, Institute of Technology Management, University of St. Gallen (ITEM-HSG), Dufourstrasse 40a, CH-9000 St. Gallen, Switzerland
/ Hannes Schlieter
  • Chair of Wirtschaftsinformatik, esp. Systems Development, Technische Universität Dresden, D-01062 Dresden, Germany
Published Online: 2019-11-20 | DOI: https://doi.org/10.1515/itit-2019-0019


Digital health interventions (DHIs) have the potential to help the growing number of chronic disease patients better manage their everyday lives. However, guidelines for the systematic development of DHIs are still scarce. The current work therefore proposes a framework for the design and evaluation of DHIs (DEDHI). The DEDHI framework is meant to support researchers and practitioners alike, from early conceptual DHI models to large-scale implementations of DHIs in the healthcare market.

Keywords: barriers; criteria; digital health intervention; evaluation; life cycle; recommendations

ACM CCS: Applied computing → Life and medical sciences → Health care information systems

1 Introduction

Over the last decades, the prevalence of chronic health problems, i. e. diseases, conditions, and syndromes that continue or recur over a long time, has been steadily increasing. Chronic health problems include, for example, cardiovascular diseases, diabetes, chronic respiratory diseases (e. g., COPD or asthma), arthritis, or certain types of cancer (e. g., multiple myeloma) [10], [46], [53]. These health problems not only lead to a substantial decrease in the quality of life of those affected [33], [41], [68] and to losses in productivity [64], but also represent the most important economic challenge in developed countries, accounting for up to 86 percent of all healthcare expenditures [12], [46], [53].

In addition to current approaches to this important problem, for example, national chronic disease strategies and policies [89], the use of information technology to either monitor health conditions and behavior or to deliver health interventions is another promising way to support the growing number of chronically ill patients in their everyday lives [1], [53]. In this article, we use the term digital health intervention (DHI) to describe the action of intervening [66] with “tools and services that use information and communication technologies (ICTs) to improve prevention, diagnosis, treatment, monitoring and management of health and lifestyle” [26], a notion closely related to mHealth, telemedicine, telecare and health IT [28].

In light of the mature history of evidence-based medicine, with clear guidelines on how to develop and assess the effectiveness of biomedical or behavioral health interventions [70], [82], guidelines for the systematic development and assessment of DHIs are still scarce, and corresponding research has only just started. For example, first evaluation criteria have been proposed during the last decade [8], [9], [18], [27], [60], [80], [83]. However, this preliminary work lacks guidance on to what degree and when to apply these criteria along the life cycle of DHIs [63], [78]. In particular, it is essential to consider appropriate evaluation criteria not only during the conceptual and prototype phases of a DHI but also with respect to long-term implementations in the healthcare market, so that sustainable, effective and efficient use of DHIs can be achieved. The evaluation results would also provide the foundation for trust-building certifications, similar to energy efficiency labels of consumer products, which patients and health professionals alike could use to find the “right” DHIs. Moreover, barriers to the implementation and scaling-up of DHIs remain [65] that intervention authors must be aware of and that need to be addressed during and after the development process. A successful DHI consequently requires both the selection of suitable evaluation criteria and the overcoming of implementation barriers. A mapping is therefore deemed useful to assess which evaluation criteria need to be considered and which implementation barriers need to be addressed at which particular phase of a DHI life cycle.

The current work therefore proposes a framework for the iterative Design and Evaluation of DHIs (DEDHI) that describes a typical life cycle of a DHI and recommends relevant evaluation criteria and implementation barriers to be considered in each phase of this life cycle.

The DEDHI framework is meant to support researchers and practitioners alike during the design and evaluation of various instantiations of DHIs, i. e., from conceptual models to large-scale implementations in the healthcare domain. The scientific contribution lies in the alignment of research streams from different fields at the intersection of behavioral medicine (e. g., behavioral interventions), medical informatics (e. g., medical applications) and information systems research (e. g., barriers of health information systems, including aspects of technology acceptance).

The remainder of this article is structured as follows. The next section presents design and evaluation frameworks for health interventions and DHI life cycle models, which form the foundation of the proposed DEDHI framework. For this purpose, an extended version of the multiphase optimization strategy (MOST) [19], [20], [21] is used as the guiding life cycle model. Then, a systematic literature review is described and a consolidated list of evaluation criteria for DHIs is presented. Afterwards, based on a previous literature review [65], a consolidated list of implementation barriers for DHIs is outlined. In the subsequent main results section, the consolidated evaluation criteria and implementation barriers are both mapped to the DEDHI framework. This mapping is conducted in a deductive manner by applying qualitative content analysis [55]. Finally, the resulting DEDHI framework is discussed, including recommendations for research and practice, and limitations. A summary and suggestions for future work conclude this article.

2 Design and evaluation frameworks for health interventions and DHI life cycle models

Various design and evaluation frameworks for health interventions have been proposed in the past; examples are listed in Table 1. They range from guidelines for the development of public health interventions [87] and policies [24] at the population level to behavioral health interventions [56] and DHIs [59] at the individual level. A common shortcoming of these frameworks, however, lies in the lack of guidance with respect to evaluation criteria and implementation barriers along the different phases of a typical DHI life cycle. That is, appropriate guidance is missing from the conceptual model of a DHI to a product-grade DHI that is maintained in the long term. In particular, none of these frameworks offers guidance on technology-related aspects (e. g., maturity, scalability or security), and only a few frameworks consider the implementation phase explicitly [16], [22].

Table 1

Examples of design and evaluation frameworks for health interventions.

To address these shortcomings, findings from DHI life cycle models [13], [39], [47], [77] can be used. These models describe the phases that systems undergo as they evolve from a prototypical development to an operational product [84]. For example, Broens et al. [13] proposed a four-layered life cycle model. It distinguishes between the phases of prototype, small-scale pilot, large-scale pilot and operational product, and links specific determinants of successful DHI implementations to each of these phases. The generic Technology Readiness Level model follows the same structure but refines the initialization and prototype phases in a more granular way [54].

Against this background, and in order to account for all relevant phases of DHI development and implementation, we propose an extended version of MOST [19], [20], [21] as the guiding life cycle model for DEDHI. It was selected because (a) it describes the development of DHIs in a rigorous and iterative way, with several design, optimization and evaluation steps and clearly defined optimization criteria, (b) it explicitly considers a novel and promising class of personalized health interventions, i. e., just-in-time adaptive interventions [61], [62], and corresponding assessment methods such as micro-randomized trials [48] that heavily rely on the use of technology, and (c) it focuses on behavioral health interventions at the individual level, which is relevant for chronic health problems [46], [53]. Because MOST does not consider a phase after a DHI has been successfully evaluated in a randomized controlled trial, a corresponding implementation phase is added from related design and evaluation frameworks in Table 1 [15], [16], [22] and a DHI life cycle model [13]. Moreover, details on recommended maturity levels of DHI technology are incorporated into this extended version of MOST from corresponding DHI life cycle models [13], [54].

The proposed DEDHI framework, which is based upon this extended version of MOST, is shown in Table 6. This table also includes the consolidated evaluation criteria and implementation barriers for DHIs which are described in more detail in the following two sections.

3 Evaluation criteria for DHIs

A systematic literature review was conducted to identify evaluation criteria for DHIs. A recently published systematic review of quality criteria for mobile health applications [63], in combination with an explorative search in the PubMed and Google Scholar databases, was used to identify appropriate search terms that revealed a substantial number of relevant search results.

The final set of search terms is listed in Table 2 and was applied as follows: (ID1 and ID2 and ID3 and ID4 in Title) and (ID1 and ID2 in Abstract) (note that ID refers to the search term ID listed in Table 2).

Table 2

Overview of search terms.
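The combination rule above can be sketched as a small query builder. Since the actual search terms of Table 2 are not reproduced here, the term groups below are hypothetical placeholders, and the field-tag syntax follows PubMed conventions:

```python
# Sketch of the query combination rule: (ID1 AND ID2 AND ID3 AND ID4 in Title)
# AND (ID1 AND ID2 in Abstract). The term groups are hypothetical stand-ins
# for the actual search terms listed in Table 2.
def or_group(terms, field):
    """OR-combine the synonyms of one term group, restricted to one field."""
    return "(" + " OR ".join(f"{t}[{field}]" for t in terms) + ")"

# Hypothetical term groups (stand-ins for ID1-ID4 from Table 2)
groups = {
    "ID1": ["digital health", "mHealth", "eHealth"],
    "ID2": ["intervention"],
    "ID3": ["evaluation", "assessment"],
    "ID4": ["criteria", "framework"],
}

title_part = " AND ".join(or_group(groups[i], "Title") for i in ["ID1", "ID2", "ID3", "ID4"])
abstract_part = " AND ".join(or_group(groups[i], "Abstract") for i in ["ID1", "ID2"])
query = f"({title_part}) AND ({abstract_part})"
print(query)
```

Keeping the OR-combination of synonyms inside each group and the AND-combination across groups separate makes it easy to re-run the same rule against databases with different field-tag syntax.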

The goal of the search strategy was to update and complement prior findings [63], given the broader focus of the current work on DHIs, which includes not only mobile health interventions but also web-based interventions and hybrid interventions that also foresee guidance by human health professionals [51]. The resulting search strategy therefore consisted of three approaches. First, a backward search was conducted with relevant work already identified by Nouri et al. [63], but with the broader focus on DHIs. Here, relevant articles were screened back to the year 2000, which marks the start of systematic research on DHIs [3], [4]. Second, the work of Nouri et al. [63] was updated with the broader DHI focus for relevant work from December 2016 until May 2019. Third, the search strategy of Nouri et al. [63] was extended to socio-technical databases and journals, i. e., the ACM Digital Library, IEEE Xplore, and A-ranked and B-ranked digital health journals as listed in [75]. An overview of the search strategy is outlined in Table 3.

Table 3

Overview of the search strategy.

A search result was included if the work was original, peer-reviewed, written in English, and described a tool with evaluation criteria for DHIs. Thus, systematic reviews of evaluation criteria were excluded but relevant work from these reviews was screened when published between January 2000 and May 2019.

The inclusion of relevant work was initially carried out by two authors of this article on the basis of title and abstract. In the event of uncertainty as to whether a particular work fulfilled the inclusion criteria, the entire text was read and, if necessary, a third co-author was consulted. The evaluation criteria with a corresponding definition were then extracted from the resulting list of included work. All criteria with corresponding definitions (if available) were then reviewed independently by two co-authors and summarized into inductive categories according to qualitative content analysis [55]. In case of uncertainty, the two co-authors consulted each other and also included a third co-author to find a consensus.
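The double-screening rule described above can be sketched as follows; the reviewer decision functions are hypothetical illustrations, not the actual inclusion criteria:

```python
# Sketch of the double-screening rule: two reviewers decide independently on
# title and abstract; on disagreement the full text is read and a third
# co-author breaks the tie. The decision functions below are hypothetical.
def screen(record, reviewer_a, reviewer_b, tie_breaker):
    """Return the inclusion decision for one record."""
    a, b = reviewer_a(record), reviewer_b(record)
    if a == b:
        return a
    # Uncertainty: read the full text and consult a third co-author.
    return tie_breaker(record)

# Hypothetical reviewer heuristics keyed on title/abstract keywords
rev_a = lambda r: "criteri" in r["title"].lower()
rev_b = lambda r: "evaluat" in r["title"].lower()
third = lambda r: "dhi" in r["abstract"].lower()

record = {"title": "Evaluation criteria for mobile apps", "abstract": "..."}
print(screen(record, rev_a, rev_b, third))  # both reviewers agree -> True
```

The same consensus pattern (independent coding, then discussion or a third coder) applies to the later category-building and mapping steps.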

The systematic search initially yielded 2616 journal articles and conference papers, which were then screened step by step as outlined in Figure 1.

Figure 1

Overview of the screening process. Note: The #1–3 indicates the search strategy from Table 3.

Overall, 331 evaluation criteria were then extracted from the resulting 36 records and consolidated into 13 categories. These categories are listed in Table 4 and accompanied by a description, references for further readings and the number of corresponding evaluation criteria.

Table 4

Consolidated categories of evaluation criteria for DHIs. Note: # EC = number of evaluation criteria (in % of all, i. e., 331); representative references are provided for further reading.

An overview of all selected articles, evaluation criteria and the mapping of these criteria to the categories, including examples, is provided in [50]. The consolidated categories show that ease of use is by far the most dominant category, with 87 evaluation criteria. By contrast, evaluation criteria related to ethical and safety aspects of a DHI have so far been rather neglected by the scientific community. Moreover, it can be observed that one fundamental aspect of evidence-based medicine and the primary objective of the design and evaluation frameworks outlined in Table 1, i. e., assessing the degree to which an intervention is effective, does not occupy a prominent position, ranking only eighth in Table 4. Finally, both subjective evaluation criteria (e. g., perceived benefit of a DHI) and objectively measured criteria (e. g., adherence to a DHI) appear among the resulting categories.

4 Implementation barriers for DHIs

A list of implementation barriers for DHIs was already identified in prior work by means of a systematic literature review of reviews [65]. For the purpose of the current work, the 98 identified implementation barriers were summarized into inductive categories according to qualitative content analysis [55]. The 98 barriers yielded 106 assignments to categories, because some barriers relate to more than one category. An overview of the resulting categories of implementation barriers, their descriptions and their numbers is given in Table 5.

Table 5

Consolidated categories of implementation barriers for DHIs. Note: # IB = number of implementation barriers (in % of all, i. e. 106); representative references are provided for further reading.
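The multi-label counting behind these numbers (98 barriers yielding 106 category assignments) can be illustrated with a toy example; the barriers and categories below are hypothetical stand-ins, not the actual entries of Table 5:

```python
from collections import Counter

# Hypothetical barrier-to-category assignments: some barriers map to more
# than one category, so the number of assignments exceeds the number of
# barriers, as with the 98 barriers and 106 assignments above.
assignments = {
    "lack of funding": ["cost/funding"],
    "start-up and maintenance cost": ["cost/funding"],
    "lack of technical skills": ["individual characteristics", "training"],
    "unclear legal situation": ["regulation"],
}

n_barriers = len(assignments)
n_assignments = sum(len(cats) for cats in assignments.values())
per_category = Counter(c for cats in assignments.values() for c in cats)

print(n_barriers, n_assignments)  # 4 barriers, 5 assignments
```

Counting assignments rather than barriers is what makes the per-category percentages in Table 5 sum over 106 instead of 98.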

5 Mapping of evaluation criteria and implementation barriers along the DEDHI framework

The mapping of the evaluation criteria and implementation barriers for DHIs to the life cycle phases of the proposed DEDHI framework was conducted by means of qualitative content analysis [55]. The analysis was carried out by at least two scientists independently; inconsistencies were resolved through discussion until consensus was reached. The resulting overview of the DEDHI framework, including the mapping of evaluation criteria and implementation barriers, is given in Table 6.

For each phase of the DEDHI framework, the overall goal and the corresponding design and evaluation tasks are outlined. These goals and tasks are adapted to the concept of DHIs from MOST [19], [20], [21] for Phases 1, 2 and 3, and from related work on intervention design and life cycle models [15], [16], [22] for Phase 4, as outlined in Section 2. In addition, a brief description of the technical maturity of the DHI is provided to help intervention authors better understand the technical perspective. Moreover, for each phase of the DEDHI framework, relevant evaluation criteria and implementation barriers are provided that intervention authors are advised to address in order to create evidence-based DHIs that can be successfully implemented in the healthcare market.

Table 6

Overview of the DEDHI framework.

While almost all criteria and barriers relate to only a single phase, some relate to two or all phases. For example, the two barrier categories funding and cost relate to all phases, as they cover start-up as well as maintenance costs and funding. Also, some individual characteristics (e. g., lack of trust in colleagues [43], [71], lack of trust in politics [71], sticking to old-fashioned modalities of care [43]) and negative associations of healthcare providers relate to more than one phase. First, they need to be considered within user-centered design processes in the preparation phase. Second, they can be addressed during the implementation phase by means of advertisement and awareness campaigns. Furthermore, usability also relates to more than one phase; however, different facets of the usability category relate to different DEDHI framework phases.

Finally, it must be noted that some of the implementation barriers could not be aligned to the DEDHI framework, as they cannot be overcome during the life cycle of DHIs but instead relate to framing conditions. These include missing benefits, cooperation and responsibilities, as well as characteristics of the disease involved, which hinder the usage of DHIs in general.

6 Discussion

The DEDHI framework provides an overview of evaluation criteria and implementation barriers to be considered during the life cycle phases of DHIs. All criteria and almost all barriers could be matched to the four phases. However, the phases are linked to different numbers of criteria and barriers, which underlines the importance of addressing both factors throughout the whole life cycle. It also underlines the fit of the DEDHI framework for its purpose of informing DHI developers and evaluators step by step about the criteria and barriers to be considered. However, dependencies between criteria (e. g., a lower relevance of costs whenever a DHI is easy to use and sufficiently helpful) were not considered in our work, as they could not be identified by the literature review and content analysis themselves.

The evaluation criteria and implementation barriers presented in this work originate from different countries and geographic regions, for example, the United States [30], Europe [36], [69], Australia [69] or Africa [34], [85]. This indicates the broad applicability of the criteria and barriers and, with it, of the DEDHI framework.

Moreover, it becomes obvious from the current work that the interdisciplinary field of Digital Health needs to integrate and consolidate perspectives and research findings from various fields such as behavioral medicine (e. g., the “active” ingredients of DHIs such as well-established behavior change techniques), computer science (e. g., machine learning algorithms embedded in DHIs that detect critical health conditions), software engineering (e. g., the rigorous design, implementation and test of DHIs) or information systems research (e. g., understanding the use and success factors of DHIs). That is, to better understand the development and evaluation of DHIs, it is crucial to broaden the scope and to account for related work at the intersection of the relevant disciplines involved.

Last but not least, no work comes without limitations, and this one is no exception. First and foremost, the proposed DEDHI framework was developed purely inductively, based on content analysis techniques and existing justificatory knowledge. It has therefore not yet been applied, validated and revised during the development and evaluation of DHIs in the field. Thus, empirical evidence supporting the utility of the DEDHI framework has not been established yet.

Second, the current work considers findings from scientific outlets only and thus incorporates country-specific regulatory frameworks only indirectly, to the extent to which these regulations are covered by those outlets. That is, legal frameworks and prescriptions with respect to the life cycle phases will probably differ in detail, depending on the class of the (medical) DHI, compared to the more idealistic four phases of DEDHI. For example, with the goal of accelerating the digital transformation of healthcare, the German Ministry of Health proposes implementing easy-to-use and secure DHIs in a first phase, before their effectiveness is assessed in a second step [86]. This approach has the advantage, in particular for start-up companies, that significant financial investments over up to several years (e. g., for optimization and evaluation trials) are not required up-front. Instead, in interdisciplinary collaborations with digital health (research or business) organizations, relevant stakeholders such as patient organizations, health insurers or pharmaceutical companies may take over a significant share of these investments due to the early product character of DHIs. Another advantage is the primary focus on real-world trials, as opposed to often artificial efficacy studies in controlled environments, for example, with highly selected participants or study nurses experienced with clinical trials [32]. The major shortcoming of such an approach, however, is that patients may be burdened with DHIs that are (potentially) not effective at all.

Third, the proposed DEDHI framework does not make an explicit distinction between the goals and motivations of the various stakeholders interested in the design and evaluation of DHIs, such as research teams funded by national research foundations or commercial digital health companies that depend on payers such as health insurance organizations. While research teams may be primarily interested in the publication of novel digital coaching concepts and their impact on therapy adherence (here, the focus lies primarily on the preparation and optimization phases), the primary interest of commercial digital health companies may be to bring a new DHI to the healthcare market as fast as possible (here, the focus lies on the implementation phase). Implications for the documentation and testing of DHIs may be very different in these cases. For example, a digital health company must establish well-documented software development processes at the very beginning of a new DHI project, as these are hard regulatory requirements when the DHI is offered in the healthcare market. By contrast, the very same regulatory requirements are not relevant for the research team.

Finally, the chosen methods include subjective procedures. Literature searches and qualitative content analysis are limited by the terms and databases chosen and by the subjectivity of the researchers involved. However, such bias was reduced as far as possible. For example, relevant databases were included in the searches and synonyms of the search terms were tested for results. Furthermore, each methodological step was carried out by at least two authors independently, and inconsistencies were resolved by discussion and consensus.

7 Conclusion and future work

Due to the lack of well-established design and assessment guidelines for digital health interventions (DHIs), the current work proposed a framework for the Design and Evaluation of DHIs (DEDHI). For this purpose, justificatory knowledge from the fields of behavioral medicine, medical informatics and information systems was reviewed. Overall, four life cycle phases of DHIs, 331 evaluation criteria and 98 implementation barriers were identified and consolidated. The resulting DEDHI framework is meant to support researchers and practitioners alike during the various design and evaluation phases of DHIs.

Future work is advised to critically apply, reflect on, validate and revise the proposed framework and its components, as the field of Digital Health is still in its nascent stage. Accordingly, it is recommended that experts from the fields of ethics, regulatory affairs, public health, medicine, computer science and information systems work closely together to pave the way for evidence-based DHIs. This would not only push the field of Digital Health forward but would, first and foremost, help a significant number of individuals better manage their chronic health problems in their everyday lives.


  • 1.

    Agarwal, R., et al. The Digital Transformation of Healthcare: Current Status and the Road Ahead. Information Systems Research, 21(4):796–809, 2010. CrossrefGoogle Scholar

  • 2.

    Albrecht, U.V., et al. Quality Principles of App Description Texts and Their Significance in Deciding to Use Health Apps as Assessed by Medical Students: Survey Study. JMIR Mhealth Uhealth, 7(2):e13375, 2019. Google Scholar

  • 3.

    Andersson, G. Internet interventions: Past, present and future. Internet Interventions, 12(June):181–188, 2018. CrossrefGoogle Scholar

  • 4.

    Andersson, G., H. Riper, and P. Carlbring. Editorial: Introducing Internet Interventions – A new Open Access Journal. Internet Interventions, 1(1):1–2, 2014. CrossrefGoogle Scholar

  • 5.

    Baker, T.B., et al. Enhancing the effectiveness of smoking treatment research: conceptual bases and progress. Addiction, 111(1):107–116, 2016. CrossrefGoogle Scholar

  • 6.

    Bartholomew, L.K., et al. Planning health promotion programs; an Intervention Mapping approach. 4th ed. San Francisco, CA: Jossey-Bass, 2016. Google Scholar

  • 7.

    Bartholomew, L.K., G.S. Parcel, and G. Kok. Intervention Mapping: a process for designing theory- and evidence-based health education programs. Health Education & Behavior, 25(5):545–563, 1998. CrossrefGoogle Scholar

  • 8.

    Bates, D.W., A. Landman, and D.M. Levine. Health Apps and Health Policy: What Is Needed? JAMA, 320(19):1975–1976, 2018. CrossrefGoogle Scholar

  • 9.

    Baumel, A., et al. Enlight: A Comprehensive Quality and Therapeutic Potential Evaluation Tool for Mobile and Web-Based eHealth Interventions. Journal of Medical Internet Research, 19(3):e82, 2017. CrossrefGoogle Scholar

  • 10.

    Bernell, S. and S.W. Howard. Use Your Words Carefully: What is a Chronic Disease? Frontiers in Public Health, 4(August: Article 159):1–3, 2016. Google Scholar

  • 11.

    Bobrow, K., et al. Using the Medical Research Council framework for development and evaluation of complex interventions in a low resource setting to develop a theory-based treatment support intervention delivered via SMS text message to improve blood pressure control. BMC Health Services Research, 18(33):1–15, 2018. Google Scholar

  • 12.

    Brennan, P., et al. Chronic disease research in Europe and the need for integrated population cohorts. Europen Journal of Epidemiology, 32(9):741–749, 2017. CrossrefGoogle Scholar

  • 13.

    Broens, T.H., et al. Determinants of successful telemedicine implementations: a literature study. Journal of Telemedicine and Telecare, 13(6):303–309, 2007. CrossrefGoogle Scholar

  • 14.

    Brown, W., 3rd, et al. Assessment of the Health IT Usability Evaluation Model (Health-ITUEM) for evaluating mobile health (mHealth) technology. Journal of Biomedical Informatics, 46(6):1080–1087, 2013. CrossrefGoogle Scholar

  • 15.

    Campbell, M., et al. Framework for design and evaluation of complex interventions to improve health. BMJ, 321(7262):694–696, 2000. CrossrefGoogle Scholar

  • 16.

    Campbell, N.C., et al. Designing and evaluating complex interventions to improve health care. BMJ, 334:455–459, 2007. CrossrefGoogle Scholar

  • 17.

    Chan, S., et al. Towards a Framework for Evaluating Mobile Mental Health Apps. Telemedicine Journal and E-Health, 21(12):1038–1041, 2015. CrossrefGoogle Scholar

  • 18.

    Christopoulou, S.C., T. Kotsilieris, and I. Anagnostopoulos. Assessment of Health Information Technology Interventions in Evidence-Based Medicine: A Systematic Review by Adopting a Methodological Evaluation Framework. Healthcare, 6(109):1–22, 2018. Google Scholar

  • 19.

    Collins, L.M., Optimization of Behavioral, Biobehavioral, and Biomedical Interventions: The Multiphase Optimization Strategy (MOST). New York: Springer, 2018. Google Scholar

  • 20.

    Collins, L.M., et al. The Multiphase Optimization Strategy for Engineering Effective Tobacco Use Interventions. Annals of Behavioral Medicine, 41(2):208–226, 2011. CrossrefGoogle Scholar

  • 21.

    Collins, L.M., S.A. Murphy, and V. Strecher. The Multiphase Optimization Strategy (MOST) and the Sequential Multiple Assignment Randomized Trial (SMART) – New Methods for More Potent eHealth Interventions. American Journal of Preventive Medicine, 32(5(Supplement)):S112–S118, 2007. CrossrefGoogle Scholar

  • 22.

    Craig, P., et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ, 337(a1655):1–6, 2008. Google Scholar

  • 23.

    Daraz, L., et al. Health information from the web – assessing its quality: a KET intervention. Toronto International Conference Science and Technology for Humanity. Toronto, Canada, 2009. Google Scholar

  • 24.

    de Zoysa, I., et al. Research steps in the development and evaluation of public health interventions. Bulletin of the World Health Organization, 76(2):127–133, 1998. Google Scholar

  • 25.

    Direito, A., et al. Application of the behaviour change wheel framework to the development of interventions within the City4Age project. IEEE 25th International Conference on Software, Telecommunications and Computer Networks (SoftCOM). Split, Croatia: IEEE, 2017. Google Scholar

  • 26.

    European Commission. eHealth: Digital Health and Care. https://ec.europa.eu/health/ehealth/overview_en, 2019. Google Scholar

  • 27.

    Eysenbach, G. CONSORT-EHEALTH: Improving and Standardizing Evaluation Reports of Web-based and Mobile Health Interventions. Journal of Medical Internet Research, 13(4):e126, 2011.

  • 28.

    FDA. Digital Health. https://www.fda.gov/medical-devices/digital-health, 2019.

  • 29.

    Fedele, D.A., et al. Design Considerations When Creating Pediatric Mobile Health Interventions: Applying the IDEAS Framework. Journal of Pediatric Psychology, 44(3):343–348, 2019.

  • 30.

    Fitzner, K. and G. Moss. Telehealth – An Effective Delivery Method for Diabetes Self-Management Education? Population Health Management, 16(3):169–177, 2013.

  • 31.

    Food Security and Nutrition Network Social and Behavioral Change Task Force. Designing for Behavior Change. Washington, DC: The TOPS Program, 2013.

  • 32.

    Ford, I. and J. Norrie. Pragmatic Trials. New England Journal of Medicine, 375(5):454–463, 2016.

  • 33.

    Garin, N., et al. Impact of multimorbidity on disability and quality of life in the Spanish older population. PLoS One, 9(11):e111498, 2014.

  • 34.

    Govender, S.M. and M. Mars. The use of telehealth services to facilitate audiological management for children: A scoping review and content analysis. Journal of Telemedicine & Telecare, 23(3):392–401, 2016.

  • 35.

    Green, L. and M.K. Kreuter. Health program planning: an educational and ecological approach. 4th ed. New York, NY: McGraw Hill, 2005.

  • 36.

    Gros, D.F., et al. Delivery of Evidence-Based Psychotherapy via Video Telehealth. Journal of Psychopathology & Behavioral Assessment, 35(4):506–521, 2013.

  • 37.

    Hage, E., et al. Implementation factors and their effect on e-Health service adoption in rural communities: a systematic literature review. BMC Health Services Research, 13(1):1–16, 2013.

  • 38.

    Hlaing, P.H., P.E. Sullivan, and P. Chaiyawat. Application of PRECEDE-PROCEED Planning Model in Transforming the Clinical Decision Making Behavior of Physical Therapists in Myanmar. Frontiers in Public Health, 7(Article 114), 2019.

  • 39.

    Høstgaard, A.M.B., P. Bertelsen, and C. Nøhr. Constructive eHealth evaluation: lessons from evaluation of EHR development in 4 Danish hospitals. BMC Medical Informatics and Decision Making, 17, 2017.

  • 40.

    Huckvale, K., et al. Apps for asthma self-management: a systematic assessment of content and tools. BMC Medicine, 10(144):1–11, 2012.

  • 41.

    Husky, M.M., et al. Chronic back pain and its association with quality of life in a large French population survey. Health and Quality of Life Outcomes, 16(195):1–16, 2018.

  • 42.

    Iribarren, S.J., et al. Smartphone Applications to Support Tuberculosis Prevention and Treatment: Review and Evaluation. JMIR mHealth and uHealth, 4(2):e25, 2016.

  • 43.

    Jang-Jaccard, J., et al. Barriers for Delivering Telehealth in Rural Australia: A Review Based on Australian Trials and Studies. Telemedicine & e-Health, 20(5):496–504, 2014.

  • 44.

    Jeon, E., et al. Analysis of the information quality of Korean obesity-management smartphone applications. Healthcare Informatics Research, 20(1):23–29, 2014.

  • 45.

    Jin, M. and J. Kim. Development and Evaluation of an Evaluation Tool for Healthcare Smartphone Applications. Telemedicine Journal and e-Health, 21(10):831–837, 2015.

  • 46.

    Katz, D.L., et al. Lifestyle as Medicine: The Case for a True Health Initiative. American Journal of Health Promotion, 32(6):1452–1458, 2018.

  • 47.

    Khoja, S., et al. Conceptual Framework for Development of Comprehensive e-Health Evaluation Tool. Telemedicine and e-Health, 19(1):48–53, 2013.

  • 48.

    Klasnja, P., et al. Microrandomized Trials: An Experimental Design for Developing Just-in-Time Adaptive Interventions. Health Psychology, 34(Supplement):1220–1228, 2015.

  • 49.

    Kotz, D., S. Avancha, and A. Baxi. A privacy framework for mobile health and home-care systems. Proceedings of the first ACM workshop on Security and privacy in medical and home-care systems – SPIMACS’09. Chicago, Illinois: ACM Press, 2009.

  • 50.

    Kowatsch, T., S. Harperink, and A. Cotti. Evaluation Criteria for Digital Health Interventions. https://doi.org/10.17605/OSF.IO/Q6ZK5, 2019.

  • 51.

    Kowatsch, T., et al. Text-based Healthcare Chatbots Supporting Patient and Health Professional Teams: Preliminary Results of a Randomized Controlled Trial on Childhood Obesity. PEACH Workshop, co-located with the 17th International Conference on Intelligent Virtual Agents (IVA 2017). Stockholm, Sweden, 2017.

  • 52.

    Kruse, C.S., et al. Telemedicine Use in Rural Native American Communities in the Era of the ACA: a Systematic Literature Review. Journal of Medical Systems, 40(6):145, 2016.

  • 53.

    Kvedar, J.C., et al. Digital medicine’s march on chronic disease. Nature Biotechnology, 34(3):239–246, 2016.

  • 54.

    Mankins, J.C. Technology Readiness Levels (White Paper), 1995.

  • 55.

    Mayring, P. Qualitative Content Analysis. Forum: Qualitative Social Research, 1(2), 2000.

  • 56.

    Michie, S., M.M. van Stralen, and R. West. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implementation Science, 6(42):1–11, 2011.

  • 57.

    Miranda, J. and J. Côté. The Use of Intervention Mapping to Develop a Tailored Web-Based Intervention, Condom-HIM. JMIR Public Health & Surveillance, 3(2):e20, 2017.

  • 58.

    Moustakis, V., et al. Website quality assessment criteria. 9th International Conference on Information Quality (ICIQ-04). Cambridge, MA, USA: MIT, 2004.

  • 59.

    Mummah, S.A., et al. IDEAS (Integrate, Design, Assess, and Share): A Framework and Toolkit of Strategies for the Development of More Effective Digital Interventions to Change Health Behavior. Journal of Medical Internet Research, 18(12):e317, 2016.

  • 60.

    Murray, E., et al. Evaluating Digital Health Interventions: Key Questions and Approaches. American Journal of Preventive Medicine, 51(5):843–851, 2016.

  • 61.

    Nahum-Shani, I., E.B. Hekler, and D. Spruijt-Metz. Building Health Behavior Models to Guide the Development of Just-in-Time Adaptive Interventions: A Pragmatic Framework. Health Psychology, 34(Supplement):1209–1219, 2015.

  • 62.

    Nahum-Shani, I., et al. Just-in-Time Adaptive Interventions (JITAIs) in Mobile Health: Key Components and Design Principles for Ongoing Health Behavior Support. Annals of Behavioral Medicine, 52(6):446–462, 2018.

  • 63.

    Nouri, R., et al. Criteria for assessing the quality of mHealth apps: a systematic review. Journal of the American Medical Informatics Association, 25(8):1089–1098, 2018.

  • 64.

    OECD/EU. Health at a Glance: Europe 2016 – State of Health in the EU Cycle. Paris, France: OECD, 2016.

  • 65.

    Otto, L. and L. Harst. Investigating Barriers for the Implementation of Telemedicine Initiatives: A Systematic Review of Reviews. 25th Americas Conference on Information Systems (AMCIS). Cancun, Mexico, 2019.

  • 66.

    Oxford University Press. Lexico.com: Intervention. https://www.lexico.com/en/definition/intervention, 2019.

  • 67.

    Powell, A.C., et al. Interrater Reliability of mHealth App Rating Measures: Analysis of Top Depression and Smoking Cessation Apps. JMIR mHealth and uHealth, 4(1):e15, 2016.

  • 68.

    Renne, I. and R.J. Gobbens. Effects of frailty and chronic diseases on quality of life in Dutch community-dwelling older adults: a cross-sectional study. Clinical Interventions in Aging, 13:325–334, 2018.

  • 69.

    Reynoldson, C., et al. Assessing the quality and usability of smartphone apps for pain self-management. Pain Medicine, 15(6):898–909, 2014.

  • 70.

    Sackett, D.L., et al. Evidence based medicine: what it is and what it isn’t. BMJ, 312(7032):71–72, 1996.

  • 71.

    Saliba, V., et al. Telemedicine across borders: A systematic review of factors that hinder or support implementation. International Journal of Medical Informatics, 81(12):793–809, 2012.

  • 72.

    Schnall, R., et al. A user-centered model for designing consumer mobile health (mHealth) applications (apps). Journal of Biomedical Informatics, 60(April):243–251, 2016.

  • 73.

    Schulze, K. and H. Krömker. A framework to measure user experience of interactive online products. 7th International Conference on Methods and Techniques in Behavioral Research – MB’10. Eindhoven, The Netherlands: ACM Press, 2010.

  • 74.

    Scott, K., D. Richards, and R. Adhikari. A Review and Comparative Analysis of Security Risks and Safety Measures of Mobile Health Apps. Australasian Journal of Information Systems, 19:1–18, 2015.

  • 75.

    Serenko, A., M.S. Dohan, and J. Tan. Global Ranking of Management- and Clinical-centered E-health Journals. Communications of AIS, 41(1):Article 9, 2017.

  • 76.

    Simpson, S.G. and C.L. Reid. Therapeutic alliance in videoconferencing psychotherapy: A review. Australian Journal of Rural Health, 22(6):280–299, 2014.

  • 77.

    Steinberg, D., G. Horwitz, and D. Zohar. Building a business model in digital medicine. Nature Biotechnology, 33(9):910–920, 2015.

  • 78.

    Stoyanov, S., et al. Mobile App Rating Scale: A new tool for assessing the quality of health-related mobile apps. JMIR mHealth and uHealth, 3(1):e27, 2015.

  • 79.

    Stoyanov, S.R., et al. Development and Validation of the User Version of the Mobile Application Rating Scale (uMARS). JMIR mHealth and uHealth, 4(2):e72, 2016.

  • 80.

    Stoyanov, S.R., et al. Mobile App Rating Scale: A New Tool for Assessing the Quality of Health Mobile Apps. JMIR mHealth and uHealth, 3(1):e27, 2015.

  • 81.

    Taki, S., et al. Infant Feeding Websites and Apps: A Systematic Assessment of Quality and Content. Interactive Journal of Medical Research, 4(3):e18, 2015.

  • 82.

    The Lancet. Evidence-based medicine, in its place. The Lancet, 346(8978):785, 1995.

  • 83.

    Torous, J., et al. Towards a consensus around standards for smartphone apps and digital mental health. World Psychiatry, 18(1):97–98, 2019.

  • 84.

    van Dyk, L. A Review of Telehealth Service Implementation Frameworks. International Journal of Environmental Research and Public Health, 11(2):1279–1298, 2014.

  • 85.

    Veldsman, A. and D. Van Greunen. Comparative usability evaluation of a mobile health app. 2017 IST-Africa Week Conference (IST-Africa). Windhoek, Namibia: IEEE, 2017.

  • 86.

    Waschinski, G. So will Jens Spahn Gesundheits-Apps schneller zu den Patienten bringen [How Jens Spahn wants to bring health apps to patients faster]. Accessed 15.05.2019, https://www.handelsblatt.com/24344290.html, 2019.

  • 87.

    Wight, D., et al. Six steps in quality intervention development (6SQuID). Journal of Epidemiology & Community Health, 70(5):520–525, 2016.

  • 88.

    Yasini, M., et al. mHealth Quality: A Process to Seal the Qualified Mobile Health Apps. Studies in Health Technology and Informatics, 228:205–209, 2016.

  • 89.

    Yen, L., et al. Health professionals, patients and chronic illness policy: a qualitative study. Health Expectations: An International Journal of Public Participation in Health Care and Health Policy, 14(1):10–20, 2011.

About the article

Tobias Kowatsch

Prof. Dr. Tobias Kowatsch studied Media and Computer Science at Hochschule Furtwangen University and Business Informatics at Saarland University. He is the Scientific Director of the Center for Digital Health Interventions at ETH Zurich and the University of St. Gallen, which he co-founded in 2013. Since 2018, he has also been Assistant Professor for Digital Health at the University of St. Gallen.

Lena Otto

Dipl.-Wirt.-Inf. Lena Otto studied Business Informatics at TU Dresden. Since 2017, she has been a Research Associate at the Chair of Business Informatics, esp. Systems Development, at TU Dresden and part of the junior research group Care4Saxony. Her research focuses on the adoption and scaling-up of telemedicine.

Samira Harperink

Samira Harperink studies Business Administration at the University of St. Gallen. Since 2018, she has been a Student Research Assistant at the Center for Digital Health Interventions, a joint initiative of the Department of Management, Technology, and Economics at ETH Zurich and the Institute of Technology Management at the University of St. Gallen.

Amanda Cotti

Amanda Cotti studies Business Administration at the University of St. Gallen. Since 2019, she has been a Student Research Assistant at the Center for Digital Health Interventions, a joint initiative of the Department of Management, Technology, and Economics at ETH Zurich and the Institute of Technology Management at the University of St. Gallen.

Hannes Schlieter

Dr. rer. pol. Hannes Schlieter studied at TU Dresden, where he received his doctoral degree in 2012. He is currently a postdoctoral fellow at the Chair of Business Informatics. His research interests include conceptual modeling, business process management, and digital ecosystems. He is a technical leader in a Horizon 2020 project and head of a research group that investigates the digital transformation of healthcare systems.

Received: 2019-06-15

Revised: 2019-10-17

Accepted: 2019-11-04

Published Online: 2019-11-20

Published in Print: 2019-10-25

Funding Source: European Social Fund

Award identifier / Grant number: 100310385

This work was co-funded by Health Promotion Switzerland as well as the European Social Fund and the Free State of Saxony (Grant no. 100310385).

Citation Information: it - Information Technology, Volume 61, Issue 5-6, Pages 253–263, ISSN (Online) 2196-7032, ISSN (Print) 1611-2776, DOI: https://doi.org/10.1515/itit-2019-0019.

© 2019 Kowatsch et al., published by De Gruyter. This work is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
