Publicly Available Published by De Gruyter May 23, 2017

Challenges and opportunities from the Agency for Healthcare Research and Quality (AHRQ) research summit on improving diagnosis: a proceedings review

Kerm Henriksen, Chris Dymek, Michael I. Harrison, P. Jeffrey Brady and Sharon B. Arnold
From the journal Diagnosis



The Improving Diagnosis in Health Care report from the National Academies of Sciences, Engineering, and Medicine (NASEM) provided an opportunity for many groups to reflect on the role they could play in taking actions to improve diagnostic safety. As part of its own process, AHRQ held a research summit in the fall of 2016, inviting members from a diverse collection of organizations, both inside and outside of Government, to share their perspectives on what is known about diagnosis and the challenges that need to be addressed.


The goals of the summit were to learn from the insights of participants; examine issues associated with definitions of diagnostic error and gaps in the evidence base; explore clinician and patient perspectives; gain a better understanding of data and measurement, health information technology, and organizational factors that impact the diagnostic process; and identify potential future directions for research.

Summary and outlook:

Plenary sessions focused on the state of the new diagnostic safety discipline followed by breakout sessions on the use of data and measurement, health information technology, and the role of organizational factors. The proceedings review captures many of the key challenges and areas deserving further research, revealing stimulating yet complex issues.


As described in the Improving Diagnosis in Health Care report from the National Academies of Sciences, Engineering, and Medicine (NASEM), diagnostic error occurs throughout all settings of care and is estimated to affect 5% of US adults who receive outpatient care annually [1]. Postmortem examinations show that diagnostic error contributes to approximately 10% of patient deaths. Medical record reviews suggest such errors account for 6%–17% of hospital adverse events, and diagnostic errors are the leading type of paid medical malpractice claims.

Estimates based on these and other studies suggest that most adults will experience at least one diagnostic error in their lifetimes, sometimes with devastating consequences [2]. Although correct treatment presumes a correct diagnosis, the report noted “federal resources devoted to diagnostic research are vastly eclipsed by those devoted to treatment” [1]. Given the complexity of the diagnostic process as part of health care delivery, these errors may worsen without a coordinated, dedicated focus on improving diagnosis.

AHRQ’s role in diagnostic safety initiatives

The Agency for Healthcare Research and Quality (AHRQ) is the lead US Federal agency charged with improving the safety and quality of America’s health care system. AHRQ develops the knowledge, tools, and data needed to improve the health care system and help Americans, health care professionals, and policymakers make informed health decisions. The agency’s efforts have contributed to recent national trends of reduced patient harms, lives saved, and cost savings [3]. AHRQ seeks to leverage its funding by aligning and coordinating its efforts with other stakeholder organizations, inside and outside of government, and was pleased to be among the organizations supporting NASEM’s report on improving diagnosis in health care.

While AHRQ has been a funder of diagnostic safety initiatives over the past decade, these efforts have been balanced against a number of other patient safety areas. Given the report’s comprehensive recommendations for addressing multi-faceted diagnostic concerns, and the heightened level of interest generated among health care professionals, the Agency convened a research summit in September 2016 that brought together diagnostic safety experts, professional stakeholders, government representatives, and family members of patients harmed by diagnostic error. The purpose of the summit was threefold: 1) learn from the insights and experiences of the gathered participants, 2) examine issues associated with definitions, clinician and patient perspectives, the use of data and measurement, the role of health information technology (health IT), and the impact of organizational factors on the diagnostic process, and 3) identify potential future directions for research.


Planning and agenda

A planning committee was established 6 months prior to the 1-day summit to develop the goals, focus areas, and agenda. The agenda included plenary sessions, as well as three concurrent breakout sessions focused on the use of data and measurement, health IT, and organizational factors relevant to diagnostic safety. Each of the three concurrent sessions was held twice – once in the morning and once in the afternoon – enabling participants to attend two of the three focus areas of their choosing.


To encourage audience participation, breakout session speakers limited their slides to a small set of essential points. Moderators’ questions further served to stimulate questions coming from the audience. AHRQ scribes captured major points and questions raised. Four individuals from both government and non-government organizations reported back salient observations from the breakout sessions. The last session of the summit included a wrap-up, an overview of professional collaborative activities, and next steps on a path forward.


Speaker slides, notes from scribes, and summaries of the sessions were compiled for analysis in accordance with six major groupings: introduction and the role of AHRQ, setting the stage for improving diagnosis, data and measurement, the role of health IT, the impact of organizational factors, and potential future directions. Proceedings authors captured the most relevant content by employing three criteria as a guide: relevance to diagnostic safety, degree to which information adds to the knowledge base, and usefulness of information in shaping future research programs and activities.


Setting the stage for improving diagnosis

While much of the summit focused on gaps and challenges to improving diagnosis, the plenary presentations provided a common understanding and underscored the progress made to date in both patient safety and diagnostic safety. The impact on both fields of the National Academies’ seminal reports, To Err is Human and Crossing the Quality Chasm, their many successors such as Keeping Patients Safe and Health IT and Patient Safety, and most recently Improving Diagnosis in Health Care, has been immense. The launch of a new professional society, the Society to Improve Diagnosis in Medicine (SIDM), with its own journal and annual conferences, is increasing awareness of diagnostic safety issues and of the need for education and research.

The steady research efforts of AHRQ grantees were acknowledged. For example, the foundational work of Schiff and colleagues in analyzing physician-reported errors during the diagnostic process continues to influence research efforts [4]. Many of AHRQ’s products developed for patient safety – a toolkit for improving the testing process in medical offices, a guide for patient and family engagement, and TeamSTEPPS – have relevance for diagnostic improvement efforts as well.

The impacts of successful campaigns, spearheaded by patient safety advocates for kernicterus and sepsis, have been considerable. Also cited during the plenary session were innovative activities occurring in health care and patient safety organizations such as the Maine Medical Center, the Midwest Alliance for Patient Safety PSO, and Kaiser Permanente Southern California. The Office of the National Coordinator and the Veterans Health Administration also were recognized for supporting important research and practical work in this area – for example, the research contributions of Singh and colleagues on test result communication and the development of the SAFER Guides, which promote the use of health informatics tools to improve safety [5]. More organizations, however, need to be engaged in diagnostic improvement initiatives to realize greater societal impact [6].

Noteworthy are the data generated from national efforts to make health care safer by reducing hospital-acquired conditions (HACs) from 2010 to 2015. While many hospital harms persist, HACs (e.g. blood stream infections, adverse drug events, and pressure ulcers) declined by 21%, with an estimated 125,000 lives saved, 3 million harms avoided, and $28 billion in savings [3].

As patient safety researchers attend more to the diagnostic side of medicine, a question emerges: could there be a similar methodology developed to measure diagnostic error and to track improvements over time? Is it a reasonable expectation? Regardless of how one responds, such a notion underscores the importance of measurement, knowing where to look for diagnostic harms, recognizing the associated costs, and understanding the socio-technical factors that converge in unforeseen ways, allowing suboptimal diagnostic conditions to exist.

Use of data and measurement

The NASEM committee defined diagnostic error as the failure to (a) establish an accurate and timely explanation of the patient’s health problem(s) or (b) communicate that explanation to the patient [1]. The definition is appropriately patient-centered, but experts are still debating how to operationalize it as they derive useful measures of both diagnostic processes and outcomes. Efforts to achieve a more unifying terminology and greater clarity are ongoing [7], [8]. Nearly everyone recognizes the complexity involved. Some conditions appear deceptively simple to diagnose yet lead clinicians down the wrong path, whereas others evolve over an indefinite period of time or resolve without treatment. Clinical reasoning is integral to the diagnostic process, yet its largely unobservable cognitive nature makes it difficult to quantify and study. Defining key operational concepts, data sources, and measurement methods is a continuing effort.

Socio-technical system models combine human and technical aspects of clinical work, providing a useful framework when considering measurement. They focus on interactions among major system components – individuals and teams, tasks, technologies and tools, organizational factors, the physical environment, and the external environment – that can impact patient and system outcomes [5], [9]. The core diagnostic activities of history taking, physical exam, diagnostic testing, referral and consultation (involving iterations of information gathering, integration, interpretation, and generating a differential diagnosis) occur within the socio-technical system.

The systems perspective combined with the diagnostic process captures some of the complexity, offering clues to vulnerable areas where measurement efforts can be directed as illustrated in the process and outcome diagrams of NASEM’s report [1]. Where research interests focus on gaining insight into patients’ underlying illnesses, the incidence of associated harms, or ways of prioritizing research effort, socio-technical models are best supplemented by other approaches.

Measurement methods

There are three primary purposes for measurement in diagnostic safety: establishing the incidence of diagnostic error, determining the causes and risks of error, and assessing the effectiveness of strategies and interventions. A variety of methods and data sources have been used to establish incidence (e.g. autopsies, medical records, diagnostic testing audits, case reviews, malpractice claims, and voluntary reports from patients and clinicians) [1], [10]. Unique insights can be gleaned from each data source; however, the limitations of each preclude deriving broader population-based estimates or ranges of diagnostic error on a national scale. Our knowledge of the incidence of diagnostic error has been derived from research studies, whereas monitoring the incidence of diagnostic error in actual practice remains a goal yet to be realized. There are gaps in the availability and quality of data from the diverse range of health care settings where diagnostic error occurs, and obstacles associated with aggregating data across the various sites and data sources employed. These difficulties underlie the challenges of reliably estimating a denominator (the number of opportunities to make a diagnosis) and a numerator (instances where the diagnosis is inaccurate, untimely, or not communicated). Also deserving attention are the unique strengths and limitations of methods for determining the factors that contribute to diagnostic error (e.g. task and work analysis, observation techniques, process mapping) and for establishing the effectiveness of interventions (e.g. experimental designs and assessment tools for measuring baseline and post-intervention performance appropriate for quality improvement and research purposes).

While aspirations for greater transparency and measures for public reporting as ways to improve diagnostic performance are understandable, session presenters considered it premature to assume that measure development had reached a stage that would reliably enable public reporting and payment adjustments. Given the current early stage of measure development and the valuable information already garnered, there is much yet to be learned by more fully utilizing and leveraging the strengths of existing methods that are appropriately matched to quality improvement and research objectives.

Electronic monitoring

One area that holds promise is electronic monitoring. It can be a cost-efficient and reliable way of collecting data that are already available and of pointing to likely errors, especially when explicit, specified criteria (e.g. triggers) are used for abstracting data. Electronic health record (EHR)-based triggers have been used retrospectively in quality improvement efforts to identify patients with an unexpected hospitalization within 10 days of a primary care visit. The percentage of patients with errors identified in this manner was nearly 20% vs. 2% for patients receiving random chart reviews [11]. Similarly, symptom-disease pair analysis of large administrative data sets has been used to identify incorrect diagnoses after emergency room “treat-and-release” visits [12]. EHR-based triggers also can be used prospectively by enabling EHRs to raise “red flags” when there is a delay in the follow-up of abnormal test results [13]. Likewise, the Kaiser Permanente Southern California Outpatient Safety Net Program leverages electronic health information to identify and address a variety of potential care gaps for different clinical conditions, including potentially missed diagnoses (e.g. patients with positive fecal occult blood tests who have not received follow-up diagnostic studies) [6], [14]. The program is designed to catch errors before harm reaches the patient, and in a large integrated delivery system many patients with abnormal test results can be spared the harm that could result from not receiving appropriate subsequent care. Limitations of triggers include their dependence on good EHR data quality (i.e. variations exist among clinicians in content, completeness, and underlying clinical reasoning), the absence of data on contributing factors, and the resources needed to act on the information. Strengths include relative efficiency in capturing diagnostic delay and follow-up data, and drawing attention to the failures.
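The retrospective trigger described above reduces to a simple rule: flag any primary care visit followed by a hospitalization within 10 days. The sketch below is a minimal illustration with hypothetical field names (`patient_id`, `date`), not the logic used in the cited study; flagged records are candidates for chart review, not confirmed errors.

```python
from datetime import date, timedelta

def trigger_unexpected_admission(visits, admissions, window_days=10):
    """Flag visit/admission pairs where a hospitalization followed a
    primary care visit within window_days (candidates for review only)."""
    window = timedelta(days=window_days)
    return [(v, a)
            for v in visits
            for a in admissions
            if a["patient_id"] == v["patient_id"]
            and v["date"] < a["date"] <= v["date"] + window]

visits = [{"patient_id": 1, "date": date(2016, 9, 1)},
          {"patient_id": 2, "date": date(2016, 9, 1)}]
admissions = [{"patient_id": 1, "date": date(2016, 9, 8)},    # 7 days later
              {"patient_id": 2, "date": date(2016, 10, 15)}]  # outside window

flagged = trigger_unexpected_admission(visits, admissions)
# Only patient 1's visit is flagged for manual review.
```

In practice such queries run against EHR or claims extracts, and the numerator/denominator caveats discussed under measurement apply to whatever the trigger surfaces.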

Prioritizing targets

While there has been no scarcity of troublesome diagnostic issues deserving measurement attention, researchers have started to address the challenges of prioritizing areas of focus, identifying fertile settings (e.g. outpatient services, including the emergency room), and avoiding inefficient deployment of resources. In addition to general approaches that could be applicable across a wide range of clinical problems, Newman-Toker has noted that many diagnostic problems are domain- or disease-specific. A focus on high-yield targets could be of benefit – those that carry a high burden of harm to patients (e.g. heart attack, stroke, sepsis, cancer), that are often associated with common symptoms (e.g. chest pain, dizziness, fever, cough), and where appropriate interventions can be undertaken [15]. Moreover, the many problems and inefficiencies associated with “underdiagnosis” and “overdiagnosis” need to be considered so that efforts to correct the former do not unwittingly exacerbate the latter (incidental, non-significant findings leading to overtreatment or harm from test overuse). Economic analyses may help prioritize targets and solutions with the greatest likelihood of public health impact, where the focus could be on high-value targets associated with a high burden of harm from misdiagnosis and a low cost of reducing misdiagnosis, rather than the converse (i.e. low burden, high cost) [15].
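The economic framing above – high burden of harm, low cost of reducing misdiagnosis – can be expressed as a simple ranking. The scores and the burden-to-cost ratio in this sketch are purely illustrative assumptions, not figures from the cited analyses.

```python
def rank_targets(targets):
    """Rank candidate diagnostic targets by harm burden relative to the
    estimated cost of reducing misdiagnosis (higher ratio = higher priority)."""
    return sorted(targets,
                  key=lambda t: t["harm_burden"] / t["cost_to_fix"],
                  reverse=True)

# Illustrative, made-up scores on arbitrary scales.
targets = [
    {"name": "stroke", "harm_burden": 9.0, "cost_to_fix": 3.0},
    {"name": "sepsis", "harm_burden": 8.0, "cost_to_fix": 2.0},
    {"name": "rare condition X", "harm_burden": 1.0, "cost_to_fix": 5.0},
]
priority = [t["name"] for t in rank_targets(targets)]
# sepsis (ratio 4.0) ranks ahead of stroke (3.0); rare condition X (0.2) ranks last.
```

A real analysis would estimate both quantities from epidemiologic and cost data, with uncertainty attached, but the ordering logic is the same.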

Demographic disparities

Areas of health care delivery characterized by substantial variation often signal that something is not as it should be. Disparities in diagnosis and access to health services continue to exist for particular groups, despite the dramatic improvements in health for most Americans over the past several decades [16]. Analysis of large administrative data sets has shown that missed strokes are 20%–30% more common among women and ethnic/racial minorities [12]. Controlled experiments using case vignettes or standardized patients can be conducted to demonstrate the occurrence of implicit bias in diagnostic decision-making. A vignette-based UK study involving general practitioners showed disparities in diagnostic evaluation for lung cancer [17]. The same vignette-based approach may hold promise for testing strategies that mitigate the occurrence of bias and subsequent disparities.

Other challenges that require improved measurement methods include the assessment of diagnostic uncertainty, overconfidence, and calibration (i.e. the extent to which physicians’ confidence in the accuracy of their decisions is aligned with their actual accuracy). Gaining a better understanding of clinical reasoning and the relative contributions of knowledge gaps and cognitive biases to diagnostic failure and inequity will likely depend upon continued improvements in measurement. While there is no scarcity of commentary on these topics, empirical studies are less plentiful.
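Calibration as defined above – the alignment of physicians’ confidence with their actual accuracy – lends itself to a simple summary statistic: mean stated confidence minus observed accuracy over a set of decisions. The sketch below uses invented case data for illustration; real calibration studies use richer designs (e.g. calibration curves across confidence bins).

```python
def calibration_gap(cases):
    """Mean stated confidence minus observed accuracy; a positive gap
    suggests overconfidence, a negative gap underconfidence."""
    mean_conf = sum(c["confidence"] for c in cases) / len(cases)
    accuracy = sum(c["correct"] for c in cases) / len(cases)
    return mean_conf - accuracy

# Hypothetical diagnostic decisions: stated confidence and eventual correctness.
cases = [{"confidence": 0.9, "correct": 1},
         {"confidence": 0.8, "correct": 0},
         {"confidence": 0.7, "correct": 1},
         {"confidence": 0.6, "correct": 0}]
gap = calibration_gap(cases)  # 0.75 mean confidence vs. 0.50 accuracy -> 0.25
```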

Role of health IT

Health information technology (IT) can be a powerful tool for improving diagnosis. Since diagnosis is a complex socio-technical process, studying the efficacy of health IT requires exploring how technology supports not only the process, but the people working within the process, including patients and their families. The diagnostic process typically begins when a patient experiences a health problem and engages with the health system. Patient-facing technologies such as patient portals and mobile health applications that collect patient-generated health data can assist with health system engagement and the requisite information gathering phase of the diagnostic process. Early evidence indicates health IT applications may improve process measures such as patient-provider communication, but evidence that these technologies improve actual outcomes is limited [18], [19].

Information-gathering tools

In addition to new information gathered directly from patients, clinicians working on a diagnosis rely on information from other sources including existing patient records, consulting specialists, and various test results ordered by the clinician. Some of that information may be relayed via a health information exchange (HIE). The accuracy and timely receipt of such information affects the accuracy and timeliness of a diagnosis. Hence, understanding how health IT can work to improve the receipt, speed, and precision of needed information will benefit the diagnostic process. Currently, work is underway to understand how natural language processing can be employed to glean relevant information from the unstructured data within a patient’s electronic health record and present the data in a usable way for the care team [20]. The evidence that electronic triggers can facilitate the receipt, acknowledgement, and follow-up of abnormal test results also is encouraging [21]. Understanding what works best to facilitate the timely collection and subsequent patient matching of HIE data is an ongoing endeavor at AHRQ. In particular, effective evidence-based patient matching methods are needed to maximize the accuracy and completeness of health care data received via HIEs.

Information synthesis

Integrating and interpreting the information that is gathered in order to form a working diagnosis are process phases that can also benefit from appropriately designed health IT. In particular, these iterative phases of the diagnostic process require health IT that is designed to reduce the cognitive burden for the care team. Understanding the cognitive requirements for effective information synthesis by primary care clinicians and care teams is underway, but is still in the early stages [22]. Automated methods that identify redundant vs. relevant new information in patient notes may aid clinicians in navigating to clinically important details, and improve patient information synthesis [23].
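One crude way to approximate the redundant-versus-new distinction in patient notes is word overlap between sentences of a new note and sentences of prior notes. The published methods are considerably more sophisticated, so treat this Jaccard-similarity sketch as an illustration only.

```python
def redundant_sentences(new_note, prior_notes, threshold=0.8):
    """Flag sentences in new_note whose word overlap (Jaccard similarity)
    with any prior-note sentence meets the threshold."""
    def words(s):
        return set(s.lower().split())
    prior = [words(s) for note in prior_notes
             for s in note.split(".") if s.strip()]
    flagged = []
    for s in new_note.split("."):
        w = words(s)
        if w and any(len(w & p) / len(w | p) >= threshold for p in prior):
            flagged.append(s.strip())
    return flagged

prior = ["Patient reports chest pain. No fever"]
new = "Patient reports chest pain. New onset dizziness today"
dup = redundant_sentences(new, prior)
# The copied sentence is flagged; the new clinical finding is not.
```

An interface built on such a method could de-emphasize the flagged sentences so clinicians navigate directly to clinically important new details.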

In an age when medical knowledge is estimated to double every 73 days by 2020 [24], tools are needed to support and enhance cognition. Clinical decision support and big data analytics combined with artificial intelligence may indeed hold promise, but building the evidence base for how and when to best integrate these technologies into diagnostic workflows is still needed. Session participants noted that the value of cognitive support technologies – differential diagnosis generators, diagnostic crowdsourcing mechanisms, and sensible user-health IT interfaces that support clinical workflows – deserves further exploration.

Detecting safety risk

Assessing the outcomes of diagnosis and treatment also is a key process point for health IT support. Ensuring patient safety and preventing diagnostic error are necessary and desired outcomes, but voluntary reporting of errors and time-consuming manual chart reviews result in underreporting. Hence, more robust methods that use health IT to detect potential safety risks remain a key research and development challenge. Wrong-patient errors, which have the potential to lead to diagnostic errors, have received attention. In one study, a health IT safety measure known as the Retract-and-Reorder measure was introduced as a tool hospitals can use to examine “near misses” of wrong-patient error so that preventive action could be taken [25].
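The intuition behind the Retract-and-Reorder measure is that an order placed, quickly retracted, and then re-placed by the same clinician for a different patient likely signals a wrong-patient near miss. The sketch below implements that intuition in simplified form with assumed field names; the published measure specification differs in detail.

```python
from datetime import datetime, timedelta

def retract_and_reorder_events(orders, window_minutes=10):
    """Find near-miss wrong-patient events: an order retracted soon after
    placement, then re-placed by the same clinician for another patient."""
    window = timedelta(minutes=window_minutes)
    events = []
    for o in orders:
        if o["retracted_at"] is None:
            continue  # never retracted
        if o["retracted_at"] - o["placed_at"] > window:
            continue  # retraction too late to count
        for r in orders:
            if (r["clinician"] == o["clinician"]
                    and r["item"] == o["item"]
                    and r["patient_id"] != o["patient_id"]
                    and timedelta(0) <= r["placed_at"] - o["retracted_at"] <= window):
                events.append((o, r))
    return events

t0 = datetime(2016, 9, 1, 9, 0)
orders = [
    {"clinician": "dr_a", "patient_id": 1, "item": "CT head",
     "placed_at": t0, "retracted_at": t0 + timedelta(minutes=2)},
    {"clinician": "dr_a", "patient_id": 2, "item": "CT head",
     "placed_at": t0 + timedelta(minutes=5), "retracted_at": None},
]
events = retract_and_reorder_events(orders)
# One near-miss detected: the CT order moved from patient 1 to patient 2.
```

Because the measure counts near misses rather than harms, a hospital can track the event rate over time and after interventions such as patient-verification prompts.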

Other research is underway to assess the risk of error when clinicians have multiple patients’ electronic health records open simultaneously [26]. There was general consensus that health IT intended to support the diagnostic process must be designed with safety in mind – safety should not be an afterthought. In addition to retrospective schemes that identify missed opportunities to make an appropriate diagnosis, proactive designs that anticipate common failure modes and areas of confusion also are needed to mitigate diagnostic risk.

The importance of health IT designs that support patients and families, expanding their active engagement in the diagnostic process, was underscored by many of the participants. At the same time, AHRQ is supporting research on how best to collect and utilize patient-reported outcome data and through human-centered design principles, facilitate its integration into clinical practice [27], [28].

Impact of organizational factors

A basic premise of improving diagnosis is that attention must be paid to the organizational context in which diagnosis occurs. Context shapes diagnostic safety and can lead to error, but it also affects other aspects of diagnostic performance, such as timeliness, efficiency, patient-centeredness, and equity. Many of the organizational factors that influence patient safety also are likely to impact diagnostic safety and performance, but these same factors may exert different or unique effects in the diagnostic realm. Conversely, organizational factors that have only marginal significance to patient safety may be of greater importance in diagnostic efforts. As lessons learned and useful modes of inquiry are gleaned from two decades of patient safety research, investigators likewise need to be mindful of new ways of thinking and innovative strategies to advance diagnostic safety.

An organization’s culture, its shared views on the value of teamwork, and how well its leadership is attuned to the nature of the work performed, contribute in fair measure to how well its departments function as a working system both internally and in relation to other entities (e.g. imaging services, labs, and specialist consultations). The importance of organizational culture and its assessment in patient safety is well understood, especially in regard to reporting and having an open, blame-free discussion about errors [29], [30]. Both teamwork and leadership have received attention in the patient safety literature and provide useful points of departure for addressing some of the unique challenges of diagnostic work.


Given the complexity of health care – its accelerating pace of biomedical advances that inform practice, its expanding number of diagnostic and treatment options, and the growing numbers of patients with multiple co-morbid conditions – the traditional image of an insightful clinician solitarily rendering a single, elusive diagnosis is becoming less easy to entertain [1]. Because of the dispersed nature of diagnostic work and the variable time spans in which information gets gathered, interpreted, and acted upon, it is an open question whether the many agents who play a role in the overall process really perceive themselves as members of a team. However, the NASEM report and recent diagnostic safety literature view the diagnostic process as a team-based endeavor [1], [31], [32], [33], [34]. Together, they support the need for more effective forms of collaboration, sharing of expertise and information among clinicians, patients, and caregivers, and new ways of thinking. While it may take time for new frameworks to gain momentum and acceptance, reframing diagnosis as a process that requires coordinated teamwork among all the agents involved, and then developing metrics for what successful teamwork looks like across the complex system-wide landscape, are essential first steps.


Leaders at the unit and executive levels need to be actively involved in diagnostic safety. Because of their positions, they have the opportunity to work across organizational units and address the vulnerable interdependencies of clinical work [35]. With the possible exception of leadership Walk Rounds initiatives where there is full commitment and a support structure for delineating systems-based actions [36], [37], there is little evidence that leaders spend much time attending to these complex interdependencies. Greater understanding of, and willingness to address, the system-entrenched issues that diagnosticians and other providers encounter in their interdependent environments are needed.

At the unit levels, further guidance and education may be of benefit to leaders in understanding system complexity [38]; the importance of shared goals, knowledge, and accountability; the value of change champions; and the creative use of incentives and resources at their own disposal to improve diagnostic safety. At the executive level, catalysts for change may entail marshalling concrete evidence of the financial and reputational costs of diagnostic error and a heightened awareness of how existing initiatives and priorities can be aligned with the better health, better care, and lower cost goals that all patients deserve [39]. Organizational change is more likely to occur when perceived from the vantage point of compatible efforts already underway rather than an external imposition adding only to quality improvement fatigue.

Strategies and interventions

While diagnostic safety is a new frontier with its own unique challenges, it is not surprising that many suggested strategies and interventions to reduce diagnostic error stem from patient safety improvement efforts. Narrative and systematic reviews have focused on cognitive [40], system [41], and organizational [42] approaches. One review identified 109 strategies that targeted diagnostic error, sorting them into six organizational categories: personnel changes, educational interventions, technique (e.g. changes in equipment and clinical procedures), structured process changes, technology-based system changes, and additional review methods [42]. While the review found limited evidence associated with most of the strategies, it suggested that technology-based system strategies may hold promise. Only a few studies addressed clinical outcomes.

Several strategies in need of further operationalization and testing are noteworthy. First, greater understanding of the factors that facilitate and inhibit patient involvement in the diagnostic process deserves attention [43]. For example, would keeping a diagnostic diary that tracks delays, communication lapses, and process missteps facilitate proactive involvement? Does being told “let us worry about the diagnosis” by clinicians inhibit patient involvement? Second, diagnosis, to a large extent, has been described as an open-loop system lacking the self-correcting feedback mechanism found in more reliable systems [44], [45]. In the absence of information about patient outcomes and a measurement scheme for making sense of them, clinicians are denied the opportunity to learn about their diagnostic performance. What closed-loop feedback techniques can be tested? Third, there is not much in the emerging diagnostic safety literature that builds upon the principles that underlie high reliability organizations (HROs), yet sensitivity to operations, reluctance to accept simple explanations for problems, preoccupation with failure, deference to expertise, and resiliency in managing the unexpected – the hallmarks of HROs – seem very relevant to many diagnostic encounters [46], [47], [48], [49]. How can these principles be operationalized for diagnostic settings? And fourth, has enough consideration been given to the tenets of learning organizations? Those diagnostic specialties and organizations that recognize they are involved in a generative process – trying to learn how to implement the evidence-based information that research produces, and formulating new ways of perceiving and thinking about diagnostic challenges so as to shape them more proactively – deserve to be known as learning organizations [45], [50].
Learning organizations are those that take learning-to-learn seriously, empower individuals to transcend self-interests for higher collective goals, and continually expand their capacity to generate safe and satisfying outcomes for their patients and themselves. More diagnostic learning organizations are needed in health care.

Key challenges

Diagnostic safety is in a nascent stage. The summit identified a number of challenges and potential areas for future research to improve diagnosis. The challenges summarized here by no means exhaust the set of possibilities that could or should be addressed, but they reflect a widespread concern that diagnosis will need to receive considerably more attention on several fronts than it has in the past if diagnostic safety is to be substantially improved.

Though researchers may employ different terminology and taxonomies, these differences reflect the intellectual vitality of a growing discipline as it sharpens its focus on the incidence of diagnostic failure, contributing factors, existing barriers, and high-yield solutions for making progress. Measures of diagnostic error will need further development if there is interest in public reporting, while measures focused on diagnostic safety practices in clinical settings (e.g. evidence of a differential diagnosis in place, procedures to detect delays in diagnostic test results) that serve to anticipate and mitigate harm are both possible and desirable. More attention is needed on data retrieval techniques that leverage patient and family experiences to better understand missed and untimely diagnoses, and that capture disparities in diagnostic work-ups for diverse populations at greater risk.

There is evidence that health IT applications such as patient portals can improve certain process measures (e.g. patient-provider communication), yet demonstrating the impact of patient-facing technologies on clinical outcomes will require further research. At the same time, understanding the cognitive requirements for effective information synthesis is essential to diagnostic improvement. Early stage work on understanding the information integration and interpretation needs of clinicians deserves further support.

In light of the diversity of professionals playing a role in diagnosis, poor care coordination processes are likely to increase the risk of diagnostic failure. Greater clarity, responsibility, and accountability are needed to ensure the accuracy, timeliness, and sufficiency of communication to clinicians, patients, and their families. Leaders shape organizational culture, and greater leadership involvement at the unit and executive levels is needed for a substantive reduction in diagnostic failures and misuse of diagnostic resources. Also needed are strategies that assist health care organizations in understanding system complexity and in prioritizing high impact targets while optimizing deployment of existing resources and safety improvement principles.

AHRQ’s strategies

Going forward, AHRQ will build on its efforts to support public-private collaborations, to share evidence-based information and tools with patient and provider communities, and to optimize existing and prospective funding resources. In addition to these ongoing activities, summit participants helped identify a number of areas on which AHRQ can focus:

  • AHRQ’s patient safety tools – culture surveys, guides to patient and family engagement, and Questions Are the Answer – can be adapted to diagnostic safety. The Improving Your Office Testing Process toolkit is currently available for larger scale implementation. AHRQ is exploring a more streamlined approach for assessing and adapting the tools of greatest relevance to diagnostic safety.

  • Data and measurement provide the foundation for many of AHRQ’s outreach initiatives as the agency informs the public and stakeholder organizations about the evidence base underpinning health care improvements. To gain a better understanding of the incidence of diagnostic error, coordinated efforts will be pursued to improve data utilization and diagnostic measure development, drawing on measurement and data analytic expertise both inside and outside the agency.

  • The Agency’s health IT program is playing a leading role in conducting research on clinical decision support, helping to bring evidence to practice in a way that supports clinicians’ cognitive requirements. AHRQ’s knowledge in this area can be leveraged as part of further efforts to understand the efficacy of various cognitive support technologies for improving diagnosis.

  • Continued efforts to engage patients and family members in the diagnostic process will likely require a multi-pronged approach. Three potentially promising areas worthy of in-depth exploration are the roles patients can play in their own diagnostic encounters; opportunities for providing feedback and advice to delivery system decision makers; and innovative approaches to improving the diagnostic encounter in ways that promote mutual trust and respect, high-quality communication, and better patient-clinician interaction.

  • To help support expanded research capability, other funding alternatives may be necessary. AHRQ is considering the establishment of Centers of Excellence (COEs) and Centers of Diagnostic Development (CDDs). COEs would draw from larger medical centers with strong research track records, capable of bringing together key stakeholders and addressing multiple projects of importance to diagnostic safety, while CDDs would aim to develop early-career researchers from a wider pool of institutions, helping to attract and train the next generation of diagnostic researchers.

Diagnostic safety researchers and aligned stakeholder groups have an opportunity to join with patients and practicing clinicians to serve in leadership roles and continually expand health care’s capacity to provide safe and high-quality diagnostic care. Based on AHRQ’s nearly two decades of implementing patient safety and diagnostic safety programs, and the impressive progress of the improving-diagnosis discipline to date, the agency is optimistic that the diagnostic care that patients and clinicians care about and deserve can be more fully realized.


The authors and summit planning staff (Ms. Jaime Zimmerman, Ms. Brigid Russell) gratefully acknowledge the valuable contributions and roles played by the many individuals listed here. Plenary speakers: Andy Bindman, Agency for Healthcare Research and Quality; Victor Dzau, National Academy of Medicine; Jeffrey Brady, Agency for Healthcare Research and Quality; Mark Graber, Society to Improve Diagnosis in Medicine; and Gordon Schiff, Brigham and Women’s Hospital. Breakout sessions: David Newman-Toker, The Johns Hopkins University School of Medicine; Hardeep Singh, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston; Jason Adelman, Columbia University Medical Center; Pascal Carayon, University of Wisconsin-Madison; Christine Goeschel, Medstar Health; and Kathryn McDonald, Stanford University. Rapporteurs: David Hunt, Office of the National Coordinator for Health Information Technology; Shari Ling, Centers for Medicare & Medicaid Services; Lucy Savitz, Intermountain Health; Helen Haskell, Mothers Against Medical Error and Consumers Advancing Patient Safety; and Paul Epner, Coalition to Improve Diagnosis – Society to Improve Diagnosis in Medicine.

  1. Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: None declared.

  3. Employment or leadership: None declared.

  4. Honorarium: None declared.

  5. Competing interests: The funding organization(s) played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.


1. Balogh EP, Miller BT, Ball J, editors. Improving diagnosis in health care. National Academies of Sciences, Engineering, and Medicine. Washington, DC: The National Academies Press, 2015. doi: 10.17226/21794.

2. Singh H, Meyer AN, Thomas EJ. The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations. BMJ Qual Saf 2014;23:727–31. doi: 10.1136/bmjqs-2013-002627.

3. Agency for Healthcare Research and Quality. AHRQ national scorecard on rates of hospital-acquired conditions. Accessed: 23 Mar 2017.

4. Schiff GD, Hasan O, Kim S, Abrams R, Cosby K, Lambert BL, et al. Diagnostic error in medicine – analysis of 583 physician-reported errors. Arch Intern Med 2009;169:1881–87. doi: 10.1001/archinternmed.2009.333.

5. Singh H, Sittig DF. Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework. BMJ Qual Saf 2015;24:103–10. doi: 10.1136/bmjqs-2014-003675.

6. Graber ML, Trowbridge R, Myers JS, Umscheid CA, Strull W, Kanter MH. The next organizational challenge: finding and addressing diagnostic error. Jt Comm J Qual Patient Saf 2014;40:102–10. doi: 10.1016/S1553-7250(14)40013-8.

7. Newman-Toker DE. A unified conceptual model for diagnostic errors: underdiagnosis, overdiagnosis, and misdiagnosis. Diagnosis 2014;1:43–8. doi: 10.1515/dx-2013-0027.

8. Zwaan L, Singh H. The challenges in defining and measuring diagnostic error. Diagnosis 2015;2:97–103. doi: 10.1515/dx-2014-0069.

9. Carayon P, Alvarado C, Hundt AS. Work system design in health care. In: Carayon P, editor. Handbook of human factors and ergonomics in health care and patient safety. Boca Raton, FL: CRC Press-Taylor & Francis Group, 2007:65–79.

10. Graber ML. The incidence of diagnostic error in medicine. BMJ Qual Saf 2013;22(Suppl 2):ii21–7. doi: 10.1136/bmjqs-2012-001615.

11. Singh H, Giardina TD, Meyer AN, Forjuoh SN, Reis BA, Thomas EJ. Types and origins of diagnostic errors in primary care settings. JAMA Intern Med 2013;173:418–25. doi: 10.1001/jamainternmed.2013.2777.

12. Newman-Toker DE, Moy E, Valente E, Coffey R, Hines AL. Missed diagnosis of stroke in the emergency department: a cross-sectional analysis of a large population-based sample. Diagnosis 2014;1:155–66. doi: 10.1515/dx-2013-0038.

13. Murphy DR, Laxmisan A, Reis BA, Thomas EJ, Esquivel A, Forjuoh SN, et al. Electronic health record-based triggers to detect potential delays in cancer diagnosis. BMJ Qual Saf 2014;23:8–16. doi: 10.1136/bmjqs-2013-001874.

14. Danforth KN, Smith AE, Loo RK, Jacobsen SJ, Mittman BS, Kanter MH, et al. Electronic clinical surveillance to improve outpatient care: diverse applications within an integrated delivery system. eGEMs (Generating Evidence & Methods to improve patient outcomes) 2014;2(Iss 1, Art 9). doi: 10.13063/2327-9214.1056. Accessed: 21 Apr 2017.

15. Newman-Toker DE, McDonald KM, Meltzer DO. How much diagnostic safety can we afford, and how should we decide? A health economics perspective. BMJ Qual Saf 2013;22(Suppl 2):ii11–20. doi: 10.1136/bmjqs-2012-001616.

16. Smedley BD, Stith AY, Nelson AR, editors. Unequal treatment: confronting racial and ethnic disparities in health care. National Academy of Sciences. Washington, DC: The National Academies Press, 2003.

17. Sheringham J, Sequeira R, Myles J, Hamilton W, McDonnell J, Offman J, et al. Variations in GPs’ decisions to investigate suspected lung cancer: a factorial experiment using multimedia vignettes. BMJ Qual Saf 2016. doi: 10.1136/bmjqs-2016-005679. [Epub ahead of print].

18. Baldwin JL, Singh H, Sittig DF, Giardina TD. Patient portals and health apps: pitfalls, promises, and what one might learn from the other. Healthcare 2016.

19. Carayon P, Hoonakker P, Cartmill R, Hassol A. Using health information technology (IT) in practice redesign: impact of health IT on workflow – patient-reported health information technology and workflow – final report. Rockville, MD: Agency for Healthcare Research and Quality (US). AHRQ Publication No. 15-0043-EF, 2015:110.

20. Agency for Healthcare Research and Quality (AHRQ). Discovery and visualization of new information from clinical reports in the electronic health record (Minnesota). 2013. Accessed: 24 Feb 2017.

21. Murphy DR, Thomas EJ, Meyer AN, Singh H. Development and validation of electronic health record-based triggers to detect delays in follow-up of abnormal lung imaging findings. Radiology 2015;277:81–7. doi: 10.1148/radiol.2015142530.

22. Agency for Healthcare Research and Quality (AHRQ). Health information technology-supported process for preventing and managing venous thromboembolism (Wisconsin). 2013. Accessed: 24 Feb 2017.

23. Zhang R, Pakhomov SV, Lee JT, Melton GB. Using language models to identify relevant new information in inpatient clinical notes. AMIA Annu Symp Proc 2014;2014:1268–76.

24. IBM. Datagram: Medical data. 2015. Accessed: 23 Feb 2017.

25. Adelman JS, Kalkut GE, Schechter CB, Weiss JM, Berger MA, Reissman SH, et al. Understanding and preventing wrong-patient electronic orders: a randomized controlled trial. J Am Med Inform Assoc 2013;20:305–10. doi: 10.1136/amiajnl-2012-001055.

26. Agency for Healthcare Research and Quality (AHRQ). Assess risk of wrong patient errors in an EMR that allows multiple records open (New York). 2014. Accessed: 24 Feb 2017.

27. Agency for Healthcare Research and Quality (AHRQ). Utilizing health information technology to scale and spread successful practice models using patient-reported outcomes (R18). 2016. Accessed: 24 Feb 2017.

28. Hartzler AL, Chaudhuri S, Fey BC, Flum DR, Lavallee D. Integrating patient-reported outcomes into spine surgical care through visual dashboards: lessons learned from human-centered design. EGEMS (Wash DC) 2015;3:1133. doi: 10.13063/2327-9214.1133.

29. Frankel AS, Leonard MW, Denham CR. Fair and just culture, team behavior, and leadership engagement: the tools to achieve high reliability. Health Serv Res 2006;41:1690–709. doi: 10.1111/j.1475-6773.2006.00572.x.

30. Nieva VF, Sorra J. Safety culture assessment: a tool for improving patient safety in healthcare organizations. Qual Saf Health Care 2003;12(Suppl 2):ii17–23. doi: 10.1136/qhc.12.suppl_2.ii17.

31. Haskell HW. What’s in a story? Lessons from patients who have suffered diagnostic failure. Diagnosis 2014;1:53–4. doi: 10.1515/dx-2013-0024.

32. Graedon T, Graedon J. Let patients help with diagnosis. Diagnosis 2014;1:49–51. doi: 10.1515/dx-2013-0006.

33. McDonald KM. The diagnostic field’s players and interactions: from the inside out. Diagnosis 2014;1:55–8. doi: 10.1515/dx-2013-0023.

34. Schiff GD. Diagnosis and diagnostic errors: time for a new paradigm. BMJ Qual Saf 2014;23:1–3. doi: 10.1136/bmjqs-2013-002426.

35. Henriksen K, Dayton E. Organizational silence and hidden threats to patient safety. Health Serv Res 2006;41:1539–54. doi: 10.1111/j.1475-6773.2006.00564.x.

36. Frankel A, Graydon-Baker E, Neppl C, Simmonds T, Gustafson M, Gandhi TK. Patient safety leadership walkrounds. Jt Comm J Qual Saf 2003;29:16–26. doi: 10.1016/S1549-3741(03)29003-1.

37. Singer SJ, Tucker AL. The evolving literature on safety walkrounds: emerging themes and practical messages. BMJ Qual Saf 2014;23:789–800. doi: 10.1136/bmjqs-2014-003416.

38. Trowbridge RL, Dhaliwal G, Cosby KS. Educational agenda for diagnostic error reduction. BMJ Qual Saf 2013;22(Suppl 2):ii28–32. doi: 10.1136/bmjqs-2012-001622.

39. Agency for Healthcare Research and Quality (AHRQ). The National Quality Strategy (NQS). Accessed: 27 Feb 2017.

40. Graber ML, Kissam S, Payne VL, Meyer AN, Sorensen A, Lenfestey N, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf 2012;21:535–57. doi: 10.1136/bmjqs-2011-000149.

41. Singh H, Graber ML, Kissam SM, Sorensen AV, Lenfestey NF, Tant EM, et al. System-related interventions to reduce diagnostic errors: a narrative review. BMJ Qual Saf 2012;21:160–70. doi: 10.1136/bmjqs-2011-000150.

42. McDonald KM, Matesic B, Contopoulos-Ioannidis DG, Lonhart J, Schmidt E, Pineda N, et al. Patient safety strategies targeted at diagnostic errors: a systematic review. Ann Intern Med 2013;158:381–9. doi: 10.7326/0003-4819-158-5-201303051-00004.

43. McDonald KM, Bryce CL, Graber ML. The patient is in: patient involvement strategies for diagnostic error mitigation. BMJ Qual Saf 2013;22(Suppl 2):ii33–9. doi: 10.1136/bmjqs-2012-001623.

44. Schiff GD. Minimizing diagnostic error: the importance of follow-up and feedback. Am J Med 2008;121(Suppl 5):S38–42. doi: 10.1016/j.amjmed.2008.02.004.

45. Henriksen K. Improving diagnostic performance: some unrecognized obstacles. Diagnosis 2014;1:35–8. doi: 10.1515/dx-2013-0015.

46. Chassin MR, Loeb JM. High reliability health care: getting there from here. Milbank Q 2013;91:459–90. doi: 10.1111/1468-0009.12023.

47. Weick KE, Sutcliffe KM. Managing the unexpected: assuring high performance in an age of complexity. San Francisco: Jossey-Bass, 2001.

48. Tamuz M, Harrison MI. Improving patient safety in hospitals: contributions of high-reliability theory and normal accident theory. Health Serv Res 2006;41:1654–76. doi: 10.1111/j.1475-6773.2006.00570.x.

49. Hines S, Luna K, Lofthus J, Marquart M, Stelmokas D. Becoming a high reliability organization: operational advice for hospital leaders. Prepared under Contract No. 290-04-0011; AHRQ Publication No. 08-0022. Rockville, MD: Agency for Healthcare Research and Quality, 2008.

50. Senge P. The fifth discipline: the art & practice of the learning organization. New York: Doubleday/Currency, 1990.

Received: 2017-4-6
Accepted: 2017-4-26
Published Online: 2017-5-23
Published in Print: 2017-6-27

©2017 Walter de Gruyter GmbH, Berlin/Boston