Diagnosis

Official Journal of the Society to Improve Diagnosis in Medicine (SIDM)

Improving diagnostic performance: some unrecognized obstacles

Kerm Henriksen
Published Online: 2014-01-08 | DOI: https://doi.org/10.1515/dx-2013-0015

Abstract

The growing interest and activity focused on improving diagnostic performance are a needed and welcome change to which the new journal, Diagnosis, nicely attests. While the importance of raising awareness and building the evidence base needs to be underscored, these efforts by themselves do not translate directly into improvements for patients. The complex and multifaceted nature of diagnostic work is becoming better understood, yet many of the obstacles seem to operate beneath the surface, where the press of daily practice allows many untoward issues to be tacitly ignored. The purpose of this paper is to shine a light on some of these less recognized issues and obstacles. Among the issues addressed are whether use of the error term, as in diagnostic error or medical error, is serving us well in gaining a better understanding of diagnostic work; the lack of sensible feedback mechanisms, which denies diagnosticians and their organizations the opportunity to learn about their diagnostic performance; the double-edged nature of feedback; the obstacle of frame blindness; and the need for innovative empirical testing of promising ideas for improving diagnostic performance, such as taking advantage of diagnostic simulation techniques that are inexpensive to generate and do not put patients at risk.

Keywords: diagnostic error; diagnostic performance; feedback mechanisms; frame blindness; simulation

Introduction

The steadily increasing interest in diagnostic work and its improvement over the past decade is indeed noteworthy. Thoughtful articles, regularly scheduled conferences, solicitations for research, a new professional society, and now the debut of a journal are all dedicated to gaining a better understanding of an important area of patient safety and quality that initially did not receive much attention. While the rise of interest and awareness is a good thing, raising awareness alone does not bring about change. While there is a need for a credible evidence base, an evidence base does not guarantee that fruitful interventions and strategies for improving diagnostic performance will be formulated, implemented, and sustained. Those involved in diagnostic pursuits operate in a complex, multifaceted, and extended socio-technical space where many of the obstacles become benignly accepted, consensually neglected, and less recognizable. As discussed next, they need to be highlighted and made more visible.

Is the focus on error serving us well?

A frequently cited definition of diagnostic error is a diagnosis that is missed, wrong, or delayed as determined subsequently by more definitive information [1]. Use of the “error” term, as in human error, user error, medical error, and now diagnostic error, certainly helps to grab our attention. Immediately we sense that something happened that should not have happened. But that is all the term does. The term lacks explanatory power, leading us to an end point rather than a starting point, pointing out a problem rather than the problems-behind-the-problem, and viewing error as a cause rather than a consequence – all of which can retard further understanding. It serves as a cloak for our ignorance and, when used uncritically, can connote blame, carry judgmental baggage, and appeal to human bias. As the definition informs us, more definitive information or outcome knowledge is needed before the error label can be applied. But there are pitfalls in focusing so heavily on the end-point error label. Given the advantage of a known outcome, it is easy for investigators, after the fact, to connect the dots, wonder “why they couldn’t see it,” and issue proclamations about diagnostic error [2]. Diagnostic work-ups occur from the vantage point of the unknown and foresight, not from the known and hindsight. If the goal is to acquire a better understanding of diagnostic work, a preoccupation with end-point error labels will not take us very far. Instead, we need to learn more about how the decisions and actions taken made sense at the time, given the prevailing conditions and circumstances. A greater focus is needed on creating diagnostic processes with the capacity to anticipate and absorb failures along the diagnostic journey that could otherwise cascade through the system and harm patients.

Denied the opportunity for learning

Diagnostic work, like other clinical work, occurs within a greater socio-technical system. The tyranny of daily practice can blind practitioners to the realization that diagnostic performance is influenced by more than what takes place during a physician-patient encounter. Diagnostic work is continuously subject to the direct and indirect effects of multiple interactions among providers, specialists, technicians, patients, test results, tools and technology, artifacts, local contextual environments, and organizational structures and cultures, as well as shifting health policy and sentiment [3, 4]. Distributed across time and place with no clear end-point in many instances, and given the fragmented and uncoordinated nature of our health system [5], the diagnostic journey can be disjointed and frustrating for patients and providers alike. Clinical diagnosis, to a large extent, has been described as an open-loop system with no self-correcting feedback mechanism of the kind found in more reliable systems [6]. There is no way for physicians to recalibrate their diagnostic efforts in light of patient outcomes. Given the absence of feedback information and a carefully measured scheme for making sense of it, physicians are denied the opportunity to learn something about their performance. Collectively, their practices and organizations, should they wish to be at the forefront and regarded as true learning organizations, are also denied that opportunity. The diagnostic specialties that realize they are involved in a generative process, that are trying to learn how to process the information they generate, and that are formulating new ways of perceiving and thinking about their diagnostic performance deserve to be known as learning organizations [7]. In brief, diagnostic learning organizations are those that take learning-to-learn seriously and that continually expand their capacity to generate safer and more satisfying futures for their patients and themselves.

The double-edged nature of feedback

While it might be tempting to expect diagnosticians to have the same astute sleuthing skills as one’s favorite fictional detective when patients present with undifferentiated complaints, a moment’s reflection conjures up a less exalted image. It is less an image of a brilliant, dedicated person uncovering elusive disease and more an image of a vast temporal space that an unknowing patient enters. It is a space where a lot of things that should happen do not happen. Characterized by fragmentation, complexity, and lack of coordination, it is a space where managing the routine, such as patient follow-up, seldom occurs, and where unknown outcomes and lack of information become the norm, causing little concern. At the same time, there is a need to be careful about what we ask for. Yes, the need for feedback mechanisms and recalibration still stands, but the perils of being unduly influenced by feedback also need to be recognized [2, 6]. To what extent should a readily available memory of a missed rare diagnosis (e.g., missing pinpoint aortic stenosis in favor of more common chronic obstructive pulmonary disease, as portrayed in an otherwise instructive Annals of Internal Medicine cartoon [8]) influence subsequent diagnostic effort? Is there a risk of allowing memorable outcomes, even when the outcome is a dead patient, to bias one’s thinking about the quality of his or her medical decision-making? It is understandable yet unfortunate that the strong and personal impact of a bad outcome can cast doubt on appropriate decision processes used along the way. Given an imperfect relationship between process and outcome in uncertain and variable diagnostic settings, and given the potential of outcome knowledge to skew our thinking, diagnosticians are well advised to be mindful of the risks associated with overcompensation. Good processes can and do lead to unsuccessful outcomes. When this happens, rushing in and revamping processes can be premature. Likewise, sloppy and inconsistent processes sometimes are associated with successful outcomes. When this happens, it is a fool’s errand to celebrate the success.

Avoiding frame blindness

A recent listserv discussion on improving diagnoses (which unintentionally demonstrated the value of shared diagnostic efforts) started with the suggestion that pathologists could render more accurate analyses of underlying disease when pertinent clinical information and radiologic testing are provided. Otherwise, as someone noted, it simply seems to be a matter of chucking a specimen over a brick wall and then chucking the report back. There was a quick reply that providing clinical information runs the risk of biasing the analysis, given the “frame” that can be created thoughtlessly and prematurely. The well-known case of a boy labeled with “gastroenteritis” who was sent to the ER by his pediatrician, only to have the routine diagnosis reaffirmed and his sepsis missed, was cited. Someone else noted that providing clinical information to diagnosticians or to other clinicians for second opinions is indeed important, yet so is the ability to render a fresh, unbiased diagnosis, free of the influence of being led in a particular direction by the referring information. The recommendation was to hold off on accessing the information until one’s own independent assessment is complete. If it is compatible with the referring information or initial frame, one may be justified in having greater confidence in the diagnosis. If it is discrepant, further assessment and reframing are likely needed. Other participants in the discussion underscored the value of considering other variables and nuances. In terms of whether to provide a clinical impression when ordering a radiology study, an emergency physician mentioned that how differentiated a patient is prior to the study is an important factor. Someone else raised the question of why the radiologist could not be engaged prior to performing the study to ensure the correct study was ordered in the first place. What a revelation! Someone had to suggest that benefits might be realized if clinicians actually talked to one another. Rather than staking out positions on whether to supply radiologists and pathologists with clinical information or to limit it, the discussion quickly generated robust approaches as to how the needs for both clinical information and independent assessment could be satisfied, how the underlying specifics of the case may alter the frame, and how other frames, such as creating a space where give-and-take exchanges can occur, need to be entertained.

Frame blindness can be a problem in any field of inquiry [9, 10]. The first step of framing a line of inquiry is perhaps the most dangerous, since it is given scant attention yet profoundly influences the subsequent choices that are made. The further we venture along the path of inquiry, the less likely we are to return to the beginning of the decision process for a fresh start. There is no reason why we should automatically accept the frames of others or fail to question the way requests are made of us. The listserv discussion, which drew over 20 participants in a short period of time, nicely illustrated the value of learning about the frames of others, and of learning that the frames with which an inquiry starts are likely to need adjusting. In their shared diagnostic discussions, participants demonstrated, perhaps unintentionally, the art of avoiding frame blindness.

Poverty amid plenty

There has been no shortage of ideas for interventions that could potentially improve diagnostic performance. Risk assessments, checklists, algorithms, second opinions, independent reads, debiasing techniques, participative diagnosis, electronic health record reminders, decision aids, dashboards, patient-engagement techniques, and financial incentive adjustments are a few that have been suggested. The trick seems to be in operationalizing the ideas and testing them empirically [11, 12]. The limitations of conducting experiments in actual clinical settings are well understood. It is therefore puzzling that the diagnostic research community has not taken advantage of clinical simulation as a test-bed for the ideas it generates. Other patient safety domains have done so. For those interested in cognitive biases and how rapidly derived, misleading patient information becomes conveniently accepted and cascades thoughtlessly through the diagnostic process, simulation provides a readily available platform. Diagnostic challenges and their interventions do not require a high-fidelity simulation environment, but simply a requisite level of functional fidelity; that is, they require the diagnostician or team to process the same incomplete patient histories, sub-optimal exams, and misinterpreted lab/radiology test results while subject to the same constraints, interruptions, and time pressures; to make the same decisions; to carry out the same actions; and to be informed of the same consequences as would occur in the clinical setting. Several diagnostic failure scenarios, with their misleading information, can be scripted with the aid of standardized patients. Users of electronic health records can be exposed to misleading (or missing) simulated information in the record, and how this interferes with deriving a coherent picture of the patient’s condition can be tested. Diagnostic subjects can be exposed to a range of realistic, variable conditions and receive valuable feedback on the consequences of their decisions and actions. Debriefing sessions are used for reviewing the salient events and actions that took place, for further reinforcing the lessons learned, and for making recommendations should similar circumstances occur in the future. Whether used for training effectiveness research or as a test-bed for a new diagnostic procedure or decision support tool, the advantages of diagnostic simulation are the relatively low cost of creating scenarios and the clinical environment, the creation of optimal conditions for feedback and learning, the ability to manipulate realistic contextual and clinical variables in a replicable fashion, and, of course, safety: the ability to experiment without putting patients at risk.

References

1. Graber M. Diagnostic errors in medicine: a case of neglect. Jt Comm J Qual Patient Saf 2005;31:106–13.

2. Henriksen K, Kaplan H. Hindsight bias, outcome knowledge and adaptive learning. Qual Saf Health Care 2003;12(Suppl II):ii46–50.

3. Wears RL. What makes diagnosis hard? In: Berner ES, Graber ML, editors. Adv Health Sci Educ 2009;14(Suppl 1):19–25.

4. Henriksen K, Brady J. The pursuit of better diagnostic performance: a human factors perspective. BMJ Qual Saf 2013; published online May 23, 2013. doi: 10.1136/bmjqs-2013-001827.

5. Institute of Medicine. To err is human: building a safer health system. Washington, DC: National Academy Press, 2000.

6. Schiff GD. Minimizing diagnostic error: the importance of follow-up and feedback. Am J Med 2008;121(Suppl 5):S38–42.

7. Senge PM. The fifth discipline: the art & practice of the learning organization. New York: Doubleday/Currency, 1990.

8. Green MJ, Rieck R. “Missed it.” Ann Intern Med 2013;158:357–61.

9. Hammond JS, Keeney RL, Raiffa H. The hidden traps in decision making. Harvard Bus Rev 1998;76:47–58.

10. Russo JE, Schoemaker PJ. Decision traps. New York: Simon & Schuster, 1989.

11. Graber ML, Kissam SM, Payne VL, Meyer AN, Sorensen A, Lenfestey N, et al. Cognitive interventions to reduce diagnostic errors: a narrative review. BMJ Qual Saf 2012;21:535–57.

12. Singh H, Graber ML, Kissam SM, Sorensen AV, Lenfestey NF, Tant EM, et al. System-related interventions to reduce diagnostic errors: a narrative review. BMJ Qual Saf 2012;21:160–70.

About the article

Corresponding author: Kerm Henriksen, Agency for Healthcare Research and Quality, 540 Gaither Rd, Rockville, MD 20850, USA


Received: 2013-09-11

Accepted: 2013-10-30

Published Online: 2014-01-08

Published in Print: 2014-01-01


Conflict of interest statement: The author declares no conflict of interest.


Citation Information: Diagnosis, Volume 1, Issue 1, Pages 35–38, ISSN (Online) 2194-802X, ISSN (Print) 2194-8011, DOI: https://doi.org/10.1515/dx-2013-0015.

©2014 by Walter de Gruyter Berlin/Boston. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License. BY-NC-ND 3.0
