Diagnosis

Official Journal of the Society to Improve Diagnosis in Medicine (SIDM)

Diagnosing diagnostic failure

Lawrence L. Weed / Lincoln Weed
Published Online: 2014-01-08 | DOI: https://doi.org/10.1515/dx-2013-0020

Abstract

Diagnostic failure results from misplaced dependence on the clinical judgments of expert physicians. The remedy for diagnostic failure involves defining standards of care for managing clinical information (medical knowledge and patient data), and implementing those standards with information tools designed for that purpose. These standards and tools are external to the minds of physicians, thus bypassing two inherent constraints on human cognition: limited capacities for information retrieval and processing, and innate heuristics and biases. Medical education and credentialing socialize physicians into misplaced acceptance of these constraints. Medical students acquire scientific knowledge, but not scientific behaviors. A scientific approach to diagnosis begins with using information tools to identify all diagnostic possibilities for the presenting problem and the initial findings needed to determine which possibilities are worth investigating in the patient. If the initial findings do not reveal a clear diagnostic solution, then information tools must be employed as part of a system of care to enforce highly organized follow-up processes, that is, careful problem definition, planning, execution, feedback, and corrective action over time, all documented under strict standards of care for managing the complexities involved.

Keywords: cognition; credentialing; decision support; diagnostic error; Flexner report; heuristics and biases; information tools; medical education; medical records; standards of care

Diagnostic failure is not a mystery. Its root cause is misplaced dependence on the clinical judgments of expert physicians. Their minds are incapable of satisfying rudimentary standards of care for information retrieval and processing. The remedy for diagnostic failure is twofold: first, to define the necessary standards of care, and second, to implement those standards with information tools designed for that purpose. Applying this remedy would address more than diagnostic failure. As we have argued elsewhere in detail, this remedy would lay a foundation for building a true system of care. Diagnostic failure is merely a symptom of the disorder created by absence of a system of care [1].

A system becomes possible only when the necessary standards and tools are external to the minds of physicians. Their minds are constrained in two ways that no training can overcome: limited capacities for information retrieval and processing, plus heuristics and biases built into human cognition. From that perspective, it is pointless for diagnosticians to try to recognize and overcome their cognitive limits and vulnerabilities. The point is not to overcome these human constraints but to bypass them altogether – when external tools make that possible and useful. As Francis Bacon told us 400 years ago, “while we falsely admire and extol the powers of the human mind, we do not search for its real helps [2].”

The external tools we advocate should not be conceived as “artificial intelligence” or “expert systems” tools. These terms suggest that the role of IT is to replicate the functioning of human experts. Such a role, even if it were achievable, completely misses the primary benefits offered by IT: massive, reliable information retrieval and processing capabilities. These benefits are compromised if physician experts are the benchmark for health IT design or performance.

Confusion on these issues persists among leaders in academia, government, business and the medical profession. They mistakenly assume that the primary vehicle for applying medical knowledge in patient care must be a highly educated expert, i.e., the physician. On this view, the internal knowledge and information processing resources residing in the minds of physicians are primary; similar resources residing in external IT devices are secondary. Also secondary are the capacities of non-physician personnel and patients themselves to access and apply medical knowledge. The primacy of the highly educated, autonomous physician is assumed to be inherent in advanced medical practice.

This assumption dates back to the Flexner report a century ago [3]. Ever since then, physicians have been educated to rely heavily on their own intellects in applying medical knowledge to patient care. This “education” begins with classroom teaching of abstract knowledge in enormous detail during the first 1–2 years of medical school. The abstract knowledge includes both basic science about human biology and medical-specific science about disease and therapy. The classroom education is followed by rotations exposing students to different medical specialties and the hands-on skills involved in each, after which medical students receive their M.D. degree. Then they undergo at least 4 years of apprenticeship followed by board exams before they become fully credentialed practitioners.

These rituals involve three leaps of faith. First, students are believed to somehow synthesize basic science, medical knowledge and limited practical experience into a core of knowledge and skill sufficient to justify the M.D. degree. Second, graduates are believed to synthesize that initial core with the additional knowledge and hands-on skills they acquire for entering their chosen specialties or primary care. Third, they are believed to synthesize their personal expertise with the expertise of other physicians in a manner that results in coordinated care.

In short, the legitimacy of physician education and credentialing rests entirely on a faith that physicians achieve the synthesis just described. That faith is completely unfounded. Synthesis cannot be assumed. Indeed, it cannot even be hoped for, because it is not achievable by any individual physician.

Only a well-designed system of care can achieve the synthesis of knowledge and skill and experience that patients need. And a system of care is incompatible with physician education and licensing in their current form. Their current form evolved by default from Abraham Flexner’s incomplete vision of medical school reform. He thought that instilling students with a core of scientific knowledge would somehow translate into scientifically advanced medical care.

What Flexner missed was that medical students need to learn a core of behavior: the intellectual behaviors essential to modern science. First identified by Francis Bacon four centuries ago, these behaviors include the habitual use of external tools, techniques and standards to produce and manipulate complex information. Yet, in medical education, credentialing and practice, these scientific behaviors are conspicuously absent. This gap between the behaviors of scientific and medical practitioners becomes all too obvious when one compares the training and examination of basic science PhD candidates with that of medical students and clinical specialty board candidates [4]. Space does not permit making that comparison here. Suffice it to say that scientific rigor is demanded of PhD candidates but completely undermined in medical degree candidates, who will be entrusted with patients’ lives. The result is not only that patients are put at risk but also that medical practice is unfit for rigorous scientific research. Researchers must fall back on costly and often artificial clinical trials.

Flexner’s vision led to the Sisyphean ordeal that all medical students undergo – loading their minds with massive amounts of medical knowledge, and using their minds to apply all this knowledge to detailed patient data. This prohibitive burden brings their cognitive limits and vulnerabilities to the fore.

The issue was articulated by Alfred North Whitehead within a year of the Flexner report. Writing in a different context, Whitehead observed: “It is a profoundly erroneous truism … that we should cultivate the habit of thinking about what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them [5].”

How does Whitehead’s insight apply to the diagnostic process? Consider a patient who presents with acute abdominal pain. The medical literature shows that more than 80 potential causes, encompassing numerous medical specialties, should be taken into account as diagnostic possibilities for acute abdominal pain. The diagnostic investigation should begin by assessing which of those possibilities are worth considering as diagnostic options for an individual patient, and which possibilities can be safely ignored.

This threshold inquiry requires defining each diagnostic possibility as a combination of simple, safe, inexpensive findings from the history, physical and basic laboratory tests. The total number of potential findings for all the diagnostic possibilities is close to 400 (although not all of these will be applicable to any individual patient). Each positive finding suggests one or more of the diagnostic possibilities. Each patient’s particular combination of positive findings can be matched against each combination of findings representing each of the diagnostic possibilities for acute abdominal pain.

This combinatorial matching process generates a list of diagnostic options (a subset of the initial possibilities), plus the positive, negative and uncertain findings on each option. Each option within this subset is worth considering for that patient, because at least one of the expected findings for each option is positive. The list of findings on each option provides an initial basis for comparing the options in terms of how well they match the patient (a perfect match occurs when all expected findings are positive), and thus for deciding which options deserve the highest priority for further investigation. In some cases, one option may stand out from the others at the outset. This occurs when all or most of its expected findings are positive, while none of the other options is a good match, i.e., only some or few of their expected findings are positive. (For a detailed example of such a case, see part II.A of our book Medicine in Denial, analyzing a diagnostic failure that could easily have been avoided.) In other cases, several options may appear to justify further diagnostic investigation, while others do not. Roughly speaking, this occurs when a substantial proportion but not all of the expected findings are positive for several options, while the expected findings for the remaining options are mostly negative.

The tool-driven process just described (which applies to treatment as well as diagnostic decisions) illustrates the key characteristics of a true system of care. Consistent with Whitehead, a systematic process minimizes the exercise of judgment, with all of its idiosyncratic limits and vulnerabilities. A systematic process means that the needed inputs can be explicitly defined in advance of the patient encounter and then collected consistently, by use of software guidance tools, without resort to clinical judgment. The tools define the diagnostic possibilities and the needed findings for each, without being limited by specialty orientation. Because of the large number of findings and the countless possible combinations among them, individual variations are captured. Once the findings are entered, the tools inform the practitioner and patient what the medical literature has to say about the particular combinations of findings on the patient. Specifically, the tools display the positive and negative findings on each option, provide commentary from the literature useful for further evaluation, and provide citations to the literature on which this material is based.

No clinical judgment is necessary for these initial steps to be accomplished. Indeed, the exercise of clinical judgment should not be permitted if it gives the practitioner discretion to omit findings judged to be unnecessary for a particular patient. If findings are omitted due to time or resource constraints, then those items should be recorded as uncertain or unknown, clearly differentiating them from findings that were checked and found to be negative. In this way, consistency, completeness, and accountability are achieved.
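A minimal sketch in Python may make this combinatorial process concrete. The diagnosis names, finding names, and the simple proportion-of-positives ranking below are illustrative assumptions for the sketch, not the design of any actual coupling software:

```python
from enum import Enum

class Finding(Enum):
    """Tri-state status: checked and present, checked and absent,
    or not checked (recorded as uncertain/unknown, never as negative)."""
    POSITIVE = "positive"
    NEGATIVE = "negative"
    UNKNOWN = "unknown"

# Illustrative, abbreviated knowledge base. Each diagnostic possibility is
# defined as a combination of simple, safe, inexpensive findings from the
# history, physical and basic laboratory tests. (A real knowledge base for
# acute abdominal pain would cover 80+ causes and close to 400 findings.)
POSSIBILITIES = {
    "appendicitis": {"rlq_pain", "rebound_tenderness", "fever", "elevated_wbc"},
    "cholecystitis": {"ruq_pain", "murphy_sign", "fever", "pain_after_meals"},
    "peptic_ulcer": {"epigastric_pain", "relief_with_antacids", "melena"},
}

def match_options(patient):
    """Match the patient's findings against every diagnostic possibility.
    A possibility becomes an option when at least one expected finding is
    positive; options are ranked by their proportion of positive findings."""
    options = []
    for diagnosis, expected in POSSIBILITIES.items():
        status = {f: patient.get(f, Finding.UNKNOWN) for f in expected}
        positive = sorted(f for f, s in status.items() if s is Finding.POSITIVE)
        if not positive:
            continue  # no expected finding present: safely set aside
        options.append({
            "diagnosis": diagnosis,
            "positive": positive,
            "negative": sorted(f for f, s in status.items() if s is Finding.NEGATIVE),
            "unknown": sorted(f for f, s in status.items() if s is Finding.UNKNOWN),
            "match": len(positive) / len(expected),  # 1.0 = perfect match
        })
    return sorted(options, key=lambda o: o["match"], reverse=True)

# Usage: enter one patient's findings; anything not checked stays UNKNOWN.
patient = {
    "rlq_pain": Finding.POSITIVE,
    "rebound_tenderness": Finding.POSITIVE,
    "fever": Finding.POSITIVE,
    "ruq_pain": Finding.NEGATIVE,
}
for option in match_options(patient):
    print(option["diagnosis"], f"{option['match']:.0%}",
          "unknown:", option["unknown"])
```

Note how the tri-state Finding value enforces the distinction just drawn: an item omitted for time or resource reasons stays UNKNOWN, and is never conflated with a finding that was checked and found to be negative.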

Now let us contrast this tool-driven, combinatorial process with the judgmental process that physicians are trained to employ. Physicians know only a fraction of the 80-plus diagnostic possibilities that need to be taken into account for diagnosing acute abdominal pain. Moreover, they know only a fraction of the hundreds of data points to check for determining which of the possibilities are worth investigating in the individual patient. They thus collect only a fraction of the data points needed. When they assess their incomplete data, they are unlikely to recognize all of the diagnostic options suggested by countless possible combinations of data points, and they do not even have the opportunity to recognize many relevant combinations, due to their incomplete data collection. That lack of completeness in data collection and assessment means that individual patient variations are obscured, and simplistic diagnostic labels are uncritically applied to very different patients as if they were comparable. These failings are idiosyncratic, varying significantly from one physician to another, even those within the same specialty.

This judgmental approach to the practitioner-patient encounter, unlike the combinatorial approach described above, is a recipe for failure. It should be no surprise to find, as stated in one recent study of diagnostic error:

Most errors were related to patient-practitioner clinical encounter-related processes, such as taking medical histories, performing physical examinations, and ordering tests. … preventive interventions must focus on common contributory factors, particularly those that influence the effectiveness of data gathering and synthesis in the patient-practitioner encounter [6].

Physicians’ personal knowledge inevitably falls short of the knowledge needed for diagnostic accuracy. And physicians find it difficult to apply correctly even the limited knowledge they do possess. This difficulty results from internal heuristics and biases plus various external influences that compromise judgment, especially under conditions of time pressure and information overload. These conditions are characteristic of the patient encounters where diagnosis begins. And the beginning is the most crucial stage of the diagnostic process.

The failings of a judgmental approach explain what happens when a patient takes the same diagnostic problem to different physicians. The patient finds little consistency in the data sets collected or the conclusions drawn by each physician. Stated differently, there are no established and enforceable best practices for the initial diagnostic investigation of presenting problems like abdominal pain. Ad hoc personal judgments and habits prevail. “Diagnosis is still regarded [as] an individual art,” as one commentary has observed, “rather than evidence-based science [7].”

A scientific approach is not achieved with diagnostic practice guidelines in paper form. These do not adequately take into account individual variation, they are difficult to use during actual patient encounters, they are not easily kept current, and their use is difficult to enforce. All of these difficulties evaporate when guidelines take the form of electronic tools designed to implement a combinatorial approach.

Notwithstanding the failings of a judgmental approach, physicians are willing to proceed based on whatever limited knowledge comes to mind during the patient encounter. This deeply defective diagnostic practice is what they learn from precept and example. The credentialing rituals they undergo socialize them into acceptance of this unscientific approach to diagnosis. In accepting it, they implicitly represent to their patients that they know whatever is needed to solve the patients’ problems or, if not, that they can figure it out or learn it from a consulting specialist with a different core of knowledge. The reality, however, is that every physician (and any other physician whom they might consult) is trying to navigate through uncertainty, without being able to distinguish between genuine uncertainty (that which is unknown to medical science) and merely personal uncertainty (that which is unknown to the patient’s physician, but which might be known to someone else).

Humans have limited tolerance for uncertainty. Medical students thus learn to avoid doubt, and they too often fail to develop awareness of the imperfect fit between medical knowledge and the enormous variability of individual patients. In this way, they acquire unwarranted confidence in their developing clinical judgment. They learn to display this false confidence to colleagues and patients. Physicians are thus socialized into a lack of scientific integrity, a condition that permeates medical practice.

Consciously or unconsciously, physicians know this. They know that what they do falls far short of a safe, secure, trustworthy approach to medical decision making. They are left with the constant threat of malpractice litigation, and a hidden burden of fear and doubt.

Reform must involve disaggregating the tasks of the physician, and determining which tasks should be performed by external information tools, by medical practitioners of all kinds, and by patients and their caregivers [8]. In short, we must define a rational division of labor. Until this basic step is taken, much of the enormous time, talent and dedication invested by so many practitioners will continue to go to waste. “The greatest improvement in the productive powers of labor,” Adam Smith observed, “and the greater part of the skill, dexterity, and judgment with which it is any where directed, or applied, seem to have been the effects of the division of labor [9].”

A rational division of labor in medicine requires that medical decision making be conceived in two stages. The first stage is assembling the informational basis for decisions, that is, identifying the options relevant to the individual patient’s problem situation and the individualized pros and cons of each option. This can only be accomplished by enforcing high standards of care for managing clinical information (data and knowledge) through use of electronic tools designed to implement those standards. Once that foundation is laid, it may then be supplemented with judgments from the practitioner – and from the patient – about the data collected and additional data or hypotheses they believe relevant.

The second stage is choosing among the options, based on the evidence developed in the first stage. In some cases, the choice can be made with relative certainty after initial data collection and analysis. “Certainty” means that the same choice would be made regardless of who the decision maker is. But in many other cases, the initial data collection and analysis yield several options worthy of further investigation. In these cases, the first stage of decision making must continue, so that the second stage can become fully informed. Continuing the first stage involves highly organized follow-up processes, that is, careful problem definition, planning, execution, feedback, and corrective action over time, all documented under strict standards of care for managing the complexities involved.
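The follow-up discipline just described can be sketched as a data structure. The record layout and field names below are hypothetical illustrations of the idea, not a published specification; the point is that each unresolved problem carries its own documented cycle of planning, execution, feedback and corrective action:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FollowUpEntry:
    """One documented cycle of execution, feedback and corrective action."""
    when: date
    observation: str          # feedback: what the follow-up step found
    corrective_action: str    # how the plan is adjusted, if at all

@dataclass
class ProblemRecord:
    """A problem-oriented record: the first stage of decision making
    continues, fully documented, until the options narrow."""
    problem: str                  # careful problem definition
    open_options: list            # options still under investigation
    plan: list                    # planned follow-up steps
    history: list = field(default_factory=list)

    def record(self, entry, ruled_out=()):
        """Document feedback and prune any options it eliminates."""
        self.history.append(entry)
        self.open_options = [o for o in self.open_options if o not in ruled_out]

    def ready_to_choose(self):
        """The second stage (choosing) can begin once one option remains."""
        return len(self.open_options) <= 1

# Usage: investigation proceeds step by step, leaving an auditable record.
record = ProblemRecord(
    problem="acute abdominal pain",
    open_options=["appendicitis", "cholecystitis"],
    plan=["serial abdominal exams", "right upper quadrant ultrasound"],
)
record.record(FollowUpEntry(date(2014, 1, 8),
                            "ultrasound: gallbladder normal, no stones",
                            "continue serial exams"),
              ruled_out=["cholecystitis"])
print(record.ready_to_choose(), record.open_options)
```

Because every follow-up entry and every pruned option is written to the record, the continuing first stage remains transparent and auditable, which is precisely what strict standards of care for managing complexity demand.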

These points are not theoretical. They are based on experiences of a few practitioners who have demonstrated, in everyday medical practice, the usability and power of the tools and standards we describe. See, for example, the detailed description provided by Dr. Ken Bartholomew [10].

As Dr. Bartholomew’s description illustrates, the necessary standards of care must be built into the electronic decision support and medical record tools used by practitioners and patients. When diagnostic processes are tool-driven in that way, then they can be made thorough, reliable, transparent and capable of continuous improvement. These are the characteristics of a true system of care.

A system of care is unattainable as long as its processes are left to the judgments and habits of physicians. In the words of Francis Bacon, “Our only remaining hope and salvation is to begin the whole labor of the mind again; not leaving it to itself, but directing it perpetually from the very first, and attaining our end as it were by mechanical aid [11].” The tool-driven process we describe offers what Bacon envisioned four centuries ago. It is long past time for medical practice to catch up.

References

About the article

Corresponding authors: Lawrence L. Weed, Emeritus Professor, University of Vermont College of Medicine, Burlington, VT, USA; and Lincoln Weed, 11219 Timberline Dr., Oakton, VA 22124, USA, Phone: +1-703-424-4408


Received: 2013-09-13

Accepted: 2013-10-17

Published Online: 2014-01-08

Published in Print: 2014-01-01


Conflict of interest statement: The authors (father and son) have very small stock ownership interests in a company that markets a software product based on one of the information tools described in this article; this interest, however, played no role in the conception, analysis, or writing of this article, or in the decision to submit the article for publication.


Citation Information: Diagnosis, Volume 1, Issue 1, Pages 13–17, ISSN (Online) 2194-802X, ISSN (Print) 2194-8011, DOI: https://doi.org/10.1515/dx-2013-0020.

©2014 by Walter de Gruyter Berlin/Boston. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License (CC BY-NC-ND 3.0).
