
Diagnosis

Official Journal of the Society to Improve Diagnosis in Medicine (SIDM)

Editor-in-Chief: Graber, Mark L. / Plebani, Mario

Ed. by Argy, Nicolas / Epner, Paul L. / Lippi, Giuseppe / McDonald, Kathryn / Singh, Hardeep

Editorial Board: Basso, Daniela / Crock, Carmel / Croskerry, Pat / Dhaliwal, Gurpreet / Ely, John / Giannitsis, Evangelos / Katus, Hugo A. / Laposata, Michael / Lyratzopoulos, Yoryos / Maude, Jason / Newman-Toker, David / Singhal, Geeta / Sittig, Dean F. / Sonntag, Oswald / Zwaan, Laura

Online ISSN: 2194-802X

Identifying and analyzing diagnostic paths: a new approach for studying diagnostic practices

Goutham Rao
  • Corresponding author
  • Family Medicine and Community Health, Case Western Reserve University School of Medicine, 11100 Euclid Avenue, Cleveland, OH 44106-4915, USA
/ Paul Epner / Victoria Bauer
  • Ambulatory Primary Care Innovations Group, NorthShore University HealthSystem, Evanston, IL, USA
/ Anthony Solomonides
  • Ambulatory Primary Care Innovations Group, NorthShore University HealthSystem, Evanston, IL, USA
/ David E. Newman-Toker
  • Department of Neurology, Johns Hopkins University School of Medicine, The Johns Hopkins Hospital Meyer Building, Baltimore, MD, USA
Published Online: 2017-04-19 | DOI: https://doi.org/10.1515/dx-2016-0049

Abstract

Diagnostic error is a serious public health problem to which knowledge gaps and associated cognitive error contribute significantly. Identifying diagnostic approaches to common problems in ambulatory care that yield more timely and accurate diagnoses at lower cost and with less harm from the diagnostic evaluation is an important priority for health care systems, clinicians, and, of course, patients. Unfortunately, guidance on how best to approach diagnosis in patients with common presenting complaints such as abdominal pain, dizziness, and fatigue is lacking. Exploring diagnostic practice variation and patterns of diagnostic evaluation is a potentially valuable approach to identifying best current diagnostic practices. A “diagnostic path” is the sequence of actions taken to evaluate a new complaint from first presentation until a diagnosis is established or the evaluation ends for other reasons. A “big data” approach to identifying diagnostic paths from electronic health records can be used to identify practice variation and best practices across a large number of patients. Limitations of this approach include the incompleteness and inaccuracy of electronic health record data, the possibility that diagnostic paths do not reflect clinicians’ reasoning, and the fact that diagnostic paths identify the best among current practices rather than truly optimal practices.

Keywords: diagnosis; electronic medical records; practice patterns

Background

Diagnostic errors and the nature of diagnostic process failures

Diagnosis has always been an inherently difficult, uncertain, and error-prone process. In the 19th century, English physician Peter Mere Latham wrote, “Diagnosis is often easy, often difficult, and often impossible” [1]. More than 100 years later, in its 2015 report Improving Diagnosis in Health Care, the National Academy of Medicine described diagnostic errors as a blind spot in the delivery of quality health care and called improving diagnosis a “moral, professional, and public health imperative” [2]. According to one estimate, roughly 12 million American adults are affected by diagnostic errors annually in outpatient settings alone [3], up to one third of whom may suffer harm as a result [4]. Diagnostic error can take several forms, including missed, delayed, or incorrect diagnosis [5]. Interventions to reduce diagnostic error include system-level and cognitive strategies [6]. A complementary approach is to focus on individual steps in the diagnostic process, as well as the process as a whole, to identify sources of error and areas for improvement.

Symptom-specific approaches to studying diagnostic strategies and diagnostic error

A symptom-oriented, process-focused approach emphasizes questions such as, “What is the best approach to establishing a correct diagnosis (e.g. stroke vs. not stroke) among patients with dizziness?” This contrasts with a question such as, “What are the most effective strategies to reduce missed stroke among stroke patients presenting with dizziness?” The answer to the latter question might be to “obtain neurologic consultation and neuroimaging in all patients with dizziness”. However, this is unlikely to be the correct answer to the former question, given that 97% of patients with dizziness do not have stroke as the cause [7]. The first question deals with a symptom-defined cohort, whereas the second deals with a disease-defined cohort. When solving specific diagnostic problems, diseases of interest must inform the development of solutions, but, ultimately, diagnostic process improvements must apply to all patients with the target symptom, precisely because the disease etiology is, as yet, unknown. Essentially, when clinicians begin the diagnostic process, they usually deal with symptoms rather than diagnoses.

Unfortunately, diagnostic guidance for symptoms, particularly in ambulatory settings, is scarce and, when available, often of low quality. Relatively few clinical practice guidelines address diagnosis specifically. Recommendations in guidelines addressing diagnosis alone, or diagnosis combined with treatment, are often based on expert consensus or similarly weak levels of evidence. For example, the majority of recommendations in a recently updated guideline for fever of uncertain source in infants ≤60 days of age were based on weak evidence or consensus opinion only [8]. As another example, consider a recently published guideline on the evaluation and management of headache in primary care settings [9]. All of its recommendations for evaluation and diagnosis are based on expert opinion, case series, or nonrandomized studies, whereas the majority of its treatment recommendations are based on randomized trials and high-quality systematic reviews. This is not surprising, as studying diagnosis and identifying best practices through experimental designs is more challenging. The relatively weak evidence underlying diagnostic recommendations may lead to lower adherence to available guidelines, as the strength of evidence has been shown to strongly influence clinicians’ likelihood of adopting guidelines [10]. This lack of (and lack of faith in) consensus guidelines for diagnosis may, in turn, lead to significant diagnostic practice variation.

Practice variation

There is a great deal of variation in diagnostic practices from clinician to clinician, institution to institution, and region to region that cannot be explained by variation in the characteristics of individual patients or patient populations [3]. Some patients are diagnosed in a timely, accurate, and cost-efficient way, whereas others suffer from diagnostic error. Systematically studying variation in diagnostic practices therefore has the potential to reveal relatively successful or unsuccessful diagnostic strategies. We now have the ability to identify practice patterns on a large scale (a “big data” approach), including diagnostic practice patterns, from repositories of electronic health record (EHR) data [11].

Priorities for the study of symptom-oriented diagnostic practice patterns include symptoms that are common in ambulatory primary or emergency care, known to be challenging to evaluate, associated with both serious and benign underlying disease, and also associated with high degrees of diagnostic practice variation. These symptoms include nonspecific abdominal pain (when unaccompanied by “red flags” such as weight loss or rectal bleeding), dizziness, and fatigue in ambulatory primary care and the emergency department. Nonspecific complaints such as these are not only associated with high levels of practice variation but also particularly prone to diagnostic error [12, 13].

Introduction to diagnostic paths

We define a diagnostic path as the sequence of steps that make up the diagnostic process from first presentation of a patient with a specific symptom until either a diagnosis is made and treatment is initiated or the diagnostic evaluation ends for other reasons.

Diagnostic paths may be very short (e.g. a single outpatient visit to a primary care physician in which a diagnosis is established) or lengthy and complex (e.g. involving multispecialty referrals and numerous tests). Figure 1 illustrates a typical diagnostic path for a young man with nonspecific abdominal pain. A diagnostic path begins with a presenting complaint followed by a clinical evaluation, which typically includes a detailed history and physical examination. If the diagnosis is not immediately apparent to the treating clinician, this is often followed by some combination of diagnostic tests (typically laboratory or imaging studies) or consultations, observation with follow-up visits, or an empiric trial of therapy (either itself as a diagnostic test or as a means of symptom management without need for a firm diagnosis). Referral to a consultant may result in additional steps being added to the path. For example, if a neurologist to whom a patient is referred orders neuroimaging, this would also be considered part of the path. The diagnostic path may end when a diagnosis is reached (based on clinical, laboratory, or pathological findings) and treatment, if appropriate, initiated. Alternatively, the path may end if a patient’s symptoms have resolved (spontaneously or with empiric treatment) without reaching a causal diagnosis, or, by mutual agreement, if the provider and patient elect not to pursue additional testing to obtain a diagnosis. Finally, a patient may be lost to follow-up prior to completion of diagnostic evaluation, ending the path. The end point of all diagnostic paths is the point at which diagnostic evaluation is no longer taking place, regardless of whether a correct diagnosis is achieved. For practical purposes, if a prespecified time period has passed (e.g. 12 months) with continuing evaluation but no precise diagnosis, the path may be labeled “no diagnosis obtained” or “ongoing without diagnosis.”
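To make the definition above concrete, here is a minimal sketch, in Python, of how a diagnostic path might be represented for analysis. All names and fields (PathEvent, DiagnosticPath, end_reason values, the 12-month cutoff) are illustrative assumptions for this sketch, not a schema used in the authors' work.

```python
# Illustrative data structure for a diagnostic path: an ordered sequence of
# timestamped actions for one patient, plus how and why the path ended.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class PathEvent:
    timestamp: datetime   # when the action was recorded in the EHR
    event_type: str       # e.g. "office_visit", "lab_order", "imaging", "referral", "rx"
    detail: str = ""      # e.g. "CBC", "CT abdomen", "gastroenterology"

@dataclass
class DiagnosticPath:
    patient_id: str
    presenting_symptom: str                       # e.g. "abdominal pain"
    events: List[PathEvent] = field(default_factory=list)
    final_diagnosis: Optional[str] = None
    end_reason: Optional[str] = None              # "diagnosis", "symptoms_resolved",
                                                  # "mutual_agreement", "lost_to_follow_up",
                                                  # "ongoing_without_diagnosis"

    def close_if_stale(self, as_of: datetime, cutoff_days: int = 365) -> None:
        """Label the path 'ongoing without diagnosis' once a prespecified
        period (here 12 months) has elapsed without a precise diagnosis."""
        if self.final_diagnosis is None and self.events:
            if as_of - self.events[0].timestamp > timedelta(days=cutoff_days):
                self.end_reason = "ongoing_without_diagnosis"
```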

Figure 1: Sample diagnostic path.

Diagnostic paths and EHRs

Data extraction

EHRs provide a useful opportunity to study diagnostic paths. It is possible to study the records of a large number of patients within a single institution or across institutions, as clinical data research networks (CDRNs) have demonstrated [14]. Standards for data confidentiality and sharing have been established [15]. Much of the data within EHRs is entered in structured fields and can then be systematically extracted and analyzed. These include, in many cases, fields for chief complaint/reason for visit and specific actions related to diagnosis such as laboratory and imaging test orders, referrals, prescriptions, procedures, follow-up arrangements, and recorded diagnoses. These specific actions are accompanied by a time stamp, making it possible to accurately place specific steps along a diagnostic path. Given the size of CDRNs, the paths of thousands of similar patients presenting, for example, with abdominal pain can be extracted and analyzed to identify patterns of diagnostic practices.
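As a rough illustration of this extraction step, the following sketch groups timestamped structured events by patient and orders them into candidate paths. The flat-table layout (columns patient_id, timestamp, event_type, detail, chief_complaint) is an assumption made for this example and does not correspond to any particular EHR or CDRN export format.

```python
# Sketch: assemble candidate diagnostic paths from a flat file of structured,
# timestamped EHR events, keeping patients whose recorded chief complaint
# matches the target symptom.
import csv
from collections import defaultdict
from datetime import datetime

def build_paths(event_csv: str, target_complaint: str = "abdominal pain"):
    events_by_patient = defaultdict(list)
    patients_with_complaint = set()
    with open(event_csv, newline="") as f:
        for row in csv.DictReader(f):
            pid = row["patient_id"]
            events_by_patient[pid].append({
                "timestamp": datetime.fromisoformat(row["timestamp"]),
                "event_type": row["event_type"],   # lab, imaging, referral, rx, visit
                "detail": row["detail"],
            })
            if row["chief_complaint"].strip().lower() == target_complaint:
                patients_with_complaint.add(pid)
    # A path is simply the patient's events ordered by time stamp.
    return {pid: sorted(events_by_patient[pid], key=lambda e: e["timestamp"])
            for pid in patients_with_complaint}
```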

Diagnostic practices can be grouped or divided into any number of categories or individual steps, but the most helpful level of granularity is the one most likely to meaningfully influence downstream diagnostic decisions, diagnoses rendered, or diagnostic outcomes. For example, for dizziness, one might define all neuroimaging as a single event type (CT, MRI, and all variations); alternatively, one might subdivide neuroimaging into every imaginable subtype (MRI brain vs. MRI brain with MR angiography vs. MRI brain with contrast-enhanced angiography of the head vs. MRI brain with contrast-enhanced angiography of the head and neck, etc.). The appropriate level of granularity likely lies somewhere in between. For dizziness, given that CT and MRI have radically different sensitivities for diagnosing stroke, the appropriate breakdown probably distinguishes CT from MRI without separating every subtype of each [16].
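One simple way to operationalize such a choice of granularity is a mapping table from raw order descriptions to analysis-level categories. The specific procedure names and groupings below are assumptions made for illustration; the substantive point, taken from the text, is only that CT and MRI are kept distinct while finer MRI protocol variants are collapsed.

```python
# Sketch: map raw neuroimaging order descriptions to an intermediate level of
# granularity (CT vs. MRI), collapsing protocol-level variants.
NEUROIMAGING_GROUPS = {
    "CT head without contrast": "CT_HEAD",
    "CT head with contrast": "CT_HEAD",
    "MRI brain": "MRI_BRAIN",
    "MRI brain with MR angiography": "MRI_BRAIN",
    "MRI brain with contrast-enhanced angiography of the head": "MRI_BRAIN",
    "MRI brain with contrast-enhanced angiography of the head and neck": "MRI_BRAIN",
}

def normalize_event(detail: str) -> str:
    """Return the analysis-level category for a raw order description,
    falling back to the raw string when no grouping is defined."""
    return NEUROIMAGING_GROUPS.get(detail, detail)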

Analyzing diagnostic paths

Drawing meaningful inferences from diagnostic paths is a three-step process. First, complete diagnostic paths for individual patients with specific symptoms such as abdominal pain or dizziness should be systematically identified for a very large patient sample. Second, similarities and differences among diagnostic paths should be identified, taking into consideration a wide variety of factors such as major demographic characteristics (e.g. age) and availability of diagnostic resources. Finally, diagnostic paths and specific steps within them should be analyzed to determine their impact on important outcomes such as diagnostic accuracy, timeliness of diagnosis, and overall cost of diagnostic evaluation. For example, for adults presenting with nonspecific abdominal pain, a strategy of an empiric trial of a proton pump inhibitor, with a diagnosis of gastroesophageal reflux disease made on the basis of a positive response, may be associated with more timely diagnosis and lower overall cost, but lower accuracy, than initial referral for endoscopy.
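A hedged sketch of the third step follows: classifying paths by an early strategy (here, an empiric proton pump inhibitor trial versus early referral for endoscopy, echoing the example above) and comparing mean time to diagnosis and cost between strategies. The path dictionaries, field names, and the strategy-detection rule are hypothetical stand-ins for definitions a real study would specify in advance.

```python
# Sketch: compare outcomes between two diagnostic strategies identified from paths.
from statistics import mean

def strategy_of(path: dict) -> str:
    """Classify a path by its first diagnostic action after the index visit."""
    for event in path["events"][1:]:
        if event["event_type"] == "rx" and "proton pump inhibitor" in event["detail"].lower():
            return "empiric_ppi_trial"
        if event["event_type"] == "referral" and "endoscopy" in event["detail"].lower():
            return "early_endoscopy"
    return "other"

def compare_strategies(paths: list) -> dict:
    """Summarize mean days to diagnosis and mean cost per strategy."""
    summary = {}
    for path in paths:
        key = strategy_of(path)
        summary.setdefault(key, {"days": [], "cost": []})
        summary[key]["days"].append(path["days_to_diagnosis"])
        summary[key]["cost"].append(path["total_cost"])
    return {k: {"n": len(v["days"]),
                "mean_days": mean(v["days"]),
                "mean_cost": mean(v["cost"])}
            for k, v in summary.items()}
```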

Our preliminary work included conceptualizing diagnostic paths, operationalizing definitions, and successfully applying the concept to a small data set of patients with abdominal pain [17]. To be useful and serve as a potential source of guidance for clinicians, however, we must be able to identify and analyze diagnostic paths on a much larger scale. This requires both large data sets (we estimate >100,000 patients) and techniques for automating analysis of these large data sets to identify patterns and draw inferences.

A purely numerical approach to large-scale analysis of diagnostic paths could be useful for identifying some important outcomes, such as mean path duration or total cost of diagnostic evaluation. However, much more powerful inferences are likely to come from the emerging science of visual analytics [18], which can generate visual representations of diagnostic paths that allow researchers, clinicians, and patients to more easily identify specific diagnostic patterns, as has been done previously for treatments. Prior work on visualization of treatment patterns for specific illnesses can be adapted for visualizing diagnostic paths [19].
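One simple precursor to such visualizations is counting step-to-step transitions across many paths, which can then be rendered as a flow or Sankey-style diagram. The sketch below reuses the hypothetical event representation from the earlier examples and assumes no particular visualization library.

```python
# Sketch: count how often event type A is immediately followed by event type B
# across all paths; the result can feed a flow-diagram layout.
from collections import Counter

def transition_counts(paths):
    counts = Counter()
    for path in paths:
        types = [e["event_type"] for e in path["events"]]
        for a, b in zip(types, types[1:]):
            counts[(a, b)] += 1
    return counts

# Example: transition_counts(paths)[("office_visit", "lab_order")] gives the
# number of times a lab order directly followed an office visit.
```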

Challenges in extracting and analyzing diagnostic paths from EHRs

Missing information

Although large clinical data sets may someday be much more robust, it is clear that some sources of practice variation will be difficult to ascertain from current EHR or CDRN data. Principal problems include inconsistently coded predictors and incomplete ascertainment of diagnostic outcomes. Contextual factors (e.g. seasonal variation of specific illnesses such as influenza) are also likely to be missing from records and may also influence diagnostic paths. Most of these issues can be at least partially managed through careful analysis, and random variation is routinely surmountable through use of larger data sets.

Structured bedside history and exam data are often missing or not reliably coded. This is in contrast to laboratory and imaging data, which are generally highly conserved across encounters, providers, and institutions. If diagnostic paths are optimally informed by bedside diagnostic practices and the resulting clinical information (rather than test ordering), a simple analysis of available lab and imaging data may prove inadequate. In such cases, free text searches or natural language processing may be necessary to achieve optimal path analysis results. For example, the words “no rebound” could be assumed to reflect palpation of the abdomen as a diagnostic maneuver. Patients’ ability to communicate undoubtedly influences clinical evaluation and diagnostic paths and is difficult to capture. Some elements of communication, such as “preferred language” may be included in structured fields and can be used to stratify patients for analysis of diagnostic paths.
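As a minimal sketch of the free-text idea just described, a keyword or regular-expression pass over note text could infer that a bedside maneuver (here, abdominal palpation) was performed. A production system would use proper natural language processing with negation and context handling; the patterns below are illustrative assumptions only.

```python
# Sketch: infer from note text that the abdomen was palpated, e.g. "no rebound".
import re

PALPATION_PATTERNS = [
    re.compile(r"\bno rebound\b", re.IGNORECASE),
    re.compile(r"\brebound tenderness\b", re.IGNORECASE),
    re.compile(r"\babdomen (soft|non-?tender)\b", re.IGNORECASE),
]

def abdominal_exam_documented(note_text: str) -> bool:
    """Return True if the note contains phrases implying the abdomen was palpated."""
    return any(p.search(note_text) for p in PALPATION_PATTERNS)
```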

Encounters relevant to the presenting problem may not be captured in the data set being analyzed because they take place in an outside health system. This is a common problem when patients do not have a usual source of care. Systematic follow-up data from regional health information exchanges could help mitigate this problem.

Attribution

Accurate attribution of specific diagnostic steps to the presenting problem is essential for identifying accurate diagnostic paths. In many cases, attribution is straightforward. For example, a referral to gastroenterology can be attributed to the presenting problem of nonspecific abdominal pain. In other cases, attribution is more challenging. For example, a patient may have a primary presenting problem of abdominal pain but also a longstanding history of anemia of chronic disease. If a hemoglobin level is ordered, is it intended to diagnose associated gastrointestinal bleeding or to monitor the established anemia? In some EHR data sources, a clear linkage to a diagnosis is not available. Although individual instances of attribution may be uncertain, when averaged across thousands of patients, the impact of such random effects will be minimized.
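One possible attribution heuristic, sketched below under stated assumptions, attributes an order to the presenting complaint if it falls within a fixed window after the index visit and is not explicitly linked to an unrelated chronic problem. The window length and the linked-diagnosis field are assumptions; a real study would need to tune and validate any such rule.

```python
# Sketch: simple rule for deciding whether an order belongs on a diagnostic path.
from datetime import datetime, timedelta

def attributable(order: dict, index_visit_time: datetime,
                 chronic_problems: set, window_days: int = 90) -> bool:
    within_window = (order["timestamp"] - index_visit_time) <= timedelta(days=window_days)
    linked_dx = order.get("linked_diagnosis")   # often missing in EHR extracts
    unrelated = linked_dx is not None and linked_dx in chronic_problems
    return within_window and not unrelated
```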

Scale

The ability to construct and analyze diagnostic paths for thousands of patients is certainly a great advance in identifying patterns associated with more timely and accurate diagnosis and lower cost of diagnostic evaluation. However, on such a large scale, even relatively small differences in path characteristics or outcomes are likely to be statistically significant and should be interpreted cautiously. For example, in diagnostic evaluation for abdominal pain, paths that encompass early referral to a gastroenterologist may lead to a diagnosis that is established on average 1 day earlier than those which do not include early referral, a difference that may be statistically significant when thousands of diagnostic paths are analyzed. But this difference should also be interpreted in the context of clinical meaningfulness, cost, availability of resources, etc.
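The cautionary point above can be made concrete with a small sketch: with very large samples, even tiny differences reach statistical significance, so effect sizes and confidence intervals deserve more attention than p-values. The numbers in the comment are made up for illustration.

```python
# Sketch: difference in mean days-to-diagnosis with an approximate 95% CI.
from math import sqrt
from statistics import mean, stdev

def mean_difference_ci(a, b, z=1.96):
    diff = mean(a) - mean(b)
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return diff, (diff - z * se, diff + z * se)

# With ~10,000 paths per group, a 1-day difference in mean time to diagnosis can
# easily exclude zero from its confidence interval yet still be of questionable
# clinical importance once cost and resource use are considered.
```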

Intrinsic limitations of documentation

Diagnostic paths derived from structured EHR fields are summaries of diagnostic actions and may not accurately or completely reflect a clinician’s diagnostic reasoning. If a patient with abdominal pain, for example, is prescribed an anti-reflux medication and the physician records gastroesophageal reflux disease in the record, has the physician concluded her diagnostic process, or is the medication an empiric trial, the response to which will guide further diagnostic evaluation? It is simply not possible to know from structured EHR data. Diagnostic encounters may also be systematically “up-coded” to maximize payment, which can introduce bias into outcome assessment. For example, although a clinician is still investigating a complaint of undifferentiated abdominal pain, she may record a precise diagnosis in the record, because recording only a symptom (e.g. “abdominal pain”) may compromise payment for her services.

Finally, because diagnostic paths are derived from observational data, the “best” among them, meaning those associated with more timely and accurate diagnosis, may not represent the optimal approach to diagnosis but simply the best among the approaches currently being practiced.

Conclusions

The National Academy of Medicine’s report Improving Diagnosis in Health Care recommends that health care organizations “monitor the diagnostic process and identify, learn from and reduce diagnostic errors” and “implement procedures and practices to provide systematic feedback on diagnostic performance” [2]. Current tools do not provide the means to accomplish these goals, and new tools are needed if substantial progress is to be made. Development of a robust, automated methodology for studying diagnostic paths across a broad range of diagnostic problems holds the promise of a quality improvement tool that can address the NAM’s recommendations directly. Limitations of existing EHR data sets that risk bias or random variation can be addressed through careful analysis, larger data sets, or future enhancements to data collection methods. The ability to understand and glean insights from diagnostic practice variation across multiple settings could be transformational for improving medical diagnosis.

References

1. Latham PM, Watson T. The collected works of Dr. P.M. Latham. Book 1, Chapter 173. London: The New Sydenham Society, 1876.

2. National Academies of Sciences, Engineering, and Medicine. Improving diagnosis in health care. Washington, DC: The National Academies Press, 2015.

3. Singh H, Sittig DF. Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework. BMJ Qual Saf 2015;24:103–10.

4. Singh H, Giardina TD, Meyer AN, Reis MD, Thomas EJ. Types and origins of diagnostic errors in primary care settings. JAMA Intern Med 2013;173:418–25.

5. Newman-Toker DE. A unified conceptual model for diagnostic errors: underdiagnosis, overdiagnosis, and misdiagnosis. Diagnosis 2014;1:43–8.

6. Singh H, Graber ML, Kissam SM, Sorensen AV, Lenfestey NF, Tant EM, et al. System-related interventions to reduce diagnostic errors: a narrative review. BMJ Qual Saf 2012;21:160–70.

7. Newman-Toker DE. Missed stroke in acute vertigo and dizziness: it is time for action, not debate. Ann Neurol 2016;79:27–31.

8. Cincinnati Children’s Hospital Medical Center. Evidence-based care guideline for fever of uncertain source in infants 60 days of age or less. Cincinnati (OH): Cincinnati Children’s Hospital Medical Center, 2010.

9. Toward Optimized Practice. Guideline for primary care management of headache in adults. Edmonton (AB): Toward Optimized Practice, 2012:71.

10. Royal College of Obstetricians and Gynaecologists. How evidence can influence clinical practice. Scientific impact paper no. 28. August 2011. https://www.rcog.org.uk/globalassets/documents/guidelines/scientific-impact-papers/sip_28.pdf. Accessed 28 November 2016.

11. Zhang Y, Padman R, Patel N. Paving the COWpath: learning and visualizing clinical pathways from electronic health record data. J Biomed Inform 2015;58:186–97.

12. Ely JW, Kaldjian LC, D’Alessandro DM. Diagnostic errors in primary care: lessons learned. J Am Board Fam Med 2012;25:87–97.

13. Kerber KA, Newman-Toker DE. Misdiagnosing dizzy patients: common pitfalls in clinical practice. Neurol Clin 2015;33:565–75.

14. PCORnet. The National Patient-Centered Clinical Research Network. http://pcornet.org/clinical-data-research-networks/. Accessed 28 November 2016.

15. PCORnet. Data security and standards. http://www.pcornet.org/data-security/. Accessed 28 November 2016.

16. Newman-Toker DE, Della Santina CC, Blitz AM. Vertigo and hearing loss. Handb Clin Neurol 2016;136:905–21.

17. Rao G, Kirley K, Bauer V, Epner P, Solomonides A, Silverstein JC, et al. Identifying diagnostic pathways for undifferentiated abdominal pain. Poster presented at: Diagnostic Error in Medicine, 7th International Conference; 2014 September 14–17; Atlanta, GA.

18. Caban JJ, Gotz D. Visual analytics in healthcare – opportunities and research challenges. J Am Med Inform Assoc 2015;22:260–2.

19. Zhang Y, Padman R, Patel N. Paving the COWpath: learning and visualizing clinical pathways from electronic health record data. J Biomed Inform 2015;58:186–97.

About the article

Corresponding author: Goutham Rao, MD, Family Medicine and Community Health, Case Western Reserve University School of Medicine, 11100 Euclid Avenue, Cleveland, OH 44106-4915, USA, Phone: +216-844-3791, Fax: +216-844-3799


Received: 2016-12-27

Accepted: 2017-03-15

Published Online: 2017-04-19

Published in Print: 2017-06-27


Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

Research funding: None declared.

Employment or leadership: None declared.

Honorarium: None declared.

Competing interests: The funding organization(s) played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.


Citation Information: Diagnosis, Volume 4, Issue 2, Pages 67–72, ISSN (Online) 2194-802X, ISSN (Print) 2194-8011, DOI: https://doi.org/10.1515/dx-2016-0049.


©2017 Walter de Gruyter GmbH, Berlin/Boston.
