An estimated 1.2 million people in the US have an acute myocardial infarction (AMI) each year. In 2009, inpatient hospital costs for AMI were nearly 12 billion dollars. Although overall AMI mortality rates are declining, about 7% of all AMI hospitalizations in 2007 resulted in death [1, 4].
Most patients experiencing an acute coronary syndrome (ACS) (i.e., AMI or a precursor such as unstable angina) come through an emergency department (ED). EDs commonly evaluate patients presenting with chest pain and other symptoms suggestive of ACS using electrocardiograms and biochemical diagnostic tests. After evaluation, patients considered at high risk of AMI are hospitalized or held for observation until an AMI can be diagnosed or excluded, and patients considered at low risk are released with outpatient follow-up.
The decision to hospitalize or release is not always clear. Previous studies have estimated that 2% to 8% of patients with AMI are not diagnosed in the ED and are inadvertently released home [5–9]. These studies have been small – usually including fewer than a dozen hospitals. They have also focused on the characteristics of patients who are more likely to have missed diagnoses of AMI: women younger than 55 years, patients who are not White, and those who present with atypical features of cardiac ischemia [7, 10, 11]. Additional research with a larger number of patients would yield more generalizable estimates.
Some patients hospitalized with AMI after a treat-and-release ED visit likely represent missed opportunities for correct diagnosis and treatment. Although the effects of missed AMI diagnoses are not completely understood, some studies have found a nearly two-fold increase in the risk of death. Tracking rates of these missed diagnoses might allow providers to target specific patients and policy makers to target specific facilities for improvement. Furthermore, the ability to track missed diagnoses across a range of symptoms and problems would facilitate public health prioritization efforts to reduce misdiagnosis and mitigate harms.
The purpose of the present study is to estimate the frequency of missed AMI or its precursors (e.g., unstable angina) in the ED by examining use of EDs prior to hospitalization for AMI. We focus on patients evaluated for chest pain or cardiac conditions within 1 week of hospitalization; these patients were the most likely to have missed opportunities for diagnosis and intervention that might have reduced their risk for AMI. We use administrative data from the Healthcare Cost and Utilization Project (HCUP) – a family of databases that encompasses inpatient discharge data for over 95% of visits to hospitals in the US. We estimate the overall rate of missed diagnoses and examine the association between missed diagnoses and patient, ED, and hospital characteristics.
Materials and methods
Definition of misdiagnosis
Definitions and standards for describing diagnostic failures vary. We defined misdiagnosis as a diagnostic error; that is, a diagnosis that is “missed, wrong, or delayed, as detected by some subsequent definitive test or finding”. We focused on probable missed AMI using hospital admission with a discharge diagnosis of AMI as the subsequent definitive test. We looked back in time from these index admissions for patients whose symptoms were probably missed or misdiagnosed at a recent ED visit. We did not distinguish missed, wrong, or delayed diagnoses or differentiate between misdiagnosis and diagnostic error. Because we did not have detailed clinical data, we could not examine potential diagnostic process failures, preventability of the missed AMI, or potential harm resulting from the diagnostic error.
We conducted a retrospective, cross-sectional analysis of probable missed AMI using linked inpatient discharge records and ED visit records. We identified patients hospitalized for AMI in inpatient data. Then, we identified the patients who had been treated and released from an ED in the preceding 7 days in linked ED data. Data were prepared and analyzed consistent with Health Insurance Portability and Accountability Act (HIPAA) privacy rules as described below.
This is a retrospective study using administrative data with synthetic person identifiers. No human subjects were involved in the preparation of this manuscript, and no IRB approval was required.
Our analysis used the 2007 HCUP State Inpatient Databases (SID), a census of inpatient discharge records, and the 2007 HCUP State Emergency Department Databases (SEDD), a census of hospital-affiliated ED visits that did not result in hospitalization. ED visits that resulted in hospitalization are captured in the SID. We linked individuals across inpatient and ED settings of care using a synthetic person identifier that state data organizations had devised from each patient’s personal information. Synthetic person identifiers can be used to track patients across hospitals and settings while satisfying strict privacy guidelines. We included in the study the states with SID and SEDD data as well as reliable, encrypted person identifiers and race and ethnicity coding. The resulting data came from 9 states (Arizona, Florida, Massachusetts, Missouri, New Hampshire, New York, South Carolina, Tennessee, and Utah) comprising 797 EDs.
We included records of 111,973 patients aged 18 years and older who had an AMI index admission between February and December 2007. We identified AMI admissions by using Clinical Classifications Software (CCS), which groups International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes into clinically meaningful categories. We required patients to have a principal diagnosis of AMI as determined by the CCS code 100. To adhere closely to the aforementioned definition of “diagnostic error”, we excluded from the analysis patients who left the ED against medical advice (<1% of all AMI index admissions with a prior ED visit and 13% of AMI index admissions with a prior ED visit for cardiac symptoms). We also excluded patients with missing ZIP-Code-level income information (2.9% of AMI index admissions).
Hospitalization for AMI following an ED visit does not necessarily indicate that an opportunity for diagnosis or treatment was missed. A patient’s ED diagnosis could be unrelated to the subsequent AMI. To minimize this problem, we focus on patients who visited an ED with chest pain or cardiac conditions, were released from the ED, subsequently returned to a hospital within 0 to 7 days, and were admitted with a principal diagnosis of AMI. Patients were not counted in the category of missed diagnoses if, on their initial ED visit, they were admitted to the same hospital through the ED or transferred to another hospital. Consistent with previous studies [5–9] and to make a conservative attribution of the AMI to the earlier symptom, we applied a maximum cutoff of 7 days between the ED visit and an inpatient admission.
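In outline, the look-back linkage described above can be sketched as follows. This is a minimal illustration only: the record layouts and field names are hypothetical, not HCUP's actual variables, and the CCS codes shown for the four cardiac conditions are assumptions that should be verified against the CCS documentation.

```python
from datetime import date

# Assumed CCS codes for the four cardiac conditions (illustrative):
# 102 nonspecific chest pain, 101 coronary atherosclerosis,
# 108 congestive heart failure, 106 cardiac dysrhythmias.
CARDIAC_CCS = {101, 102, 106, 108}

def probable_missed_diagnoses(ami_admissions, ed_visits, window_days=7):
    """Flag AMI index admissions preceded, within `window_days` days, by a
    treat-and-release ED visit whose first-listed diagnosis was cardiac."""
    # Index treat-and-release ED visits (SEDD records) by synthetic person ID.
    visits_by_person = {}
    for v in ed_visits:
        visits_by_person.setdefault(v["person_id"], []).append(v)

    flagged = []
    for adm in ami_admissions:  # index admissions with principal AMI (CCS 100)
        for v in visits_by_person.get(adm["person_id"], []):
            gap = (adm["admit_date"] - v["visit_date"]).days
            if 0 <= gap <= window_days and v["first_listed_ccs"] in CARDIAC_CCS:
                flagged.append((adm["person_id"], gap))
                break  # count each index admission at most once
    return flagged
```

An ED visit outside the 7-day window, or with a non-cardiac first-listed diagnosis, leaves the admission unflagged, matching the conservative attribution described above.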
We based our list of cardiac conditions on those used in previous clinical studies and on preliminary examination of CCS codes. Using CCS codes for the first-listed diagnosis in the SEDD, we identified ED diagnoses preceding admission for AMI. Table 1 lists the ED diagnoses for patients with an ED visit in the 7 days prior to their AMI admission. About 70% of these patients were diagnosed with one of four cardiac conditions: (1) nonspecific chest pain (45.5%); (2) coronary atherosclerosis and other heart disease (15.4%); (3) congestive heart failure (5.4%); and (4) cardiac dysrhythmias (3.3%). Another 15% were diagnosed with other lower respiratory disease (8.9%) or abdominal pain (6.6%). All other conditions accounted for <3% of the missed diagnoses. Although some ED visits for lower respiratory disease, abdominal pain, and other diagnoses may represent atypical presentations of heart disease, we restricted our definition of missed diagnoses to ED visits for the four cardiac conditions.
As a validation check to ensure coherence between our misdiagnosis construct and the results we report, we assessed the temporal profile of missed diagnoses and controls within the 7-day window prior to index admission for AMI. We hypothesized that revisits for symptoms designated as probable missed diagnoses would be clustered in the days immediately following an ED visit. We expected revisits for designated control symptoms (i.e., other lower respiratory disease, abdominal pain, esophageal disorders, syncope, other gastrointestinal disorders, essential hypertension, malaise and fatigue, dizziness or vertigo, or gastritis) to be more evenly dispersed across the 7-day period.
We categorized patients by age, sex, race and ethnicity (non-Hispanic White, non-Hispanic Black, Hispanic, other non-Hispanic), and expected primary payer (private insurance, Medicare, Medicaid, other insurance, uninsured). We also classified patients by the national quartile of median household income based on the patient’s ZIP Code. We used Elixhauser Comorbidity Software to adjust for patient comorbidities.
We assigned hospital and ED characteristics based on the facility that initially treated the patient for AMI symptoms. For patients with an ED visit prior to the AMI admission, we used the characteristics of the ED that treated and discharged the patient. For patients without a previous ED visit, we used the characteristics of the admitting facility. Facility-specific characteristics included: (1) region (Northeast, Midwest, South, West); (2) population size (large metropolitan area, small metropolitan area, micropolitan area, and rural area); (3) hospital ownership (public, private not-for-profit, private for-profit); (4) the availability of a cardiac catheterization lab, as reported to the American Hospital Association; and (5) the facility’s teaching status (teaching or non-teaching). A teaching facility was defined as having an approved American Medical Association accredited graduate medical education program, being a member of the Council of Teaching Hospitals, or having a ratio of full-time-equivalent interns and residents to beds that was 0.25 or higher.
Facility volume characteristics
We characterized hospitals according to the volume of patients seen in the ED and inpatient settings. To assess volume, we divided facilities into tertiles based on: (1) total volume of ED visits during the year, (2) the proportion of patients admitted to an inpatient setting from the ED, and (3) the annual occupancy rate as reported to the American Hospital Association.
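A rank-based tertile assignment of the kind described above can be sketched as follows; the study does not specify how ties or uneven group sizes were handled, so this split by rank is an assumption.

```python
def tertiles(values):
    """Assign each facility's volume measure to tertile 1 (low) - 3 (high) by rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    for rank, i in enumerate(order):
        labels[i] = rank * 3 // len(values) + 1  # thirds of the ranked list
    return labels
```

For example, `tertiles([60, 10, 30, 20, 50, 40])` places the two smallest values in tertile 1, the middle two in tertile 2, and the two largest in tertile 3.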
We also accounted for visit-specific characteristics in the analyses. These included: (1) visit occurrence on a weekend or weekday; (2) visit occurrence during the first half (July–December) or second half (January–June) of the traditional resident training year; and (3) the relative ED volume on the day of the visit. We calculated relative ED volume as the total number of visits to the ED on the day of the patient’s visit, divided by the maximum number of visits to the ED on any day during the year.
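The relative ED volume calculation above amounts to normalizing each day's visit count by the ED's busiest day of the year; a minimal sketch, assuming visit dates are available per ED:

```python
from collections import Counter

def relative_daily_volume(visit_dates):
    """For one ED, map each calendar day to (visits that day) divided by
    (visits on the ED's busiest day of the year)."""
    daily = Counter(visit_dates)          # visits per day
    busiest = max(daily.values())         # maximum daily count in the year
    return {day: count / busiest for day, count in daily.items()}
```

The resulting value lies in (0, 1], with 1.0 indicating the ED's peak day.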
We examined rates of mortality for patients with and without a missed diagnosis of AMI based on the patient’s discharge disposition.
We performed statistical analyses using SAS statistical software Version 9.2 (SAS Institute, Inc., Cary, NC, USA). We used hierarchical multi-level modeling [21–23] to examine the likelihood of missed diagnoses, adjusting for patient and facility characteristics. We fit the simplest form of a hierarchical model using SAS PROC GLIMMIX, with patient visits nested within EDs. We also included hospital fixed effects to control for unobservable characteristics, such as ED technology, infrastructure, culture, personal biases, and other factors that could not be measured directly at each ED.
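The study fit this model in SAS PROC GLIMMIX. Purely to illustrate the fixed-effects idea, the sketch below fits a toy logistic regression by gradient ascent with one 0/1 indicator column per hospital; the column layout and data are invented, and this is not a substitute for GLIMMIX's mixed-model estimation.

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Toy logistic regression via stochastic gradient ascent. Including one
    indicator column per hospital in X plays the role of hospital fixed
    effects, absorbing unmeasured facility-level factors."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))        # predicted probability
            for j, xj in enumerate(xi):
                w[j] += lr * (yi - p) * xj        # log-likelihood gradient step
    return w

# Columns: intercept, a patient covariate, hospital-B indicator (all invented).
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1]]
y = [0, 1, 0, 1]  # 1 = probable missed diagnosis
w = fit_logistic(X, y)
```

In this toy data the covariate tracks the outcome, so its coefficient `w[1]` comes out positive, while the hospital indicator absorbs any between-hospital difference.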
Figure 1 displays the results of the temporal profile analysis of probable missed diagnoses and controls. As hypothesized, treat-and-release ED visits for missed diagnoses were clustered in the few days before the hospital admission for AMI. In contrast, visits for control symptoms were more evenly distributed throughout the 7-day period.
Table 2 displays the descriptive characteristics of the patients with missed diagnoses in the ED and patients with AMI and no ED visit for chest pain or a cardiac condition within the previous 7 days. We identified 993 patients (0.9% of all AMI admissions) with missed diagnoses. State-level estimates ranged from 0.29% to 1.96%. Compared to patients without a preceding ED visit for a cardiac condition, patients with missed diagnoses were younger; more likely to be Black; less likely to be Hispanic; more likely to have private insurance, Medicaid, or no insurance; more likely to reside in areas with the lowest household incomes; and less likely to die in the hospital. There were 15 significant differences in comorbidities between patients with and without missed diagnoses; of these, 13 comorbidities were more common in patients who did not have missed diagnoses. Also, most patients with missed diagnoses were seen at non-teaching hospitals and visited EDs without cardiac catheterization capabilities.
Table 3 displays the estimates, odds ratios (ORs), and probability values from the hierarchical model of missed diagnoses. Compared to younger patients, older patients had lower odds of missed diagnoses (aged 45–64 years, OR=0.70, p=0.001; 65–74 years, OR=0.61, p<0.001; 75+ years, OR=0.49, p<0.0001). Compared to White patients, Black patients (OR=1.31, p=0.025) and patients identified as other races or ethnicities (OR=1.45, p<0.005) had higher odds of missed diagnoses. The odds of missed diagnoses were 20% lower for patients covered by Medicare compared to those who were privately insured (OR=0.80, p=0.04). However, the associations between missed diagnoses and expected payers (other than Medicare), household income, and most comorbidity characteristics were not significant when other demographic and clinical conditions were controlled.
The odds of missed diagnoses also varied with facility and visit characteristics. Hospitals in the Midwest had more than twice the odds of missed diagnoses as hospitals in the Northeast (OR=2.17, p=0.0003). Compared to hospitals in large population centers, those in areas of between 10,000 and 50,000 residents (OR=1.97, p<0.0001) and those with <10,000 residents (OR=1.86, p=0.0002) demonstrated higher odds of missed diagnoses. The odds of missed diagnoses were about 80% lower for facilities with available cardiac catheterization laboratories (OR=0.19, p<0.0001). Teaching hospitals had lower odds of missed diagnoses compared to non-teaching hospitals (OR=0.60, p=0.0002). EDs that admit a higher proportion of patients to the hospital had about 85% lower odds of missed diagnoses (highest category, OR=0.15, p<0.0001) compared to hospitals with lower admissions from the ED. Hospitals with high occupancy rates (OR=0.63, p<0.0001) demonstrated lower odds than those with low occupancy. ED visits in January through June had lower odds of missed diagnoses than visits in July through December (OR=0.69, p<0.0001).
We retrospectively evaluated patients who presented to an ED with chest pain or a cardiac condition and for whom an AMI diagnosis was confirmed on an inpatient admission within 1 week – that is, those who had a probable missed diagnosis in the ED. Our study indicates an overall rate of 0.9% for missed diagnoses of AMI. This rate is lower than the 2% rate reported in recent decades [6–8], and it is considerably lower than earlier estimates of 3.8% in 6 U.S. hospitals and of 7.7% in an Israeli hospital. Differences could reflect progress in cardiac care over time or methodological differences in the studies. We used a conservative approach by including only a few symptoms as suspected misdiagnoses and leaving others (e.g., esophageal disorders, abdominal pain) uncounted. Some of these uncounted patients may have had missed AMI diagnoses. Also, our study could only count patients who subsequently were hospitalized for AMI within 7 days. Therefore, we would have missed patients who did not seek further medical care, sought care more than 7 days later, sought care in another state, or died of AMI at home. Inpatient mortality was lower among those with missed diagnoses, which could imply that patients with missed diagnoses presented with less extensive disease than those whose AMI was not missed. Although previous studies with prospective clinical data could track and identify AMIs in all patients, the total numbers of patients were limited relative to those included in the HCUP data. Thus, our findings are substantially more robust with regard to subgroup analyses of demographic and facility characteristics.
Younger patients and Black patients experienced higher odds of missed diagnoses than older and White patients, respectively. These findings are consistent with previous studies suggesting that patients’ demographic characteristics influence diagnosis and treatment. Black patients have a higher risk for coronary artery disease [26, 27], which should lead to a heightened awareness of cardiac symptoms. However, these patients are more likely to present with atypical symptoms, which may increase the odds of missed diagnoses. Young patients may receive a missed diagnosis because they are viewed as unlikely to have AMI – similar to missed diagnosis in young patients who have a stroke. These findings underscore the need for clinician education in care of patients with lower baseline prevalence of AMI and atypical presentations.
Hospitals in locales of fewer than 50,000 residents demonstrated higher odds of missed diagnoses than hospitals in larger population centers. Resources, including medical staff and modern technologies, are more limited in smaller areas, which could affect diagnostic accuracy. For example, hospitals with cardiac catheterization facilities demonstrated lower odds of missed diagnoses. In addition, missed diagnoses were less likely to occur in teaching hospitals. Teaching hospitals may have more ready access to cardiologists and diagnostic tests that support the accurate detection of AMI, and medical students and residents may encourage the application of evidence-based algorithms and clinical decision support software. It should also be noted that missed diagnoses happened less often from January through June (which corresponds with the second half of the traditional residency training year) than from July through December. Further investigation of the interaction between the timing of ED rotations for residents and missed diagnoses is warranted. Another explanation may be the preponderance of summer and winter holidays during July through December, which may attenuate ED effectiveness.
In our analyses, we did not find that EDs with high volumes missed fewer diagnoses, as previously reported. However, hospitals with higher proportions of admissions from the ED and higher occupancy were less likely to miss diagnoses. These findings may be tautological – fewer diagnoses are missed because more patients with chest pain are admitted from the ED to the inpatient setting. Alternatively, this may suggest that busy hospitals that are oriented toward emergency care, rather than elective and direct admissions from physician offices, are less likely to miss diagnoses in the ED.
Although diagnostic errors – including missed, incorrect, or delayed diagnoses – are common and can be devastating and costly for patients, they have received relatively little attention and study from the patient safety community [15, 29–31]. The rapid pace and short duration of observation may make patients in EDs particularly vulnerable to diagnostic error. Missed diagnoses in the ED often result in additional acute care services along with repeated testing, delays in appropriate treatment, increased mortality [32, 33], and more dollars recovered in malpractice suits than any other medical error [34–37].
Extensive previous research on missed diagnoses has focused on clinical reasoning and provider-related causal factors. Contributors to diagnostic error include gaps in provider knowledge or memory (e.g., the ability to recall a similar case), mistakes in judgment, fatigue, poor referral choices, incomplete history-taking, misinterpretation of laboratory tests, and provider overconfidence [36, 38–42]. Cognitive and system-related interventions to reduce the likelihood of diagnostic errors have also been examined.
Because of the complexity of recognizing and analyzing missed diagnoses, they are not monitored regularly, and new methods are needed to detect and track these events [15, 45]. It is probably not possible to eliminate all missed diagnoses; however, measuring and monitoring their incidence across EDs could help administrators identify ranges of acceptable missed diagnosis rates and target training for facilities with extreme rates. Knowledge about variation in missed diagnoses across populations should help health care teams identify patients who are most vulnerable to these events. Previous work showed that hospital administrative data can be used to identify missed diagnoses of stroke in EDs. The present study demonstrates that rates of missed diagnoses in the ED before an AMI may also be calculated using hospital data. This approach may prove to be generalizable across symptoms (at least for acute diseases), which could lead to missed diagnosis dashboards and prioritization of clinical symptoms, conditions, or populations at greatest risk.
Based on our findings, non-teaching hospitals without cardiac catheterization facilities that are in less-populated locales might benefit from tracking rates of missed diagnoses before AMI. If the results are high, these hospitals may want to review their diagnostic protocols, including standardized triage, serial electrocardiogram (ECG), and troponin testing. Tools to aid in ECG interpretation are known to be effective in improving evaluation and potentially reducing errors. Facilities without cardiologists may consider access to specialists through telemedicine links for help with diagnosis and treatment. In addition, efforts to reduce the number of patients leaving emergency departments against medical advice when presenting with cardiac symptoms may help improve diagnosis of AMI.
This study has limitations. First, we used a cross-sectional analysis, which determines associations but not causality. We also lacked granular clinical data, so some of our missed cases were likely coincidental. Second, administrative data cannot reveal clinical detail. We used ICD codes as a surrogate for presenting symptoms, which might lead to some miscoding or misclassification, although two thirds of the missed cases were coded as non-specific chest pain at the initial visit. Furthermore, potentially useful data such as ED triage level were not available in the study data, and the reporting of secondary diagnoses on ED records is limited. Third, a retrospective approach cannot account for patients who are lost to follow-up (including those who died after discharge from the ED), which would underestimate the number of potentially missed diagnoses. In addition, a hospital admission does not necessarily indicate that a correct diagnosis was made. Some patients who had AMI diagnoses that were missed in the ED might have been admitted for a wrong diagnosis, which would underestimate our rate of misses in the ED. Conversely, planned admissions for patients returning to the hospital in the event of intensifying symptoms would overestimate missed diagnoses in this study. Fourth, we focus on available hospital and ED measures; staffing and workflow measures would provide a more complete analysis. Nevertheless, the fixed effects employed in this model adjust for unobservable factors related to hospital environment, culture, and personnel. Finally, this work is based on data from 9 states and is not generalizable to all states. Despite these limitations, this study remains the largest on missed diagnoses associated with AMI to date, including nearly 112,000 patients and 797 hospitals.
Missed diagnoses are complex, costly, underemphasized, and difficult to detect. There is a need to increase recognition of these events. Our results reinforce previous findings that AMI diagnosis is missed in a small percentage of visits to EDs when the patient has cardiac symptoms. The results suggest that the likelihood of missed diagnoses is related to both patient and ED factors, especially the patient’s age and race and the facility’s resources for detecting AMI.
We also demonstrate that it is feasible to assess missed diagnoses using linked ED and inpatient administrative data. This method allows examination of missed diagnoses across a broader spectrum of facilities and geographic areas than previous methods, which rely on more costly methods to abstract clinical information. Findings from this study support a systemic approach across inpatient and ED settings to understand where missed diagnoses occur. Future work should examine the roles of staffing and workflow on missed diagnoses in the ED and study trends in missed diagnoses over time as new diagnostic technologies are developed and adopted.
The analysis included data from the following Healthcare Cost and Utilization Project Partner organizations: Arizona Department of Health Services, Florida Agency for Health Care Administration, Massachusetts Division of Health Care Finance and Policy, Missouri Hospital Industry Data Institute, New Hampshire Department of Health & Human Services, New York State Department of Health, South Carolina State Budget & Control Board, Tennessee Hospital Association, and Utah Department of Health. We would also like to acknowledge Arpit Misra and Minya Sheng for their technical support on the study and Linda Lee, PhD, for editorial review.
National Heart, Lung and Blood Institute. What is a heart attack? Available at: http://www.nhlbi.nih.gov/health/dci/Diseases/HeartAttack/HeartAttack_WhatIs.html. Accessed May 26, 2014.
Stranges E, Kowlessar N, Elixhauser A. Components of growth in inpatient hospital costs, 1997–2009. HCUP Statistical Brief #123. Rockville, MD: Agency for Healthcare Research and Quality, 2011, Available at: http://www.hcup-us.ahrq.gov/reports/statbriefs/sb123.pdf. Accessed June 25, 2014.
Krumholz HM, Wang Y, Chen J, Drye EE, Spertus JA, Ross JS, et al. Reduction in acute myocardial infarction mortality in the United States. J Am Med Assoc 2009;302:767–73.
Hines A, Stranges E, Andrews RM. Trends in hospital risk-adjusted mortality for select diagnoses by patient subgroups, 2000–2007. HCUP Statistical Brief #98. Rockville, MD: Agency for Healthcare Research and Quality, 2010. Available at: http://www.hcup-us.ahrq.gov/reports/statbriefs/sb98.pdf. Accessed June 4, 2014.
Lee TH, Rouan GW, Weisberg MC, Brand DA, Acampora D, Stasiulewicz C, et al. Clinical characteristics and natural history of patients with acute myocardial infarction sent home from the emergency department. Am J Cardiol 1987;60:219–24.
McCarthy BD, Beshansky JR, D’Agostino RB, Selker HP. Missed diagnoses of acute myocardial infarction in the emergency department: results from a multicenter study. Ann Emerg Med 1993;22:579–82.
Pope HJ, Aufderheide TP, Ruthazer R, Woolard RH, Feldman JA, Beshansky JR, et al. Missed diagnoses of acute cardiac ischemia in the emergency department. N Engl J Med 2000;342:1163–70.
Schull MJ, Vermeulen MJ, Stukel TA. The risk of missed diagnosis of acute myocardial infarction associated with emergency department volume. Ann Emerg Med 2006;48:647–55.
Goldman L, Kirtane AJ. Triage of patients with acute chest pain and possible cardiac ischemia: the elusive search for diagnostic perfection. Ann Intern Med 2003;139:987–95.
Maynard C, Beshansky JR, Griffith JL, Selker HP. Causes of chest pain and symptoms suggestive of acute cardiac ischemia in African-American patients presenting to the emergency department: a multicenter study. J Natl Med Assoc 1997;89:665–71.
Singh H, Meyer AN, Thomas EJ. The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations. Brit Med J Qual Saf 2014. doi: 10.1136/bmjqs-2013-002627. [Epub ahead of print].
Newman-Toker DE, Moy E, Valente E, Coffey R, Hines AL. Missed diagnosis of stroke in the emergency department: a cross-sectional analysis of a large population-based sample. Diagnosis 2014;1:155–66.
HCUP Nationwide Databases. Healthcare Cost and Utilization Project (HCUP), 2006–2009. Rockville, MD: Agency for Healthcare Research and Quality. Available at: www.hcup-us.ahrq.gov/databases.jsp. Accessed June 25, 2014.
Graber M. Diagnostic errors in medicine: a case of neglect. Jt Com J Qual Patient Saf 2005;31:106–13.
HCUP State Inpatient Databases (SID). Healthcare Cost and Utilization Project (HCUP), 2005–2009. Rockville, MD: Agency for Healthcare Research and Quality. Available at: www.hcup-us.ahrq.gov/sidoverview.jsp. Accessed June 25, 2014.
HCUP State Emergency Department Databases (SEDD). Healthcare Cost and Utilization Project (HCUP), 2009. Rockville, MD: Agency for Healthcare Research and Quality. Available at: www.hcup-us.ahrq.gov/seddoverview.jsp. Accessed June 25, 2014.
HCUP Supplemental Variables for Revisit Analyses. Healthcare Cost and Utilization Project (HCUP), 2009. Rockville, MD: Agency for Healthcare Research and Quality. Available at: http://www.hcup-us.ahrq.gov/toolssoftware/revisit/revisit.jsp. Accessed June 25, 2014.
HCUP Clinical Classifications Software (CCS) for ICD-9-CM. Healthcare Cost and Utilization Project (HCUP), 2009. Rockville, MD: Agency for Healthcare Research and Quality. Available at: www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp. Accessed June 25, 2014.
Rasbash J, Steele F, Browne WJ, Goldstein H. A user’s guide to MLwiN, Version 2.0. Bristol, UK: Centre for Multilevel Modeling, University of Bristol, 2009. Available at: http://www.cmm.bristol.ac.uk/MLwiN/download/userman_2005.pdf. Accessed April 14, 2014.
Raudenbush SW, Bryk AS. Hierarchical linear models. Thousand Oaks, CA: Sage Publications, 2002.
Schor S, Behar S, Modan B, Barell V, Drory J, Kariv I. Disposition of presumed coronary patients from an emergency room: a follow up study. J Am Med Assoc 1976;236:941–3.
Schulman KA, Berlin JA, Harless W, Kerner JF, Sistrunk S, Gersh BJ, et al. The effect of race and sex on physicians’ recommendations for cardiac catheterization. N Engl J Med 1999;340:618–26.
Altman DE, Clancy C, Blendon RJ. Improving patient safety: five years after IOM report. N Engl J Med 2004;351:2041–3.
Collinson PO, Premachandram S, Hashemi K. Prospective audit of incidence of prognostically important myocardial damage in patients discharged from emergency department: commentary: time for improved diagnosis and management of patients presenting with acute chest pain. Brit Med J 2000;320:1702–5.
Sequist TD, Bates DW, Cook EF, Lampert S, Schaefer M, Wright J, et al. Prediction of missed myocardial infarction among symptomatic outpatients without coronary heart disease. Am Heart J 2005;149:74–81.
Herren KR, Mackway-Jones K. Emergency management of cardiac chest pain: a review. Emerg Med J 2001;18:6–10.
Vukmir RB. Medical malpractice: managing the risk. Med Law 2004;23:495–513.
White AA, Wright SW, Blanco R, Lemonds B, Sisco J, Bledsoe S, et al. Cause-and-effect analysis of risk management files to assess patient care in the emergency department. Acad Emerg Med 2004;11:1035–41.
Saber Tehrani AS, Lee H, Mathews SC, Shore A, Makary MA, Pronovost PJ, et al. 25-Year summary of US malpractice claims for diagnostic errors 1986–2010: an analysis from the National Practitioner Data Bank. Brit Med J Qual Saf 2013;22:672–80.
Croskerry P. The cognitive imperative: thinking about how we think. Acad Emerg Med 2000;7:1223–31.
Wears RL, Perry SJ. Human factors and ergonomics in the emergency department. Ann Emerg Med 2002;40:206–12.
Graber ML, Kissam S, Payne VL, Meyer AN, Sorensen A, Lenfestey N, et al. Cognitive interventions to reduce diagnostic error: a narrative review. Brit Med J Qual Saf 2012;21:535–57.
Singh H, Graber ML, Kissam SM, Sorensen AV, Lenfestey NF, Tant EM, et al. System-related interventions to reduce diagnostic errors: a narrative review. Brit Med J Qual Saf 2012;21:160–70.
Schiff GD, Kim S, Abrams R, Cosby K, Lambert B, Elstein AS, et al. Diagnosing diagnosis errors: lessons from a multi-institutional collaborative project. Adv Patient Safety 2005;2:255–78.
Graber M, Gordon R, Franklin N. Reducing diagnostic errors in medicine: what’s the goal? Acad Med 2002;77:981–92.
Croskerry P. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Acad Emerg Med 2002;9:1184–204.
Selker HP, Beshansky JR, Griffith JL, Aufderheide TP, Ballin DS, Bernard SA, et al. Use of the Acute Cardiac Ischemia Time-Insensitive Predictive Instrument (ACI-TIPI) to assist with triage of patients with chest pain or other symptoms suggestive of acute cardiac ischemia: a multicenter, controlled clinical trial. Ann Intern Med 1998;129:845–55.
Terkelsen CJ, Norgaard BL, Lassen JF, Gerdes JC, Ankersen JP, Romer F, et al. Telemedicine used for remote prehospital diagnosing in patients suspected of acute myocardial infarction. J Intern Med 2002;252:412–20.
About the article
Published Online: 2014-12-06
Published in Print: 2015-02-01
Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission. Ernest Moy: Led study design and analysis from conception through completion; contributed to all major revisions; and approved final manuscript. Marguerite Barrett: Contributed to study design, analysis, and interpretation; led data preparation; revised methods section; and approved final manuscript. She had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Rosanna Coffey: Contributed to study design and analysis; oversaw drafts and interpreted results; contributed to all major revisions; and approved final manuscript. Anika Hines: Interpreted results; drafted and revised the manuscript; and approved final manuscript. David Newman-Toker: Provided review and clinical context; revised manuscript; and approved final manuscript.
Research funding: The analysis and preparation of this manuscript were funded by the Agency for Healthcare Research and Quality.
Employment or leadership: The authors from Truven Health Analytics were under contract to the Agency for Healthcare Research and Quality (Contract No. HHSA-290-2013-00002-C).
Honorarium: None declared.
Competing interests: The funding organization(s) played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.
Important disclaimers: The views expressed in this article are those of the authors and do not necessarily reflect those of the Agency for Healthcare Research and Quality or the U.S. Department of Health and Human Services.
Citation Information: Diagnosis, Volume 2, Issue 1, Pages 29–40, ISSN (Online) 2194-802X, ISSN (Print) 2194-8011, DOI: https://doi.org/10.1515/dx-2014-0053.
©2014, Anika L. Hines et al., published by De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License. BY-NC-ND 3.0