In laboratory medicine, consultation by adding interpretative comments to reports has long been recognized as an activity that helps to improve patient treatment outcomes and to strengthen the position of our profession. Interpretation and understanding of laboratory test results can in some cases be considerably enhanced by adding tests when the laboratory specialist considers this appropriate – an activity that has been named reflective testing. With patient material still available at this stage, reflective testing can markedly improve diagnostic efficiency. The need for and value of these forms of consultation have been demonstrated by a diversity of studies; both general practitioners and medical specialists have been shown to value interpretative comments. Other forms of consultation are emerging: in this era of patient empowerment and shared decision making, reporting of laboratory results directly to patients will become common. Patients generally have little understanding of these results, and consultation of patients could add a new dimension to the service of the laboratory. These developments have been recognized by the European Federation of Clinical Chemistry and Laboratory Medicine, which has established the working group on Patient Focused Laboratory Medicine to work on the matter. Providing proper interpretative comments is, however, labor intensive, and harmonization is necessary to maintain consistent quality between individual specialists. In present-day high-volume laboratories, there are few options for generating high-quality, patient-specific comments for all relevant results without overwhelming the laboratory specialists. Automation and the application of expert systems could be a solution, and systems have been developed that could ease this task.
The profession of laboratory medicine differs in many respects between countries within the European Union (EU). The objective of professional organizations to promote mutual recognition of specialists within the EU is closely related to the free movement of people. This policy translates into equivalence of standards and harmonization of the training curriculum. The aim of the present study is to describe the organization and practice of laboratory medicine within the countries that constitute the EU. A questionnaire covering many aspects of the profession was sent to delegates of the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) and the Union Européenne des Médecins Spécialistes (UEMS) of the 28 EU countries. Results were sent to the delegates for confirmation. Many differences between countries were identified: predominantly medical or scientific professionals; a broad or limited professional field of interest; inclusion or exclusion of patient treatment; formal or absent recognition; a regulated or absent formal training program; general or minor application of a quality system based on ISO standards. The harmonization of the postgraduate training of both clinical chemists and laboratory physicians has been a goal for many years. Differences in the organization of the laboratory professions still exist between the respective countries, each of which has a long historical development with its own rationale. Harmonizing our profession is an important challenge, and difficult choices will need to be made. Recent developments with respect to the directive on Recognition of Professional Qualifications call for new initiatives to harmonize laboratory medicine both across national borders and across the borders between the scientific and medical professions.
The first strategic EFLM conference, “Defining analytical performance goals, 15 years after the Stockholm Conference”, was held in the autumn of 2014 in Milan. It maintained the Stockholm 1999 hierarchy of performance goals but rearranged it, and it established five task and finish groups to work on topics related to analytical performance goals, including one on the “total error” theory. Jim Westgard recently wrote a comprehensive overview of performance goals and of the total error theory that is critical of the results and intentions of the Milan 2014 conference. The “total error” theory, originated by Jim Westgard and co-workers, has a dominating influence on the theory and practice of clinical chemistry but is not accepted in other fields of metrology. The generally accepted uncertainty theory, however, suffers from complex mathematics and perceived impracticability in clinical chemistry. The pros and cons of the total error theory need to be debated, making way for methods that can incorporate all relevant causes of uncertainty when making medical diagnoses and monitoring treatment effects. This development should preferably proceed not as a revolution but as an evolution.
Appropriate quality of test results is fundamental to the work of the medical laboratory. How to define the level of quality needed is a question that has been subject to much debate. Quality specifications have been defined based on criteria derived from clinical applicability, the validity of reference limits and reference change values, state-of-the-art performance, and other criteria, depending on the clinical application or technical characteristics of the measurement. Quality specifications are often expressed as the total error allowable (TEA) – the total amount of error that is medically, administratively, or legally acceptable. Following the TEA concept, bias and imprecision are combined into one number representing the “maximum allowable” error in the result. The commonly accepted method for calculating the allowable error based on biological variation might, however, have room for improvement. In the present paper, we discuss common theories on the determination of quality specifications. A model is presented that combines the state-of-the-art with biological variation for the calculation of performance specifications. The validity of reference limits and reference change values is central to this model. The model applies to almost any test for which biological variation can be defined. A pragmatic method for the design of internal quality control is presented.
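The conventional biological-variation approach that this paper revisits can be sketched numerically. The formulas below are the widely used Fraser-style “desirable” specifications and the standard reference change value (RCV), not the modified model proposed in the paper; the serum creatinine biological-variation figures are illustrative assumptions only.

```python
from math import sqrt

def performance_specs(cv_i, cv_g, z=1.65):
    """Conventional desirable specifications from biological variation.
    cv_i: within-subject biological CV (%); cv_g: between-subject CV (%)."""
    cv_a_max = 0.5 * cv_i                      # desirable imprecision
    bias_max = 0.25 * sqrt(cv_i**2 + cv_g**2)  # desirable bias
    tea = bias_max + z * cv_a_max              # total error allowable (TEA)
    return cv_a_max, bias_max, tea

def rcv(cv_a, cv_i, z=1.96):
    """Reference change value: smallest difference between two serial
    results that is significant at the given two-sided z (95% here)."""
    return sqrt(2) * z * sqrt(cv_a**2 + cv_i**2)

# Illustrative (assumed) biological variation for serum creatinine
cv_a_max, bias_max, tea = performance_specs(cv_i=5.9, cv_g=14.7)
change = rcv(cv_a=cv_a_max, cv_i=5.9)
```

With these assumed inputs, desirable imprecision is about 3%, desirable bias about 4%, TEA about 8.8%, and two serial results must differ by roughly 18% before the change exceeds combined analytical and within-subject variation.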
Background: Systematic reviews have gradually replaced single studies as the highest level of documented effectiveness of health care interventions. Systematic reviewing is a new scientific method, concerned with the development and application of methods for identifying relevant literature, analysing the material while increasing validity and precision, and presenting and discussing the results in a way that does justice to the research question and to the available evidence. The objective of this study was to review the systematic reviews in laboratory medicine, to evaluate the methods applied in these reviews and the applicability of the guidelines of the Cochrane Methods Working Group on Screening and Diagnostic Tests, and to identify areas for future research.
Methods: All systematic reviews in the field of clinical chemistry and laboratory haematology that could be identified in Medline, EMBASE, and other literature databases up to December 1998 were evaluated.
Results: We studied 23 reviews of diagnostic trials. Although all reviews share the same basic methodology, there was wide variation in the methods applied. There was no consensus on the quality criteria for inclusion of primary studies. The results of the primary studies were heterogeneous in most cases. This was partly due to design flaws in the primary studies, but was also inherent to the diversity of study designs used in diagnostic trials. We observed differences in the analysis of the factors that cause heterogeneity of the results, and in the summary statistics used to pool the data from the primary studies. The additional diagnostic value of a test, after other test results are taken into consideration, was addressed in only one study.
Conclusion: This overview of 23 reviews of diagnostic trials identifies areas in the methods of systematic reviewing where consensus is lacking, such as quality rating of primary studies, analysis of heterogeneity between primary studies and pooling of data. Guidelines need to be improved on these points.
Reflective testing is a procedure in which the laboratory specialist adds additional tests and/or comments to an original request, after inspection of (reflection on) the results. It can be considered an extension of the authorization process in which laboratory tests are inspected before being reported to the physician. The laboratory specialist will inevitably find inconclusive results, and additional testing can contribute to making the appropriate diagnosis. Several studies have been published on the effects of reflective testing. Some studies focus on the opinion of general practitioners or other clinicians, whereas others were intended to determine the patient’s perspective. Overall, reflective testing was judged a useful way to improve the process of diagnosing (and treating) patients. To date, there is scarce high-quality scientific evidence of the effectiveness of this procedure in terms of patient management. A randomized clinical trial investigating this aspect is, however, ongoing. The cost-effectiveness of reflective testing still needs to be determined. In conclusion, reflective testing can be seen as a new dimension in the service of the clinical chemistry laboratory to primary health care. Additional research is needed to deliver the scientific proof of the effectiveness of reflective testing for patient management.
Error methods – compared with uncertainty methods – offer simpler, more intuitive, and more practical procedures for calculating measurement uncertainty and conducting quality assurance in laboratory medicine. However, uncertainty methods are preferred in other fields of science, as reflected by the Guide to the Expression of Uncertainty in Measurement (GUM). When laboratory results are used to support medical diagnoses, the total uncertainty consists only partially of analytical variation; biological variation and pre- and postanalytical variation all need to be included. Furthermore, all components of the measuring procedure need to be taken into account. Performance specifications for diagnostic tests should include the diagnostic uncertainty of the entire testing process. Uncertainty methods may be particularly useful for this purpose but have yet to show their strength in laboratory medicine. The purpose of this paper is to elucidate the pros and cons of error and uncertainty methods as groundwork for future consensus on their use in practical performance specifications. Error and uncertainty methods are complementary when evaluating measurement data.
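The two approaches contrasted here differ mainly in how components are combined: the total error model adds bias linearly to a multiple of the imprecision, whereas the GUM combines standard uncertainties in quadrature and expands the result by a coverage factor. A minimal sketch of both combination rules, with all numeric values assumed for illustration only:

```python
from math import sqrt

def total_error(bias, sd, z=1.96):
    """Westgard-style total error: |bias| plus z times the analytical SD."""
    return abs(bias) + z * sd

def expanded_uncertainty(components, k=2):
    """GUM-style combination: root-sum-of-squares of standard
    uncertainty components, expanded by coverage factor k."""
    return k * sqrt(sum(u**2 for u in components))

# Illustrative glucose figures in mmol/L (assumed, not from the paper)
analytical_sd = 0.10
bias = 0.05
te = total_error(bias, analytical_sd)
# The uncertainty view can fold in further components of the testing
# process, e.g. an assumed preanalytical standard uncertainty of 0.03
u_exp = expanded_uncertainty([analytical_sd, 0.03])
```

Note the structural difference: adding a component to the uncertainty budget increases the quadrature sum, so small components contribute little, whereas the linear total error model lets bias contribute at full weight.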
Clinical practice guidelines (CPG) are written with the aim of collating the most up-to-date information into a single document that will aid clinicians in providing best practice for their patients. There is evidence to suggest that clinicians who adhere to CPG deliver better outcomes for their patients. Why, therefore, are clinicians so poor at adhering to CPG? The main barriers include awareness, familiarity, and agreement with the contents. Secondly, clinicians must feel that they have the skills and are therefore able to deliver on the CPG. Clinicians also need to be able to overcome the inertia of “normal practice” and understand the need for change. Thirdly, the goals of clinicians and patients are not always the same as each other’s (or as the guidelines’). Finally, there is a multitude of external barriers, including equipment, space, educational materials, time, staff, and financial resources. In view of the considerable energy that has been invested in guidelines, there has been extensive research into their uptake. Laboratory medicine specialists are not immune from these barriers. Most CPG that include laboratory tests do not have sufficient detail for laboratories to provide any added value. However, where appropriate recommendations are made, it appears that laboratory specialists express the same difficulties in compliance as front-line clinicians.