Achieving harmonised results for our clinical users has become a high priority in the laboratory medicine community. There are a number of reasons for this, but foremost is that patients receiving results from different laboratories should be getting similar results to facilitate optimal care by their healthcare provider. The healthcare community has entrusted laboratory professionals with this task. We have numerous tools to help us achieve this objective, and certainly for those analytes that can be measured with defined reference methods, we should be able to achieve harmonisation within acceptable precision targets. The recent consensus statements on setting these precision targets have reinforced the criteria we should be striving for. However, from a practical perspective, most of us will be applying biological variability-derived targets in our clinical laboratories rather than one of the two other models.
One tool for evaluating bias in our laboratory tests is participation in external quality assessment/proficiency testing (EQA/PT) programs. Most of these programs are not designed to truly assess bias, as many programs, and in fact many analytes, lack reference measurement procedures (RMPs). Nonetheless, these EQA programs do help us build a degree of confidence in our testing process and do play an important role in providing optimal laboratory results. Recently, it has been proposed that EQA/PT schemes that offer assigned targets (established by RMPs) and that use commutable samples repeated in different cycles be graded as level 1 EQA/PT schemes.
In this issue of the Journal, Weykamp and colleagues evaluate the use of such samples (i.e. commutable and with reference targets) from a level 1 EQA/PT program across five countries through national EQA/PT schemes. The authors assessed 17 general chemistry analytes, all of which have RMPs. They applied biological variation-derived total error (TE) targets and rated performance according to the categories of optimum, desirable, minimum and failed (not meeting minimum).
The study demonstrates that these samples from a level 1 EQA/PT program can be used successfully in five countries, analysed on seven different and frequently used manufacturer platforms. The data were analysed both by country of analysis and by instrument platform. The finding that there were no significant differences between the country of analysis and the instrument platform for each analyte confirms that the sample used was commutable. This is an important contribution to our confidence in using such a material for EQA/PT schemes and should encourage its wider adoption by EQA/PT providers, as this will improve the ability of laboratory tests to become harmonised.
By choosing to evaluate the data against an RMP target, the authors have also been able to provide a snapshot of how well we are doing in our clinical laboratories against biological variability targets. Here, things are not looking so good. Only 11 of 17 assays met or exceeded the minimum level of performance. Of the elements analysed (Na, Cl, Ca, Mg, K), only K achieved a TE below the minimum percentage of total allowable error (% TAE) across all manufacturers and countries. One may take this finding as further proof that the manufacturers of our assays and reference materials need to continue to develop more rigorous technology to match the accuracy required for clinical use. Alternatively, one could argue that these failures were perhaps due to an overly stringent % TAE criterion for these analytes. For example, the minimum % TAE is largest for K at 8.4% and smallest for Na at 1.1%. Applying these criteria to the equation TE = bias (B) + 2CV (coefficient of variation), if B were set to 0, the CV for Na would have to be <0.6% to meet the minimum, while under the same assumption the CV for K would only have to be <4.2%. In practice, a Na CV of 0.6% would be considered quite acceptable, whereas a K CV of 4.2% would not. Looking across the six analytes that failed to meet the minimum specification, five of the six had a minimum % TAE <7.2%, with only one enzyme failing to meet the minimum % TAE. Thus, the higher the % TAE based on biological variability, the more equivalent assays appear to be within the allowable percent difference from the target values. In either case, these findings should be of great interest to the manufacturers, as greater attention to the commutability of their reference materials and standards, as well as tighter control of reagent and calibrator lot variability through the production process, should be encouraged.
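The arithmetic behind this argument can be made explicit with a short sketch. Rearranging the quoted model TE = B + 2CV for the zero-bias case gives the maximum CV an assay may run at while still meeting a given % TAE; the Na and K figures below are the ones cited above, and the function name is illustrative rather than from the paper.

```python
def max_allowable_cv(tae_pct: float, bias_pct: float = 0.0) -> float:
    """Largest CV (%) satisfying TE = bias + 2*CV <= TAE, per the model quoted in the text."""
    return (tae_pct - bias_pct) / 2.0

# Minimum %TAE values for the editorial's two examples, with bias set to 0:
na_cv = max_allowable_cv(1.1)  # Na: 0.55%, i.e. the CV must be below ~0.6%
k_cv = max_allowable_cv(8.4)   # K: 4.2%

print(f"Na max CV: {na_cv:.2f}%")  # a very demanding imprecision target
print(f"K max CV: {k_cv:.2f}%")    # easily achievable on modern analysers
```

The asymmetry is the point: an analyte with a tight biological variation-derived % TAE (Na) demands imprecision beyond what routine methods deliver, while a generous % TAE (K) is met comfortably.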
Where this becomes impossible, we as a community of laboratory professionals may need to consider how we define our allowable error. If achieving biological variability targets is not possible even with excellent precision, we may need to change our mindset from the biological variation model to the “state-of-the-art” model. These state-of-the-art criteria may well be defined after consideration of data produced by samples such as the one circulated by Weykamp et al. Defining allowable error based on multiple instrument platforms across multiple countries and a number of samples may provide a more realistic picture of what is actually achievable in laboratory medicine.
Plebani M. Harmonization in laboratory medicine: requests, samples, measurements and reports. Crit Rev Clin Lab Sci 2016;53:184–96.
Miller WG, Eckfeldt JH, Passarelli J, Rosner W, Young IS. Harmonization of test results: what are the challenges; how can we make it better? Clin Chem 2013;60:923–7.
Sandberg S, Fraser CG, Horvath AR, Jansen R, Jones G, Oosterhuis W, et al. Defining analytical performance specifications: consensus statement from the 1st Strategic Conference of the European Federation of Clinical Chemistry and Laboratory Medicine. Clin Chem Lab Med 2015;53:833–5.
Miller WG, Jones GR, Horowitz GL, Weykamp C. Proficiency testing/external quality assessment: current challenges and future directions. Clin Chem 2011;57:1670–80.
Weykamp C, Secchiero S, Plebani M, Thelen M, Cobbaert C, Thomas A, et al. Analytical performance of 17 general chemistry analytes across countries and across manufacturers in the INPUtS project of EQA organizers in Italy, the Netherlands, Portugal, United Kingdom and Spain. Clin Chem Lab Med 2016. [Epub ahead of print].
Fraser CG. Biological variation: from principles to practice. Washington, DC: AACC Press, 2001:151.
Miller WG, Myers GL. Commutability still matters. Clin Chem 2013;59:1291–3.
About the article
Published Online: 2016-07-26
Published in Print: 2017-02-01