List of abbreviations: CL, central laboratories; POCT, point-of-care test; EQA, external quality assessment; GO, glucose oxidase; GDH, glucose dehydrogenase; HK, hexokinase; Rili-BAEK, Guideline of the German Medical Association on Quality Assurance in Medical Laboratory; RfB, Reference Institute for Bioanalytics; Instand, INSTAND Gesellschaft zur Förderung der Qualitätssicherung in medizinischen Laboratorien e.V.; CL-RfB, proficiency test RfB KS (“Clinical chemical analytes in serum – wet chemistry”); CL-Instand, proficiency test Instand 100 (“Clinical Chemistry – Wet Chemistry”); POCT-RfB, proficiency test RfB GL (“Glucose [wet and dry chemistry]”); POCT-Instand, proficiency test Instand 800 (“Dry Chemistry 01 – POCT: Glucose”); OR, odds ratio; CV, coefficient of variation; SEG, surveillance error grid; TGC, tight glycemic control; mCV, weighted mean of the coefficients of variation.
The diagnosis and treatment of diabetes mellitus rely heavily on blood glucose concentration measurements. With a worldwide prevalence of diabetes mellitus close to 7%, glucose measurements are among the most frequently performed analyses in medical laboratories. The advent of portable glucometers represents a breakthrough in the monitoring of diabetes patients. Glucometers were among the first point-of-care testing (POCT) instruments and are of major medical and economic importance.
Most analytical methods use one of three enzymatic reactions to quantify glucose: glucose oxidase (GO), glucose dehydrogenase (GDH) or hexokinase/glucose-6-phosphate dehydrogenase (HK). Enzymatic activity produces an electrical current or a color change proportional to the glucose concentration. Isotope dilution gas chromatography mass spectrometry serves as the higher-order reference procedure in reference laboratories, whereas the hexokinase method is widely accepted for routine calibration and accuracy evaluation. Analytical performance goals have been derived from expert opinions or through computer simulations. In central laboratories (CL), glycolysis is a major source of error. Centrifuging samples with minimal delay or adding sodium fluoride and a citrate buffer to the sample prevents glycolysis. Portable glucometers measure glucose in capillary blood immediately and close to the patient. However, their compact instrument design and the use of the less specific enzymes GO or GDH result in a greater susceptibility to interferences. While specialized training can usually be assumed for operators in CLs, operator qualification varies for POCT measurements and represents a major medical risk.
External quality assessments (EQAs) are crucial to ensure continuously high quality in medical laboratories worldwide. In Germany, the “Guideline of the German Medical Association on Quality Assurance in Medical Laboratory” (Rili-BAEK) requires medical laboratories performing glucose testing to pass a glucose EQA at least twice a year. Only two organizations, “INSTAND Gesellschaft zur Förderung der Qualitätssicherung in medizinischen Laboratorien e.V.” (Instand) and the “Reference Institute for Bioanalytics” (RfB), are licensed to offer glucose EQAs in Germany. Results from these organizations therefore allow a comprehensive overview of glucose measurements in Germany. Especially for POCT EQAs, a lack of commutability of the applied samples often impedes interpretation. The reports of the EQA organizers nevertheless provide valuable information to participants, making EQA an important educational tool. On a higher level, data from EQAs can form the basis for improvements in laboratory medicine as a whole.
In this study, we analyzed glucose EQAs of Instand and RfB to identify factors affecting participant performance. We estimated the imprecision and bias of glucose measurements and assessed clinical risks of inaccurate measurements.
Materials and methods
Glucose data for the years 2012–2016 were obtained from the EQA schemes RfB KS, “Clinical chemical analytes in serum – wet chemistry” (CL-RfB) and Instand 100, “Clinical Chemistry – Wet Chemistry” (CL-Instand). These schemes were designed for automated analyzers in CL where glucose measurements were part of larger panels. In addition, measurements from RfB GL, “Glucose (wet and dry chemistry)” (POCT-RfB), and Instand 800, “Dry Chemistry 01 – POCT: Glucose” (POCT-Instand), were acquired. All data were reformatted to a common structure and analyzed using the R statistical software version 3.4.0. Wherever possible, devices were matched to a common denomination across both organizations. All other devices were subsumed into the “others” device category. Full code is available at https://github.com/acnb/Glucose-EQA. Data are released under https://doi.org/10.17605/OSF.IO/F3CD5.
Two liquid plasma samples (POCT-Instand) or two lyophilized serum samples (CL-RfB, CL-Instand, POCT-RfB) were used in each glucose EQA distribution (Supplemental Methods).
We classified deviations of measurements from the target value as follows. In line with the analytical performance specifications stipulated in the German Rili-BAEK, deviations >15% from the target value were classified as “failed” participation in the EQA distribution (which also resulted in the denial of the EQA certificate). In line with Bukve et al., participation outcomes other than “failed” were classified as “poor” if a result exceeded the target interval (target value±2 mg/dL [0.1 mmol/L]) by >10%, as “acceptable” for deviations of 5%–10% from the target interval, and as “good” for deviations ≤5%. The sample with the largest deviation from the target interval was used for the final classification unless the EQA organization retracted one sample (e.g. due to insufficient stability).
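The grading rules above can be sketched in code. The following is a minimal Python illustration, not the authors’ published R implementation; the function names and the exact handling of the target-interval tolerance are assumptions.

```python
def grade_result(measured, target, tolerance=2.0):
    """Grade one EQA result (thresholds as described: Rili-BAEK / Bukve et al.)."""
    # Deviation >15% from the target value itself fails the distribution.
    if abs(measured - target) / target > 0.15:
        return "failed"
    # Otherwise grade by the deviation from the nearest edge of the
    # target interval (target +/- tolerance), relative to the target value.
    lo, hi = target - tolerance, target + tolerance
    if lo <= measured <= hi:
        return "good"  # inside the target interval
    edge = lo if measured < lo else hi
    dev = abs(measured - edge) / target
    if dev > 0.10:
        return "poor"
    if dev > 0.05:
        return "acceptable"
    return "good"

def grade_distribution(results, targets):
    """The worst grade over all (non-retracted) samples decides the outcome."""
    order = ["good", "acceptable", "poor", "failed"]
    grades = [grade_result(m, t) for m, t in zip(results, targets)]
    return max(grades, key=order.index)
```

For example, a result of 113 mg/dL against a 100 mg/dL target is not “failed” (13% < 15%) but exceeds the target interval by >10% and is therefore “poor”.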
Odds ratios (ORs) were calculated for the probability of reaching a “good” result and for the probability of not reaching a “failed” result. Independent variables were the device employed, experience from previous participations in the same EQA scheme and simultaneous participation in other glucose EQAs from the same organization. Missing experience data were substituted with plausible values using multiple imputation by chained equations (Supplemental Methods).
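For a single binary factor, the univariable OR reduces to a simple 2×2 computation. The Python sketch below illustrates this with made-up counts; the Woolf log-scale confidence interval is shown as one common illustrative method, not necessarily the one used in the study.

```python
import math

def odds_ratio(good_a, total_a, good_b, total_b):
    """OR of reaching a 'good' result for group A relative to group B."""
    odds_a = good_a / (total_a - good_a)
    odds_b = good_b / (total_b - good_b)
    return odds_a / odds_b

def or_confint(good_a, total_a, good_b, total_b, z=1.96):
    """Approximate 95% CI on the log-odds scale (Woolf method)."""
    a, b = good_a, total_a - good_a
    c, d = good_b, total_b - good_b
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    log_or = math.log(odds_ratio(good_a, total_a, good_b, total_b))
    return math.exp(log_or - z * se), math.exp(log_or + z * se)
```

Multivariable ORs, as reported in the study, additionally require a logistic regression model to adjust each factor for the others.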
Consequences of a year with or without failed participations were analyzed (Supplemental Methods).
For each sample, a robust device-specific central location and scale were calculated using the Huber M-estimator implemented in the R software package “robustbase”. CVs were established by dividing the scale by the central location. For the overall CV of a device, the weighted mean of the individual CVs was determined with the inverse of the squared standard error of the estimation as weight.
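The pooling step can be illustrated with a short Python sketch. The CV/standard-error pairs here are hypothetical; in the study they are derived per sample from the Huber M-estimates.

```python
def weighted_mean_cv(cvs, std_errors):
    """Weighted mean of per-sample CVs with inverse-variance weights 1/SE^2."""
    weights = [1.0 / se ** 2 for se in std_errors]
    total = sum(weights)
    return sum(w * cv for w, cv in zip(weights, cvs)) / total
```

With this weighting, a CV estimated with a small standard error (i.e. from many participant results) contributes more to the device-level mCV than a poorly determined one.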
Additionally, imprecision was estimated using the characteristic function:

σ(c) = √(α² + β²·c²)  (Eq. 1)
The characteristic function (Eq. 1) allows specifying an absolute imprecision α in the lower measuring range close to the limit of detection. For higher concentrations c, the imprecision is dominated by the concentration-proportional term, so that parameter β resembles the traditional CV. Parameters α and β were fitted using a nonlinear least squares algorithm with the inverse of the squared standard errors as weights (Supplemental Methods).
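Assuming the characteristic function has the common form σ(c) = √(α² + β²c²), squaring it gives σ² = α² + β²c², which is linear in c². The parameters can therefore be recovered from an ordinary least-squares line of squared scale against squared concentration. The sketch below uses this simplified, unweighted linearization as a stand-in for the weighted nonlinear fit described in the text.

```python
import math

def fit_characteristic(concentrations, sds):
    """Fit sigma(c) = sqrt(alpha^2 + beta^2 * c^2) via the linearized form
    sigma^2 = alpha^2 + beta^2 * c^2 (plain least squares on c^2)."""
    xs = [c ** 2 for c in concentrations]
    ys = [s ** 2 for s in sds]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    alpha = math.sqrt(max(intercept, 0.0))  # absolute imprecision at low c
    beta = math.sqrt(max(slope, 0.0))       # relative imprecision at high c
    return alpha, beta

def cv_at(c, alpha, beta):
    """Predicted CV at concentration c from the fitted parameters."""
    return math.sqrt(alpha ** 2 + (beta * c) ** 2) / c
```

Because α contributes a constant absolute term, the predicted CV rises toward the low end of the measuring range, matching the behavior reported in the Results.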
For CL EQAs, the differences between the robust central location of the measured values of a device and the reference method value were regarded as bias. Biases were not estimated in POCT EQAs to avoid distortions by possible non-commutability of samples.
Comparison of lots
In POCT-Instand, participants could voluntarily specify the lot number of their test strips. Differences between the robust central locations of the results of different lots for the same device and sample were calculated. The 95% confidence interval of these differences was determined using bootstrapping (Supplemental Methods).
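A percentile bootstrap for such a lot comparison can be sketched as follows. Medians stand in for the Huber M-estimates of the central location, and the data in the usage example are hypothetical.

```python
import random
import statistics

def bootstrap_diff_ci(lot_a, lot_b, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the difference of central locations
    (medians used here as a simple robust stand-in)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        resampled_a = [rng.choice(lot_a) for _ in lot_a]
        resampled_b = [rng.choice(lot_b) for _ in lot_b]
        diffs.append(statistics.median(resampled_a)
                     - statistics.median(resampled_b))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

With few results per lot, the resampled medians vary strongly, which reproduces the wide confidence intervals observed for the lot differences.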
We introduce the “bias budget”, which denotes the maximum allowable bias that is still clinically acceptable for a laboratory test with a given imprecision. Clinical risk assessment was based on the surveillance error grid (SEG), a tool to assess the degree of clinical risk for diabetes patients from inaccurate blood glucose monitors. For imprecision, the previously calculated CVs as well as the fitted parameters from the characteristic function were used. For each true glucose concentration up to 500 mg/dL (27.7 mmol/L), the bias had to be small enough that for 99.7% (three standard deviations) of all measurements with the given imprecision the total error posed less than a “moderate” SEG risk. The largest bias still meeting these constraints was termed the “bias budget”. Similarly, a bias budget was also calculated using insulin dosage errors during a tight glycemic control (TGC) simulation according to Karon et al. Here, 99.7% of all measurements were permitted to result in at most a “2-category” (not “dangerous”) dosing error.
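The search logic can be sketched in Python. The actual SEG risk surface is not reproduced here; instead, a purely hypothetical acceptability rule (measured value within 15 mg/dL or 20% of the true value, whichever is larger) stands in for the “below moderate risk” zone, and only a positive bias is searched (the negative side is analogous).

```python
def acceptable(true_value, measured):
    """Hypothetical stand-in for the SEG 'below moderate risk' zone."""
    limit = max(15.0, 0.20 * true_value)
    return abs(measured - true_value) <= limit

def bias_budget(cv, concentrations=range(20, 501, 5), step=0.1):
    """Largest bias such that results +/- 3 SD away stay acceptable at
    every true concentration; returns a negative value if the imprecision
    alone already violates the criterion (as for two POCT devices)."""
    budget = float("inf")
    for c in concentrations:
        sd = cv * c
        b = 0.0
        # Grow the bias until a result 3 SD away leaves the acceptable zone.
        while acceptable(c, c + b + 3 * sd) and acceptable(c, c + b - 3 * sd):
            b += step
        budget = min(budget, b - step)
    return budget
```

As in the study, the minimum over all concentrations is what defines the budget: the most constraining concentration, not the average one, sets the limit.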
POCT measurements failed four times as often in EQAs compared to CL automated analyzer measurements
Between 2012 and 2016, more than 1000 individual laboratories participated in each of the four EQA schemes. The schemes run by Instand had six distributions per year; POCT-RfB had four and CL-RfB eight distributions per year (Table 1). Two samples were supplied in each distribution. Sixty-five and 54 unique POCT devices were employed in POCT-Instand and POCT-RfB, respectively. Across both EQA organizations, nine devices in the CL EQAs and 11 devices in the POCT EQAs were frequently employed and were therefore used as common device categories.
For evaluation of POCT EQAs, reference method values and device-specific consensus values were both used as target values. Instand gradually increased the percentage of device subgroups evaluated according to the reference method value from 1.3% in 2012 to 40.0% in 2016 (Supplemental Figure 1). In POCT-RfB, device-specific consensus values have always been used as target values if the device subgroup was large enough.
In general, success rates were higher in the EQAs for CL automated analyzers. On average, 1%–2% of participants failed in CL schemes, compared to 9%–10% of participants in POCT schemes (Table 1). In CL schemes, the central 95% of all measurements deviated less than 10% from the assigned target values after exclusion of outliers, whereas in POCT schemes, the central 95% slightly exceeded 15% deviation from the target value (Figure 1).
The device had the highest influence on performance
We investigated the influence of different factors on the percentage of “failed”, “poor”, “acceptable” and “good” participations (Supplemental Figures 2–7). A few devices accounted for the majority of all employed devices (Supplemental Figures 2–5). The “others” group subsumes devices that were rarely used or could not be coded otherwise. Especially in the POCT EQAs, this group had a high percentage of failed results. Similarly, we investigated the effect of additional participation in other EQAs (Supplemental Figure 6). A substantial number of laboratories participated in EQAs for both CL and POCT glucose measurements. These laboratories had a higher fraction of “good” participations than laboratories that took part in only one EQA. To assess the influence of a laboratory’s experience, participants were classified as “new” (first participation), “intermediate” (2–10 participations) and “experienced” (>10 participations). Most participations were submitted by “experienced” participants (Supplemental Figure 7). In all examined EQAs, the fraction of “good” participations was lowest for new participants.
Next, we calculated univariable ORs for the probability of reaching a “good” result (Supplemental Figures 8, 9). Although ORs for the same devices differed between EQAs, the employed device had the strongest influence in all EQAs.
To compare effect sizes and analyze interactions, multivariable ORs for reaching a “good” result were calculated for CL automated analyzers (Figure 2). The choice of a device not in the “others” category increased the probability of a “good” result up to 5.4-fold (for the “Abbott” device). Large 95% confidence intervals for some devices (e.g. 1.9–4.5 for “Roche Diagnostics” with “other” method) indicate a high degree of uncertainty, at least partially caused by small sample sizes. “Experienced” participants had significantly higher odds (OR 1.5) of reaching a “good” result than “new” participants. The effect of an additional participation in a POCT EQA was negligible (OR 1.02).
In POCT EQAs, the device again had the greatest influence on the probability of reaching a “good” result (Figure 2). ORs ranged from 0.6 (“Bayer Vital Contour”) to 6.2 (“Roche Accu-Chek Inform II”). The effect of additional participation in a CL EQA scheme was greater (OR 1.2) than the reciprocal influence of POCT EQAs on CL EQAs. All ORs denoting past experience included 1 in their confidence intervals.
To verify these results, we recalculated multivariable ORs for the likelihood of not failing (Supplemental Figure 10). The spread of ORs was larger but showed comparable tendencies. For 29,561 (out of 67,295) participations, the level of experience could not be determined with certainty, as the first participation likely occurred before 2012. These participations were excluded from the multivariable OR calculations. To evaluate their contribution, missing data were imputed and the multivariable ORs for reaching a “good” result were recalculated (Supplemental Figure 11). The recalculated ORs slightly exceeded the 95% confidence interval of the ORs without imputed data for three and four devices in the CL and POCT EQAs, respectively.
After “failed” EQA distributions, participants switched devices more often and improved
The worse the result of the previous EQA participation, the larger was the fraction of failing participations (Supplemental Figure 12). After having failed the previous distribution, 11% (CL-Instand), 13% (CL-RfB), 33% (POCT-Instand) and 35% (POCT-RfB) of participants failed again.
For a better understanding of ways to improve performance after failing an EQA participation, a pathway analysis was conducted (Figure 3). After a year with failed participations, laboratories left the EQA or employed a new device more often than laboratories that did not fail in the previous year. Although small, this effect could be observed in all EQAs. Of the participants who failed but continued in the EQA, 14% (CL-Instand), 15% (CL-RfB), 41% (POCT-Instand) and 50% (POCT-RfB) failed again at least once in the following year. For participants who changed their failed POCT device (or the declaration of the same), the percentage of “good” results was higher and the percentage of “failed” results lower than for those who did not change in the following year.
Certain POCT devices approached the analytical performance of central laboratory instruments
To estimate imprecision, the weighted mean of the coefficients of variation (mCV) was determined for each device (Table 2). The mCV of CL automated analyzers ranged from 0.022 to 0.031 (median 0.026). Some POCT devices reached comparable imprecision. The mCVs of all POCT devices ranged from 0.025 to 0.084 (median 0.055).
The characteristic function was fitted to the measured values of each device to determine changes of imprecision over the measuring range (Table 2, Figure 4, Supplemental Figure 13). Most CL devices exhibited small fitted values for parameter α, denoting low absolute imprecision in the lower measuring range (overall median 1.15, IQR 0.69–1.62). For POCT devices, values for α were higher but often exhibited wide confidence intervals (overall median 2.12, IQR 0.20–4.68) (Supplemental Table 1). Parameter β expresses a relative imprecision in the higher measuring range. The smaller the α, the more closely β matched the respective mCV for most devices. We used the characteristic function and fitted parameters to calculate CVs at 80 and 300 mg/dL (4.4 and 16.8 mmol/L). In the lower measuring range, CVs were up to 1.6 (CL) and 2.4 (POCT) times larger than in the higher range.
Biases were determined for CL EQAs (Supplemental Figure 14). The same device exhibited the highest median bias in CL-Instand and in CL-RfB (Siemens Advia, 2.2% and 1.8%).
Lot-to-lot differences are a serious source of variation for POCT glucose testing
In POCT-Instand, 80 samples were measured with the same device but with at least two different test strip lots. Maximum differences between lots exceeded 5% of the assigned target value in nine samples (11%) (Figure 5). These differences were estimated with wide confidence intervals.
The smallest allowable bias, the bias budget, occurs in the lower measurement range for most devices
The bias budget, i.e. the maximum allowable bias that is still clinically acceptable for a laboratory test with a given imprecision, was calculated with the previously determined mCVs and the parameters from the characteristic function (Figures 6–8). Bias budgets calculated using the TGC simulation as the risk assessment were slightly smaller than the respective bias budgets derived from the SEG. They were symmetric for most devices, implying that a positive bias carries nearly the same risk as a negative bias. Regardless of whether imprecision was formulated as mCV or as a characteristic function, glucose concentrations in the range of 60–115 mg/dL (3.3–6.4 mmol/L) were the most likely to display an unacceptable bias and determined the bias budget for nearly all devices. The absolute bias budgets of CL automated analyzers were largely homogeneous and ranged from 20 to 27 mg/dL (1.1–1.5 mmol/L) (median 24, IQR 23–25 mg/dL [1.3, 1.2–1.4 mmol/L]). For two POCT devices (Dytrex (Infopia) Easy Gluco; Lifescan One Touch Vita), the imprecision alone was too high to reach an acceptable risk. Overall, bias budgets for POCT devices were smaller and differences between devices greater (median 17, IQR 13–21 mg/dL [0.9, 0.7–1.2 mmol/L]) than for CL devices.
In this work, we analyzed glucose measurements of over 130,000 samples from the German EQA organizations Instand and RfB. As regular participation in EQAs is mandatory for all medical laboratories performing glucose analysis, these data provide a comprehensive overview of glucose measurements in Germany. The high number of analyzed samples and the mandatory laboratory participation constitute a strength of this study. On the other hand, the results of POCT EQAs in particular have to be interpreted with caution, as the lack of commutability may have caused the stabilized EQA samples to behave differently from patient samples. Participants’ mistakes during data entry, different encodings, different assessments and different samples are further possible sources of variation. In this work, the analytical performance specifications for failing a distribution were derived from the Rili-BAEK, which is legally binding in Germany, but they might be determined differently in other EQAs. To increase comparability with other studies, we reused existing classifications for non-failed participations. As far as possible, we used more than one method to analyze the same question to increase validity. Robust statistics were employed to avoid an oversized influence of outliers.
To identify factors associated with good analytical quality, we calculated uni- and multivariable ORs. Considerable differences between ORs for different devices indicate that the device itself had the highest influence on analytical quality for both POCT and CL automated analyzers. Concurrent participation in other glucose EQAs (especially for POCT) and a higher number of previous participations (in particular for CL) were associated with a higher chance of a “good” result in an EQA distribution. Similar observations were made in studies of other EQAs. However, EQA studies cannot differentiate between experience in conducting the specific EQA and experience in conducting the actual analytical test. It therefore remains an open question whether the improvement represents a real improvement in analytical quality or whether this effect is merely the result of increased experience with EQAs. Nevertheless, educational guidance for healthcare professionals with only limited experience in laboratory medicine seems worthwhile. Given the importance of the device, this could include an evaluation of POCT devices according to standards such as DIN EN ISO 15197, e.g. for primary health care.
A participant who had failed was more likely to fail again. A failed participation in an EQA led to higher rates of participants leaving the EQA (and likely stopping glucose measurements, as participation in EQAs is mandatory). In addition, more laboratories switched their device and achieved better results. Both actions likely improve overall analytical performance in Germany.
EQAs of glucose measurements from CL automated analyzers exhibited excellent agreement with reference method values. Individual POCT devices approached the imprecision of CL automated analyzers. However, considerable variation among POCT devices was observed. Our data suggest that for some POCT devices, imprecision increases in the low concentration range. Similar considerable differences in performance in the low-glucose range were also observed in other studies using unaltered capillary blood samples. The large fraction of lots (11%) with a difference of >5% of the target value constitutes another substantial source of variation. Lot-to-lot differences are known to significantly affect measurements of native patient samples. Results from control material, however, may again not be readily transferable. Of note, the stated imprecisions represent long-term reproducibility imprecisions. They already include short-term biases such as lot-to-lot variations or differences in daily calibrations. In line with the Guide to the Expression of Uncertainty in Measurement, long-term biases affecting all samples should be corrected. However, for some devices, a consistent bias over many EQA distributions was found.
To evaluate the impact of imprecision on medical decision making, we propose the concept of the “bias budget” to express a maximum permissible bias that still avoids unacceptable risks from erroneous glucose measurements. The bias budget relies primarily on detailed clinical risk assessments, such as the SEG or the TGC simulation, which are available for glucose measurements. Because of this direct approach, errors at the very low end of the measurement range that are large relative to the true value but small in absolute terms are limited to a clinically justifiable influence. The bias budget avoids mathematical assumptions such as a linear relationship between bias and imprecision, or a predetermined distribution of glucose values. The bias budget covers biases arising in all steps of the total testing process and therefore exceeds the analytical bias determined in performance specifications.
For almost all CL and POCT devices, the most critical glucose values lie in the lower measuring range of 60–115 mg/dL (3.3–6.4 mmol/L). EQA organizers should focus on this range and design their glucose schemes accordingly. The bias budget can be depleted by random bias caused by various, mostly sample-specific factors such as unimpeded glycolysis, abnormal hematocrit values or interfering substances. Especially with POCT, laboratory medicine does not aim for perfect measurements but has to balance turnaround time, costs and accuracy based on medical needs. The estimate provided by the bias budget could help the clinician decide which sample-specific error factors are still permissible. For example, a negative bias of 10 mg/dL (0.6 mmol/L) occurring during 1–2 h of uninhibited glycolysis is still acceptable for most CL automated analyzers. Interfering factors, e.g. from special medications, often induce biases exceeding the budget of most POCT devices. These risks need to be mitigated, e.g. by operator training when implementing a POCT testing program.
EQAs can ensure continuously high analytical quality of glucose measurements. Their data must be made available to facilitate learning in the laboratory healthcare system. To further increase the informative value of EQAs for science and quality control, commutable materials for POCT glucose EQAs or alternative designs for proficiency testing schemes are urgently needed.
The authors would like to thank Dirk Illigen for technical assistance and Evangeline Thaler for reviewing the manuscript. We would also like to thank David C. Klonoff and Michael A. Kohn for providing the individual data points of the Surveillance Error Grid.
Andreis E, Küllmer K, Appel M. Application of the reference method isotope dilution gas chromatography mass spectrometry (ID/GC/MS) to establish metrological traceability for calibration and control of blood glucose test systems. J Diabetes Sci Technol 2014;8:508–15.
Parkes JL, Slatin SL, Pardo S, Ginsberg BH. A new consensus error grid to evaluate the clinical significance of inaccuracies in the measurement of blood glucose. Diabetes Care 2000;23:1143–8.
Gambino R, Piscitelli J, Ackattupathil TA, Theriault JL, Andrin RD, Sanfilippo ML, et al. Acidification of blood is superior to sodium fluoride alone as an inhibitor of glycolysis. Clin Chem 2009;55:1019–21.
Erbach M, Freckmann G, Hinzmann R, Kulzer B, Ziegler R, Heinemann L, et al. Interferences and limitations in blood glucose self-testing: an overview of the current knowledge. J Diabetes Sci Technol 2016;10:1161–8.
Ramljak S, Lock JP, Schipper C, Musholt PB, Forst T, Lyon M, et al. Hematocrit interference of blood glucose meters for patient self-measurement. J Diabetes Sci Technol 2013;7:179–89.
Schifman RB, Howanitz PJ, Souers RJ. Point-of-care glucose critical values: a Q-probes study involving 50 health care facilities and 2349 critical results. Arch Pathol Lab Med 2016;140:119–24.
German Medical Association. Revision of the “Guideline of the German Medical Association on Quality Assurance in Medical Laboratory Examinations–RiliBAEK”. J Lab Med 2015;39:26–69.
Jacobs J, Fokkert M, Slingerland R, De Schrijver P, Van Hoovels L. A further cautionary tale for interpretation of external quality assurance results (EQA): commutability of EQA materials for point-of-care glucose meters. Clin Chim Acta 2016;462:146–7.
Petersmann A, Luppa P, Michelsen A, Sonntag O, Nauck M. Gemeinsame Stellungnahme zur Situation der Bewertung von Ringversuchen für Glucose mittels Systemen für die patientennahe Sofortdiagnostik (POCT)/Joint statement on the situation of external quality control for glucose in POCT systems. Laboratoriumsmedizin 2012;36:165–8.
Wood WG. Problems and practical solutions in the external quality control of point of care devices with respect to the measurement of blood glucose. J Diabetes Sci Technol 2007;1:158–63.
Bukve T, Stavelin A, Sandberg S. Effect of participating in a quality improvement system over time for point-of-care C-reactive protein, glucose, and hemoglobin testing. Clin Chem 2016;62:1474–81.
Buuren SV, Groothuis-Oudshoorn K. mice: Multivariate imputation by chained equations in R. J Stat Softw 2011;45:67.
Coucke W, Charlier C, Lambert W, Martens F, Neels H, Tytgat J, et al. Application of the characteristic function to evaluate and compare analytical variability in an external quality assessment scheme for serum ethanol. Clin Chem 2015;61:948–54.
Maechler M, Rousseeuw P, Croux C, Todorov V, Ruckstuhl A, Salibian-Barrera M, et al. robustbase: Basic Robust Statistics. R package version 0.92-7. 2016. http://CRAN.R-project.org/package=robustbase.
Davison AC, Hinkley DV. Bootstrap methods and their applications. Cambridge: Cambridge University Press, 1997.
Jones GRD, Albarede S, Kesseler D, MacKenzie F, Mammen J, Pedersen M, et al. Analytical performance specifications for external quality assessment – definitions and descriptions. Clin Chem Lab Med 2017;55:949–55.
Howerton D, Krolak JM, Manasterski A, Handsfield JH. Proficiency testing performance in US laboratories: results reported to the Centers for Medicare & Medicaid Services, 1994 through 2006. Arch Pathol Lab Med 2010;134:751–8.
Morandi PA, Deom A, Kesseler D, Cohen R. Retrospective analysis of 88,429 serum and urine glucose EQA results obtained from professional laboratories and medical offices participating in surveys organized by three European EQA centers between 1996 and 2007. Clin Chem Lab Med 2010;48:1255–62.
Heinemann L, Zijlstra E, Pleus S, Freckmann G. Performance of blood glucose meters in the low-glucose range: current evaluations indicate that it is not sufficient from a clinical point of view. Diabetes Care 2015;38:e139–40.
Baumstark A, Pleus S, Schmid C, Link M, Haug C, Freckmann G. Lot-to-lot variability of test strips and accuracy assessment of systems for self-monitoring of blood glucose according to ISO 15197. J Diabetes Sci Technol 2012;6:1076–86.
Stavelin A, Riksheim BO, Christensen NG, Sandberg S. The importance of reagent lot registration in external quality assurance/proficiency testing schemes. Clin Chem 2016;62:708–15.
Kristensen GB, Christensen NG, Thue G, Sandberg S. Between-lot variation in external quality assessment of glucose: clinical importance and effect on participant performance evaluation. Clin Chem 2005;51:1632–6.
Freckmann G, Schmid C, Baumstark A, Rutschmann M, Haug C, Heinemann L. Analytical performance requirements for systems for self-monitoring of blood glucose with focus on system accuracy: relevant differences among ISO 15197:2003, ISO 15197:2013, and current FDA recommendations. J Diabetes Sci Technol 2015;9:885–94.
Barabas N, Bietenbeck A. Application guide: training of professional users of devices for near-patient testing. J Lab Med 2017;41:215–8.
Delatour V, Lalere B, Saint-Albin K, Peignaux M, Hattchouel J-M, Dumont G, et al. Continuous improvement of medical test reliability using reference methods and matrix-corrected target values in proficiency testing schemes: application to glucose assay. Clin Chim Acta 2012;413:1872–8.
The online version of this article offers supplementary material (https://doi.org/10.1515/cclm-2017-1142).
About the article
Published Online: 2018-05-01
Published in Print: 2018-07-26
Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.
Research funding: None declared.
Employment or leadership: None declared.
Honorarium: None declared.
Competing interests: The funding organization(s) played no role in the study design; in the collection, analysis and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.