Open Access (CC BY 4.0) | Published by De Gruyter, November 2, 2020

Are hemoglobin A1c point-of-care analyzers fit for purpose? The story continues

  • Erna Lenters-Westra and Emma English

Abstract

Objectives

Point-of-care (POC) analyzers are playing an increasingly important role in diabetes management, but it is essential that we know the performance of these analyzers in order to make appropriate clinical decisions. Whilst there is a growing body of evidence around the better-known analyzers, there are many ‘new kids on the block’ with new features, such as displaying the presence of potential Hb-variants, which do not yet have a proven track record.

Methods

The study is a comprehensive analytical and usability study of six POC analyzers for HbA1c using Clinical and Laboratory Standards Institute (CLSI) protocols, international quality targets and certified International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) and National Glycohemoglobin Standardization Program (NGSP) Secondary Reference Measurement Procedures (SRMP). The study includes precision (EP-5 and EP-15), trueness (EP-9), linearity (EP-6), sample commutability (fresh, frozen and lyophilized) and interference of Hb-variants (fresh and frozen samples).

Results

Only two of the six analyzers performed to acceptable levels over the range of performance criteria. Hb-variant interference, imprecision or lot-to-lot variability remained unacceptable in four of the analyzers.

Conclusions

This unique and comprehensive study shows that out of six POC analyzers studied only two (The Lab 001 and Cobas B101) met international quality criteria (IFCC and NGSP), two (A1Care and Innovastar) were borderline and two (QuikReadgo and Allegro) were unacceptable. It is essential that the scientific and clinical community are equipped with this knowledge in order to make sound decisions on the use of these analyzers.

Introduction

Diabetes is a global health burden and a leading cause of morbidity and mortality worldwide. It is estimated that up to 50% of people with diabetes are currently undiagnosed, and there is an urgent need for rapid, accurate and timely diagnostic testing to identify those both with and at risk of the disease [1].

Point-of-care (POC) analyzers play an increasingly important role in a wide range of clinical settings, and there is a growing desire among clinicians for access to more POC tests and a wider range of tests [2]. There is a belief that POC testing enables faster clinical decision making, better rapport with patients, and fewer referrals to secondary care with subsequent reductions in healthcare costs. Over 50% of primary care physicians surveyed by Howick et al. (2014) wanted increased access to HbA1c POC testing, indicating a clear demand for HbA1c POC [3]. However, Jones et al. (2013) also highlighted an apparent nervousness amongst primary care physicians around the accuracy of POC testing [4]. Coupling the desire for increased availability of HbA1c POC testing with these prudent concerns about quality, it is essential that we understand how well POC HbA1c analyzers perform.

Understanding the quality of POC testing has been a topic of key interest for over a decade, with the stark message in 2010 that six out of eight analyzers did not meet the accepted quality criteria [5]. Since this seminal study, there have been numerous evaluations of POC HbA1c performance, with a focus on the more common analyzers [6]. External quality assessment (EQA) provides a snapshot of ‘real world’ data on the performance of POC analyzers, although only a fraction of analyzers in use are currently enrolled in EQA schemes [7], [8].

Whilst there is a growing body of evidence around the better-known analyzers, there are many ‘new kids on the block’ with new features, such as displaying the presence of potential Hb-variants, which do not have a proven track record. Clinicians and laboratory scientists need robust and rigorous evaluation data on the performance, acceptability and usability of POC analyzers to support informed decisions on their use.

The evaluation of POC analyzers is not without issues. Whilst many HbA1c POC analyzers are scaled-down versions of laboratory analyzers, they have their own unique differences which require adaptations in order to complete a comprehensive evaluation. One key issue is that several POC analyzers are not compatible with frozen or lyophilized blood samples, meaning conventional evaluation protocols cannot be directly applied.

Whilst numerous method comparisons have been published, these are often single comparisons to routine laboratory methods (which have their own imprecision and bias to consider); they provide insight into local performance but do not give a robust picture of performance against internationally accepted secondary reference measurement procedures (SRMPs) or international quality criteria [9], [10].

This study aims to understand the performance of a range of POC HbA1c analyzers using rigorous evaluation protocols which examine issues such as interference from Hb-variants with fresh and frozen samples, sample compatibility (fresh, frozen and lyophilized) and system usability, whilst comparing analytical performance to International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) and National Glycohemoglobin Standardization Program (NGSP) SRMPs.

Materials and methods

Analyzers evaluated

The six POC analyzers included in this study and their key characteristics are summarized in Table 1. The choice of analyzer was both manufacturer led (Allegro, QuikReadgo and The Lab 001) and investigator led (Cobas B101, InnovaStar and A1Care). The manufacturers of the latter three analyzers were approached by the authors because previous evaluations had highlighted some performance issues. An initial familiarization protocol was undertaken with each instrument and the results were shared with the manufacturers to enable them to decide whether they wished to continue to a full evaluation. This supports a collaborative approach to working with manufacturers, with the aim of improving quality. In some cases, when a product is new in development, feedback at an early stage enables further development before the product is brought to full evaluation, saving time and resources [11]. Six analyzers were fully evaluated; two further new analyzers were not yet ready for a full evaluation.

Table 1:

Analyzers included in the study and their key characteristics.

| Analyzer | Manufacturer | Sample volume, µL | Analysis time, min | Time from cold start to first result, min | Method principle | Weight, kg | Dimensions, W × H × D, mm | Operating temperature | Cartridge storage temperature | Hb-variant visible? | Mean SUS score (n=2) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| A1Care | i-SENS | 2.5 | 4.2 | ≈13.7 | Enzymatic | 3.8 | 290 × 250 × 130 | 10–32 °C | 1–30 °C | No | 88 |
| Cobas B101 | Roche Diagnostics | 2 | 6.0 | ≈9.5 | Immunoassay | 2.0 | 135 × 184 × 234 | 15–32 °C | 2–30 °C | No | 95 |
| InnovaStar | DiaSys | 10 | 6.5 | ≈16.5 | Immunoassay | 4.0 | 200 × 150 × 170 | 15–35 °C | 2–8 °C | No | 80 |
| The Lab 001 | Arkray | 1.5 | 1.5 | ≈3.0 | Capillary electrophoresis | 10 | 220 × 298 × 330 | 10–30 °C | 2–30 °C | Yes | 91 |
| QuikReadgo | Aidian | 1 | 6.0 | ≈7.4 | Immunoassay | 1.7 | 145 × 155 × 270 | 15–35 °C | 2–8 °C (2 months at 18–25 °C) | No | 60 |
| Allegro | Nova Biomedical | 1.5 | 6.5 | ≈9.2 | Immunoassay | 9.1 | 203 × 381 × 381 | 15–32 °C | 2–8 °C | No | 84 |

  1. SUS, System Usability Scale.

Imprecision study (EP-5 and EP-15)

The Clinical and Laboratory Standards Institute (CLSI) EP-5 protocol was used to investigate assay imprecision. Some POC methods are known to have a bias with frozen material, and as it is not known whether frozen samples also affect imprecision, the EP-15 protocol was additionally performed with two fresh patient samples (HbA1c values of 48 and 75 mmol/mol). Both samples were analyzed five-fold on each of five days. Imprecision was also calculated from the duplicates of the fresh patient samples in the EP-9 protocol.
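To make the duplicate-based estimate concrete, the following is a minimal sketch (Python) of how a within-run CV can be derived from duplicate measurements using the standard formula SD = √(Σd²/2n); the duplicate values shown are hypothetical, not data from this study.

```python
import math

def cv_from_duplicates(pairs):
    """Within-run CV (%) from n duplicate pairs: SD = sqrt(sum(d^2) / 2n)."""
    n = len(pairs)
    sd = math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * n))
    mean = sum(a + b for a, b in pairs) / (2 * n)
    return 100 * sd / mean

# Hypothetical duplicate HbA1c results (mmol/mol) for one analyzer:
duplicates = [(47.8, 48.3), (48.1, 47.6), (48.4, 48.0), (47.9, 48.5)]
print(f"CV = {cv_from_duplicates(duplicates):.2f}%")  # judged against the <3% (SI) target
```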

Method comparison (trueness; EP-9)

The CLSI EP-9 protocol was performed with 40 fresh patient samples and the data were used to investigate the bias between each instrument and four SRMPs (n=40, eight samples per day for five days, duplicate measurements; a sketch of the bias calculation follows the list below). Values were assigned with four IFCC and NGSP certified SRMPs [12], [13]:

  1. Roche Tina-quant Gen.3 HbA1c on Cobas c513, immunoassay, IFCC and NGSP certified (Roche Diagnostics);

  2. Premier Hb9210, affinity chromatography HPLC, IFCC and NGSP certified (Trinity Biotech);

  3. Tosoh G8, cation-exchange HPLC, IFCC certified (Tosoh Bioscience);

  4. Abbott Enzymatic method on Alinity, IFCC and NGSP certified (Abbott Diagnostics).
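The bias estimates at the 30, 48 and 75 mmol/mol decision levels reported in Table 3 derive from regression of each POC analyzer against the mean of the SRMPs. As an illustration only, the sketch below fits a textbook Deming regression (assuming an error-variance ratio of 1) to invented paired data and evaluates it at the study's decision levels; the actual analyses used Analyse-It and EP Evaluator.

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Deming regression with error-variance ratio lam; returns (slope, intercept)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxx = ((x - mx) ** 2).sum() / (len(x) - 1)
    syy = ((y - my) ** 2).sum() / (len(y) - 1)
    sxy = ((x - mx) * (y - my)).sum() / (len(x) - 1)
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
             + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

# Hypothetical POC results (y) vs. the mean of the SRMPs (x), mmol/mol:
x = [31, 35, 40, 44, 48, 53, 60, 67, 75, 82]
y = [32, 35.5, 40.4, 44.8, 48.6, 53.9, 60.7, 67.8, 76.2, 83.1]
slope, intercept = deming(x, y)
for level in (30, 48, 75):  # medical decision levels used in the study
    print(f"bias at {level} mmol/mol: {slope * level + intercept - level:+.1f}")
```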

Linearity (EP-6)

Linearity was assessed using the CLSI EP-6 protocol. After adjustment for Hb concentration, patient samples with a low HbA1c value and a high HbA1c value were mixed in incremental amounts to generate a series of equally spaced samples over a broad HbA1c concentration range. Eleven samples were analyzed in duplicate in one day. The samples were made fresh and then frozen at −80 °C until analysis. Whilst some analyzers display a bias with frozen samples, this is generally a consistent bias, and therefore frozen samples can still be used to assess linearity.

The differences between the fitted values of the best polynomial line and the regression line for the 11 samples were compared. CLSI states for EP-6 that goals for linearity should be derived from goals for bias, and should be less than or equal to these goals [14]. The IFCC Task Force on Implementation of HbA1c Standardization has set a total allowable error (TAE) of 10% at an HbA1c concentration of 50 mmol/mol [19]. Taking into account the whole clinically relevant range, we set a TAE of 6 mmol/mol with a nonlinearity budget of 50% (=3 mmol/mol). If the deviation exceeded the allowable nonlinearity (3 mmol/mol), the data were considered nonlinear.
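As a simplified illustration of this criterion, the sketch below compares second- and third-order polynomial fits against the linear fit at the measured points and flags the series as nonlinear if the largest deviation exceeds the 3 mmol/mol budget. EP-6 itself selects the best-fitting polynomial by significance testing, and the data here are invented.

```python
import numpy as np

def max_nonlinearity(conc, result):
    """Max deviation (mmol/mol) between higher-order polynomial fits
    and the linear fit, evaluated at the measured samples."""
    lin = np.polyval(np.polyfit(conc, result, 1), conc)
    deviations = []
    for degree in (2, 3):  # candidate nonlinear fits
        poly = np.polyval(np.polyfit(conc, result, degree), conc)
        deviations.append(np.abs(poly - lin).max())
    return max(deviations)

# Hypothetical 11-sample dilution series: assigned vs. measured values.
assigned = np.linspace(25, 125, 11)
measured = assigned + np.random.default_rng(1).normal(0, 0.8, 11)
dev = max_nonlinearity(assigned, measured)
print(f"max deviation {dev:.2f} mmol/mol -> {'linear' if dev <= 3 else 'nonlinear'}")
```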

Hemoglobin variants AS, AC, AE, AD, elevated A2 and HbF

Twenty patient samples of each heterozygous Hb-variant, from our frozen whole blood biobank, were measured on each of the different POC analyzers. Values were assigned using an IFCC-calibrated boronate affinity HPLC (Premier Hb9210). For samples with increased HbF, HbA1c values were assigned using IFCC-calibrated cation-exchange HPLC (Menarini HA8180V, Diabetes Mode (frozen), and Tosoh G8 (fresh)). Percentage HbF (3.5–42.0%) was determined using the Sebia Capillarys 2 Flex Piercing hemoglobin program.

In addition to the frozen samples, 16 HbAS, seven HbAC, five HbAD, nine HbAE and four HbF (9.1, 20.5, 20.8 and 27.5%) fresh Hb-variant samples were also analyzed on each analyzer as two of the analyzers (InnovaStar and QuikReadgo) showed a bias with frozen samples.

Any bias observed due to the presence of variants is a composite of the bias in normal samples (identified by the EP-9 protocol) and the bias associated with the variant. In order to account for this, the results were adjusted for the bias found during EP-9 (against the Premier Hb9210, for both fresh and frozen Hb-variant samples), so that any residual bias would be due to the Hb-variant. For the two analyzers that also display a bias with frozen samples, the bias correction was done using the 24 frozen EQA samples rather than the EP-9 data. Whilst this is not a perfect solution, it avoids a two-step correction.

For an Hb-variant to be considered as not causing a clinically relevant interference, the results for the Hb-variant should fall within a defined scatter band of ±10% (SI units) around the regression line derived from the comparison of the test instrument and the Premier Hb9210 with the nonvariant (HbAA) samples.
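A minimal sketch of this two-step check (predict the expected non-variant result from the HbAA regression line, then flag variant samples outside the ±10% band) might look as follows; the slope, intercept and paired values are hypothetical.

```python
import numpy as np

def variant_interference(x_ref, y_poc, slope, intercept, tol=0.10):
    """Flag variant samples outside ±10% (SI units) of the HbAA
    regression line (test instrument vs. Premier Hb9210)."""
    x_ref, y_poc = np.asarray(x_ref, float), np.asarray(y_poc, float)
    predicted = slope * x_ref + intercept       # expected non-variant result
    rel_diff = (y_poc - predicted) / predicted  # residual bias from the variant
    return rel_diff, np.abs(rel_diff) > tol

# Hypothetical HbAS samples: Premier Hb9210 values vs. POC results,
# with slope/intercept taken from the EP-9 comparison of HbAA samples.
ref = [38, 45, 52, 61, 70]
poc = [37.5, 44.0, 57.9, 60.2, 69.1]
rel, flagged = variant_interference(ref, poc, slope=1.01, intercept=-0.4)
for r, f in zip(rel, flagged):
    print(f"{100 * r:+.1f}% {'INTERFERENCE' if f else 'ok'}")
```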

Schiff Base, icteric samples, different hemoglobin concentrations

In order to create labile samples (Schiff base), glucose was added at 12, 16 and 20 mg/mL to aliquots of high, medium and low HbA1c EDTA samples. Icteric samples were generated by removing the plasma of a non-icteric sample and replacing it with plasma containing 219, 236 or 258 μmol/L bilirubin, again at three different HbA1c levels. Similarly, addition or removal of plasma was used to create a range of samples with varying hemoglobin levels. The samples were stored frozen at −80 °C until analysis. A mean relative difference of ±10% (in SI units) between pre- and post-treatment results was considered a significant interference.
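The acceptance check reduces to a mean relative difference between paired pre- and post-treatment results, as in this small sketch with invented values.

```python
def mean_relative_difference(pre, post):
    """Mean relative difference (%) between paired pre- and post-treatment
    HbA1c results (SI units)."""
    diffs = [100 * (b - a) / a for a, b in zip(pre, post)]
    return sum(diffs) / len(diffs)

# Hypothetical results at three HbA1c levels before and after glucose spiking:
pre = [31.0, 48.2, 74.6]
post = [31.4, 48.9, 75.8]
mrd = mean_relative_difference(pre, post)
print(f"mean relative difference {mrd:+.1f}% -> "
      f"{'significant interference' if abs(mrd) > 10 else 'no interference'}")
```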

EQA programs (assessing sample commutability)

In order to assess sample commutability, samples from both the IFCC Certification Program for manufacturers [15] and the European Reference Laboratory for Glycohemoglobin (ERL) EQA Program [16] were used to provide data on frozen and lyophilized samples respectively.

System usability scale (SUS)

This study included an SUS score generated by the two technicians who performed the evaluation study. The SUS is a simple diagnostic tool consisting of 10 questions which gives a global, subjective assessment of the usability of the device tested [17].

An SUS score >81 can be considered excellent, between 71 and 80 good, between 52 and 70 okay and <51 poor [18].
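For reference, standard SUS scoring converts the 10 item ratings (1–5) to a 0–100 score: odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating), and the sum is multiplied by 2.5. A minimal sketch, with hypothetical ratings:

```python
def sus_score(responses):
    """Standard SUS scoring for 10 items rated 1-5: odd items contribute
    (rating - 1), even items (5 - rating); the sum is scaled by 2.5."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # i=0 is item 1 (odd)
    return total * 2.5

# Hypothetical ratings from one evaluator:
print(sus_score([5, 1, 4, 2, 5, 1, 4, 2, 5, 2]))  # -> 87.5
```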

Defining the quality criteria

International Quality Standards: This study used the previously published global guidance on acceptable quality and performance criteria for HbA1c testing from the IFCC Task Force on Implementation of HbA1c Standardization [19].

NGSP Manufacturer Certification Criteria: Thirty-six out of 40 results must be within 5% (relative) of an individual NGSP SRMP to pass certification [20].
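Expressed as a check, the criterion might be implemented as below; the paired results are randomly generated purely for illustration.

```python
import random

def passes_ngsp(poc, srmp, tol=0.05, required=36):
    """NGSP manufacturer certification: at least 36 of 40 results must be
    within ±5% (relative) of the SRMP value."""
    within = sum(abs(p - s) / s <= tol for p, s in zip(poc, srmp))
    return within >= required, within

# Hypothetical paired results in NGSP units (%):
rng = random.Random(0)
srmp = [round(rng.uniform(5.0, 12.0), 1) for _ in range(40)]
poc = [round(s * rng.uniform(0.96, 1.05), 1) for s in srmp]
passed, n = passes_ngsp(poc, srmp)
print(f"{n}/40 within 5% -> {'pass' if passed else 'fail'}")
```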

Statistical analysis

Calculations were performed using Microsoft® Excel 2016 (Microsoft Corporation). Statistical analyses were performed using Analyse-It®, version 5.40 (Analyse-It Software) and EP Evaluator Release 12 (Data Innovations).

Results

Imprecision (EP-5 and EP-15)

Table 2 displays the CVs derived from both EP-5 and EP-15 and from the duplicates in the EP-9 protocol. Only The Lab 001 and the InnovaStar achieved the performance criterion of <3% CV (SI units) (<2% in NGSP units) across all protocols and at both high and low HbA1c levels [21], [22]. The Lab 001 actually achieved <2% CV (SI units), showing very low levels of imprecision. However, the Allegro failed to achieve <3% CV (SI units) in any protocol at either level, with CVs as high as 4.2%, showing unacceptable levels of imprecision. The A1Care had mixed results, performing better with fresh samples and at higher HbA1c values. The B101 also had mixed results, with better performance at higher HbA1c values. The QuikReadgo met the criteria in both EP-5 and EP-15 with little difference between fresh and frozen samples; however, its performance with the duplicates from EP-9 was mixed, with one lot failing and one lot passing.
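The dual CV targets reflect the IFCC–NGSP master equation (NGSP % = 0.09148 × IFCC mmol/mol + 2.152): because the positive intercept inflates the NGSP denominator but drops out of the SD, a 3% CV in SI units corresponds to roughly 2% in NGSP units, as this small calculation shows.

```python
def ifcc_to_ngsp(ifcc):
    """IFCC-NGSP master equation: NGSP (%) = 0.09148 x IFCC (mmol/mol) + 2.152."""
    return 0.09148 * ifcc + 2.152

mean_si, cv_si = 50.0, 3.0        # 3% CV at 50 mmol/mol
sd_si = mean_si * cv_si / 100     # 1.5 mmol/mol
mean_ngsp = ifcc_to_ngsp(mean_si) # ~6.7% NGSP
sd_ngsp = 0.09148 * sd_si         # the intercept drops out of the SD
print(f"CV in NGSP units: {100 * sd_ngsp / mean_ngsp:.1f}%")  # ~2.0%
```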

Table 2:

Imprecision results based on EP-5, EP-15 and on the duplicates in EP-9.

  1. Red: fail performance targets (CV <3% in SI units and <2% in NGSP units). aBased on the duplicates in EP-9.

Method comparison (trueness; EP-9)

Table 3 details the results of the method comparison study and also the NGSP pass/fail rate. From these data it is clear that all analyzers suffered some degree of bias, with some, such as the Allegro, showing this across multiple levels and between lot numbers. The data for the individual POC analyzers vs. the individual SRMPs in NGSP units are available in Supplemental Table 1. From that table and Table 3 it can be seen that only the Cobas B101 passed the NGSP criteria with two lot numbers compared to all four individual SRMPs, and that the QuikReadgo and the Allegro failed the NGSP criteria with both lot numbers for all four individual SRMPs. Figure 1 shows the regression lines for each POC device vs. the mean of the SRMPs. All analyzers suffered some degree of bias. The Lab 001 and the Cobas B101 had the least bias across the lots and the HbA1c range; all other POC analyzers had a statistically significant difference between the lot numbers, at different HbA1c levels, or both, and showed a large dispersion around the Deming regression line compared to the mean of the SRMPs.

Table 3:

Bias at different HbA1c levels for each of the different POC analyzers using two different lot numbers, compared with the mean of the SRMPs, together with the bias between the two lot numbers. The NGSP column shows how many of the possible four SRMP comparisons met the certification criteria.

| | 30 mmol/mol (95% CI) | 48 mmol/mol (95% CI) | 75 mmol/mol (95% CI) | Pass NGSP criteria? (x out of 4 SRMPs) |
|---|---|---|---|---|
| A1Care | | | | |
| Lot A vs. mean SRMP | 31.7 (30.62–32.74)a | 47.9 (47.29–48.57) | 72.3 (70.92–73.69)a | Lot A: 2/4 |
| Lot B vs. mean SRMP | 31.6 (30.42–32.73)a | 48.9 (48.15–49.60)a | 74.8 (73.67–75.98) | Lot B: 1/4 |
| Lot A vs. Lot B | 30 (29.5–31.2) | 49 (48.3–49.4)a | 77 (75.5–77.5)a | |
| Cobas B101 | | | | |
| Lot A vs. mean SRMP | 30.1 (28.81–31.47) | 47.6 (46.82–48.34) | 73.7 (72.95–74.55)a | Lot A: 4/4 |
| Lot B vs. mean SRMP | 29.3 (28.29–30.29) | 47.2 (46.63–47.85)a | 74.2 (73.41–74.93)a | Lot B: 4/4 |
| Lot A vs. Lot B | 29.0 (28.0–30.7) | 48.0 (46.9–48.4) | 75.0 (74.2–76.2) | |
| InnovaStar | | | | |
| Lot A vs. mean SRMP | 30.4 (29.63–31.21) | 47.3 (46.88–47.81)a | 72.7 (71.57–73.87)a | Lot A: 4/4 |
| Lot B vs. mean SRMP | 28.6 (27.73–29.52)a | 46.2 (45.68–46.80)a | 72.7 (71.88–73.45)a | Lot B: 2/4 |
| Lot A vs. Lot B | 28.1 (27.73–28.53)a | 46.7 (46.49–46.94)a | 74.6 (74.04–75.13) | |
| The Lab 001 | | | | |
| Lot A vs. mean SRMP | 29.7 (28.73–30.58) | 47.1 (46.58–47.55)a | 73.2 (72.30–74.06)a | Lot A: 3/4 |
| Lot B vs. mean SRMP | 30.0 (29.44–30.60) | 47.4 (47.07–47.71)a | 73.4 (72.75–74.13)a | Lot B: 4/4 |
| Lot A vs. Lot B | 29.9 (29.47–30.36) | 47.8 (47.57–48.08) | 74.7 (74.26–75.12) | |
| QuikReadgo | | | | |
| Lot A vs. mean SRMP | 28.4 (27.17–29.69)a | 46.5 (45.84–47.21)a | 73.7 (71.65–75.66) | Lot A: 0/4 |
| Lot B vs. mean SRMP | 30.3 (28.70–31.92) | 48.3 (47.30–49.36) | 75.4 (73.79–76.93) | Lot B: 0/4 |
| Lot A vs. Lot B | 32.0 (30.7–33.0)a | 50.0 (49.1–50.5)a | 77.0 (74.8–78.6) | |
| Allegro | | | | |
| Lot A vs. mean SRMP | 32.4 (30.89–34.00)a | 50.0 (49.01–51.01)a | 76.4 (74.27–78.45) | Lot A: 0/4 |
| Lot B vs. mean SRMP | 30.3 (28.48–32.03) | 48.4 (47.22–49.56) | 75.6 (74.06–77.12) | Lot B: 0/4 |
| Lot A vs. Lot B | 28.0 (26.5–29.0)a | 46.0 (45.6–47.1)a | 74.0 (72.8–75.5) | |
  1. POC, point-of-care; NGSP, National Glycohemoglobin Standardization Program; SRMP, Secondary Reference Measurement Procedure. aStatistically significant bias: the nominal value (30, 48 or 75 mmol/mol) is not within the 95% confidence interval limits.

Figure 1:

EP-9 data for each of the POC analyzers when compared to the mean of the SRMPs with two lot numbers. (A) The A1Care, (B) The Lab 001, (C) the Cobas B101, (D) the QuikReadgo, (E) the InnovaStar, (F) the Allegro.

Linearity (EP-6)

Supplemental Table 2 details the results of the linearity study, showing the maximum deviation between the fitted values of the best polynomial line and the regression line for the 11 samples. If the deviation exceeded the allowable nonlinearity (3 mmol/mol), the data were considered nonlinear. Based on this criterion all POC analyzers were linear except for the Cobas B101 and the InnovaStar. However, the lowest sample fell below the lower limit of the InnovaStar's measuring range (>30 mmol/mol); excluding this sample from the calculations showed that the InnovaStar was linear. The HbA1c result of the highest sample was above the upper measuring limit of the Allegro (>130 mmol/mol); its linearity was therefore assessed with 10 samples instead of 11.

Hemoglobin variants AS, AC, AE, AD, elevated A2 and HbF

Table 4 shows the mean relative difference of the frozen and fresh Hb-variants samples and Supplemental Figures 1–6 show the graphs of the interference of Hb-variants for frozen and fresh Hb-variants for the different POC analyzers. All methods, except for the A1Care, had an interference with one or more of the Hb-variants with frozen or fresh samples (mean relative difference was >10%). All Hb-variants were detected and correctly identified by the Lab 001 via an S-, C-, D-, E- or F-window.

Table 4:

Mean relative difference (%) of the common Hb-variants compared to the assigned value after correction for bias in non-variant samples (number of samples for frozen and fresh samples).

  1. Red: at or near a 10% difference. Green: this variant would not be seen to affect the value if only evaluated using frozen samples. The manufacturers' claims are also listed. a% of HbF at which a significant negative bias results.

Schiff Base, icteric samples, different hemoglobin concentrations

None of the POC analyzers showed an interference for Schiff Base, icteric samples or different hemoglobin concentrations. Supplemental Tables 3–5 show the data.

Sample commutability

To investigate the impact of sample type in relation to different clinical applications, fresh (EP-15 and EP-9), frozen (EP-5 and IFCC certification program samples) and lyophilized (ERL EQA scheme) samples were compared. Figure 2A and B show the data from EP-15 and EP-9 (red circles) representing all fresh samples, IFCC certification samples (blue circles) representing all frozen samples, and the ERL EQA samples (green circles) representing lyophilized samples. In addition, Figure 2B shows EP-5 and EP-9 data (purple circles) in order to assess the impact of using EP-5 vs. EP-15 for routine method evaluations. The EP-5 and EP-15 studies were both used to compare performance with fresh and frozen samples, and for the majority of analyzers the EP-15 evaluation provided sufficient data to assess performance.

Figure 2:

Sigma metrics graphs showing the impact of different sample types on the ability to meet the international quality performance criteria. (2A) EP-15 and EP-9 (red) values compared to frozen (blue) and lyophilized (green) samples. (2B) EP-5 (purple) data compared to frozen and lyophilized samples, showing minimal difference in performance between the EP-5 and EP-15 protocols. A1Care (A), Cobas B101 (B), InnovaStar (C), The Lab 001 (D), QuikReadgo (E) and Allegro (F).

The data clearly demonstrate that lyophilized material was commutable only with The Lab 001; all other POC analyzers showed a large positive bias when analyzing lyophilized material. Frozen material was not commutable with the InnovaStar and the QuikReadgo. When using fresh patient samples all analyzers except the Allegro passed the IFCC criterion of a sigma >2 at an HbA1c concentration of 50 mmol/mol. Conversely, the Allegro actually performed better with frozen samples than with fresh samples.
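For context, the sigma metric underlying Figure 2 is typically computed as sigma = (TAE − |bias|)/CV at the decision level, here with the IFCC TAE of 10% at 50 mmol/mol. A minimal sketch with hypothetical bias/CV pairs:

```python
def sigma_metric(tae_pct, bias_pct, cv_pct):
    """Sigma metric at a decision level: (TAE - |bias|) / CV, all in %."""
    return (tae_pct - abs(bias_pct)) / cv_pct

# Hypothetical performance at 50 mmol/mol with the IFCC TAE of 10%:
for bias, cv in [(1.0, 2.0), (4.5, 3.5)]:
    s = sigma_metric(10.0, bias, cv)
    print(f"bias {bias}%, CV {cv}% -> sigma {s:.1f} ({'pass' if s > 2 else 'fail'})")
```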

SUS scores

Table 1 shows the SUS scores of the different POCT analyzers. The usability of all the POCT analyzers was good to excellent (mean SUS scores of 80 and above), except for the QuikReadgo (mean SUS score of 60).

Discussion

What progress has been made?

In the 2014 study the InnovaStar showed an interference with fresh patient samples, which was likely due to the instrument being calibrated using frozen samples [23]. That publication led the manufacturer to switch to fresh patient samples, which are available from the ERL, to calibrate their cartridges, resulting in a lower bias with fresh samples [24].

The Lab 001 device is new to the market and the sigma graphs show excellent performance; however, there was a small bias at higher HbA1c levels. Paradoxically, had the imprecision of the instrument been higher, the bias would not have been detected, as the confidence intervals would have been wider. This device shows that the field of POCT has moved on and that quality improvements are possible.

There are still significant issues with the performance of some analyzers

The key issues we still see are: a) lot to lot variability, b) high imprecision and c) significant interference from variants. Four out of the six evaluated analyzers still do not demonstrate acceptable and/or consistent performance.

The InnovaStar had poor performance between lot numbers that was not acceptable. The new-to-market QuikReadgo also showed a statistically significant difference between the lot numbers, and failed to meet the NGSP criteria with either lot number when compared to any of the four SRMPs. The Allegro showed similarly poor performance. It should be noted that both the Allegro and the A1Care were evaluated in a previous study (data not presented) in which the results were acceptable to good. As the data for certain elements of the current study were acceptable, it is likely that there is an inconsistency in the manufacturing chain that needs to be identified. The considerable variability in performance across the analyzers, albeit less than in earlier studies, shows that there is still work to be done [6].

A key issue with POC analyzers still appears to be interference from variant hemoglobins. A complicating factor when evaluating Hb-variants is that some analyzers (QuikReadgo and InnovaStar) are not compatible with frozen samples. This study addresses the issue through the use of fresh Hb-variant samples; however, it was not possible to obtain as wide a range or number of fresh Hb-variant samples as would be desirable. Explaining the interferences seen in these analyzers, which are nearly all immunoassays, is difficult: why would frozen and fresh samples perform so differently? Why do the variants cause an interference at all when the epitopes that the antibodies bind do not contain the mutations that cause the Hb-variant? HbE, for example, results from an inherited single-base mutation at codon 26 of the beta-globin gene, leading to substitution of lysine for glutamic acid (the 26th amino acid of the beta chain), which should not, theoretically, interfere with the antibodies used in the POC analyzers. A possible explanation could be that the mutation causes folding of the hemoglobin molecule in such a way that position 26 comes very close to the first four amino acids of the beta chain and interacts with the antibodies used in the immunoassay [25]. Alternatively, this may be due to differences in the immobilization of the antibodies, which lead to differences in the surface chemistry and thus the binding of the antibody.

The Lab 001 suffers from interference with fresh HbAS samples, but less so with frozen samples, which are easier to obtain for method development. This potentially poses a problem for patient samples; however, it is mitigated by the fact that the Lab 001, as a capillary electrophoresis method, is capable of identifying the presence of a variant, unlike most other POC analyzers.

It is important to clarify that some of the manufacturers do clearly state that the presence of variants may alter the HbA1c results; however, these claims and the findings of this study do not always correlate.

It is not all about analytical performance

Whilst many evaluations focus on analytical performance, it is important to consider the wider context of the use of POC analyzers. The usability/user-friendliness of each of the analyzers was assessed and found to be variable.

A crucial factor in the practical, clinical usability of an instrument is how long it takes to generate a result, with the benefit of providing real-time results often touted as a key selling point for POC analyzers. The time from a ‘cold start’ (turning the power on and warming reagents if needed) to a result was assessed. The range of times was wide, from ≈3.0 min for The Lab 001 to ≈16.5 min for the InnovaStar. This is important information for users of the analyzers, as they may have little advance warning that a test needs to be undertaken.

Key messages

This complex and detailed evaluation provides a comprehensive overview of six HbA1c POC analyzers. Whilst there are areas of excellent performance, there are still significant areas for improvement, with the performance of some analyzers being unacceptable. It is possible for an analyzer to meet the IFCC and/or NGSP certification criteria and perform well in one evaluation, and then perform very poorly in subsequent evaluations. From a clinical and scientific perspective this is alarming. It is essential that the performance of an analyzer is stable, especially with increased use for both monitoring and diagnosis of people with diabetes. Four of the analyzers in this study showed highly variable performance, which is not acceptable.

It is unclear why such discrepant results are seen when fresh or frozen samples are used, especially as this is not commonly seen with routine laboratory analyzers. Whilst evaluations and method development often necessitate the use of frozen samples, it is rare that a POCT analyzer would be used with anything other than fresh samples. We have shown here that there can be marked differences in performance with each sample type.

One way to identify variability in performance is through the use of EQA schemes. The authors strongly advocate the use of EQA to identify ongoing performance issues; although POC analyzers are often exempt from the need to participate in EQA, it is a valuable and powerful tool for monitoring performance. A caveat is that POC analyzers may not be able to utilize the frozen or lyophilized samples often used in EQA schemes. None of the manufacturers claim in their information for users that lyophilized samples can be used, and this is supported by the data from the ERL EQA samples (see Figure 2). EQA program leads need to be cognizant of this issue and work towards providing commutable samples for POC analyzers.

As discussed earlier, the interference from Hb-variants in a number of the analyzers is perplexing. The disparity in results between fresh and frozen samples is of concern, as many manufacturers will likely develop their methods using frozen samples; in the ‘real world’ setting where fresh samples are used, the variants then pose a potentially unseen problem. Not all manufacturers are accurate in their claims for Hb-variant performance.


Corresponding author: Erna Lenters-Westra, Isala, Department of Clinical Chemistry, Dr. Van Heesweg 2, 8025 AB Zwolle, The Netherlands; and European Reference Laboratory for Glycohemoglobin, Location Isala, Zwolle, The Netherlands, Phone: +31 384247803, E-mail:

Funding source: DiaSys Diagnostic Systems

Funding source: Roche Diagnostics

Funding source: Arkray Europe

Funding source: i-SENS

Funding source: Aidian

Funding source: Nova Biomedical

Acknowledgments

The authors wish to thank Agnes den Ouden for her support with the analysis of the samples. They also wish to thank DiaSys Diagnostic Systems, Holzheim, Germany, Roche Diagnostics, Almere, The Netherlands, Arkray Europe, Amsterdam, The Netherlands, i-SENS, Seoul, South Korea, Aidian, Espoo, Finland and Nova Biomedical, Waltham, US for their financial support and for donating reagents and analyzers used in this study.

  1. Research funding: This study was funded by DiaSys Diagnostic Systems, Roche Diagnostics, Arkray Europe, i-SENS, Aidian and Nova Biomedical. The manufacturers provided all analyzers and reagents at no cost and financially supported this study. The evaluating protocol was designed by the authors. The manufacturers played no role in the review and interpretation of data or preparation or approval of manuscript and had no rights of refusal for publication of the data.

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission. All authors confirmed they have contributed to the intellectual content of this paper and have met the following three requirements: (a) significant contributions to the conception and design, acquisition of data, or analysis and interpretation of data; (b) drafting or revising the article for intellectual content; and (c) final approval of the published article.

  3. Competing interests: Authors state no conflict of interest.

References

1. IDF Diabetes Atlas. Available from: https://www.diabetesatlas.org/en/ [Accessed Aug 2020].

2. Schols AMR, Dinant GJ, Cals JWL. Point-of-care testing in general practice: just what the doctor ordered? Br J Gen Pract 2018;68:362–3. https://doi.org/10.3399/bjgp18x698033.

3. Howick J, Cals JWL, Jones C, Price CP, Plüddemann A, Heneghan C, et al. Current and future use of point-of-care tests in primary care: an international survey in Australia, Belgium, The Netherlands, the UK and the USA. BMJ Open 2014;4:e005611. https://doi.org/10.1136/bmjopen-2014-005611.

4. Jones CH, Howick J, Roberts NW, Price CP, Heneghan C, Plüddemann A, et al. Primary care clinicians' attitudes towards point-of-care blood testing: a systematic review of qualitative studies. BMC Fam Pract 2013;14:117. https://doi.org/10.1186/1471-2296-14-117.

5. Lenters-Westra E, Slingerland RJ. Six of eight hemoglobin A1c point-of-care analyzers do not meet the general accepted analytical performance criteria. Clin Chem 2010;56:44–52. https://doi.org/10.1373/clinchem.2009.130641.

6. Hirst JA, McLellan JH, Price CP, English E, Feakins BG, Stevens RJ, et al. Performance of point-of-care HbA1c test analyzers: implications for use in clinical practice – a systematic review and meta-analysis. Clin Chem Lab Med 2017;55:167–80. https://doi.org/10.1515/cclm-2016-0303.

7. NGSP CAP survey data. Available from: http://www.ngsp.org/CAP/CAP20a.pdf [Accessed Aug 2020].

8. EurA1c Trial Group. EurA1c: the European HbA1c trial to investigate the performance of HbA1c assays in 2166 laboratories across 17 countries and 24 manufacturers by use of the IFCC model for quality targets. Clin Chem 2018;64:1183–92. https://doi.org/10.1373/clinchem.2018.288795.

9. Nathan DM, Griffin A, Perez FM, Basque E, Do L, Steiner B. Accuracy of a point-of-care hemoglobin A1c assay. J Diabetes Sci Technol 2019;13:1149–53. https://doi.org/10.1177/1932296819836101.

10. Lenters-Westra E, English E. Investigation of quality of POCT HbA1c analyzers: what are our next steps? J Diabetes Sci Technol 2019;13:1154–7. https://doi.org/10.1177/1932296819850838.

11. Lenters-Westra E. Independent evaluation using fresh patient samples under real clinical conditions is vital for confirming the suitability and marketability of any new HbA1c assay. An example. Clin Chem Lab Med 2018;56:e157–9. https://doi.org/10.1515/cclm-2017-0930.

12. Approved laboratories of the IFCC network laboratories for HbA1c; 2020. Available from: https://information.ifcchba1c.org/sustainable-implementation/reference-laboratories/list-reference-laboratories [Accessed Aug 2020].

13. NGSP network laboratory members. Available from: http://www.ngsp.org/network.asp [Accessed Aug 2020].

14. CLSI document EP6-A. Evaluation of the linearity of quantitative measurement procedures: a statistical approach. Approved guideline. Wayne, PA, USA: CLSI; 2003.

15. Information on the IFCC certification program. Available from: https://www.ifcchba1c.org/UserData/Files/Certificate%20Information.pdf [Accessed Aug 2020].

16. Information on the ERL EQA program. Available from: http://www.euroreflab.com/ [Accessed Aug 2020].

17. System Usability Scale (SUS). Available from: https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html [Accessed Aug 2020].

18. Interpretation of SUS scores. Available from: https://measuringu.com/interpret-sus-score/ [Accessed Aug 2020].

19. Weykamp CW, John WG, Gillery P, English E, Ji L, Lenters-Westra E, et al. Investigation of two models to set and evaluate quality targets for HbA1c: biological variation and sigma-metrics. Clin Chem 2015;61:752–9. https://doi.org/10.1373/clinchem.2014.235333.

20. NGSP manufacturer performance criteria. Available from: http://www.ngsp.org/critsumm.asp [Accessed Aug 2020].

21. Sacks DB, Arnold M, Bakris GL, Bruns DE, Horvath AR, Kirkman MS, et al. Guidelines and recommendations for laboratory analysis in the diagnosis and management of diabetes mellitus. Diabetes Care 2011;34:e62–99. https://doi.org/10.2337/dc11-9998.

22. Weykamp CW, Mosca A, Gillery P, Panteghini M. The analytical goals for hemoglobin A1c measurement in IFCC units and National Glycohemoglobin Standardization Program units are different. Clin Chem 2011;57:1204–6. https://doi.org/10.1373/clinchem.2011.162719.

23. Lenters-Westra E, Slingerland RJ. Three of seven hemoglobin A1c point-of-care analyzers do not meet the generally accepted analytical performance criteria. Clin Chem 2014;60:1062–72. https://doi.org/10.1373/clinchem.2014.224311.

24. Information for manufacturers on support and services by the ERL. Available from: https://information.ifcchba1c.org/sustainable-implementation/manufacturers/support-and-services.aspx [Accessed Aug 2020].

25. Lenters-Westra E, Strunk A, Campbell P, Slingerland RJ. Can the Afinion HbA1c point-of-care instrument be an alternative method for the Tosoh G8 in the case of Hb-Tacoma? Scand J Clin Lab Invest 2016;18:1–6. https://doi.org/10.1080/00365513.2016.1183261.


Supplementary Material

The online version of this article offers supplementary material (https://doi.org/10.1515/cclm-2020-1308).


Received: 2020-08-27
Accepted: 2020-10-20
Published Online: 2020-11-02
Published in Print: 2021-03-26

© 2020 Erna Lenters-Westra and Emma English, published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
