Open Access (CC BY 4.0). Published online by De Gruyter, May 23, 2022

Internal quality control – past, present and future trends

Carmen Ricós, Pilar Fernandez-Calle, Carmen Perich and James O. Westgard



Objectives

This paper offers a historical view, summarizing the internal quality control (IQC) models used from the second half of the twentieth century to those performed today, and gives a projection on how the future should be addressed.


Methods

The material used in this study comprises all papers collected that refer to IQC procedures. The method used is a critical analysis of the different IQC models, with a discussion of the weak and strong points of each model.


Results

The first models were based on testing control materials and using multiples of the analytical procedure's standard deviation as control limits. Later, these limits were replaced by values related to the intended use of the test, mainly derived from biological variation. For measurands with no control material available, methods based on replicate analysis of patients' samples were developed and have recently been improved; in addition, the sigma metric, which relates the quality desired to the laboratory's performance, has resulted in a highly efficient quality control model. The present tendency is to modulate IQC considering the workload and the impact of an analytical failure in terms of patient harm.


Conclusions

This paper highlights the strong points of IQC models, indicates the weak points that should be eliminated from practice, and gives a future projection on how to promote patient safety through laboratory examinations.


Laboratory medicine is a discipline that provides information on patient health status, and there is no doubt that many medical decisions are based on laboratory test results; however, how those results impact each patient's healthcare has not been objectively measured [1].

In any case, laboratory professionals should ensure the reliability of the information produced so that clinicians can avoid erroneous decisions that would adversely affect patient healthcare.

Daily laboratory work includes examination and extra-examination activities, whose quality has been extensively studied in our setting [2], [3], [4], [5], [6], [7]. The Spanish Society of Laboratory Medicine (SEQCML) dedicates three working groups, namely the Analytical Quality Commission and the Extra-analytical Quality Commission, together with the External Quality Assurance Programs Committee, to implementing best practices.

The examination phase of the global laboratory process is focused on the measurement of the biological substances that constitute human fluids. In this article, a biological quantity appraised in a human fluid is termed a "measurand". Implementing a quality assurance system for the examination procedures used in the laboratory is an essential task. Two key activities are the implementation of internal quality control strategies and participation in external quality control programs to ensure the reliability of results (accurate and precise). Other elements of the quality system, such as monitoring key performance indicators, internal audits, etc., are not considered in this paper.

The aim of internal quality control is to monitor the examination process in order to avoid producing erroneous information concerning patient health status.

This paper offers a historical view, summarizing the internal quality control (IQC) models used from the second half of the 20th century to those performed today, and gives a projection on how the future should be addressed.

Materials and methods

The material used in this study comprises all papers collected that refer to IQC procedures.

The method used is a critical analysis of the different models, with a discussion of the weak and strong points of each model.

Results and discussion

Bases from the past

The oldest model, developed in the 1950s, was based on a statistical criterion: calculating the mean of a number of results from a single measurand in the same control sample. It was supported by the use of a control chart in which the results of control samples were plotted on the y-axis vs. time (run or day) on the x-axis. The mean and standard deviation were marked on the y-axis. Control limits were defined as ±2 standard deviations from the mean, which meant that 95% of results were expected to fall within these limits and be "in-control", while the other 5% were considered to identify "out-of-control" situations [8, 9].
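The ±2SD criterion described above can be sketched in a few lines (a minimal illustration with invented control values; the function names are not from the paper):

```python
# Minimal sketch of the classic control-chart criterion: a control
# result is "in-control" if it falls within mean +/- 2 SD, where the
# mean and SD are established from previous control measurements.
from statistics import mean, stdev

def levey_jennings_limits(history, k=2):
    """Return (lower, upper) control limits as mean +/- k*SD."""
    m, s = mean(history), stdev(history)
    return m - k * s, m + k * s

def in_control(result, history, k=2):
    lo, hi = levey_jennings_limits(history, k)
    return lo <= result <= hi

history = [100, 102, 98, 101, 99, 103, 97, 100, 101, 99]
print(in_control(100.5, history))  # within +/-2 SD -> True
print(in_control(110.0, history))  # far outside -> False
```

By construction, roughly 5% of genuinely in-control results will still fall outside these limits, which is the false-rejection problem discussed next.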

In the 1980s James Westgard evaluated this approach in the context of multitest automated systems, with control results at different concentration levels (usually within and outside the biological reference interval) for each measurand. Use of 2SD control limits was seen to cause a high level of false rejections. Alternative control rules were proposed to limit false rejections while at the same time maximizing error detection through the application of multiple rules. Control rules were identified by shorthand abbreviations, such as 13s, which indicates a run rejection when one control measurement exceeds control limits defined as the mean ± 3 SD. A series of rules, 13s/22s/R4s/41s/10x, was introduced as an example of a multirule [10, 11]. The individual rules were selected to maintain a low probability for false rejection (pfr), while the additive effect of the rules increased the probability for error detection (ped). This multirule algorithm was implemented on many automatic analyzers, allowing the laboratory professional to select the operative control rules to be applied.
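The multirule series above can be sketched as follows, operating on z-scores (control result minus target mean, divided by SD). This is a simplified single-sequence illustration; in practice the rules are applied within and across runs and control levels:

```python
# Sketch of Westgard multirule checks on a sequence of control z-scores.
# Rule shorthand follows the text: 1_3s, 2_2s, R_4s, 4_1s.
def violates_13s(z):
    # one control exceeds +/-3 SD
    return any(abs(v) > 3 for v in z)

def violates_22s(z):
    # two consecutive controls exceed the same +/-2 SD limit
    return any((z[i] > 2 and z[i + 1] > 2) or (z[i] < -2 and z[i + 1] < -2)
               for i in range(len(z) - 1))

def violates_r4s(z):
    # range between consecutive controls exceeds 4 SD
    return any(abs(z[i] - z[i + 1]) > 4 for i in range(len(z) - 1))

def violates_41s(z):
    # four consecutive controls exceed the same +/-1 SD limit
    return any(all(v > 1 for v in z[i:i + 4]) or all(v < -1 for v in z[i:i + 4])
               for i in range(len(z) - 3))

def multirule_reject(z_scores):
    """Reject the run if any rule in the 13s/22s/R4s/41s series fires."""
    return (violates_13s(z_scores) or violates_22s(z_scores)
            or violates_r4s(z_scores) or violates_41s(z_scores))

print(multirule_reject([0.5, -1.2, 1.8, 0.3]))  # no rule fires -> False
print(multirule_reject([2.1, 2.4]))             # 2_2s fires -> True
```

Each rule alone keeps pfr low; combining them raises ped, which is the rationale stated in the text.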

At this time, all IQC applications would be properly described as statistical process control (SPC). The quality required for the intended medical use of test results was not considered in the design or planning of SPC applications. Control limits were based on the observed variability of the testing process, without any connection to the clinical use of the test. Establishing that a testing method was acceptable for intended medical use depended on performing a proper “method evaluation” prior to implementation of the method. The concept of “allowable Total Error” had been introduced as the form of quality requirement for method evaluation and development [12].

The main weakness of this approach was that, by relying on a purely statistical criterion, the margins of tolerance simply reflected the analytical performance inherent in the method. Additionally, it was not possible to assess the need to improve the performance of the different analytical methods, because it was not possible to know to what extent their analytical performance met clinical needs.

Another weakness was that laboratories with different analytical performance used different control limits, which did not promote harmonization.

During the 1980s to mid-1990s, efforts to define quality requirements and link them to quality control were led by Professor C.-H. de Verdier from Sweden and Professor Mogens Hørder from Denmark under the auspices of NORDKEM, the Nordic Clinical Chemistry Projects that involved all the Scandinavian countries [13], [14], [15], [16].

In the second half of the 1990s a multinational group of experts, under the auspices of the Standards, Measurement and Testing Programme, proposed a series of recommendations for IQC [17] that are summarized as follows:

  1. IQC should be integrated into the quality management system of the laboratory.

  2. IQC should be combined with an external program with target values traceable to higher order reference standards.

  3. IQC should assure that performance specifications based on biological variation are attained, whenever possible.

  4. For a measurement procedure with a high error frequency, the IQC protocol should have the maximum ped together with the lowest pfr possible, to stimulate good resolution and prevention of problems. The general tendency should be to use a relaxed operative rule with the minimum pfr possible.

  5. Patient results for all requested measurands should be reviewed before releasing test results, but in no case does this review substitute for control sample testing.

  6. It is better to prevent errors than to correct them.

The recommendations from Hyltoft Petersen et al. [17] introduced the concept of quality specifications based on biology and the clinical use of laboratory reports, instead of on statistics, as had been done before. This approach had in mind the final use of medical laboratory measurements for patient care.

A hierarchic strategy concerning performance specifications was established at the Stockholm international consensus conference in 1999; from that moment, quality specifications were calculated accordingly [18]. At this conference a biological variation database, prepared by the SEQCML Analytical Quality Commission, was presented [19]. It had been prepared by compiling published papers, excluding those with poor reliability, and calculating the median value for each measurand. A list of quality specifications for bias, imprecision and total analytical error was also included. This database was updated every two years until 2014 and was universally well known through its inclusion on the Westgard website [20].

A different concern raised in the 2000s was the commutability of control materials. Commercial controls generated by in vitro diagnostic (IVD) providers required stabilization, additives to adjust the desired concentrations, etc., so the materials could behave differently from real patients' specimens when analyzed by routine laboratory methods. Soon, certain organizations used control samples of human origin that were simply frozen, maintained at −80 °C and distributed in aliquots without any other manipulation, which avoided the mentioned limitation [21]. The problem in this case was (and still is today) the difficulty of preparing the necessary amount of control material for daily IQC and the cost of maintenance at −80 °C.

For those measurands without stable control material available, it was recommended to develop patient-based QC algorithms, such as the average of normals (AoN). The idea, first described in the 1960s, was to calculate the mean (or median) of a number of daily patient results and to monitor how this mean varied with time (moving average) [22]. The condition for applying this model was that measurands had to be under tight biologic control (such as electrolytes or calcium), or come from stable patient populations, such as red cell indices [23]. AoN was not as useful for analytes with wide population distributions. To provide guidance on when to use AoN algorithms, Cembrowski [24] utilized the ratio of the population standard deviation (SD) divided by the measurement SD in a nomogram to estimate the number of patient samples that would be needed for effective error detection.

Present situation

In the 21st century, IQC models based on the performance of the analytical procedure were developed; the rationale was the sigma metric, based on the relationship between the performance specification to be attained, expressed in terms of total allowable analytical error (TEa, %), and the real performance, expressed in terms of systematic error in absolute value (|SE|, %) and imprecision (CV, %):

Sigma = (TEa − |SE|) / CV

The higher the Sigma the more relaxed the IQC procedure can be, and vice versa [25].
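As a worked example of the formula (the TEa, bias and CV values below are invented for illustration, not taken from the paper):

```python
# Sigma metric as defined above: Sigma = (TEa - |SE|) / CV,
# with all terms expressed in percent.
def sigma_metric(tea_pct, se_pct, cv_pct):
    return (tea_pct - abs(se_pct)) / cv_pct

# e.g. a measurand with TEa = 6.9%, observed bias 1.0%, CV 1.5%
print(round(sigma_metric(6.9, 1.0, 1.5), 2))  # -> 3.93
```

A sigma near 4 would call for a fairly demanding control rule, whereas a sigma above 6 would allow a relaxed one, as stated above.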

If the IQC is focused on a single analytical procedure in a single laboratory, the SE term can be set to zero if the aim is to detect changes in the laboratory's regular performance of the measurement procedure that could imply an inappropriate medical impact. However, when two or more analytical systems are used in the same laboratory or in various laboratories of a single healthcare service, the SE has to be considered [26]. Moreover, even on a single analytical procedure, the use of different calibrator or reagent lots in long-term processes could produce SE, and this could cause problems in patient monitoring (e.g., PSA testing after radiotherapy for prostate cancer).

The three key points of an IQC protocol are described in the following sections.

Key point 1. Analytical performance specifications

Total allowable error should be based on one of the three proposals from the Milan 1st IFCC Strategic Conference [27]. Ideally analytical performance specifications should be based on the impact of analytical error on patient care; however, it is difficult to establish the direct relationship between these two facts [26].

This is the reason why biological variation is the most widely used basis for establishing analytical performance specifications: it assures the correct clinical use of laboratory tests. For measurands without data on biological variation or with weak physiological regulation, the state of the art (the highest level of analytical performance technically achievable) is another option accepted at the Milan conference [27, 28].

Biological variation (BV) data are presently available on the EFLM website [29]. This new database was created from an exhaustive revision of the papers included in the first database [19, 20], as well as from the application of an effective bibliographic search able to recover as many BV papers as possible. The EFLM BV Working Group has developed an evaluation tool with very strict criteria for assessing the reliability of the methodology used in the published papers, with a further meta-analysis to improve the robustness and reliability of the new BV estimates [30].

The state of the art can be obtained from the external quality assessment program in which the laboratory participates, on the basis of the 20th or 30th percentiles of participants' deviations [31]. The annual revision of the SEQCML programs shows the 20th, 30th, 50th, 70th and 90th percentiles for each measurand.

Key point 2. Performance of examination procedure

The basic performance of the examination procedure is measured in each laboratory under routine conditions, in terms of CV and systematic error (SE), i.e., the percentage deviation from the target value of the control material, for each measurand. When using control materials provided by the manufacturer of the analytical system, it is recommended to calculate the CV and target value from a minimum of 10–20 measurements of control material over 10–20 consecutive days, under stable analytical conditions (same reagents, same technician, etc.); it is also recommended to update this information after a longer period (6–12 months) [26]. It is good practice to assess the agreement of observed means with the range of target values defined by the provider for the same lot of control materials over the time of use.
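The CV and SE estimates described here amount to simple arithmetic over the repeated control measurements; a minimal sketch with invented values:

```python
# CV (%) from repeated control measurements, and SE (%) as the
# percentage deviation of the observed mean from the control
# material's target value, as described in the text.
from statistics import mean, stdev

def cv_percent(results):
    return 100 * stdev(results) / mean(results)

def se_percent(results, target):
    return 100 * (mean(results) - target) / target

# ten daily control measurements (invented values)
control = [5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9, 5.0, 5.0]
print(round(cv_percent(control), 2))              # -> 2.31
print(round(se_percent(control, target=4.9), 2))  # -> 2.04
```

These two figures, together with TEa, feed directly into the sigma metric defined earlier.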

Concerning control material, using stabilized controls independent of the analytical system and provided by another manufacturer, the so-called third-party controls, is highly recommended to verify that the measurement procedure does not deviate from the target value of the control material; the ISO 15189 [32] standard specifically recommends this type of control. The target value is considered to be the average of results obtained by the peer group of laboratories (formed by those using the same analytical principle, same method, same instrument and same control material lot). In this way, the central value with which the individual deviation is compared is statistically robust. This method helps evaluate harmonization among laboratories, although not standardization, because there is no traceability to any higher-order reference standard [33, 34].

Indirectly, this material gives a general view of deviation with respect to other laboratories if results from all laboratories are available, in the same way as in an external program. There is a tendency to call this type of control "internal-external control"; however, this term cannot be considered correct, because these are not blind control samples, which is one of the main characteristics of external control. So, if this type of control is used as IQC, it has to be complemented with an external program.

Because IQC should verify not only the reproducibility of the measurement procedure but also the bias with respect to the target value, some authors insist on incorporating the concept of traceability to higher-order standards by using commutable control materials with values assigned by certified reference methods [35, 36]. However, this idea may be quite difficult to put into practice because of the poor availability of these traceable and commutable control materials for daily work.

For this reason, it is very important to participate in an external quality assessment program that uses this type of control material. Being a discontinuous task, it is logistically and economically affordable. These materials can be considered reference standards; they allow evaluating the accuracy of laboratory test results [37, 38] and also the standardization among laboratories.

Key point 3. IQC based on patients’ results

An alternative way to avoid the commutability problem is to use patients' results as a complementary IQC protocol.

The AoN with a moving average calculation (Hoffman, Bull) [22, 23] has already been mentioned; this model has advanced considerably in recent years, using the laboratory informatics systems now available [39, 40]. Recommendations on the verification and validation of an IQC protocol using the moving average, before it is implemented in the laboratory, have recently been published by the IFCC [41].

With the aim of increasing the error detection power of this model, it is important to define the patient population included in the average; to do this, a multivariate regression analysis has been studied that includes the mean of the previous 2000 patient results and other independent variables such as age, sex, outpatient vs. inpatient status, as well as the patients' diagnoses and hospital service [42, 43].

In January 2022 the SEQCML Analytical Quality Commission gave a continuing education course with an in-depth review of this strategy, with emphasis on the factors that optimize its implementation [44]. These are:

  1. Definition of truncation limits to select the patients' results to be averaged.

  2. Description of the algorithm to be used (mean, median, weighted mean, etc.).

  3. Designation of the control limits, which have to be in accordance with the analytical performance specifications intended by the laboratory.
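The three factors above can be combined in a minimal patient-based moving-average sketch (class name, truncation limits, target and thresholds are all invented for illustration, not from the cited course):

```python
# Sketch of a moving-average IQC step: truncation limits, an averaging
# algorithm (here a simple mean over a sliding window), and a control
# limit expressed as allowable percentage drift from the expected
# population mean.
from collections import deque
from statistics import mean

class MovingAverageQC:
    def __init__(self, low, high, window, target, limit_pct):
        self.low, self.high = low, high      # truncation limits
        self.window = deque(maxlen=window)   # last N accepted results
        self.target = target                 # expected population mean
        self.limit_pct = limit_pct           # allowable drift, %

    def add(self, result):
        """Add a patient result; return True while the moving mean is in control."""
        if self.low <= result <= self.high:  # truncate outliers
            self.window.append(result)
        if len(self.window) < self.window.maxlen:
            return True                      # not enough data yet
        drift = 100 * abs(mean(self.window) - self.target) / self.target
        return drift <= self.limit_pct

qc = MovingAverageQC(low=130, high=150, window=5, target=140, limit_pct=2.0)
for sodium in [139, 141, 140, 138, 142]:  # stable patient stream
    print(qc.add(sodium))                  # stays in control
```

A sustained shift in the patient stream (e.g., several consecutive results around 149) would push the moving mean beyond the drift limit and flag the run.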

A recent paper from Bayat et al. [45] compares the classic IQC model based on control materials (multirules) with the one based on patients' results. They noted that the classic method is very practical for high-sigma (sigma > 4) procedures (simpler and quicker for releasing patients' results), whereas a multirule coupled with a simple moving average can improve error detection in low-sigma (sigma < 4) procedures, in spite of its complexity.

The weak point of this approach is that it does not directly estimate analytical bias, because the true values of patients' results are unknown; nevertheless, it detects a change of bias, which is useful for monitoring procedure performance. This limitation can be overcome by combining the moving-average IQC with an external quality control program adequate for verifying laboratory bias.

It is also possible to perform daily duplicate analysis of various patient samples, calculate the differences between duplicates and obtain the so-called patient delta for each measurand. The average of deltas is ideally zero; when it shifts, a change in analytical bias is indicated [46]. Cervinski and colleagues applied this model to the laboratory informatics system for inpatients and investigated the optimum number of patient sample pairs to attain maximum detection power for measurement bias [42].

Analytical imprecision, expressed in terms of standard deviation (SD), can be calculated from differences between duplicates using the formula:

SD = [Σ(x2 − x1)² / (2n)]^(1/2)

where x2 and x1 are the second and first results of each patient pair and n is the number of duplicates tested.
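The formula can be applied directly to a list of duplicate pairs (the values below are invented for illustration):

```python
# SD estimated from patient duplicates using the formula above:
# SD = sqrt( sum((x2 - x1)^2) / (2*n) )
from math import sqrt

def sd_from_duplicates(pairs):
    n = len(pairs)
    return sqrt(sum((x2 - x1) ** 2 for x1, x2 in pairs) / (2 * n))

# four (first, second) duplicate results for one measurand
pairs = [(5.0, 5.1), (4.8, 4.8), (5.2, 5.0), (4.9, 5.0)]
print(round(sd_from_duplicates(pairs), 3))  # -> 0.087
```

The division by 2n reflects that each squared difference of a duplicate pair carries the variance of two measurements.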

An excellent example of a large-scale IQC based on patients' results is the Empower Project of Linda Thienpont and coworkers [47]. A number of patients' results coming from different laboratories are compiled together with records of the analytical method, the instrument, and the calibrator and reagent lots used. Robust statistical parameters are obtained to give manufacturers and laboratories a realistic view of assay quality and comparability, as well as of the stability of performance or the reasons for increased variation.

Since 2011, the concept of patient risk management has been integrated into the IQC protocol. The Clinical and Laboratory Standards Institute (CLSI) indicates that three risk factors have to be considered [48, 49]. These are:

  1. How likely a failure of the measurement procedure is to occur (probability).

  2. How severe the potential harm to the patient is if the failure goes undetected (severity).

  3. How reliably the IQC strategy can detect the failure if it occurs (detectability).

For this reason, the IQC protocol has to establish the frequency of control material testing and the number of controls per run. This depends on the number of patient samples to be tested, the control levels available, the imprecision of the measurement procedure and the maximum expected number of erroneous patient results reported owing to the occurrence of an out-of-control event (it is not the same to assume 10% of misclassified patients for cardiac troponin, which is the key diagnostic test for myocardial infarction, as for triglycerides or ALT, for example). That is, it is important to take into account the frequency of control testing to avoid delivering patient results from incorrect analytical runs [50], [51], [52]. There are some informatics applications that could help medical laboratories to implement this type of IQC system.

If informatics support is not available, it is extremely practical to use the nomogram proposed by Westgard et al. (Figure 1), where an operative control rule can be selected from the σ value and the number of patient samples to be tested (workload). The control rule can be translated into power function graphs to identify its probability for error detection (ped) and for false rejection (pfr) (Figure 2) [52].

Figure 1: 
Nomogram based on σ metrics and work load to select the operative control rule.
Obtained from Westgard JO et al. Clin Chem 2018;64:259–296 [52]. MR N4: multi-rule with 4 controls per run (13s/22s/R4s/41s). MR N2: multi-rule with 2 controls per run (13s/22s/R4s). N: number of control results.


Figure 2: 
Power functions of the operative control rules included in the nomogram.
Obtained from Westgard JO et al. Clin Chem 2018;64:259–296 [52]. pfr: probability for false rejection. ped: probability for error detection. N: number of controls processed. R: number of analytical runs in which the operative control run is applied.


This model facilitates the use of different operative rules during the working day: strict rules, with maximum ped, for critical moments such as calibration or instrument adjustment, and a more relaxed rule, with minimum pfr, for monitoring analytical runs during the rest of the day.

The strong point of this model, based on multirules amplified with an estimate of the frequency of control testing, is that it is easy to apply today, because multirules are incorporated into most automatic analyzers and the mentioned Westgard nomogram is published [52].

Considerations about POCT

Point-of-care testing (POCT) requires particular attention, because these devices are managed by non-laboratory personnel.

If qualitative or semi-quantitative tests are produced on simple devices (single strips, cassettes, cartridges), such as pregnancy tests or in-house tests, only positive and negative controls have to be tested, and the results should be concordant.

When quantitative results are produced on more complex devices, such as for HbA1c, blood glucose, blood gases and complete blood count, it is necessary to use control materials in the same way as in the central laboratory [26], and the interchangeability of POCT results with the central laboratory should be assessed (ISO 22870) [53]. Testing patients' samples in parallel with the central laboratory is also good practice, when the POCT location makes it possible. Venner et al. recommend testing a minimum of 10 positive patients and 10 negative patients in the initial verification of a POCT device, and five patients of each type in ongoing monitoring of devices [54].

Various performance indicators for POCT have been proposed, which include extra-analytical steps, such as the difference between the number of tests expected from the consumables used and the tests actually performed on POCT devices, the percentage of tests reported in the LIS over the tests performed on POCT analyzers, and the percentage of non-identified operators. For the POCT analytical phase, an example of an indicator is the percentage or number of measurands with a coefficient of variation within the analytical performance specification [55].

To evaluate staff training and competency, and to verify that only qualified people use the POCT devices, a POCT indicator may be the percentage of tests performed by the most active POCT operator over all tests performed in each clinical setting.

When a relationship with the central laboratory exists, it is important to implement connectivity between the laboratory informatics system and the POCT data manager system, which makes it possible to block the release of results from outside the laboratory when necessary.

Present global view

Two surveys concerning IQC protocols were conducted by Sten Westgard among 700–900 laboratories across five continents in 2017 and 2021. Questions focused on two aspects: the IQC model used and the immediate management of control results [56, 57]. Tables 1 and 2 show the main results obtained in the two surveys.

Table 1:

Worldwide surveys concerning the IQC model applied. Answers expressed in terms of percentage of laboratories.

IQC model 2017 2021
Control limits

Laboratory standard deviation 63 58
Manufacturer control insert 43 57
Peer group standard deviation 20 24

Operative control run

12s 55 59
Multi-rule for all measurands 23
Multi-rule for some measurands 64

Control sample origin

Instrument manufacturer 64 67
Third party, liquid, assayed 44 43
Third party, lyophilized 35 31
Third party, liquid, unassayed 30 20
Average of normals 11 14


Frequency of control testing

Once per day 49 54
Several times per day 41 46
Staff criterion 38 38
According to patient risk 14 NR
Beginning and end work day 13 NR
Patient groups (i.e., every 100 patients) 9 NR

NR, not requested.

Table 2:

Worldwide surveys concerning the immediate management after an out-of-control warning. Answers expressed in terms of percentage of laboratories.

Immediate management 2017 2021
Out-of-control measurement procedure

To search for reasons before repeating patient samples 78 79
To repeat control sample 78 68
To prepare a new control sample 64 55
To recalibrate the instrument 16 20
To immediately notify the manufacturer 2 4

Retesting patient samples

Only certain groups 33 31
All daily patients 32 33
Only abnormal results 20 24
Only those near the failed control 13 14

Releasing patient results when control failed

Never 54 48
Few times (<10/month) 30 30

Concerning the IQC model implemented, shown in Table 1, the following situation is observed:

  1. IQC procedures should be planned to verify attainment of the quality required for intended use, accounting for the observed bias and precision of the method and the rejection characteristics of the SQC procedure.

  2. All control limits are statistical limits, but the particular decision rules and number of measurements can be selected to ensure the desired quality is achieved. If the QC selection/planning activity neglects to define the quality required for intended use, then the IQC procedure only ensures an unknown arbitrary level of quality.

  3. The most widely used control material is that from the instrument manufacturer, whereas third-party controls are what the accreditation standard ISO 15189 recommends [32].

  4. The operative control rule most frequently used is still 12s, which generally has a high probability for false rejection and is thus considered inefficient.

  5. It does not seem that many laboratories adjust the frequency of control testing to their workload, suggesting a lack of awareness of patient risk management. This item should be addressed in the immediate future.

  6. About 45% of laboratories are aware of the ISO 15189 standard or of their country's regulation, which is considered a very positive aspect.

The immediate management after an out-of-control measurement procedure, shown in Table 2, illustrates the following situation:

  1. A general tendency to look for the reasons for failure before repeating the control test or preparing a new control sample is observed, which is a positive attitude. Fortunately, very few laboratories simply call the manufacturer immediately, a practice that is considered inadequate.

  2. Repeating the patient samples nearest to the failed control, or those with abnormal results, is the most widely used option and also the most adequate. It is not considered appropriate to repeat all patient samples of the affected run, at least if no further control results have failed; still, 30% of laboratories remain anchored in this practice.

  3. It is surprising that half of the surveyed laboratories release results when a control has failed, a practice that should be abandoned.

Concerning performance specifications, a survey of 1738 laboratories was conducted in Spain in 2015 [58]. Nearly half of the participants (47%) used specifications derived from biological variation, 33% the state of the art stated by the Spanish consensus for minimum quality specifications [59], 5% clinicians' opinions, 3% regulations of other countries, such as CLIA in the USA [60] and RiliBÄK in Germany [61], and the remaining laboratories used other criteria.

On the other hand, it is necessary to implement long-term management of the data produced by IQC, consisting of periodically (e.g., monthly) evaluating performance and comparing it with the specifications adopted by the laboratory. If these are not attained, it has to be verified whether the IQC has been rigorously followed, whether out-of-control analytical runs have been erroneously accepted, whether equipment maintenance has been correct, whether technicians have been properly trained, etc. [62, 63]

As Sten Westgard concludes in his 2021 survey, laboratories should change and adopt better IQC protocols; otherwise, they put their own viability at risk and, worse, put patients at risk because of poor laboratory reports.


The strong points revealed in light of this review are:

  1. Well-established IQC protocols exist to monitor analytical performance and detect changes in systematic error, using control samples and also using patient samples.

  2. Third-party, non-commutable control material exists to support good IQC and to provide information from other laboratories using the same measurement method and the same control material.

  3. Data on biological variation to establish control limits are highly reliable and exist for a great number of measurands.

  4. The workload and the impact of an analytical failure on the information concerning patient status is easy to be known and should be applied in the IQC protocol.

The weak points to be eliminated are:

  1. Using the 12s control rule as a rejection rule, because of its high probability of false rejection (pfr).

  2. Using control limits simply based on statistics rather than control rules derived from quality specifications.

  3. Repeating control tests without first searching for the reasons for the failure.

  4. Repeating all patient samples without checking where the failure occurred.

  5. Releasing patient results when a failure has occurred.
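A minimal sketch, using only the Python standard library, quantifies why the first weak point matters: when any control value outside ±2s rejects the run, the chance of a false rejection grows quickly with the number of controls per run. The numbers follow from the standard normal distribution, not from the surveys discussed above:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def pfr_12s(n_controls):
    """Probability of false rejection when any control outside +/-2s
    rejects the run (the 12s rule used as a rejection rule)."""
    p_single = 2.0 * (1.0 - phi(2.0))   # one control outside +/-2s by chance
    return 1.0 - (1.0 - p_single) ** n_controls

for n in (1, 2, 4):
    print(f"{n} control(s) per run: pfr = {pfr_12s(n):.1%}")
```

With a single control the pfr is already about 4.6%, and with two controls per run it approaches 9%, i.e. nearly one run in eleven rejected for no analytical reason.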

The projection for the future is:

  1. To use third-party control materials.

  2. To implement IQC based on patient samples when no stable control material is available.

  3. To use analytical performance specifications based on biological variation.

  4. To encourage external quality assessment organizers to provide commutable controls and to push laboratories to have freezers to maintain them at −80 °C.

Corresponding author: Carmen Ricós, External Quality Programs Committee and Analytical Quality Commission, Spanish Society of Laboratory Medicine (SEQCML), Padilla, 323, Barcelona, Spain, E-mail:

  1. Research funding: None declared.

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Competing interests: Authors state no conflict of interest.

  4. Informed consent: Not applicable.

  5. Ethical approval: The local Institutional Review Board deemed the study exempt from review.


1. Rifai, N, Horvath, AT, Wittwer, CT, editors. Tietz textbook of clinical chemistry and molecular diagnostics, 6th ed. St. Louis, Missouri: Elsevier; 2018.

2. Ricós, C, García-Vitoria, M, De la Fuente. Quality indicators and specifications for the extra-analytic phases in clinical laboratory management. Clin Chem Lab Med 2004;42:578–82.

3. Alsina, MJ, Alvarez, V, Biosca, C, Domenech, MV, Ibarz, M, Minchinela, J, et al. Quality indicators and specifications for key processes in clinical laboratories: a preliminary experience. Clin Chem Lab Med 2007;45:672–7.

4. Llopis, MA, Trujillo, G, Llovet, MI, Tarrés, E, Ibarz, M, Biosca, C, et al. Quality indicators and specifications for key analytical and extra-analytical processes in the clinical laboratory. Five years’ experience using the Six Sigma concept. Clin Chem Lab Med 2011;49:463–70.

5. Lippi, G, Simundic, AM, on behalf of the European Federation for Clinical Chemistry and Laboratory Medicine (EFLM) Working Group for Preanalytical Phase (WG-PRE). The EFLM strategy for harmonization of the preanalytical phase. Clin Chem Lab Med 2018;56:1660–6.

6. Gómez Rioja, R, Martínez Espartosa, D, Segovia, M, Ibarz, M, Llopis, MA, Bauça, JM, et al. Laboratory sample stability. Is it possible to define a consensus stability function? An example of five blood magnitudes. Clin Chem Lab Med 2018;56:1806–18.

7. Llovet, MI, Biosca, C, Martínez-Iribarren, A, Blanco, A, Busquets, G, Castro, MJ, et al. Clin Chem Lab Med 2018;56:403–12.

8. Barry, P. QC: the Levey-Jennings control chart. Available from: [Accessed 05 Mar 2022].

9. Büttner, J, Borth, R, Broughton, PM, Bowyer, RC. International Federation of Clinical Chemistry, Committee on Standards, Expert Panel on Nomenclature and Principles of Quality Control in Clinical Chemistry. Quality control in clinical chemistry. Part 4. Internal quality control. J Clin Chem Clin Biochem 1980;18:535–41.

10. Westgard, JO, Barry, PL, Hunt, MR, Groth, T. A multi-rule Shewhart chart for quality control in clinical chemistry. Clin Chem 1981;27:493–501. doi: 10.1093/clinchem/27.3.493.

11. Westgard, JO. Westgard rules and multirules. Available from: [Accessed 10 Jan 2022].

12. Westgard, JO, Carey, RN, Wold, S. Criteria for judging precision and accuracy in method development and evaluation. Clin Chem 1974;20:825–33. doi: 10.1093/clinchem/20.7.825.

13. Horder, M. Assessing quality requirements in clinical chemistry. Scand J Clin Lab Invest 1980;40(suppl 155):1–144.

14. De Verdier, C-H, Aronsson, T, Nyberg, A. Quality control in clinical chemistry – efforts to find an efficient strategy. Scand J Clin Lab Invest 1984;44(suppl 172):1–241.

15. De Verdier, C-H. Medical need for quality specifications in laboratory medicine. Upsala J Med Sci 1990;93:162–309.

16. De Verdier, C-H, Groth, T, Hyltoft Petersen, P. Medical need for quality specifications in clinical laboratories. Upsala J Med Sci 1993;98:189–490. doi: 10.3109/03009739309179314.

17. Hyltoft Petersen, P, Ricós, C, Stöckl, D, Libeer, JC, Baadenhuijsen, H, Fraser, CG, et al. Proposed guidelines for the internal quality control of analytical results in the medical laboratory. Eur J Clin Chem Clin Biochem 1996;34:983–99.

18. Kenny, D, Fraser, CG, Hyltoft Petersen, P, Kallner, A. Strategies to set global analytical quality specifications in laboratory medicine – consensus agreement. Scand J Clin Lab Invest 1999;59:585.

19. Ricós, C, Alvarez, V, Cava, F, García-Lario, JV, Hernández, A, Jiménez, CV, et al. Current databases on biological variation: pros, cons and progress. Scand J Clin Lab Invest 1999;59:491–500.

20. Minchinela, J, Ricós, C, Perich, C, Fernández-Calle, P, Álvarez, V, Doménech, MV, et al. Biological variation database and quality specifications for imprecision, bias and total error. 2014. Available from: [Accessed 18 Mar 2022].

21. Baadenhuijsen, H, Kuipers, A, Weykamp, C, Cobbaert, C, Jansen, R. External quality assessment in the Netherlands: time to introduce commutable survey specimens. Lessons from the Dutch Calibration 2000 project. Clin Chem Lab Med 2005;43:304–7.

22. Hoffmann, RG, Waid, ME. The “average of normals” method of quality control. Am J Clin Pathol 1965;43:134–41.

23. Bull, BS, Elashoff, RM, Heilbron, DC, Couperus, J. A study of various estimators for the derivation of quality control procedures from patient erythrocyte indices. Am J Clin Pathol 1974;61:473–81.

24. Cembrowski, GE, Chandler, E, Westgard, JO. Assessment of “average of normals” quality control procedures and guidelines for implementation. Am J Clin Pathol 1984;81:492–9.

25. Westgard, JO, Westgard, SA. Six sigma quality management system and design of risk-based statistical quality control. Clin Lab Med 2017;37:85–96.

26. Miller, WG, Sandberg, S. Quality control of the analytical examination process. In: Tietz textbook of clinical chemistry and molecular diagnostics, 6th ed. St. Louis, Missouri: Elsevier; 2018.

27. Sandberg, S, Fraser, CG, Horvath, AR, Jansen, R, Jones, G, Oosterhuis, W, et al. Defining analytical performance specifications: consensus statement from the 1st strategic conference of the European Federation of Clinical Chemistry and Laboratory Medicine. Clin Chem Lab Med 2015;53:833–5.

28. Ceriotti, F, Fernandez-Calle, P, Klee, GG, Nordin, G, Sandberg, S, Streichert, T, et al. Criteria for assigning laboratory measurands to models for analytical performance specifications defined in the 1st EFLM Strategic Conference. Clin Chem Lab Med 2017;55:189–94.

29. Aarsand, AK, Fernandez-Calle, P, Webster, C, Coskun, A, Gonzalez-Lao, E, Diaz-Garzón, J, et al. The EFLM biological variation database. Available from: [Accessed 20 Jan 2022].

30. Aarsand, AK, Roraas, T, Fernández-Calle, P, Ricós, C, Diaz-Garzón, J, Jonker, N, et al. The biological variation data critical appraisal checklist: a standard for evaluating studies on biological variation. Clin Chem 2018;64:501–14.

31. Jones, GRD. Analytical performance specifications for EQA schemes – need for harmonization. Clin Chem Lab Med 2015;53:919–24.

32. ISO/TC 212. Clinical laboratory testing and in vitro diagnostic test systems. ISO 15189. Medical laboratories – requirements for quality and competence. Geneva: International Organization for Standardization; 2012.

33. Miller, WG. Harmonization: its time has come. Clin Chem 2017;63:1184–6.

34. Myers, GL, Miller, WG. The roadmap for harmonization: status of the International Consortium for Harmonization of Clinical Laboratory Results. Clin Chem Lab Med 2018;56:1667–72.

35. Braga, F, Panteghini, M. Commutability of reference and control materials: an essential factor for assuring the quality of measurements in laboratory medicine. Clin Chem Lab Med 2019;57:967–73.

36. Braga, F, Pasqualetti, S, Aloisio, E, Panteghini, M. The internal quality control in the traceability era. Clin Chem Lab Med 2021;59:291–300.

37. Miller, WG, Jones, GRD, Horowitz, GL, Weykamp, C. Proficiency testing/external quality assessment: current challenges and future directions. Clin Chem 2011;57:1670–80.

38. Perich, C, Ricós, C, Marqués, F, Minchinela, J, Salas, A, Martínez-Bru, C, et al. Spanish Society of Laboratory Medicine external quality assurance programmes: evolution of the analytical performance of clinical laboratories over 30 years and comparison with other programmes. Adv Lab Med 2020;1.

39. Ng, D, Polito, FA, Cervinski, MA. Optimization of a moving averages program using a simulated annealing algorithm: the goal is to monitor the process not the patients. Clin Chem 2016;62:1361–71.

40. Van Rossum, HH. Moving average quality control: principles, practical application and future perspectives. Clin Chem Lab Med 2019;57:773–82.

41. Badrick, T, Bietenbeck, A, Katayev, A, van Rossum, HH, Cervinski, MA, Loh, TP, on behalf of the International Federation of Clinical Chemistry and Laboratory Medicine Committee on Analytical Quality. Patient-based real time QC. Clin Chem 2020;66:1140–5.

42. Cervinski, MA. Pushing patient-based quality control forward through regression. Clin Chem 2021;67:1299–300.

43. Duan, X, Wang, B, Zhu, J, Zhang, C, Jiang, W, Zhou, J. Regression-adjusted real-time quality control. Clin Chem 2021;67:1342–51.

44. Muñoz-Calero, M, Martinez-Sanchez, L. Control de calidad con datos de pacientes [Quality control with patient data]. Cont Lab Clin 2022;58:66–78.

45. Bayat, H, Westgard, SA, Westgard, JO. Multirule procedures vs moving average algorithms for IQC: an appropriate comparison reveals how best to combine their strengths. Clin Biochem 2022;102:50–5.

46. Cembrowski, GS, Xu, Q, Cervinski, MA. Average of patient deltas: patient-based quality control utilizing the mean within-patient analyte variation. Clin Chem 2021;67:1019–29.

47. De Grande, LAC, Goossens, K, Van Uytfanghe, K, Stöckl, D, Thienpont, LM. The Empower project – a new way of assessing and monitoring test comparability and stability. Clin Chem Lab Med 2015;53:1197–204.

48. Clinical and Laboratory Standards Institute. Laboratory quality control based on risk management; approved guideline. CLSI EP23. Wayne, PA: Clinical and Laboratory Standards Institute; 2011.

49. Clinical and Laboratory Standards Institute. Statistical quality control for quantitative measurement procedures: principles and definitions. CLSI C24, 4th ed. Wayne, PA: Clinical and Laboratory Standards Institute; 2016.

50. Parvin, C. Planning statistical quality control to minimize patient risk: it’s about time. Clin Chem 2018;64:249–50.

51. Parvin, C. What’s new in laboratory statistical QC guidance? JALM 2017;1:581–4.

52. Westgard, JO, Bayat, H, Westgard, S. Planning risk-based SQC schedules for bracketed operation of continuous production analyzers. Clin Chem 2018;64:289–96.

53. ISO/TC 212. Clinical laboratory testing and in vitro diagnostic test systems. ISO 22870. Point-of-care testing (POCT) – requirements for quality and competence. Geneva: International Organization for Standardization; 2016.

54. Venner, AA, Beach, LA, Shea, JL, Knauer, MJ, Huang, Y, Fung, AWS, et al. Quality assurance practices for point of care testing programs: recommendations by the Canadian Society of Clinical Chemists point of care testing interest group. Clin Biochem 2020;88:15–7.

55. Oliver, P, Fernandez-Calle, P, Mora, R, Diaz-Garzon, J, Prieto, D, Manzano, M, et al. Real-world use of key performance indicators for point of care testing network accredited by ISO 22870. Pract Lab Med 2020;22:e00188.

56. Westgard, S. The great global QC survey; 2017. Available from: [Accessed 10 Jan 2022].

57. Westgard, S. The 2021 great global QC survey results. Available from: [Accessed 10 Jan 2022].

58. Morancho, J, Prada, E, Gutierrez-Bassini, G, Blazquez, R, Salas, A, Ramón, F, et al. Grado de implantación de especificaciones de la calidad analítica en España [Degree of implementation of analytical quality specifications in Spain]. Rev Lab Clin 2015;8:19–28.

59. Ricós, C, Ramón, F, Salas, A, Buño, A, Calafell, R, Morancho, J, et al. Minimum analytical quality specifications of inter-laboratory comparisons: agreement among Spanish EQAP organizers. Clin Chem Lab Med 2012;50:455–61. doi: 10.1515/cclm.2011.787.

60. CMS.gov. Clinical laboratory improvement amendments (CLIA-88). Available from: [Accessed 17 Feb 2022].

61. Richtlinie der Bundesärztekammer zur Qualitätssicherung laboratoriumsmedizinischer Untersuchungen (Rili-BÄK); 2014. Available from: [Accessed 17 Feb 2022].

62. Ricós, C, Álvarez, V, Cava, F, García-Lario, JV, Hernández, A, Jiménez, CV, et al. Integration of data derived from biological variation into the quality management system. Clin Chim Acta 2004;346:13–8.

63. Westgard, JO. Six-sigma risk analysis: designing analytic QC plans for the medical laboratory. Madison, Wisconsin; 2011.

Received: 2022-03-27
Accepted: 2022-04-01
Published Online: 2022-05-23

© 2022 Carmen Ricós et al., published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.