Pure and Applied Chemistry, 2016, Volume 88, Issue 5, pp. 477–515; online 22 June 2016
http://dx.doi.org/10.1515/pac-2015-1101
Human error in chemical analysis is any action, or lack thereof, that leads to exceeding the tolerances of the conditions required for normative work of the measuring/testing (chemical analytical) system with which the human interacts. When the measuring system deals with sampling, the human may be the sampling inspector; at other steps of the chemical analysis the human is the analyst/operator of the measuring system. The tolerances of the conditions are, for example, the intervals of temperature and pressure values for sample decomposition, the purity of reagents, the pH values for analyte extraction and separation, etc. They are formulated in the standard operating procedure (SOP) of the analysis describing the normative work, based on the results of the analytical method validation study.
Human errors in a routine analytical laboratory may lead to atypical test results of questionable reliability. An important group of atypical results is out-of-specification test results: those that fall outside the established specifications in the pharmaceutical industry, or that do not comply with regulatory legislation or specification limits in other industries and fields, e.g., environmental and food analysis.
The risk of human error is the combination of the likelihood of occurrence of the error and the severity of that error for the quality of analytical results. Prevention, avoidance, or blocking of human error by a laboratory quality system is not easy, since errare humanum est (to err is human). Both correct performance and error follow from the same cognitive processes that allow us to be fast, to respond flexibly to new situations, and to juggle several tasks at once; they are "two sides of the same theoretical coin". An example is the certified reference material (CRM) "syndrome", when an analyst reports an analyte concentration value close to that in the certificate of a CRM applied as a control sample, and the reported value is subsequently found to be incorrect. A number of other human errors may occur for various reasons. Some of them seem trivial to professionals in the analysis; however, people make trivial errors in their day-to-day life, and nobody is able to change human nature. Thus, protecting the quality of analytical results by managing the risk of human error, i.e., reducing the likelihood of the error and mitigating its severity (risk reduction), is an important task for the quality system of any analytical laboratory. The residual risk of human error, not prevented or blocked by the laboratory quality system, decreases the quality of analytical results and can be interpreted as a source of measurement uncertainty.
There is currently no data bank (database) containing empirical values of the likelihood/frequency of occurrence of human errors in analytical chemistry derived from relevant operating experience, experimental research, or simulation studies. On the other hand, any expert in a specific chemical analysis has accumulated the necessary information during his/her work. That is why this Guide discusses classification, modeling, and quantification of human errors in chemical analysis using expert judgments.
Classification
The classification includes the following nine kinds of human errors, k = 1, 2, …, K (K = 9): seven kinds of commission errors of a sampling inspector and/or an analyst/operator (knowledge-, rule- and skill-based mistakes and routine, reasoned, reckless and malicious violations) and two kinds of omission errors (lapses and slips).
The errors may happen at any step of the chemical analytical measurement/testing process, m = 1, 2, …, M (the location of the error). The main steps are, for example: 1) choice of the chemical analytical method and the corresponding SOP, 2) sampling, 3) analysis of a test portion, and 4) calculation of test results and reporting. However, many chemical analytical methods require sample preparation after sampling, including sample freezing, milling, and/or decomposition. The chemical analysis may start with extraction of an analyte from a test portion and separation of the analyte from other components of the extract. Identification and confirmation of the analyte are important in some cases; only then are calibration of the measuring system and quantification of the analyte concentration relevant. On the other hand, choice of an analytical method and SOP may not be necessary in a laboratory where only one method and the corresponding SOP are applied for a specific task; many chemical analytical laboratories are not responsible for sampling, etc.
The kind of human error and the step of the analysis in which the error may happen form the event scenario i = 1, 2, …, I. There are at most I = K × M scenarios of human errors; since K = 9 here, I = 9M. These scenarios, put together, generate a map of human errors in chemical analysis. Mapping human errors is necessary for quality risk management of analytical results. Examples of mapping human errors in pH measurement of groundwater, multi-residue pesticide analysis of fruits and vegetables, and ICP-MS analysis of geological samples are provided in Annex A of the Guide.
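For illustration only, such a scenario map can be assembled by crossing the error kinds with the analysis steps. The short Python sketch below is not part of the Guide; the abbreviated step labels and variable names are assumptions made for this example.

```python
from itertools import product

# Kinds of human error (K = 9), following the classification above
ERROR_KINDS = [
    "knowledge-based mistake", "rule-based mistake", "skill-based mistake",
    "routine violation", "reasoned violation", "reckless violation",
    "malicious violation", "lapse", "slip",
]

# Main steps of the analysis (M = 4 in this simplified example)
ANALYSIS_STEPS = [
    "choice of method and SOP", "sampling",
    "analysis of a test portion", "calculation and reporting",
]

# Each (kind, step) pair is one scenario i = 1, ..., I, with I = K * M
scenario_map = [
    {"i": i, "kind": kind, "step": step}
    for i, (kind, step) in enumerate(product(ERROR_KINDS, ANALYSIS_STEPS), start=1)
]

print(len(scenario_map))  # I = 9 * 4 = 36 scenarios
```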
Modeling
A Swiss cheese model, shown in Fig. 1, is used to characterize the interaction of the errors with a laboratory quality system. This model considers the quality system components j = 1, 2, ..., J as protective layers against human errors. For example, the main system components are: 1) validation of the measurement/analytical method and formulation of the standard operating procedures (SOP); 2) training of analysts and proficiency testing; 3) quality control using statistical charts and/or other means; and 4) supervision. Each of these components has weak points through which errors are not prevented, similar to the holes in slices of the cheese. The presence of holes in one layer will not, as a rule, lead to system failure, since the other layers are able to prevent a bad outcome; this is shown in Fig. 1 as the pointers blocked by the layers. For an incident to occur and an atypical test result to appear, the holes in the layers must line up at the same time to permit a trajectory of incident opportunity to pass through the system (through its defect), as depicted in Fig. 1 by the longest pointer. Examples of modeling human errors are available in Annex A of the Guide.
Quantification
A technique for quantifying human errors in chemical analysis using expert judgments was formulated based on the Swiss cheese model and the house-of-security approach. According to this approach, an expert may estimate the likelihood p_i of scenario i on the following scale: the likelihood of an unfeasible scenario is p_i = 0, a weak likelihood is p_i = 1, a medium likelihood is p_i = 3, and a strong (maximal) likelihood is p_i = 9. The expert estimates/judgments of the severity of an error by scenario i, interpreted as the expected loss l_i of quality of the analysis result, are made on the same scale (0, 1, 3, 9). Estimates of the possible reduction r_ij of the likelihood and severity of human error scenario i as a result of the error being blocked by quality system layer j (the degree of interaction) are made by the same expert(s), again using the same scale. The interrelationship matrix of the r_ij has I rows and J columns, as shown in Fig. 1.
Fig. 1. A laboratory quality system against human errors in the house of security. Adapted from I. Kuselman et al., Accred. Qual. Assur. 18:459 (2013).

Blocking of a human error according to scenario i by quality system component j can be more effective in the presence of another component j′ (j′ ≠ j) because of the synergy Δ(i)_jj′ between the two components. The synergy is equal to 0 or 1 when the effect is absent or present, respectively. Estimates q_j of the importance/effectiveness of quality system component j in human error reduction are calculated as

q_j = Σ_{i=1}^{I} p_i l_i r_ij s_ij,

where the synergy factor is

s_ij = 1 + Σ_{j′≠j}^{J} Δ(i)_jj′ / (J − 1).

Taking the synergy factor into account, the interrelationship matrix is transformed by replacing r_ij with r̃_ij = r_ij s_ij in every cell ij of the matrix.
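As a minimal computational sketch of this technique (not taken from the Guide), the judgments p_i, l_i, r_ij and the synergies Δ(i)_jj′ can be held in arrays and combined exactly as defined above; all numerical values below are invented for illustration.

```python
import numpy as np

# Invented expert judgments on the 0/1/3/9 scale, for I = 3 scenarios
# and J = 2 quality system components (purely illustrative values)
p = np.array([3, 9, 1])              # likelihood p_i of scenario i
l = np.array([9, 3, 3])              # severity (loss) l_i of scenario i
r = np.array([[3, 1],                # reduction r_ij of scenario i by component j
              [9, 3],
              [1, 0]])

# Synergy Delta(i)_jj' between components j and j' in scenario i (0 or 1);
# the diagonal entries Delta(i)_jj are kept at zero
delta = np.zeros((3, 2, 2))
delta[1, 0, 1] = delta[1, 1, 0] = 1  # components 1 and 2 reinforce each other in scenario 2

I, J = r.shape
# Synergy factor s_ij = 1 + sum_{j' != j} Delta(i)_jj' / (J - 1)
s = 1 + delta.sum(axis=2) / (J - 1)
# Transformed interrelationship matrix r~_ij = r_ij * s_ij
r_tilde = r * s
# Effectiveness of component j: q_j = sum_i p_i * l_i * r_ij * s_ij
q = (p[:, None] * l[:, None] * r_tilde).sum(axis=0)
```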
This technique allows converting the semi-intuitive expert judgments on human errors and on the laboratory quality system into the following quantitative scores, expressed in %:
likelihood score of human error in the analysis, P* = (100 %/9) Σ_{i=1}^{I} p_i / I;
severity (loss) score of human error, L* = (100 %/9) Σ_{i=1}^{I} l_i / I;
effectiveness score of a component of the laboratory quality system, q*_j = 100 % × q_j / Σ_{j=1}^{J} q_j; and
effectiveness score of the quality system as a whole against human error, E* = (100 %/9) Σ_{j=1}^{J} q_j / Σ_{j=1}^{J} Σ_{i=1}^{I} p_i l_i s_ij.
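Continuing the same illustrative sketch (same invented arrays, none of them from the Guide), these scores follow directly from p, l, s, and q:

```python
# Scores expressed in %, following the formulas above
P_star = (100 / 9) * p.mean()        # likelihood score P* = (100 %/9) * sum(p_i) / I
L_star = (100 / 9) * l.mean()        # severity (loss) score L*
q_star = 100 * q / q.sum()           # component effectiveness scores q*_j
E_star = (100 / 9) * q.sum() / (p[:, None] * l[:, None] * s).sum()  # overall effectiveness E*
```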
The effectiveness score of the quality system at different steps of the analysis can also be evaluated. Examples of the quantification are available in Annex A of the Guide.
Risk Evaluation of Human Errors
Since the risk of human error is a combination of the likelihood and the severity of that error, their reduction r̃_ij is the risk reduction. A score characterizing the risk reduction by the laboratory quality system as a whole, expressed in %, is

r* = (100 %/18IJ) Σ_{j=1}^{J} Σ_{i=1}^{I} r̃_ij.

Then the score of the residual risk of human errors (%), which are not prevented/blocked or reduced/mitigated by the quality system, is R* = 100 % − r*. The fraction (%) of the quality of the analytical results which may be lost due to the residual risk of human errors is f_HE = (P*/100 %)(L*/100 %) R*.
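In the same illustrative sketch, these risk scores follow from the quantities computed above:

```python
# Risk-reduction score r*, residual-risk score R*, and quality fraction f_HE (all in %)
r_star = (100 / (18 * I * J)) * r_tilde.sum()    # maximum of r~_ij is 9 * 2 = 18
R_star = 100 - r_star                            # residual risk not blocked by the system
f_HE = (P_star / 100) * (L_star / 100) * R_star  # fraction of result quality that may be lost
```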
In practice, a quality system is not able to prevent or block human errors completely, i.e., 0 % < f_HE < 100 %, and the residual risk of human errors can be interpreted as a source of measurement uncertainty when a human being is involved in the measurement process and the human interaction with the measuring system is taken into account. Such an interpretation is discussed in Annex B of the Guide. Examples of calculation of the risks, their consequences for the quality of the analytical results, and the corresponding contributions to the uncertainty budget are available in Annex A.

Dr. Francesca Pennecchi (at the computer) and her colleagues in their chemical laboratory at INRIM.
Limitations
Any expert is also a human being, and the elicitation process (by which the expert is prompted to provide estimates of the error likelihood, severity, and other quantities) is influenced by epistemic uncertainty, intrapersonal conflicts, etc. Therefore, evaluation of the variability of the error quantification scores and relative risks due to the expert's inherent hesitancy is also important. A detailed analysis of the score variability, as well as of the variability of the corresponding loss of quality f_HE, based on Monte Carlo simulations, is presented in Annex C.
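The Annex C treatment is not reproduced here. Purely as an illustration of the idea, and assuming a simplistic hesitancy model that is not the one used in the Guide, one may let each judgment move randomly to an adjacent value of the 0/1/3/9 scale and propagate the draws through the score calculation, continuing the sketch above:

```python
rng = np.random.default_rng(1)
SCALE = np.array([0, 1, 3, 9])

def jitter(values, n_draws, stay_prob=0.7):
    """Keep each judgment at its nominal scale value with probability stay_prob,
    otherwise move it to an adjacent value on the 0/1/3/9 scale
    (a simplistic hesitancy model assumed here for illustration only)."""
    idx = np.searchsorted(SCALE, values)
    draws = np.empty((n_draws, len(values)), dtype=int)
    for d in range(n_draws):
        step = rng.choice([-1, 0, 1], size=len(values),
                          p=[(1 - stay_prob) / 2, stay_prob, (1 - stay_prob) / 2])
        draws[d] = SCALE[np.clip(idx + step, 0, len(SCALE) - 1)]
    return draws

# Spread of the likelihood score P* under the simulated hesitancy
P_star_draws = (100 / 9) * jitter(p, 10_000).mean(axis=1)
print(P_star_draws.mean(), P_star_draws.std())
```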
Changes in any quality system component require re-evaluation of the fraction f_HE of the quality of the analytical results which may be lost due to the residual risk of human errors. Either an increase of f_HE (e.g., due to the retirement of an experienced supervisor) or a decrease (e.g., due to the acquisition of a new, more accurate, and more automated measuring system) is possible.
Latent errors due to poor laboratory design, a defect in the equipment, or an unsuccessful management decision that does not depend on the sampling inspector and/or the analyst/operator, as well as positive human factors, are not considered in the Guide.
Implementation Remarks
Classification, modeling, and quantification of human errors in a routine laboratory show ways to increase the effectiveness of the quality system and thereby reduce the risk of these errors in the laboratory. In particular, the results of a human error study would be useful for validation of the analytical method and formulation of the SOP, as well as for training and for supervision. The map of possible human error scenarios, included in the validation report, may also be useful as a checklist for the prior assessment of an analyst before assigning the task, etc.
References and Acknowledgments
The Guide of the International Union of Pure and Applied Chemistry (IUPAC) and the Co-operation on International Traceability in Analytical Chemistry (CITAC) was developed by the task group: I. Kuselman (Chair, Israel), F. Pennecchi (Italy), A. Fajgelj (Austria), S. L. R. Ellison (UK), Y. Karpov (Russia), and M. Epstein (Israel). The sponsoring bodies are the IUPAC Analytical Chemistry Division, the IUPAC Interdivisional Working Party on Harmonization of Quality Assurance, and CITAC.
The Guide includes 52 references to basic publications on human errors, ISO and JCGM documents, and articles by the task group on the topic.
The task group thanks E. Bashkansky, E. Kardash, and P. Goldshlag (Israel) and W. Bich (Italy) for their help, and Springer Science+Business Media (www.springer.com), Elsevier (www.elsevier.com), the Bureau International des Poids et Mesures (BIPM), and IOP Publishing (www.ioppublishing.org) for permission to use material from the published papers cited in the Guide.
©2016 by Walter de Gruyter Berlin/Boston