Published by De Gruyter, December 9, 2014

Preanalytical quality improvement. In pursuit of harmony, on behalf of European Federation for Clinical Chemistry and Laboratory Medicine (EFLM) Working group for Preanalytical Phase (WG-PRE)

  • Giuseppe Lippi, Giuseppe Banfi, Stephen Church, Michael Cornes, Gabriella De Carli, Kjell Grankvist, Gunn B. Kristensen, Mercedes Ibarz, Mauro Panteghini, Mario Plebani, Mads Nybo, Stuart Smellie, Martina Zaninotto, Ana-Maria Simundic, on behalf of the European Federation for Clinical Chemistry and Laboratory Medicine Working Group for Preanalytical Phase

Abstract

Laboratory diagnostics develops through different phases that span from test ordering (pre-preanalytical phase) and collection of diagnostic specimens (preanalytical phase) to sample analysis (analytical phase), results reporting (postanalytical phase) and interpretation (post-postanalytical phase). Although laboratory medicine seems less vulnerable than other clinical and diagnostic areas, the chance of errors is not negligible and may adversely impact the quality of testing and patient safety. This article, which continues a biennial tradition of collective papers on preanalytical quality improvement, aims to provide further contributions for pursuing quality and harmony in the preanalytical phase, and is a synopsis of lectures of the third European Federation of Clinical Chemistry and Laboratory Medicine (EFLM)-Becton Dickinson (BD) European Conference on Preanalytical Phase meeting entitled ‘Preanalytical quality improvement. In pursuit of harmony’ (Porto, 20–21 March 2015). The leading topics that will be discussed include unnecessary laboratory testing, management of test requests, implementation of the European Union (EU) Directive on needlestick injury prevention, harmonization of fasting requirements for blood sampling, the influence of physical activity and of medical contrast media on in vitro diagnostic testing, recent evidence about the possible lack of necessity of the order of draw, best practice for monitoring time and temperature conditions during sample transportation, along with a description of problems emerging from inappropriate sample centrifugation. In the final part, the article includes recent updates about preanalytical quality indicators, the feasibility of an External Quality Assessment Scheme (EQAS) for the preanalytical phase, the results of the 2nd EFLM WG-PRE survey, as well as specific notions about the evidence-based quality management of the preanalytical phase.

Introduction

Laboratory diagnostics, a crucial part of clinical decision-making, is articulated in various phases that span from test ordering (pre-preanalytical phase) and collection of diagnostic specimens (preanalytical phase) to sample analysis (analytical phase), results reporting (postanalytical phase) and interpretation (post-postanalytical phase). Although laboratory medicine seems overall less vulnerable to slips, lapses, mistakes and violations than other clinical and diagnostic areas, the chance of errors is not negligible and may generate adverse consequences for both the quality of testing and patient safety [1, 2]. Several lines of evidence now attest that the vast majority of laboratory errors emerge from the manually intensive activities of the preanalytical phase, especially those related to collection, handling, transportation, preparation and storage of diagnostic specimens [3]. The frequency of analytical errors is consistently lower, and mainly attributable to instrument malfunctioning, inappropriate calibration, violation of quality control rules and analytical interference [4]. Postanalytical errors have a frequency intermediate between preanalytical and analytical mistakes, and mostly entail misinterpretation of test results and delays in reporting of critical data [5] (Figure 1). Most of the problems that arise throughout the testing process are preventable through adoption of a multifaceted strategy based on a policy of quality, which should entail continuous education, standardization of activities, implementation of technological advances that can prevent or promptly identify preventable mistakes, along with effective communication with all the stakeholders of laboratory services [6].

Figure 1: The iceberg of laboratory errors.

This article, which continues a biennial tradition of collective papers on preanalytical quality improvement [7, 8], aims to provide further contributions for pursuing quality and harmony in the preanalytical phase, and is a synopsis of lectures of the third European Federation of Clinical Chemistry and Laboratory Medicine (EFLM)-Becton Dickinson (BD) European Conference on Preanalytical Phase meeting entitled ‘Preanalytical quality improvement. In pursuit of harmony’ (Porto, 20–21 March 2015) (http://www.preanalytical-phase.org/node/1). The leading topics that will be discussed include unnecessary laboratory testing, management of test requests, implementation of the European Union (EU) Directive on needlestick injury prevention, harmonization of fasting requirements for blood sampling, the influence of physical activity and of medical contrast media on in vitro diagnostic testing, recent evidence about the possible lack of necessity of the order of draw, best practice for monitoring time and temperature conditions during sample transportation, and a description of problems emerging from inappropriate sample centrifugation. In the final part, the article provides some recent updates about preanalytical quality indicators, the feasibility of an External Quality Assessment Scheme (EQAS) for the preanalytical phase, the results of the second survey of the EFLM Working Group on preanalytical variability (WG-PRE), as well as specific notions about the evidence-based quality management of the preanalytical phase. We hope that the readership of Clinical Chemistry and Laboratory Medicine will find interest in the contents of this article.

The leading role of the EFLM in harmonizing the preanalytical phase of laboratory testing

Although laboratory medicine has implemented some extraordinary developments over the past decade, the overall benefit of those changes to the quality of healthcare will not reach its full potential if both the pre- and postanalytical phases (in addition to the analytical phase) of the total testing process are not harmonized. A recent report on the harmonization of the preanalytical phase of laboratory testing states that this is currently not coordinated on an international basis [9]. To overcome this problem, the EFLM and its WG-PRE have decided to take the lead in catalyzing various international projects in the field [10, 11]. In addition, the EFLM has raised awareness about the need to harmonize the postanalytical phase of testing, and the Federation has recently established a new WG for Harmonization of the Total Testing Process (WG-H) to fulfill this goal, with the specific aim of acting as facilitator and coordinator for existing initiatives at the national level in various countries.

With the European Conference on Preanalytical Phase, the EFLM, through its WG-PRE, is specifically addressing preanalytical issues such as appropriate test selection and test profile requesting, optimization of training, sample handling and the application of quality indicators. The EFLM strongly believes that harmonization of each of these issues may markedly reduce the potential risk of preanalytical errors and substantially improve patient safety. The EFLM is also calling for a joint action by laboratory professionals, healthcare practitioners, manufacturers and standard writing bodies to support the definition of universally applicable standards for the preanalytical phase and their worldwide implementation. Finally, the EFLM is willing to take responsibility for acting as a convener of a dialogue between all interested parties. All stakeholders working in the field should be invited to join this dialogue to establish standardized procedures for preanalytical processes that, in turn, standard writing bodies should take into account when updating existing recommendations.

Unnecessary laboratory tests – a matter of concern?

In a systematic review of laboratory clinical audits examining the inappropriateness of laboratory testing published nearly 15 years ago, van Walraven and Naylor found rates ranging between 5% and 95%, clearly demonstrating the difficulty of accurately estimating the burden of inappropriateness [12]. A more recent analysis exploring the iceberg of laboratory inappropriateness has concluded that overuse or inappropriate utilization of laboratory resources may span from 23% to 67%, the largest part being attributable to medical liability concerns [13].

In the context of laboratory diagnostics, inappropriateness is deemed to denote tests which could be avoided with no detriment to patient care. The cost of these tests to a healthcare system can be estimated, although it is important to consider the financial context of a healthcare economy. In integrated healthcare economies, only the marginal (i.e., reagent) cost is relevant. Total billing costs are not relevant to institutions such as the UK National Health Service, as they include laboratory overheads which would continue to be charged unless the laboratory itself became unnecessary. Definitions of appropriateness vary from tests which are manifestly not necessary, to those producing normal results which may nevertheless be entirely appropriate in the clinical context. However, comparative benchmarking of activity shows differences of up to 2000% between the top and bottom deciles of primary care requesting activity for some tests, which suggests that something more should be done [14]. Even when taking into account assorted patient demographics, specific practice subspecialist interests and social deprivation indices, these differences still remain, thus suggesting that the main driving force is clinical decision-making. Whilst individual test costs are relatively low, the cumulative impact of multiple inappropriate tests is significant. Moreover, most cost estimates do not include the ‘on-costs’ of further referrals and investigations, nor indeed the personal harm caused by abnormal results from tests (e.g., tumor markers) which may have been requested unnecessarily in the first instance and produce false positive results [15].

Various initiatives, such as the UK Quality and Outcomes Framework (QOF), have endeavored to set certain minimum standards for some testing activity with a financial incentive, which have helped to avoid undertesting, although few initiatives have been enacted to address the issue of overtesting. The appropriateness of testing seems better in well-defined areas, such as diabetes or lipid management, although significant differences continue to exist, and testing in less well-defined areas remains far less consistent. Therefore, inappropriate use of testing (both under- and overtesting) remains a problem, and initiatives are needed to address this issue.

Managing test requesting – practical experience

Improvements in public health care have resulted in enhanced life expectancy and increased health expenditure, the latter mainly attributable to a frequently unjustified intensity of services. Health spending has grown faster than our ability to generate resources, and the ongoing financial crisis has exacerbated this effect. The reaction to this has been the need to ‘cut back’ healthcare costs. Accordingly, health managers have identified laboratory diagnostics as an easy and attractive target for reducing overall healthcare expenditure [16, 17], although its share of that expenditure is minimal (i.e., <2%) [18]. In this evolving scenario, evidence-based (laboratory) medicine plays a crucial role, as it contributes to a paradigm shift from the concept of ‘demand restriction’ to that of ‘demand adequacy’. It is undeniable that this strategy generates economic benefits both in the short and long term, especially regarding the leading healthcare indicators (i.e., efficiency and effectiveness).

The group of laboratories belonging to the public network of the Catalonian Health Service has recently developed a local project with the aim of investigating demand variability across different facilities, based upon the premise that information on this source of variability may be regarded as the first step to improve the clinical usefulness of diagnostic testing. Practical examples of implementing improvement strategies obtained by this group are being collected and classified according to a reliable scheme described elsewhere [19, 20]. In brief, these entail general and/or specific strategies guided by studies of variability and/or application of evidence-based medicine. Prelaboratory strategies include education of stakeholders (especially patients) by means of written information and websites edited or reviewed by health technicians and laboratory professionals. The cooperation and involvement of clinicians is achieved by introducing some key aspects of the utilization of laboratory resources into medical and nursing university core curricula. Participation in interdisciplinary groups is promoted, with dissemination of information on laboratory tests and involvement in clinical test selection. Other important strategies that are adopted include those related to the software used by clinicians to prescribe testing (i.e., facilitation of access to information and training, communication of test cost at the time of request, prescription guided by expert systems based on specific protocols or profiles, limits to repeat testing practice, elimination of obsolete or redundant testing). The quality indicators of test prescription and cost are reported to the clinicians. Additional within-laboratory strategies include deletion or generation of tests. Finally, a paradigmatic example of a postlaboratory strategy put into action by the public network of the Catalonian Health Service entails the evaluation of the clinical impact of laboratory data. The quality indicators of test requesting used by the group are considered strategic. Some examples include the number of requests per 1000 inhabitants (Primary Health Care), the number of tests per request (stratified by patient type), and the ratios between interrelated tests.
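Purely as an illustration of how such request indicators can be derived from routine request data (the Catalonian project’s actual tooling is not described here), the Python sketch below computes the three indicators mentioned above; the records, test codes and population figure are all hypothetical.

```python
from collections import Counter

# Hypothetical request records: (patient_type, list of test codes on the request).
requests = [
    ("primary_care", ["GLU", "HBA1C"]),
    ("primary_care", ["GLU", "TSH", "FT4"]),
    ("inpatient", ["K", "NA", "CREA"]),
]
population_served = 25_000  # assumed population of the primary care area

# QI 1: requests per 1000 inhabitants (Primary Health Care)
pc_requests = [r for r in requests if r[0] == "primary_care"]
requests_per_1000 = 1000 * len(pc_requests) / population_served

# QI 2: tests per request, stratified by patient type
tests_per_request = {
    ptype: sum(len(tests) for p, tests in requests if p == ptype)
    / sum(1 for p, _ in requests if p == ptype)
    for ptype in {p for p, _ in requests}
}

# QI 3: ratio between interrelated tests (e.g., free T4 requests per TSH request)
counts = Counter(code for _, tests in requests for code in tests)
ft4_per_tsh = counts["FT4"] / counts["TSH"] if counts["TSH"] else None

print(requests_per_1000, tests_per_request, ft4_per_tsh)
```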

Implementing the EU Directive on needlestick injury prevention – 2 years of experience

The purpose of Directive 2010/32/EU is to protect workers in healthcare settings from injuries caused by all medical sharp devices, and from their consequences such as occupational human immunodeficiency virus (HIV), hepatitis C virus (HCV) and hepatitis B virus (HBV) infection, by setting up integrated policies in risk assessment, risk prevention, training, information, and monitoring [21]. The deadline for its transposition into national law by the 28 Member States expired in May 2013. As of February 2014, 24 Member States had communicated national transposing measures to the Commission, whose conformity is currently being assessed [22].

An online survey conducted in October 2013 by the European Federation of Nurses Associations, with almost 7000 respondents from the 28 Member States (87% nurses), showed that the Directive has had a positive impact on the daily practice and clinical environment of health professionals, with 70% reporting availability of safety-engineered devices (SED) (blood collection 44%, infusion 31%, injection 39%), 78% having received basic information at the workplace, and 95% feeling a clear responsibility for reporting. However, respondents identified areas that were less well covered, particularly specific education on sharps injury prevention (53%), performance of risk assessment at the workplace (40%), and awareness campaigns (37%). Moreover, 30% reported needing more instructions on postexposure management. Even more importantly, 41% of the respondents had already suffered a needlestick injury (NSI) [23].

In November 2013, a European Federation of Public Service Unions (EPSU) and European Hospital and Healthcare Employers’ Association (HOSPEEM) survey reported that the main alterations to existing legislative texts revolved around issues such as the ban on recapping, requirements for more specific risk assessment and the provision of preventive vaccinations. The more widespread introduction of SED was also considered to be a likely consequence of the new legislation in a number of countries (albeit based on risk assessment). The cost of SED was considered a potential challenge in some countries, particularly in smaller Member States with significant budgetary restrictions. However, the implementation of all required preventive interventions, and not only of SED, may be affected by budgetary cuts. As an example, Romania reported the cessation of mandatory HBV vaccination of healthcare staff, which is now only offered in areas considered at highest risk [24].

SED play an important role in decreasing injuries when implemented within an integrated approach to risk prevention. A recent European study showed a significant (i.e., –60%) reduction in the NSI rate from blood-collection devices, even in hospitals already using a safety device, when a new-generation, semi-automated device with in-vein activation was adopted. Design and ease of use have been shown to strongly influence SED efficacy and to increase their acceptance. As such, the joint EPSU-HOSPEEM Project conclusions include a recommendation for SED to be developed with the assistance of practitioners.

Long-standing experience in Italy, France and Spain shows that an integrated approach is the most effective means of working towards a sustainable reduction of sharps injuries. The Directive has brought an important step forward towards ensuring the implementation of such an integrated approach but, to ensure its success, all healthcare personnel should be aware of, and comply with, the legislation that has come into force as a result, with strong support from healthcare administrations.

Harmonization of fasting requirements for blood sampling

Fasting is a well known term implying that the patient must refrain from certain items (e.g., food, alcohol, coffee, smoking, perhaps even medication). Unfortunately, however, these items are not well described or harmonized, either internationally or nationally [25]. Furthermore, the duration of fasting is not well defined, despite the knowledge that many parameters change over time (e.g., triglycerides, which actually increase after a certain period of fasting as a result of fatty acid metabolism). In general, many clinicians erroneously think that fasting is only needed for a very small palette of analyses, but it can also have a clinically significant effect on several hematological [26], hemostatic [27], as well as biochemical parameters [28]. Finally, patients tend to be misinformed about the fasting requirements for laboratory blood testing [29], very likely because of the lack of a fasting definition and of misleading information from their requesting physician.

The lack of a general fasting definition is hence a clinically significant problem – in daily routine as well as in research studies – and the WG-PRE has put forward a number of recommendations, one of which includes a harmonized and more precise definition of fasting [30]. Another of these recommendations concerns the professional biochemistry associations and laboratory professionals, who are called upon to take responsibility for this harmonization process (e.g., by adopting more rigid acceptance criteria for fasting samples and by disseminating information on a harmonized fasting definition to their clinicians).

Physical activity as an important preanalytical variable

Sports and exercise medicine is broadly dependent upon physiology and laboratory medicine data. Biochemical and hematological parameters are mainly used in sports medicine for evaluating the health status of recreational and professional athletes, for preventing infectious diseases and injuries, for evaluating performance and, finally, for detecting the use of illicit and unethical substances or methods [31].

The analytical process and the global quality of laboratory diagnostics are both strongly influenced by several aspects of the preanalytical phase and, among these, a particular source of preanalytical variation is indeed represented by physical exercise [32]. This variable impacts laboratory testing either directly (i.e., by modifying human biology and metabolism) or indirectly (i.e., through the intake of food and beverages, drugs or food supplements). Interestingly, the effect of exercise extends far beyond the typical boundaries of diagnostic testing in blood, to embrace different body fluids, such as urine and saliva, as these biological matrices are widely used for obtaining data for antidoping testing and for monitoring exercise performance, especially when a high number of collections is necessary [33].

The preanalytical phase became particularly crucial in antidoping controls after the introduction of the so-called athlete biological passport (ABP). This algorithm is based on values of hemoglobin and reticulocytes, evaluated over time in the individual athlete. In this setting, transportation, refrigeration and the stability of hematological values are essential to obtain correct data, thus providing a reliable basis for appropriate statistical interpretation [34].

The stability of hematological parameters is particularly crucial to guarantee accurate and reliable data for implementing and interpreting the ABP. In this model, the values of hemoglobin, reticulocytes and the out-of-doping period (OFF) score (hemoglobin − 60√reticulocytes) are used to monitor possible variations of these parameters, as well as for comparison against the thresholds developed by the statistical model for the individual athlete on the basis of his or her personal values and the variance of the parameters in the modal group. The stability of hematological parameters can be improved, independently of the analytical methodology, by refrigeration of specimens [35].
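For clarity, the OFF-score mentioned above combines the hemoglobin concentration (in g/L) with the square root of the reticulocyte percentage. A minimal sketch of the calculation follows, using illustrative values only and no decision thresholds from the ABP model (those are individualized by the statistical model itself).

```python
import math

def off_score(hemoglobin_g_per_l: float, reticulocytes_percent: float) -> float:
    """OFF-score = hemoglobin (g/L) - 60 * sqrt(reticulocyte %)."""
    return hemoglobin_g_per_l - 60.0 * math.sqrt(reticulocytes_percent)

# Illustrative values, not reference limits: Hb 158 g/L, reticulocytes 0.4%
print(round(off_score(158.0, 0.4), 1))  # about 120.1
```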

It is noteworthy that mishandled preanalytical management of athletes’ samples has additional implications in sports medicine beyond those of conventional laboratory testing, as data collected for antidoping controls are also specifically used to identify cheating and then to determine sporting or civil sanctions [36]. For example, in sporting and court trials, plasma removal from EDTA tubes before cell counting and hemoglobin measurement, the stability of mean corpuscular volume before hemoglobin measurement, and the influence of diet and exercise on total growth hormone (GH) (i.e., for the definition of the 22K and 20K hormone variants) have been used as arguments in the final judgment.

Interference of medical contrast media on laboratory testing

The use of medical contrast media is very frequent in diagnostic imaging, with the aim of enhancing the contrast of body organs or fluids, thus ultimately improving the visibility of internal structures with imaging techniques such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI) or ultrasound. These pharmacologic compounds conventionally include barium sulfate, organic iodine molecules such as iohexol, iodixanol and ioversol, or gadolinium contrast agents, which can be ionic, neutral, albumin-bound or even polymeric [37].

Since their introduction into clinical practice, the potential side effects and the interactions with drugs have been regarded as the leading medical concerns of contrast media. Nevertheless, several lines of evidence attest that these agents may also jeopardize patient safety by impairing the quality of in vitro diagnostic testing, as a number of potential interferences with laboratory tests have been reported [38]. In particular, iodinated contrast media have been reported as a source of incomplete gel barrier formation and serum or plasma separation in primary blood tubes [39], of abnormalities in serum protein electrophoresis (e.g., the emergence of extra and unusual peaks), as well as of a positive bias in the measurement of cardiospecific troponin I with certain immunoassays. Interference has also been reported in patients receiving gadolinium contrast agents [38]. These interferences specifically include a negative bias in the measurement of serum or plasma calcium with some colorimetric assays (i.e., those based on ortho-cresolphthalein) along with a positive bias in the assessment of the same analyte with Arsenazo reagents, a negative bias in the measurement of angiotensin converting enzyme (ACE) and zinc (especially using colorimetric assays), along with a positive bias in creatinine measured with Jaffe reagents, in total iron binding capacity (TIBC) using the ferrozine method, in magnesium using calmagite reagent and in selenium by mass spectrometry techniques [38]. In patients receiving Patent Blue V (i.e., a synthetic inert compound conventionally employed during cancer surgery to detect potential lymph node localization), some degree of interference has been observed when measuring serum indices and methemoglobin [38].

It is noteworthy that a comprehensive description of the potential interference with laboratory testing is frequently absent from the information supplied by the manufacturers of medical contrast agents (or, at best, limited to certain types of reagents and/or analytes). As such, a specific assessment of potential bias may be advisable, in order to define whether a certain type of contrast medium may interfere with the reagents locally used for testing by individual facilities. Moreover, since the elimination half-life of medical contrast media usually ranges between 1 and 3 h, drawing blood after such a period of time may be advisable in patients receiving these agents [38].
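Assuming simple first-order elimination, the fraction of contrast agent still circulating after a given delay can be estimated from the half-life quoted above. The sketch below is an illustrative calculation only, not a validated wash-out rule for any specific agent or patient (renal function and the agent itself change the actual kinetics).

```python
def fraction_remaining(delay_h: float, half_life_h: float) -> float:
    """First-order elimination: fraction of the agent left after delay_h hours."""
    return 0.5 ** (delay_h / half_life_h)

# With a 1-3 h half-life, waiting 3 h leaves roughly 12%-50% of the agent,
# whereas waiting 6 h leaves roughly 1.6%-25%, depending on the actual half-life.
for half_life in (1.0, 3.0):
    print(half_life,
          round(fraction_remaining(3.0, half_life), 3),
          round(fraction_remaining(6.0, half_life), 3))
```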

The order of draw – myth or science?

National and international guidelines, such as those issued by the Clinical Laboratory Standards Institute (CLSI) [40] or the World Health Organization (WHO) [41], recommend that a specific order of tubes should be followed during blood drawing, i.e., blood culture/sterile tubes first, followed by plain tubes/gel tubes, and then tubes containing additives. This strategy aims to prevent contamination of sample tubes with additives carried over from previous tubes, such as sodium citrate or, more commonly, potassium-EDTA (K-EDTA).

These recommendations are mostly based on a case report published by Sun in 1977 [42], and a follow-up study by Calam and Cooper in 1982 [43], which reported that an incorrect order of draw caused hyperkalemia and hypocalcemia, two surrogate markers of in vitro K-EDTA sample contamination. The authors did acknowledge, however, that contamination with additives only occurred during difficult venipunctures and could not be replicated under ideal phlebotomy conditions. It has been definitively demonstrated, by measuring EDTA, that a reverse order of draw of blood samples using closed loop phlebotomy systems does not cause EDTA contamination [44], a finding subsequently confirmed in another study [45]. Although it seems difficult to reconcile the conflicting results emerging from different studies, it may be that a random order of draw combined with poor sample collection techniques and/or difficult venipunctures results in cross-contamination of sample tubes, thus ultimately jeopardizing the quality of testing [46, 47]. This idea is supported by a study by Berg et al. [48], which showed that only 6% of blood collections were performed using the conventional manufacturer-prescribed closed loop system in a major emergency department in the UK. Lima-Oliveira et al. also recently described a patient case in which deviation from the standard blood sampling procedure and recommended order of draw resulted in EDTA contamination of the sample, with a subsequent increase in potassium and decrease in calcium concentration [49].

In general, a significant bias may typically be observed in the serum values of calcium, chloride, lactate dehydrogenase (LDH), magnesium and potassium starting from 5% contamination with K-EDTA blood, whereas the serum values of sodium, phosphate and iron may be biased starting from 29% contamination with K-EDTA blood [47].
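The magnitude of such bias can be reasoned about with a simple volume-weighted mixing model: the measured concentration is a blend of the true serum value and that of the contaminating K-EDTA blood. In the sketch below, the potassium figures are placeholders chosen only to illustrate the arithmetic, not reference data for any tube type.

```python
def contaminated_value(true_value: float, contaminant_value: float, fraction: float) -> float:
    """Volume-weighted mixing of the native sample with contaminating K-EDTA blood."""
    return (1.0 - fraction) * true_value + fraction * contaminant_value

# Placeholder figures for illustration only: true serum potassium 4.0 mmol/L and an
# assumed potassium concentration of 12 mmol/L in the contaminating K-EDTA blood
# (the real value depends on tube fill volume and additive load).
TRUE_K = 4.0
K_IN_EDTA_BLOOD = 12.0

for fraction in (0.01, 0.05, 0.29):
    print(fraction, round(contaminated_value(TRUE_K, K_IN_EDTA_BLOOD, fraction), 2))
```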

It has previously been shown that contamination with EDTA (and, to a lesser extent, with sodium citrate) is still relatively frequent and may be difficult to identify [50]. As this is probably not due to the use of a random order of draw of blood samples in a closed loop system, it seems plausible that in vitro K-EDTA and citrate contamination may occur with open blood collection systems, through syringe needle or syringe tip contamination when delivering collected blood into K-EDTA or citrate sample tubes before other tubes, and through direct transfer of blood from K-EDTA- or citrate-containing tubes to other sample tubes [51]. The latter circumstance can easily be detected by the laboratory staff, because it generates gross abnormalities in surrogate markers. Nevertheless, more subtle contamination is possible with the former condition, which is less easily identifiable using these markers and may also cause misdiagnosis and/or mismanagement of patients. In summary, 1) there is currently not enough evidence to support the recommended order of draw (if closed loop venipuncture systems are used); 2) evidence is lacking to confirm that the recommended order of draw helps avoid sample cross-contamination; and 3) sample cross-contamination is not rare, and further studies are needed to investigate and confirm possible mechanisms of sample cross-contamination in order to implement focused and appropriate preventive measures.
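Laboratories wishing to screen for the gross contamination pattern described above sometimes combine surrogate markers into a simple rule. The sketch below shows one hypothetical way of doing so, with invented cut-offs; as the text notes, subtle carry-over may still escape such a screen, direct EDTA measurement being the more definitive approach.

```python
def edta_contamination_suspected(potassium_mmol_l: float, calcium_mmol_l: float,
                                 k_limit: float = 6.0,    # hypothetical screening cut-off
                                 ca_limit: float = 1.8    # hypothetical screening cut-off
                                 ) -> bool:
    """Flag the combination of high potassium and low calcium as possible K-EDTA carry-over."""
    return potassium_mmol_l > k_limit and calcium_mmol_l < ca_limit

print(edta_contamination_suspected(7.2, 1.1))  # True - gross contamination pattern
print(edta_contamination_suspected(5.1, 2.1))  # False - subtle carry-over may be missed
```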

Monitoring the time and temperature conditions of sample transport

The increasing pressure to cut costs in healthcare organizations has affected laboratory activities and workflows, wherein consolidation processes have led to the transportation of large numbers of specimens from peripheral collection sites to the core laboratory [52]. As a consequence, there is an increasing need for systems able to assure quality and safety in biological sample transportation, as well as to monitor the risk of errors in this step. In fact, this part of the preanalytical process is widely recognized as a major factor contributing to delays in returning high quality clinical laboratory results for both inpatient and outpatient testing, although scarce evidence is available in the current literature on this issue [53].

International standards for accreditation emphasize the importance of checking and assessing the most critical phases of sample transportation by using specific procedures for the verification of each step, including: 1) the time between blood collection and specimen analysis; 2) the temperature and time of sample storage from collection to analysis; 3) the packaging criteria and sample positioning during transport; and 4) the identification and documentation of acceptability/rejection criteria [54].

The Department of Laboratory Medicine of the University Hospital of Padua, which provides inpatient and outpatient services for samples collected from 21 centers in a broad area of North East Italy, has adopted an integrated system consisting of secondary and tertiary containers, a device for temperature and time recording, and a system manager that allows biological samples to be accepted or rejected through immediate visualization of the recorded data, which are compared with accurately defined conditions [55, 56].
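A minimal sketch of the kind of acceptance rule such a system manager can apply when comparing recorded transport data against predefined conditions; the time and temperature limits used here are placeholders, not the criteria actually adopted in Padua.

```python
from dataclasses import dataclass

@dataclass
class TransportRecord:
    elapsed_minutes: float   # time from collection to check-in at the core laboratory
    min_temp_c: float        # minimum temperature recorded during transport
    max_temp_c: float        # maximum temperature recorded during transport

def accept_sample(rec: TransportRecord,
                  max_minutes: float = 120.0,               # placeholder limit
                  temp_range_c: tuple = (4.0, 25.0)) -> tuple:  # placeholder limits
    """Accept or reject a transported sample against predefined time/temperature conditions."""
    if rec.elapsed_minutes > max_minutes:
        return False, "transport time exceeded"
    if rec.min_temp_c < temp_range_c[0] or rec.max_temp_c > temp_range_c[1]:
        return False, "temperature excursion during transport"
    return True, "accepted"

print(accept_sample(TransportRecord(95, 18.0, 23.5)))   # (True, 'accepted')
print(accept_sample(TransportRecord(150, 18.0, 23.5)))  # (False, 'transport time exceeded')
```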

The results collected over more than 5 years of experience demonstrate the efficacy of the system in standardizing the conditions of sample transportation, allowing a significant decrease in the variations recorded in samples transported from peripheral centers with both long and short transport times, particularly for some critical tests such as potassium, calcium, activated partial thromboplastin time (APTT) and hemoglobin. It is also noteworthy that, along with technological facilities, it is essential to accurately train the personnel involved by setting specific standard operating procedures (SOPs), which enable the adoption of objective criteria for evaluating transport conditions and for monitoring compliance in daily routine practice. Finally, some specific quality indicators were introduced into the quality system in order to produce data allowing the monitoring and improvement of the performance of the implemented integrated system.

Centrifugation – is there room for improvement?

The purpose of centrifugation is to separate the components of a sample according to their density, to ensure that analytes and cells of interest can be accurately assessed. Inappropriate centrifugation conditions may, as a minimum, necessitate re-centrifugation of the sample or, worse, potentially lead to inappropriate results [57]. The quality of sample separation and its impact on laboratory workflow are mainly influenced by sample preparation (sample clotting, time before centrifugation, temperature), sample type (serum or plasma, with or without separation media), centrifugation equipment (swing bucket vs. fixed angle), and centrifugation conditions (speed, time, temperature, acceleration and deceleration). Nevertheless, there is often a need to balance these important considerations against the throughput and turnaround time targets of the laboratory.
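Because tube manufacturers state centrifugation conditions as relative centrifugal force (g) whereas instruments are often set in rpm, the standard conversion RCF = 1.118 × 10⁻⁵ × r × rpm² (with the rotor radius r in cm) is useful when comparing protocols across equipment. A short sketch follows; the 15 cm rotor radius is an assumed example value.

```python
import math

def rcf_from_rpm(rpm: float, radius_cm: float) -> float:
    """Relative centrifugal force (x g) for a given speed and rotor radius (cm)."""
    return 1.118e-5 * radius_cm * rpm ** 2

def rpm_from_rcf(rcf_g: float, radius_cm: float) -> float:
    """Speed (rpm) needed to reach a target RCF with a given rotor radius (cm)."""
    return math.sqrt(rcf_g / (1.118e-5 * radius_cm))

# Example with an assumed 15 cm rotor radius.
print(round(rcf_from_rpm(3000, 15.0)))    # ~1509 g at 3000 rpm
print(round(rpm_from_rcf(3000.0, 15.0)))  # ~4230 rpm needed for 3000 g
```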

Centrifugation requirements vary depending on sample type. For coagulation, centrifugation is the key factor in minimizing the levels of cells in the plasma, and recommendations for the creation of platelet-poor plasma and platelet-free plasma exist [58]. For chemistry samples, the separation of the cells from the supernatant will be impacted by whether it is a plasma or serum sample. Serum samples are essentially ‘non-cellular’ after the centrifugation process, whereas plasma contains varying levels of cells that in part explain the analytical differences observed between serum and plasma analytes [59, 60]. The impact of centrifugation on serum and plasma is further evident with the introduction of a separation medium (e.g., gel or other inert separators), which defines how the gel moves to its position of equilibrium and the level of residual cells that are trapped in the supernatant. Stringent centrifugation criteria are also mandatory for hemostasis testing, wherein the use (or non-use) of the centrifuge brake [61] or different centrifugation forces [62] have a substantial impact on sample quality. Recent developments, such as the use of mechanically based separators that ensure the sedimentation of cells continues throughout the centrifugation process, further increase sample quality and its potential use for a broader array of applications.

As laboratories become more automated, managing an efficient sample processing step is a key requirement for maximizing the return on investment in front-end automation systems. There are a number of workflow processes that can be employed to improve sample processing. The use of plasma samples for chemistry analysis avoids the need to ensure the specimen is completely clotted prior to centrifugation. The centrifugation process is often the rate-limiting step in a laboratory, so manufacturers of blood collection tubes are providing broad centrifugation conditions that maximize the use of the high-speed and, therefore, short-duration centrifugation that can be achieved on some platforms [63]. However, the recommended centrifugation conditions for the different sample types are rather diverse, making standardization challenging. A recent study showed how the centrifugation conditions for chemistry samples can be utilized for coagulation parameters in order to maximize the use of the automated workflow and avoid inefficient parallel workflows [64].

In order to achieve appropriate centrifugation with the best sample quality, meet the laboratory’s turnaround time targets and maximize workflow efficiency, careful consideration of sample preparation, sample type, centrifugation equipment and centrifugation conditions is advisable.

Preanalytical quality indicators

Clinical quality indicators (QIs) are intended to measure the extent to which set targets are achieved, and also provide a quantitative basis for achieving improvement in care and, in particular, in laboratory services [65–67]. QIs are hence essential requirements for medical laboratory accreditation according to the International Standard (ISO 15189:2012). The current lack of attention to extra-laboratory factors and related quality indicators prevents clinical laboratories from achieving effective improvement in total quality and error reduction projects. Errors in the preanalytical phase may account for 60%–75% of all laboratory errors, and have traditionally been classified as those pertaining to sample or patient identification and to unsuitable specimens. However, according to the International Standard for Medical Laboratory Accreditation and the need for a patient-centered view, some innovative QIs are needed. In particular, measurement of the appropriateness of test requests and request forms, as well as of the quality of specimen transportation, is urgently needed. The model of QIs developed by a working group of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) is a valuable starting point to promote the harmonization of available QIs [68, 69], and further efforts have been made to reach a consensus on the roadmap for harmonization. In particular, a preliminary consensus on the list of available QIs and on the reporting system has recently been achieved and published in this journal [70]. Further activities shall be aimed at raising the awareness of all stakeholders and at highlighting the importance of QIs for improving the quality of laboratory services and patient safety. Simplification of the current model of QIs by identifying a selection of several ‘mandatory’ indicators seems to be a reasonable compromise for laboratories worldwide [71].
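As a purely illustrative example of how a preanalytical QI can be operationalized, the sketch below expresses an error count as a percentage of total samples and converts it to a sigma value using the conventional 1.5 long-term shift; the monthly figures are invented, and this is one common way of expressing QIs rather than a prescribed format of the IFCC model.

```python
from statistics import NormalDist

def qi_percentage(errors: int, total: int) -> float:
    """QI expressed as the percentage of affected samples or requests."""
    return 100.0 * errors / total

def sigma_metric(errors: int, total: int) -> float:
    """Convert the defect rate into a sigma value using the conventional 1.5 shift."""
    return NormalDist().inv_cdf(1.0 - errors / total) + 1.5

# Invented monthly figures: 38 hemolyzed samples out of 12,500 received.
print(round(qi_percentage(38, 12_500), 2))  # 0.3 (%)
print(round(sigma_metric(38, 12_500), 1))   # ~4.2 (sigma)
```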

External Quality Assessment Schemes for preanalytical phase

Several studies have described the most frequent errors in the different phases of the total testing process of laboratory diagnostics, and a number of schemes for the registration of errors and subsequent feedback to participants have also been run for decades by External Quality Assessment (EQA) organizations operating in most countries. ISO 15189 [72], the accreditation standard for medical laboratories, states that ‘External quality assessment programmes should check the entire examination process, including pre- and post-examination procedures’. So far, EQA organizations have focused on the analytical phase, and most of them do not offer a preanalytical EQAS, as programs targeting the preanalytical phase are inherently more challenging to perform and standardize. However, some ongoing EQA programs for the preanalytical phase do exist, and a trend is also emerging among EQA organizers to place greater focus on this area [73]. Basically, the methods can be divided into three different types. Type 1: registration of procedures, performed by circulating questionnaires aimed at collecting information on how laboratories handle different parts of the preanalytical phase, e.g., which criteria are used for sample rejection. Type 2: schemes similar to the usual analytical EQAS, but in which the circulated material simulates some kind of preanalytical error (e.g., hemolyzed serum) [74]. Case histories can be distributed together with the EQA samples to elucidate how these samples are dealt with, and how the results are communicated to the physicians. Type 3: registration of actual preanalytical errors, related to QIs. The EQA organization suggests QIs related to preanalytical errors/adverse events and develops a common registration system that the laboratories use to report their data regularly over a given period. The different types of approach have a different focus and different implementation challenges, and a combination of the three is probably necessary to effectively detect and monitor the broad range of errors occurring in the preanalytical phase. The feedback report for all the different types should also include a comparison of the laboratory’s results with those of all participants, along with an overview of existing guidelines/recommendations and recent publications, and advice on how to minimize errors.

Results of the second EFLM WG-PRE survey – compliance with the CLSI H3-A6 guideline

Laboratory results following venous blood sample collection and analysis are important in the clinical diagnosis and treatment of patients [75]. Errors during phlebotomy are a common contributor to diagnostic errors in the total testing process [76]. In addition, venous blood specimen collection is most often not under the supervision and control of the laboratory, but is performed elsewhere in the healthcare organization. Lower sample quality may therefore affect results, so that the measured value does not represent the patient’s condition in vivo.

Guidelines on correct venous blood specimen collection practice, such as the commonly used H3-A6 guideline issued by the CLSI in 2007 [40], have many discrete steps, all of which can be subject to error, and are to a large extent focused on patient and collector safety at the moment of collection rather than on the overall patient safety effects of poor sample collection or sample handling on the subsequent analysis. In addition, the guideline does not contain a risk evaluation of the different steps and also lacks advice on how best to implement and sustain guideline practices.

Test requests, along with blood drawing procedures, should always adhere to medical guidelines. However, in practice, venous blood sample collection rarely conforms fully to the published guidelines, and so interventions may be needed to reduce patient safety risks. Individual factors [77], as well as organizational external factors [78], have an impact on guideline non-conformity. Guideline adherence may be improved by education and training [79], whereas accreditation of venous blood specimen collection has only marginal effects.

A first WG-PRE European survey assessed the presence of, and compliance with, national guidelines, as well as the educational level and staff category of those performing phlebotomy [80]. It identified a continuing need to assess compliance with guidelines, to adapt the existing CLSI H3-A6 document to make it more suitable for use in specific countries, and to institute training programs for phlebotomy practitioners. Therefore, the WG-PRE conducted an observational study of phlebotomy procedures using a defined checklist to better understand the practices and procedures that take place in clinical institutions.

Key issues were chosen from the CLSI guideline by all WG members and addressed in a 29-item observational study checklist with yes/no answers. Experienced staff members in 12 European countries (mean audits, n=33) audited as many as 336 venous blood sample collections in emergency, outpatient and clinical ward settings. A risk-occurrence analysis of the individual phlebotomy steps was created from the observed error occurrence and the WG members’ grading of harm severity. A risk-occurrence chart was created, with an acceptable ‘green’ risk region, an ‘as low as reasonably practicable’ ‘yellow’ risk region, and an intolerable ‘red’ risk region demanding corrective action(s).
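A schematic illustration of how an observed error frequency and a severity grade can be combined into the green/yellow/red regions described above; the cut-offs below are invented for the purpose of the example and are not the criteria used by the WG-PRE.

```python
def risk_region(occurrence: float, severity: int) -> str:
    """
    Classify a phlebotomy step into a risk region.
    occurrence: observed error frequency (0-1); severity: graded 1 (minor) to 5 (critical).
    Cut-offs are illustrative placeholders, not the WG-PRE criteria.
    """
    if severity >= 4 and occurrence >= 0.05:
        return "red (intolerable - corrective action required)"
    if occurrence * severity >= 0.10:
        return "yellow (as low as reasonably practicable)"
    return "green (acceptable)"

# e.g., labeling tubes away from the patient: moderately frequent, potentially life-threatening
print(risk_region(occurrence=0.12, severity=5))  # red
print(risk_region(occurrence=0.03, severity=2))  # green
```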

In the observational study, the key issues in the ‘red region’, which had the highest combination of impact and probability, were questions Q4 (patient identification) and Q25 and Q26 (test tube labeling). Identification errors (Q4) were more frequent in emergency and outpatient settings than in clinical wards. Identification errors were observed less frequently overall, but were assessed as causing the greatest patient safety risk because of their potentially high severity of harm. Q25 and Q26 were also in the ‘red zone’ owing to their substantially high frequency and degree of potential harm to the patient. Labeling blood tubes after sampling and not in the presence of the patient was a moderately frequent error in the study, but was assessed as being possibly life threatening. This issue is therefore of critical importance, highly relevant, and clearly shows room for improvement.

Modifying staff behavior to conform more closely to practice guidelines and other recommended practices has proved to be a challenging enterprise [81]. One reason is that efficient and accurate methods of measuring adherence, which are essential for policies and programs aiming to improve adherence, are missing. Questionnaires have successfully been used to monitor the adherence of venous blood specimen collection to guidelines [82]. Observational studies are seldom used, but have the advantage of direct observation of specimen collection errors and are also able to assess the error frequency for each key issue when performed on a larger scale, as in this study. A severity grading of the observed error frequency was also included, to obtain an overall risk assessment and an indication of the most critical practice steps, as well as of when corrections should be implemented.

In the risk analysis, patient misidentification emerged as an intolerable risk. Misidentification is not easily detectable, and reporting of identification errors may expose the personnel involved to blame. Improving patient identification is an ongoing challenge in all types of blood collection procedures and also a critical issue in other healthcare areas [83]. Another intolerable risk was the practice of labeling the test tube on a later occasion, away from the patient.

Recent studies on clinical practice guideline adherence have mainly focused on the organizational aspect. Investigations aimed at identifying reasons for individual hazardous behavior that might explain habitual choices to ignore important safety rules are scarce. It hence seems important to balance organizational and individual factors to ensure the best possible conditions for a culture that promotes safe care.

The adoption of clinical practice guidelines is affected by several issues, including the way they are implemented. Important factors for improving guideline adoption include evidence that the context is amenable to change, appropriate monitoring and feedback mechanisms, and the time available for personnel to discuss research findings. Repeated local observational studies with error frequency assessment and risk analysis of venous blood specimen collection errors, combined with feedback, discussion and reflection amongst phlebotomy personnel, seem to be an efficient strategy to implement and sustain guideline practice.

Evidence-based quality management of preanalytical phase

The effective management of the preanalytical phase is only possible by consistently and continuously applying an evidence-based approach in everyday routine laboratory activity. An evidence-based approach means that: 1) laboratory processes are closely monitored; 2) there is an operational and functional error detection system in place; and 3) root-cause analysis is performed whenever there is an increase in error frequency, as part of continuous quality improvement. An evidence-based approach presumes that all preanalytical steps are scrutinized and challenged by questions such as those below:

  • Is this procedure in accordance with the recommended, i.e., the best possible practice?

  • Is there evidence to support the use of a given procedure?

  • Do I know the limitations of this procedure?

  • Do I know how this procedure affects sample quality and test results?

  • Do I know how to control potential sources of variability related to this procedure?

  • How is this procedure contributing to the patient care and how does it affect patient outcome?

The management of the preanalytical phase should encompass all steps of the total testing process which take place before the analytical part, and hence includes test requesting, patient preparation, sample collection, transport, delivery to the laboratory and handling. Each of those steps is potentially associated with numerous sources of variability and some chance of error. By effective evidence-based management of the preanalytical phase, the laboratory can reduce the error rate and improve patient care as well as clinical outcomes [84]. For example, an evidence-based approach to test requesting means that test requesting patterns are assessed for their appropriateness for each particular patient population and patient condition, both by reducing the rate of unnecessary test requests and by ensuring that the right test is requested for the right patient (i.e., adequate utilization of tests which are necessary/useful in a specific patient population) [85]. To properly manage test demand, a laboratory should, as already discussed in depth above (under the section ‘Unnecessary laboratory tests – a matter of concern?’), challenge the current test panel used for a certain condition by questioning whether such a panel is in accordance with the recommended diagnostic algorithm and how this testing panel affects patient outcome. Some paradigmatic examples are: ‘Is procalcitonin a useful diagnostic marker for the diagnosis of sepsis?’; ‘What is the best biomarker for the diagnosis of acute kidney failure?’; and ‘What is the best strategy to diagnose urinary tract infection?’

If a diagnostic algorithm and guidelines for a certain condition are unavailable, the laboratory should search for the evidence supporting the use of a certain test or panel of tests in a particular patient group. As already discussed, numerous interventions have been proposed to address and manage appropriate test utilization. Such interventions are effective tools for reducing costs and waste and for improving patient outcomes. It has been demonstrated that, through active intervention by laboratory staff and bi-directional communication with clinicians, significant savings and a reduction in the use of tests can be achieved [86].

Another good example of the evidence-based quality management approach to the preanalytical phase is the implementation and use of sample acceptance criteria in a laboratory. Many laboratories have established their own criteria for sample acceptance or rejection. Instead of being evidence based, those criteria are unfortunately quite often based on manufacturers’ declarations, expert opinion or historical reasons. There are only limited examples of acceptance criteria being shared on a national basis [87, 88]. Therefore, the crucial question is to establish whether those criteria are correct or not, and whether they are really fit for purpose. Another good point is to establish what each laboratory can do to improve its policy for assessing sample quality. Again, the laboratory should challenge its current policy by examining whether the procedure in use is recommended by some authority, or whether there is evidence to support the use of that particular procedure. Most importantly, the laboratory should investigate how the procedure in use affects patient outcome. Not a single step should be taken for granted. Not a single decision should be made in the absence of proper evidence.
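As one hypothetical way of making such criteria explicit, auditable and open to challenge, acceptance rules can be encoded as simple, documented functions. The thresholds in the sketch below (hemolysis index limit, minimum fill fraction) are placeholders that each laboratory would need to validate against its own methods and evidence.

```python
def assess_sample(volume_ml: float, nominal_volume_ml: float,
                  hemolysis_index: float, clotted: bool,
                  h_index_limit: float = 50.0,       # placeholder, assay-dependent
                  min_fill_fraction: float = 0.9     # placeholder, tube-dependent
                  ) -> list[str]:
    """Return a list of rejection reasons; an empty list means the sample is accepted."""
    reasons = []
    if clotted:
        reasons.append("clotted sample")
    if volume_ml < min_fill_fraction * nominal_volume_ml:
        reasons.append("underfilled tube")
    if hemolysis_index > h_index_limit:
        reasons.append("hemolysis above the assay-specific limit")
    return reasons

print(assess_sample(2.7, 3.0, 12.0, clotted=False))  # [] -> accepted
print(assess_sample(2.0, 3.0, 80.0, clotted=False))  # two rejection reasons
```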

Unfortunately, the laboratory often faces a lack of evidence when there is a need to address a certain preanalytical issue or problem. When evidence does not exist, the laboratory should perform its own validation or verification study to address the issues of interest. This consumes time, money and other resources. Obviously, there is a need for a global joint effort by laboratory professionals to share experiences and address some common preanalytical issues and problems, to mutually benefit from each other and overcome this problem.

Conclusions

The management of quality in preanalytical laboratory practices is a challenging enterprise, which requires coordinated efforts from both a universal and a local perspective [89]. After several years of research in the field of quality of laboratory diagnostics, recognizing the iceberg of laboratory errors and acknowledging that extra-analytical quality is at least as important as analytical quality are vital for achieving substantial improvement in laboratory diagnostics and patient safety (Figure 1) [90, 91]. Therefore, we sincerely hope that this collective paper will enable the exchange of ideas and knowledge related to some of the most common issues and everyday problems, and ultimately enhance harmonization [92] and quality in the preanalytical phase.

Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

Financial support: None declared.

Employment or leadership: None declared.

Honorarium: None declared.

Competing interests: The funding organization(s) played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.


Corresponding author: Prof. Giuseppe Lippi, U.O. Diagnostica Ematochimica, Azienda Ospedaliero-Universitaria di Parma, Via Gramsci 14, 43126 Parma, Italy, Phone: +39 0521 703050 / +39 0521 703791, http://orcid.org/0000-0001-9523-9054; Laboratory of Clinical Chemistry and Hematology, Diagnostic Department, Academic Hospital of Parma, Parma, Italy
Authors belong to the European Federation for Clinical Chemistry and Laboratory Medicine Working Group for Preanalytical Phase.

References

1. Lippi G, Simundic AM, Mattiuzzi C. Overview on patient safety in healthcare and laboratory diagnostics. Biochem Med (Zagreb) 2010;20:131–43. doi: 10.11613/BM.2010.015.

2. Plebani M, Lippi G. Closing the brain-to-brain loop in laboratory testing. Clin Chem Lab Med 2011;49:1131–3. doi: 10.1515/CCLM.2011.617.

3. Simundic AM, Lippi G. Preanalytical phase – a continuous challenge for laboratory professionals. Biochem Med (Zagreb) 2012;22:145–9.

4. Sonntag O. Quality in the analytical phase. Biochem Med (Zagreb) 2010;20:147–53. doi: 10.11613/BM.2010.017.

5. Plebani M, Lippi G. Improving the post-analytical phase. Clin Chem Lab Med 2010;48:435–6. doi: 10.1515/CCLM.2010.113.

6. Lippi G, Guidi GC. Risk management in the preanalytical phase of laboratory testing. Clin Chem Lab Med 2007;45:720–7. doi: 10.1515/CCLM.2007.167.

7. Lippi G, Chance JJ, Church S, Dazzi P, Fontana R, Giavarina D, et al. Preanalytical quality improvement: from dream to reality. Clin Chem Lab Med 2011;49:1113–26. doi: 10.1515/CCLM.2011.600.

8. Lippi G, Becan-McBride K, Behúlová D, Bowen RA, Church S, Delanghe J, et al. Preanalytical quality improvement: in quality we trust. Clin Chem Lab Med 2013;51:229–41. doi: 10.1515/cclm-2012-0597.

9. Miller WG, Tate JR, Barth JH, Jones GR. Harmonization: the sample, the measurement, and the report. Ann Lab Med 2014;34:187–97. doi: 10.3343/alm.2014.34.3.187.

10. Simundic AM, Cornes MP, Grankvist K, Lippi G, Nybo M, Ceriotti F, et al. Colour coding for blood collection tube closures – a call for harmonisation. Clin Chem Lab Med [Epub ahead of print 2014 Oct 14]. doi: 10.1515/cclm-2014-0927.

11. Tate JR, Johnson R, Barth JH, Panteghini M. Harmonization of laboratory testing – current achievements and future strategies. Clin Chim Acta 2014;432:4–7. doi: 10.1016/j.cca.2013.08.021.

12. van Walraven C, Naylor CD. Do we know what inappropriate laboratory utilization is? A systematic review of laboratory clinical audits. J Am Med Assoc 1998;280:550–8. doi: 10.1001/jama.280.6.550.

13. Lippi G, Favaloro EJ, Franchini M. Dangers in the practice of defensive medicine in hemostasis testing for investigation of bleeding or thrombosis: part I – routine coagulation testing. Semin Thromb Hemost 2014;40:812–24. doi: 10.1055/s-0034-1394108.

14. Smellie WS, Galloway MJ, Chinn D. Is clinical practice variability the major reason for differences in pathology requesting patterns in general practice? J Clin Pathol 2002;55:312–4. doi: 10.1136/jcp.55.4.312.

15. Smellie WS. Appropriateness in pathology. A new era or re-inventing the wheel? Ann Clin Biochem 2003;40:585–92. doi: 10.1258/000456303770367180.

16. Young DS, Sachais BS, Jefferies LC. Laboratory costs in the context of disease. Clin Chem 2000;46:967–75. doi: 10.1093/clinchem/46.7.967.

17. Plebani M, Zaninotto M, Faggian D. Utilization management: a European perspective. Clin Chim Acta 2014;427:137–41. doi: 10.1016/j.cca.2013.03.002.

18. Lippi G, Mattiuzzi C. Testing volume is not synonymous of cost, value and efficacy in laboratory diagnostics. Clin Chem Lab Med 2013;51:243–5. doi: 10.1515/cclm-2012-0502.

19. Fryer AA, Hanna FW. Managing demand for pathology tests: financial imperative or duty of care? Ann Clin Biochem 2009;46(Pt 6):435–7. doi: 10.1258/acb.2009.009186.

20. Fryer AA, Smellie WS. Managing demand for laboratory tests: a laboratory toolkit. J Clin Pathol 2013;66:62–72. doi: 10.1136/jclinpath-2011-200524.

21. De Carli G, Abiteboul D, Puro V. The importance of implementing safe sharps practices in the laboratory setting in Europe. Biochem Med (Zagreb) 2014;24:45–56. doi: 10.11613/BM.2014.007.

22. Joint EPSU-HOSPEEM Project ‘Promotion and support of the implementation of Directive 2010/32/EU on the prevention of sharps injuries in the hospital and health care sector’. Final Project Report, 15 November 2013. Available from: http://hospeem.org/wordpress/wp-content/uploads/2013/10/Final-Report-ICF-GHK-15.11.13-EN+TW.pdf. Accessed 10 October, 2014.

23. Written questions by Members of the European Parliament and their answers given by a European Union institution. E-014392/13 by Marian Harkin to the Commission. Answer given by Mr Andor on behalf of the Commission (18 February 2014). Official Journal of the European Union C 275/1, 21.08.2014. Available from: http://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1412176568796&uri=OJ:JOC_2014_275_R_0001. Accessed 10 October, 2014.

24. Gomez S, De Raeve P. EFN Report on the Implementation of Directive 2010/32/EU on the prevention of sharps injuries in the healthcare sector. Descriptive and Explorative Cluster Analysis of Directive 2010/32/EU Implementation into Clinical Practice Data. Presented at 4th European Biosafety Summit, Warsaw, Poland, 2/12/2013. Available from: http://www.efnweb.be/wp-content/uploads/EFN-Report-on-Sharps-Injuries-DIR32-Implementation-forwebsite1.pdf. Accessed 10 October, 2014.

25. Nybo M, Grinsted P, Jørgensen PE. Blood sampling: is fasting properly defined? Clin Chem 2005;51:1563–4. doi: 10.1373/clinchem.2005.051789.

26. Lippi G, Lima-Oliveira G, Salvagno GL, Montagnana M, Gelati M, Picheth G, et al. Influence of a light meal on routine haematological tests. Blood Transfus 2010;8:94–9.

27. Lima-Oliveira G, Salvagno GL, Lippi G, Danese E, Gelati M, Montagnana M, et al. Could light meal jeopardize laboratory coagulation tests? Biochem Med (Zagreb) 2014;24:343–9. doi: 10.11613/BM.2014.036.

28. Lima-Oliveira G, Salvagno GL, Lippi G, Gelati M, Montagnana M, Danese E, et al. Influence of a regular, standardized meal on clinical chemistry analytes. Ann Lab Med 2012;32:250–6. doi: 10.3343/alm.2012.32.4.250.

29. Kackov S, Simundic AM, Gatti-Drnic A. Are patients well informed about the fasting requirements for laboratory blood testing? Biochem Med (Zagreb) 2013;23:326–31. doi: 10.11613/BM.2013.040.

30. Simundic AM, Cornes M, Grankvist K, Lippi G, Nybo M. Standardization of collection requirements for fasting samples: for the Working Group on Preanalytical Phase (WG-PA) of the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM). Clin Chim Acta 2014;432:33–7. doi: 10.1016/j.cca.2013.11.008.

31. Lippi G, Banfi G, Botrè F, de la Torre X, De Vita F, Gomez-Cabrera MC, et al. Laboratory medicine and sports: between Scylla and Charybdis. Clin Chem Lab Med 2012;50:1309–16. doi: 10.1515/cclm-2012-0062.

32. Sanchis-Gomar F, Lippi G. Physical activity – an important preanalytical variable. Biochem Med (Zagreb) 2014;24:68–79. doi: 10.11613/BM.2014.009.

33. Banfi G, Dolci A. Preanalytical phase of sport biochemistry and haematology. J Sports Med Phys Fitness 2003;43:223–30.

34. Lippi G, Mattiuzzi C, Banfi G. Controlling sources of preanalytical variability in doping samples: challenges and solutions. Bioanalysis 2013;5:1571–82. doi: 10.4155/bio.13.110.

35. Lombardi G, Lanteri P, Colombini A, Lippi G, Banfi G. Stability of haematological parameters and its relevance on the athlete’s biological passport model. Sports Med 2011;41:1033–42. doi: 10.2165/11591460-000000000-00000.

36. Lippi G, Plebani M, Sanchis-Gomar F, Banfi G. Current limitations and future perspectives of the Athlete Blood Passport. Eur J Appl Physiol 2012;112:3693–4. doi: 10.1007/s00421-012-2386-9.

37. Bickham P, Golembiewski J. Contrast media use in the operating room. J Perianesth Nurs 2010;25:94–103. doi: 10.1016/j.jopan.2010.01.013.

38. Lippi G, Daves M, Mattiuzzi C. Interference of medical contrast media on laboratory testing. Biochem Med (Zagreb) 2014;24:80–8. doi: 10.11613/BM.2014.010.

39. Daves M, Lippi G, Cosio G, Raffagnini A, Peer E, Dangella A, et al. An unusual case of a primary blood collection tube with floating separator gel. J Clin Lab Anal 2012;26:246–7. doi: 10.1002/jcla.21512.

40. Clinical and Laboratory Standards Institute. Procedures for collection of diagnostic blood specimens by venipuncture; approved guideline – 6th ed. CLSI document H3-A6. CLSI: Wayne, PA, 2007.

41. World Health Organization. WHO guidelines on drawing blood: best practices in phlebotomy. WHO Press: Geneva, 2010.

42. Sun N, Knauf R. Cross contamination solved by technique. ASCP Summary Report 1977;14:3.

43. Calam RR, Cooper MH. Recommended ‘order of draw’ for collecting blood specimens into additive-containing tubes. Clin Chem 1982;28:1399. doi: 10.1093/clinchem/28.6.1399.

44. Sulaiman RA, Cornes MP, Whitehead S, Othonos N, Ford C, Gama R. Effect of order of draw of blood samples during phlebotomy on routine biochemistry results. J Clin Pathol 2011;64:1019–20. doi: 10.1136/jclinpath-2011-200206.

45. Salvagno G, Lima-Oliveira G, Brocco G, Danese E, Guidi GC, Lippi G. The order of draw: myth or science? Clin Chem Lab Med 2013;51:2281–5. doi: 10.1515/cclm-2013-0412.

46. Lima-Oliveira G, Salvagno GL, Danese E, Favaloro EJ, Guidi GC, Lippi G. Sodium citrate blood contamination by K2-ethylenediaminetetraacetic acid (EDTA): impact on routine coagulation testing. Int J Lab Hematol [Epub ahead of print 2014 Oct 13]. doi: 10.1111/ijlh.12301.

47. Lima-Oliveira G, Salvagno GL, Danese E, Brocco G, Guidi GC, Lippi G. Contamination of lithium heparin blood by K2-ethylenediaminetetraacetic acid (EDTA): an experimental evaluation. Biochem Med (Zagreb) 2014;24:359–67. doi: 10.11613/BM.2014.038.

48. Berg JE, Ahee P, Berg JD. Variation in phlebotomy techniques in emergency medicine and the incidence of haemolysed samples. Ann Clin Biochem 2011;48:562–5. doi: 10.1258/acb.2011.011099.

49. Lima-Oliveira G, Lippi G, Salvagno GL, Montagnana M, Picheth G, Guidi GC. Incorrect order of draw could be mitigate the patient safety: a phlebotomy management case report. Biochem Med (Zagreb) 2013;23:218–23. doi: 10.11613/BM.2013.026.

50. Cornes MP, Ford C, Gama R. Spurious hyperkalaemia due to EDTA contamination: common and not always easy to identify. Ann Clin Biochem 2008;45:601–3. doi: 10.1258/acb.2008.007241.

51. Sharratt CL, Gilbert CJ, Cornes MP, Ford C, Gama R. EDTA sample contamination is common and often undetected, putting patients at unnecessary risk of harm. Int J Clin Pract 2009;63:1259–62. doi: 10.1111/j.1742-1241.2008.01981.x.

52. Lippi G, Simundic AM. Laboratory networking and sample quality: a still relevant issue for patient safety. Clin Chem Lab Med 2012;50:1703–5. doi: 10.1515/cclm-2012-0245.

53. Lippi G, Lima-Oliveira G, Nazer SC, Moreira ML, Souza RF, Salvagno GL. Suitability of a transport box for blood sample shipment over a long period. Clin Biochem 2011;44:1028–9. doi: 10.1016/j.clinbiochem.2011.05.028.

54. Clinical and Laboratory Standards Institute. Procedures for handling and processing of blood specimens for common laboratory tests; approved guideline – 4th ed. CLSI document H18-A4. CLSI: Wayne, PA, 2010.

55. Zaninotto M, Tasinato A, Padoan A, Vecchiato G, Pinato A, Sciacovelli L, et al. An integrated system for monitoring the quality of sample transportation. Clin Biochem 2012;45:688–90. doi: 10.1016/j.clinbiochem.2012.02.013.

56. Zaninotto M, Tasinato A, Padoan A, Vecchiato G, Pinato A, Sciacovelli L, et al. Effects of sample transportation on commonly requested laboratory tests. Clin Chem Lab Med 2012;50:1755–60. doi: 10.1515/cclm-2012-0150.

57. Lippi G, Salvagno GL, Montagnana M, Guidi GC. Preparation of a quality sample: effect of centrifugation time on stat clinical chemistry testing. Lab Med 2007;38:172–6. doi: 10.1309/D8TJCARUW575CXYH.

58. Clinical and Laboratory Standards Institute. Collection, transport, and processing of blood specimens for coagulation testing and general performance of coagulation assays; approved guideline – 5th ed. CLSI document H21-A5. CLSI: Wayne, PA, 2008.

59. Lippi G, Salvagno GL, Danese E, Lima-Oliveira G, Brocco G, Guidi GC. Inversion of lithium heparin gel tubes after centrifugation is a significant source of bias in clinical chemistry testing. Clin Chim Acta 2014;436:183–7. doi: 10.1016/j.cca.2014.05.028.

60. Da Rin G, Lippi G. The quality of diagnostic testing may be impaired during shipment of lithium-heparin gel tubes. Clin Chem Lab Med 2014;52:1633–7.

61. Daves M, Giacomuzzi K, Tagnin E, Jani E, Adcock Funk DM, Favaloro EJ, et al. Influence of centrifuge brake on residual platelet count and routine coagulation tests in citrated plasma. Blood Coagul Fibrinolysis 2014;25:292–5. doi: 10.1097/MBC.0000000000000026.

62. Lippi G, Rossi R, Ippolito L, Zobbi V, Azzi D, Pipitone S, et al. Influence of residual platelet count on routine coagulation, factor VIII, and factor IX testing in postfreeze-thaw samples. Semin Thromb Hemost 2013;39:834–9. doi: 10.1055/s-0033-1356572.

63. Mensel B, Wenzel U, Roser M, Ludemann J, Nauck M. Considerably reduced centrifugation time without increased hemolysis: evaluation of the new BD Vacutainer SST II Advance. Clin Chem 2007;53:794–5. doi: 10.1373/clinchem.2006.079582.

64. Suchsland J, Friedrich N, Grotevendt A, Kallner A, Lüdemann J, Nauck M, et al. Optimizing centrifugation of coagulation samples in laboratory automation. Clin Chem Lab Med 2014;52:1187–91. doi: 10.1515/cclm-2014-0038.

65. Plebani M, Sciacovelli L, Lippi G. Quality indicators for laboratory diagnostics: consensus is needed. Ann Clin Biochem 2011;48:479. doi: 10.1258/acb.2011.011088.

66. Plebani M, Sciacovelli L, Marinova M, Marcuccitti J, Chiozza ML. Quality indicators in laboratory medicine: a fundamental tool for quality and patient safety. Clin Biochem 2013;46:1170–4. doi: 10.1016/j.clinbiochem.2012.11.028.

67. Plebani M, Sciacovelli L, Aita A, Padoan A, Chiozza ML. Quality indicators to detect pre-analytical errors in laboratory testing. Clin Chim Acta 2014;432:44–8. doi: 10.1016/j.cca.2013.07.033.

68. Sciacovelli L, Plebani M. The IFCC Working Group on laboratory errors and patient safety. Clin Chim Acta 2009;404:79–85. doi: 10.1016/j.cca.2009.03.025.

69. Sciacovelli L, O’Kane M, Skaik YA, Caciagli P, Pellegrini C, Da Rin G, et al.; IFCC WG-LEPS. Quality indicators in laboratory medicine: from theory to practice. Preliminary data from the IFCC Working Group Project ‘Laboratory Errors and Patient Safety’. Clin Chem Lab Med 2011;49:835–44.

70. Plebani M, Astion ML, Barth JH, Chen W, de Oliveira Galoro CA, Escuer MI, et al. Harmonization of quality indicators in laboratory medicine. A preliminary consensus. Clin Chem Lab Med 2014;52:951–8.

71. Plebani M, Sciacovelli L, Aita A, Chiozza ML. Harmonization of pre-analytical quality indicators. Biochem Med (Zagreb) 2014;24:105–13. doi: 10.11613/BM.2014.012.

72. International Organization for Standardization. ISO 15189: medical laboratories – particular requirements for quality and competence. ISO: Geneva, 2012.

73. Kristensen GB, Aakre KM, Kristoffersen AH, Sandberg S. How to conduct external quality assessment schemes for the pre-analytical phase? Biochem Med (Zagreb) 2014;24:114–22. doi: 10.11613/BM.2014.013.

74. Lippi G, Luca Salvagno G, Blanckaert N, Giavarina D, Green S, Kitchen S, et al. Multicenter evaluation of the hemolysis index in automated clinical chemistry systems. Clin Chem Lab Med 2009;47:934–9. doi: 10.1515/CCLM.2009.218.

75. Plebani M. Errors in clinical laboratories or errors in laboratory medicine? Clin Chem Lab Med 2006;44:750–9. doi: 10.1515/CCLM.2006.123.

76. Lippi G, Salvagno GL, Montagnana M, Franchini M, Guidi GC. Phlebotomy issues and quality improvement in results of laboratory testing. Clin Lab 2006;52:217–30.

77. Amon E. Communication strategies for reducing hospital error and professional liability. Obstet Gynecol Surv 2002;57:713–4. doi: 10.1097/00006254-200211000-00001.

78. Francke AL, Smit MC, de Veer AJ, Mistiaen P. Factors influencing the implementation of clinical guidelines for health care professionals: a systematic meta-review. BMC Med Inform Decis Mak 2008;8:38. doi: 10.1186/1472-6947-8-38.

79. Bölenius K, Söderberg J, Hultdin J, Lindkvist M, Brulin C, Grankvist K. Minor improvement of venous blood specimen collection practices in primary health care after a large-scale educational intervention. Clin Chem Lab Med 2013;51:303–10. doi: 10.1515/cclm-2012-0159.

80. Simundic AM, Cornes M, Grankvist K, Lippi G, Nybo M, Kovalevskaya S, et al. Survey of national guidelines, education and training on phlebotomy in 28 European countries: an original report by the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) working group for the preanalytical phase (WG-PA). Clin Chem Lab Med 2013;51:1585–93. doi: 10.1515/cclm-2013-0283.

81. Mittman BS, Tonesk X, Jacobson PD. Implementing clinical practice guidelines: social influence strategies and practitioner behavior change. Qual Rev Bull 1992;18:413. doi: 10.1016/S0097-5990(16)30567-X.

82. Bölenius K, Brulin C, Grankvist K, Lindkvist M, Söderberg J. A content validated questionnaire for self-reported venous blood sampling practices. BMC Res Notes 2012;5:39. doi: 10.1186/1756-0500-5-39.

83. Lippi G, Blanckaert N, Bonini P, Green S, Kitchen S, Palicka V, et al. Causes, consequences, detection, and prevention of identification errors in laboratory diagnostics. Clin Chem Lab Med 2009;47:143–53. doi: 10.1515/CCLM.2009.045.

84. Giménez-Marín A, Rivas-Ruiz F, del Mar Pérez-Hidalgo M, Molina-Mendoza P. Pre-analytical errors management in the clinical laboratory: a five-year study. Biochem Med (Zagreb) 2014;24:248–57. doi: 10.11613/BM.2014.027.

85. Baird G. The laboratory test utilization management toolbox. Biochem Med (Zagreb) 2014;24:223–34. doi: 10.11613/BM.2014.025.

86. Janssens PM, Staring W, Winkelman K, Krist G. Active intervention in hospital test request panels pays. Clin Chem Lab Med [Epub ahead of print 2014 Oct 10]. doi: 10.1515/cclm-2014-0575.

87. Lippi G, Banfi G, Buttarello M, Ceriotti F, Daves M, Dolci A, et al. Recommendations for detection and management of unsuitable samples in clinical laboratories. Clin Chem Lab Med 2007;45:728–36. doi: 10.1515/CCLM.2007.174.

88. Nikolac N, Supak-Smolcić V, Simundić AM, Celap I; Croatian Society of Medical Biochemistry and Laboratory Medicine. Croatian Society of Medical Biochemistry and Laboratory Medicine: national recommendations for venous blood sampling. Biochem Med (Zagreb) 2013;23:242–54. doi: 10.11613/BM.2013.031.

89. Plebani M, Panteghini M. Promoting clinical and laboratory interaction by harmonization. Clin Chim Acta 2014;432:15–21. doi: 10.1016/j.cca.2013.09.051.

90. Plebani M. Exploring the iceberg of errors in laboratory medicine. Clin Chim Acta 2009;404:16–23. doi: 10.1016/j.cca.2009.03.022.

91. Piva E, Pelloso M, Penello L, Plebani M. Laboratory critical values: automated notification supports effective clinical decision making. Clin Biochem 2014;47:1163–8. doi: 10.1016/j.clinbiochem.2014.05.056.

92. Plebani M. Harmonization in laboratory medicine: the complete picture. Clin Chem Lab Med 2013;51:741–51. doi: 10.1515/cclm-2013-0075.

Received: 2014-10-26
Accepted: 2014-10-28
Published Online: 2014-12-9
Published in Print: 2015-2-1

©2015 by De Gruyter
