The dried blood spot (DBS) method allows patients and researchers to collect blood on a sampling card using a skin-prick. An important issue in the application of DBSs is that samples for therapeutic drug monitoring are frequently rejected because of poor spot quality, leading to delayed monitoring or missing data. We describe the development and performance of a web-based application (app), accessible on smartphones, tablets and desktops, that assesses DBS quality at the time of sampling by analyzing a picture of the DBS.
The performance of the app was compared to the judgment of experienced laboratory technicians for samples obtained in a trained and an untrained setting. Robustness and user tests were also performed.
In the trained setting, the app yielded an adequate decision in 90.0% of cases, with 4.1% false negatives (insufficient-quality DBSs incorrectly not rejected) and 5.9% false positives (sufficient-quality DBSs incorrectly rejected). In the untrained setting, this was 87.4%, with 5.5% false negatives and 7.1% false positives. A patient user test resulted in a system usability score of 74 out of 100, with a median time of 1 min and 45 s to use the app. Robustness testing showed a repeatability of 84%. Using the app in the trained and untrained settings increases the proportion of sufficient-quality samples from 80% to 95.9% and from 42.2% to 87.9%, respectively.
The app can be used in trained and untrained settings to decrease the number of insufficient-quality DBS samples.
Dried blood spot (DBS) sampling is a technique that finds its application in clinical research and routine patient care as part of therapeutic drug monitoring (TDM). Using a skin-prick, capillary blood is applied to a sampling card that is allowed to dry. From these DBSs, blood drug concentrations, clinical chemical parameters such as creatinine, or titers of antiviral antibodies can be measured. The advantages of DBSs include increased sample stability and ease of sample storage, a more convenient and simple sampling procedure with reduced risk of infection, no phlebotomist required for sampling, and the possibility of sending samples by regular mail without special precautions. Therefore, DBSs are used to facilitate sampling for TDM in remote areas and patient home sampling.
One of the major issues in DBS sampling is the quality of the produced blood spots. In short, a good-quality blood spot is round, consists of one droplet, does not touch other droplets and is large enough for punching a 3, 5 or 8 mm disc. However, even in controlled environments, where trained phlebotomists obtain the DBS samples, 4–5% of the samples are rejected because of insufficient quality. When patients sample at home as part of routine care, 80% of obtained blood spots are of sufficient quality. In clinical research in developing countries, where DBS sampling is performed by untrained researchers, rejection rates can be as high as 52%. Rejection of DBS samples can lead to delayed monitoring of patients or missing data in clinical research. Other factors impacting DBS sample quality are the choice of filter paper, analyte stability, storage and transport conditions, exposure to direct sunlight, drying time and humidity.
Currently, quality inspection of the DBSs is performed at the laboratory by experienced laboratory personnel (ELP), based on available World Health Organization (WHO) and Clinical and Laboratory Standards Institute (CLSI) guidelines and on quality standards set by the individual laboratory. The issue with this workflow is that quality inspection is performed upon arrival at the laboratory rather than immediately after the moment of sampling. If samples are of insufficient quality, timely resampling is often not possible.
Although training can decrease the rejection rate of samples, it would be more convenient if a phlebotomist, researcher or patient were able to determine the quality of a sample at the time of sampling, allowing immediate resampling if the sample is of insufficient quality.
In newborn bloodspot screening, an optical scanning instrument is available for measuring spot quality, but this method still requires that samples are sent to the laboratory before quality inspection. Currently, no standardized, automated method exists for determining spot quality in finger skin-prick DBS sampling at the time of sampling. We aimed to develop a tool that can be easily used by patients, healthcare workers and researchers at the time of sampling and gives reliable results for DBS spotting quality. We describe the development and performance of a web-based application (app) capable of measuring DBS quality by means of capturing images of the blood spot. The app was tested in both a trained and an untrained setting.
Materials and methods
Using the app
The app is a responsive web-based application accessible in the browser of a smartphone, tablet, laptop or desktop PC. The app requires a working Internet connection to load, but no installation on a device is required. After the app has been loaded and saved in the browser's cache, it can be used off-line. A detailed instruction on how to use the app can be found in Figure 1. The app is available in Dutch and English and can be found at www.dbsapp.umcg.nl. The app was developed by MAD multimedia (Groningen, The Netherlands) in consultation with specialists from the Department of Clinical Pharmacy and Pharmacology of the University Medical Centre Groningen (Groningen, The Netherlands). A detailed description of the app specifications can be found in Supplementary file S1.
DBS samples were visually inspected for layering, contamination, hemolysis, dilution, clotting, smearing of blood, saturation of the paper, coloration and intactness of the filter paper, based on available guidelines, because all of these factors can influence analytical results. Two experienced technicians (ELP) independently evaluated the test samples and were considered the gold standard (GS) for the app. When the judgments of the ELP differed, the sample was re-evaluated by the ELP until consensus was reached. The performance of the app was defined as the percentage of samples for which the judgment of the app agreed with the GS. If the judgments of the app and the ELP differ, the result is either a false positive or a false negative. False positives (app judges the sample as insufficient, ELP judges it acceptable) lead to unnecessary resampling but not to delayed monitoring. False negatives (app judges the sample as acceptable, ELP judges it insufficient) lead to sending in samples of insufficient quality, resulting in delayed monitoring or incomplete data.
In clinical validation studies, usually 95% of samples obtained by trained phlebotomists are judged as acceptable. Therefore, we set the performance qualification of the app at 95% prior to testing the app.
A sample size calculation was performed based on a non-inferiority hypothesis, a power of 80% and an alpha of 5%. The expected agreement of the ELP (P1) was set at 0.99 and that of the app (P2) at 0.96. The non-inferiority margin was set at 0.01 and the sampling ratio at 1:1. This resulted in a sample size of 187. For the trained setting, 221 DBS samples were available. For the untrained setting, 1610 DBS samples were available. To avoid selection bias, we decided to use all samples to test the app.
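The stated sample size can be reproduced with the standard normal-approximation formula for non-inferiority of two proportions. Note that the sign convention for the margin (a negative delta of −0.01, i.e. the app may perform up to 1% worse than its expected value beyond the expected difference) is our assumption; it is the convention used by common sample-size calculators and yields the reported n of 187:

```python
from math import ceil
from statistics import NormalDist

def noninferiority_n(p1, p2, delta, alpha=0.05, power=0.80):
    """Per-group sample size for a non-inferiority comparison of two
    proportions (1:1 sampling ratio, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha)   # one-sided alpha
    z_b = NormalDist().inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2 - delta) ** 2)

# P1 = expected ELP agreement, P2 = expected app agreement,
# delta = -0.01 (assumed sign convention for the margin)
print(noninferiority_n(0.99, 0.96, -0.01))  # → 187
```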
For the performance testing, patient samples from earlier studies were used. Additionally, patients were asked to participate in the user test. Because previously collected samples were available, the need to obtain written informed consent from the subjects was waived by the Ethics Committee of the University Medical Center Groningen (Metc 2011.394).
In total, 221 blood spots were collected from 181 adult kidney transplant patients. Samples were collected during routine visits of transplant patients to the clinic using a standardized method. Trained phlebotomists obtained the samples by fingerprick using a Blue Microtainer Contact-activated Lancet (BD and Co, Franklin Lakes, NJ, USA) and letting a drop of blood fall freely on a Whatman FTA DMPK-C sampling card (GE Healthcare, Chicago, IL, USA).
A total of 1610 individual spots were collected in a previous study. The samples were collected as part of a TDM study of anti-tuberculosis drugs in Bangladesh (n=244), Belarus (n=358), Indonesia (n=516) and Paraguay (n=492). DBS samples were obtained by local healthcare workers who did not receive on-the-job training and had only written instructions in English before sampling. Although 1856 individual spots were obtained in the aforementioned study, some spots were already analyzed before a photo could be captured, resulting in 1610 usable spots for this study.
Testing app performance
The app was tested using an Apple iPhone 5S (Cupertino, CA, USA), equipped with a standard 8 megapixel camera. The DBS card was placed on a clean and flat surface. No extra lighting was used apart from the standard fluorescent ceiling lights (3350 lumen) available in the laboratory. To avoid variation, the iPhone 5S was not handheld but fixed in landscape position 8 cm above the DBS card. Pictures were taken after auto-focusing of the camera, without using the flashlight. All pictures of the samples were processed in duplicate in the app on a desktop PC.
The International Conference on Harmonization (ICH) states: “The robustness/ruggedness of an analytical procedure is a measure of its capacity to remain unaffected by small but deliberate variations in method parameters and provides an indication of its reliability during normal usage”. To test robustness, factors that could possibly interfere with the performance of the app were identified: the person taking the picture, camera type, lighting, casting a shade, use of the camera’s flashlight, distance between sample and camera, angle for taking the picture, and the device on which the app is used. To test the influence of these factors, a library of 16 samples was made from the “trained setting” sample set, selected because they were difficult for the app to process during performance testing, as experienced by the technicians testing the app and based on the function of the app. The test samples consisted of five false negatives, five false positives, three good spots and three bad spots, as determined by the app during the initial performance testing. Three different investigators using three different phones tested the app for all 16 samples, using the ideal circumstances described under “Testing app performance” as baseline, with alteration of one of the following conditions for each test run: (1) dimly lit room (no ceiling lights and only limited daylight through a small window), (2) casting a shade on the sampling card, (3) using the camera’s flashlight, (4) using a distance of 50 cm between camera and sampling card, (5) taking the picture from a 45° angle. The success rate was defined as the percentage of samples that yielded the same results in the app as in the initial performance testing. The three phone cameras used were the standard equipped cameras, using autofocus, of the iPhone 5, Nokia C5 2010 version (Espoo, Finland) and Samsung Galaxy S7 Edge (Seoul, South Korea).
All pictures were tested in the app both on the device the picture was taken on and on a PC, with the exception of the photos taken with the Nokia C5, which were only tested on a PC.
Usability is defined as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use”. A user test was designed based on available literature; details can be found in Supplementary file S2. Results were scored using the system usability score (SUS); a score above 70 was considered acceptable.
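The SUS itself is computed with a fixed, published scheme: ten Likert items scored 1–5, where odd-numbered items contribute (score − 1), even-numbered items contribute (5 − score), and the sum is scaled to a 0–100 range. A minimal sketch:

```python
def sus_score(responses):
    """System Usability Scale for ten 1-5 Likert responses.
    Odd-numbered items (index 0, 2, ...) contribute (score - 1),
    even-numbered items contribute (5 - score); the sum of the
    ten contributions (0-40) is multiplied by 2.5 to give 0-100."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Fully positive answers to every item give the maximum score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```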
In total, 149 (67.4%) samples were judged as acceptable and 72 (32.6%) as insufficient by the GS. The first version of the app judged 76.8% of the samples accurately, with 10.5% false negatives and 12.7% false positives. For the false negatives, two types of errors were identified: the app could not identify layering of blood spots (Figure 2A), nor spots that were hemolytic or discolored due to humidity (Figure 2B). The false positives consisted of spots that were not circle-shaped (Figure 2C). Because this result did not meet the performance qualification of 95%, the app was improved, resulting in a second version. In this version, nine electronic iterations with a 10° rotation were introduced and the width-to-height ratio tolerance was set at 12%, based on retesting of false-positive and false-negative samples (see Supplementary file S1). The second version of the app achieved a performance of 90.0%, with 5.9% false positives and 4.1% false negatives.
In the second version of the app, the number of layered spots identified as false negatives was reduced from 21 to 7 owing to the introduction of the nine iterations in which the picture is rotated. As a result, the number of false positives dropped from 28 to 13 and the number of false negatives dropped from 23 to 9. The second version of the app was used for all remaining tests.
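The rotated width-to-height check can be illustrated as follows. This is a plausible reconstruction, not the app's actual pipeline (which is not published): a circular spot keeps a near-square bounding box at every rotation, whereas a layered, elongated spot whose long axis happens to lie diagonally only reveals its elongation once the image is rotated toward that axis — which is why the nine 10°-step iterations reduced the layered false negatives:

```python
import numpy as np

def passes_shape_check(mask, max_dev=0.12, steps=9, step_deg=10):
    """Accept a spot only if the bounding box of its blood pixels
    deviates at most 12% from square at every tested rotation.
    `mask` is a boolean image of detected blood pixels (an
    illustrative stand-in for the app's red-pixel detection)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys]).astype(float)
    pts -= pts.mean(axis=1, keepdims=True)      # center the spot
    for k in range(steps):
        a = np.deg2rad(k * step_deg)
        rot = np.array([[np.cos(a), -np.sin(a)],
                        [np.sin(a),  np.cos(a)]])
        rx, ry = rot @ pts                      # rotate all pixels
        w, h = rx.max() - rx.min(), ry.max() - ry.min()
        if abs(w - h) / max(w, h) > max_dev:
            return False                        # too elongated
    return True
```

On a synthetic circular disc this returns True, while an elongated elliptical "layered" spot fails the check.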
The app was used to test the 1610 samples obtained in an untrained setting. The performance was 87.4%, with 5.5% false negatives and 7.1% false positives, comparable to the clinical samples. Results per country can be found in Table 1. Only 42.2% of the samples were of sufficient quality for analysis using 8 mm punches, as determined by the GS. Hypothetically, if the app had been present and used correctly at the time of sampling, and if the resampling suggested by the app had been performed without error, the proportion of samples sufficient for analysis would have been 87.9% (Table 2). It should be noted that the reasons for insufficient quality of DBS samples differed per country in the untrained setting. For instance, Belarus had a relatively large number of very small spot sizes (<8 mm), while in Bangladesh, Indonesia and Paraguay humidity-related problems were more abundant.
Table 1: Performance of the app per country in the untrained setting.

| Judgment | Paraguay | Belarus | Bangladesh | Indonesia | Total |
|---|---|---|---|---|---|
| Correct | 416 (84.6%) | 348 (97.2%) | 194 (79.5%) | 449 (87.0%) | 1407 (87.4%) |
| False negative | 23 (4.7%) | 3 (0.8%) | 29 (11.9%) | 34 (6.6%) | 89 (5.5%) |
| False positive | 53 (10.8%) | 7 (2.0%) | 21 (8.6%) | 33 (6.4%) | 114 (7.1%) |
| Total | 492 (100%) | 358 (100%) | 244 (100%) | 516 (100%) | 1610 (100%) |
Table 2: Proportion of samples of sufficient quality with and without use of the app.

| Samples of sufficient quality | Without app, % | With app, % |
|---|---|---|
| Trained setting | 80.0 | 95.9 |
| Untrained setting | 42.2 | 87.9 |
During robustness testing, the deliberately induced unfavorable circumstances sometimes resulted in the app not being able to identify red pixels in a picture. As a result, the spots could not be indicated in the app (Figure 1, step 4) and the steps in the app could not be completed. This was recorded as an error. Because the error rate of the Nokia C5 was 36% and errors also occurred under perfect circumstances, the Nokia C5 was considered unsuitable for use with the app and its results were omitted. For each factor, a total of 64 samples were analyzed (16 pictures per phone, measured on both the phone and a PC). The overall performance of the robustness test is shown in Table 3. The success rate of the app was 84% under perfect conditions. The angle, lighting, casting a shade and the distance all influenced the performance of the app. Therefore, these specific issues are addressed in the instructions (Figure 1). The use of the flashlight did not have a major influence on the app’s results. The error rate was 0% for the two newest phones (Samsung Galaxy S7 Edge and iPhone 5S).
Table 3: Results of the robustness test.

| Factor in the robustness test | Success rate, % | Error rate, % |
|---|---|---|
| Dimly lit room | 67 | 19 |
| Casting a shade on the sampling card | 77 | 3 |
| Distance of 50 cm | 39 | 50 |
| Angle of 45° | 29 | 54 |

In each test, one factor was changed compared to the perfect conditions described in the Materials and methods section. The success rate is defined as the percentage of samples that yielded the same results in the app as in the initial performance testing.
After verbal consent, a total of seven patients and one caregiver participated in the user test. Details are provided in Supplementary file S2. None of the patients successfully used the app without prior instructions. Although the app was built to be intuitive, the use of the buttons to align the picture to the frame and the indicating of the spots in particular were steps that could not be completed on the first try. After an instruction explaining the steps and pitfalls in using the app, all patients could complete all steps in the app, with a median time of 1 min and 45 s. The average SUS score was 74, which can be classified as acceptable. All patients and the caregiver gave a score >50, showing good overall usability of the app. The most common mistakes made by the patients were trying to pinch and swipe in step 3 (Figure 1) and forgetting to indicate the spots in step 4 (Figure 1).
We developed an app to measure spot quality in DBS sampling that can easily be accessed and used by patients and professionals to objectively determine the quality of spots collected for TDM. Because the app is accessible on different devices, it is flexible and can be used in many different situations, including home sampling and research in remote areas. Use of the app takes only a few minutes per sample.
In the first version of the app, the acceptable width-to-height ratio deviation was set lower than 12%, which resulted in 12.7% false positives in the trained setting. The false-positive results in the first version mainly consisted of spots rejected by the app because of an unacceptable width-to-height ratio. In the second version, the acceptable deviation was set at 12%, lowering the false positives from 12.7% to 5.9%. In clinical practice, a droplet falling on a card does not always produce a perfectly circle-shaped spot. The ELP can determine whether a spot consists of one droplet without smearing; even if the spot is not perfectly round, it can be acceptable (Figure 2C). Allowing higher values for the width-to-height ratio deviation would potentially decrease the number of false positives, but would increase the number of false negatives because more layered spots would wrongly be judged as acceptable. Allowing lower values would increase the number of false positives, because acceptable spots that are not entirely circle-shaped would be rejected by the app. Therefore, despite the limitations of the app, it was concluded that the second version was of sufficient quality.
The app is unable to identify hemolytic or humid spots because hemolytic discoloration of the spots is still red as defined by the specified RGB range, and such pixels are therefore identified as blood pixels by the app. In clinical practice, discoloration due to hemolysis or humidity does not become visible until approximately 24 h after application of the blood to the DBS card. Even if the app could identify hemolytic spots, this would probably not be in time to allow resampling within a reasonable time frame. For instance, the patient will already have taken the medication, so measuring a trough concentration is not possible within the intended sampling time.
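The limitation can be illustrated with a simple red-dominance pixel test. The thresholds below are assumptions for illustration only; the actual RGB range used by the app is not published. The point is that a brownish-red hemolytic pixel still satisfies any reasonable red-dominance criterion:

```python
def is_blood_pixel(r, g, b):
    """Classify a pixel as blood if it is predominantly red.
    These thresholds are illustrative assumptions, not the app's
    actual RGB range."""
    return r > 100 and r > 1.5 * g and r > 1.5 * b

# A hemolytic, brownish-red pixel still passes a red-dominance
# test, which is why the app cannot flag hemolysis:
print(is_blood_pixel(150, 90, 70))  # → True
```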
Only eight patients participated in the user test, and thus only the major problems in the usability of the app could be identified. After introduction of the app, post-introduction surveillance should be performed to enable further optimization of the usability and of the app user instructions. The robustness testing showed a repeatability of 84% under perfect conditions. This was unexpected, because the device on which the app is used should not influence the app results, and the pictures were taken under the same conditions across the three devices. However, the samples chosen for the robustness test were deliberately selected based on their difficulty, in order to test repeatability under the most extreme circumstances. For instance, one of the samples had a spot diameter of 8.6 mm. The influence of aligning the picture (Figure 1, step 3) becomes paramount in this setting, because 8.5 mm is judged as acceptable and 8.4 mm as insufficient. Other spots included false negatives with multiple layered spots where the width-to-height ratio deviation was slightly lower than 12%, and false-positive spots that are not perfectly circle-shaped, as shown in Figure 2C. This could explain the observed difference between the devices used. When considering all samples obtained in the untrained setting, the robustness should be higher. In addition, during initial performance testing the phone was fixed in landscape position above the DBS, excluding variation in the distance between phone and DBS, whereas during robustness testing the phone was handheld. Variation in the distance between phone and sample might also contribute to reduced repeatability under perfect conditions, especially considering that a distance of 50 cm has a large influence. Because of the difference in results between smartphones, it is recommended in future studies or applications to first test the device intended for use with the app for repeatability, especially with regard to the setting in which the app will be used and the different users.
The performance of 90.0% and 87.4% for samples obtained in the trained and untrained settings, respectively, did not meet the performance criterion of 95% set beforehand. However, the current version of the app would lead to only 5.9% and 7.1% unnecessary resampling, respectively. Although this is not optimal, resampling, when the app is used correctly, should lead to (another) good-quality spot being sent in; no delay in patient monitoring or missing data in research would be introduced. Thus, the current version of the app should lead to good-quality samples being sent in 95.9% and 94.5% of cases, respectively.
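The arithmetic behind these final percentages is simple: under the idealized assumption that every app-rejected sample (whether a true or a false positive) is successfully replaced by a good spot, only the false negatives reach the laboratory as insufficient samples:

```python
def pct_good_sent(false_negative_pct):
    """Percentage of good-quality samples reaching the laboratory,
    assuming every app-rejected sample is resampled successfully;
    only false negatives (bad spots the app accepts) slip through."""
    return round(100.0 - false_negative_pct, 1)

print(pct_good_sent(4.1))  # trained setting: 4.1% FN → 95.9
print(pct_good_sent(5.5))  # untrained setting: 5.5% FN → 94.5
```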
In a setting where training of healthcare workers is not possible, the app might lead to a major increase in sufficient-quality samples (from 42.2% to 87.9%, Table 2). In a setting where training of patients or healthcare workers is possible, the potential benefit of the app is less pronounced. Training of healthcare workers in DBS sampling can lead to 100% sufficient spot quality in a research setting. However, patients trained in DBS sampling who perform sampling at home as part of routine care produce only 80% sufficient-quality spots. Therefore, application of the app in a patient home-sampling setting might still increase the number of sufficient-quality spots (from 80% to 95.9%). However, this increase will only be possible if patients are trained in using the app, as shown by the user test, and if robustness is improved after implementation.
One of the limitations of the app is that the current version only works with DBS sampling paper that has the same size and dimensions as Whatman FTA DMPK-C cards, because the frame of the paper is used to measure the size of the spots. However, other commonly used DBS sampling cards, such as the Ahlstrom AutoCollect and Whatman FTA DMPK variants A and B, have the same dimensions. In addition, the app is calibrated for 8 mm punches. If smaller punches are used, the app needs to be calibrated for the appropriate punch size. However, other sampling instructions advise letting the blood drop fall freely on the DBS card. A DBS generated from a freely fallen blood drop is at least 8 mm in diameter, owing to the viscosity of the blood and the subsequent formation and falling of a blood drop. Even when smaller punches are used for analysis, the current app settings would still be correct for the evaluation of a DBS. As mentioned before, insufficient-quality spots due to humidity or hemolysis cannot be identified by the app. This can be challenging if sampling and drying are performed in extremely humid conditions, such as tropical areas, in which case additional precautions on sample handling are needed. The app is developed to determine spot quality, after the spot has been made by the subject, based on spot size, color and shape. Other important factors affecting DBS sample quality, such as differences between sampling card materials, hematocrit and volcano effects on spot formation, and the influence of drying time, sample transport and direct exposure to sunlight, need to be addressed separately. Finally, the technician is responsible for the final judgment of the quality of received samples and should always determine whether a received DBS sample is fit for analysis. Therefore, the app is only an aid for patients and researchers and is not defined as a medical device.
DBS sampling is a patient-friendly and easy-to-use sampling method. However, insufficient spot quality is a major issue in DBS sampling. The DBS app is a quick and easy tool to objectively measure the quality of DBSs. Based on our tests, the app can increase the proportion of sufficient-quality spots from 42.2% to 87.9% in an untrained setting and from 80% to 95.9% in a trained setting. The app is accessible in a browser by any patient, caregiver or researcher with a smartphone, tablet or PC. The app can be a valuable asset for increasing the proportion of sufficient-quality spots in patient care and the amount of usable data in DBS research studies, and can contribute to a more widespread use of DBS technology in bioanalysis and TDM.
Funding source: Merck Sharp and Dohme
Award Identifier / Grant number: SDD351581
The authors thank Petra Denig from the UMCG for her assistance in setting up the user test, Karin Vermeulen of the UMCG for her assistance with the power calculation and Kees Wildeveld and Nadia El Hamouchi for their assistance in the robustness testing.
Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.
Employment or leadership: None declared.
Honorarium: None declared.
Competing interests: R. Brinkman is an employee of MAD Multimedia, the company that developed the app. The funding organization played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.
1. Hofman S, Bolhuis MS, Koster RA, Akkerman OW, van Assen S, Stove C, et al. Role of therapeutic drug monitoring in pulmonary infections: use and potential for expanded use of dried blood spot samples. Bioanalysis 2015;7:481–95.
2. Veenhof H, Koster RA, Alffenaar JW, Berger SP, Bakker SJ, Touw DJ. Clinical validation of simultaneous analysis of tacrolimus, cyclosporine A and creatinine in dried blood spots in kidney transplant patients. Transplantation 2017;101:1727.
3. Vu D, Alffenaar J, Edelbroek P, Brouwers J, Uges D. Dried blood spots: a new tool for tuberculosis treatment optimization. Curr Pharm Des 2011;17:2931–9.
4. Eick G, Urlacher SS, McDade TW, Kowal P, Snodgrass JJ. Validation of an optimized ELISA for quantitative assessment of Epstein-Barr virus antibodies from dried blood spots. Biodemogr Soc Biol 2016;62:222–33.
5. Edelbroek PM, van der Heijden J, Stolk LM. Dried blood spot methods in therapeutic drug monitoring: methods, assays, and pitfalls. Ther Drug Monit 2009;31:327–36.
6. Hoogtanders K, van der Heijden J, Christiaans M, Edelbroek P, van Hooff JP, Stolk LM. Therapeutic drug monitoring of tacrolimus with the dried blood spot method. J Pharm Biomed Anal 2007;44:658–64.
7. Enderle Y, Foerster K, Burhenne J. Clinical feasibility of dried blood spots: analytics, validation, and applications. J Pharm Biomed Anal 2016;130:231–43.
8. CLSI. Blood Collection on Filter Paper for Newborn Screening Programs; Approved Standard – Sixth Edition. CLSI Document NBS01-A6. Wayne, PA: Clinical and Laboratory Standards Institute; 2013.
9. Panchal T, Spooner N, Barfield M. Ensuring the collection of high-quality dried blood spot samples across multisite clinical studies. Bioanalysis 2017;9:209–13.
10. Al-Uzri AA, Freeman KA, Wade J, Clark K, Bleyle LA, Munar M, et al. Longitudinal study on the use of dried blood spots for home monitoring in children after kidney transplantation. Pediatr Transplant, doi:10.1111/petr.12983.
11. Zuur M, Veenhof H, Aleksa A, van ’t Boveneind-Vrubleuskaya N, Darmawan E, Hasnain G, et al. Quality assessment of dried blood spots from tuberculosis patients from four countries. Ther Drug Monit 2019, doi:10.1097/FTD.0000000000000659.
12. Zakaria R, Allen KJ, Koplin JJ, Roche P, Greaves RF. Advantages and challenges of dried blood spot analysis by mass spectrometry across the total testing process. EJIFCC (PMC5282914).
13. World Health Organization. Participant Manual, Module 14: Blood Collection and Handling – Dried Blood Spot (DBS). 2005.
14. Vu DH, Bolhuis MS, Koster RA, Greijdanus B, de Lange WC, van Altena R, et al. Dried blood spot analysis for therapeutic drug monitoring of linezolid in patients with multidrug-resistant tuberculosis. Antimicrob Agents Chemother 2012;56:5758–63.
15. Dantonio PD, Stevens G, Hagar A, Ludvigson D, Green D, Hannon H, et al. Comparative evaluation of newborn bloodspot specimen cards by experienced laboratory personnel and by an optical scanning instrument. Mol Genet Metab 2014;113:62–6.
17. International Conference on Harmonization. Validation of analytical procedures: text and methodology Q2(R1). Geneva, Switzerland; 2005.
18. International Organization for Standardization. ISO 9241-11: Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs) – Part 11: Guidance on Usability. 1998.
19. Georgsson M, Staggers N. An evaluation of patients’ experienced usability of a diabetes mHealth system using a multi-method approach. J Biomed Inform 2016;59:115–29.
20. Georgsson M, Staggers N. Quantifying usability: an evaluation of a diabetes mHealth system on effectiveness, efficiency, and satisfaction metrics with associated user characteristics. J Am Med Inform Assoc 2015;23:5–11.
21. Brooke J. SUS: a quick and dirty usability scale. In: Jordan PW, Weerdmeester B, Thomas A, McClelland IL, editors. Usability Evaluation in Industry; 1996.
22. Brooke J. SUS: a retrospective. J Usability Stud 2013;8:29–40.
23. Virzi RA. Refining the test phase of usability evaluation: how many subjects is enough? Hum Factors 1992;34:457–68.
24. Denniff P, Spooner N. Effect of storage conditions on the weight and appearance of dried blood spot samples on various cellulose-based substrates. Bioanalysis 2010;2:1817–22.
25. Koster RA, Botma R, Greijdanus B, Uges DR, Kosterink JG, Touw DJ, et al. The performance of five different dried blood spot cards for the analysis of six immunosuppressants. Bioanalysis 2015;7:1225–35.
26. International Organization for Standardization. ISO 15189: Medical Laboratories – Requirements for Quality and Competence. 2012.
27. Study Group 1 of the Global Harmonization Task Force. Definition of the Terms ‘Medical Device’ and ‘In Vitro Diagnostic (IVD) Medical Device’. GHTF/SG1/N071; 2012.
The online version of this article offers supplementary material (https://doi.org/10.1515/cclm-2019-0437).
©2019 Jan-Willem C. Alffenaar et al., published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.