Alice S. Forster, Greg Rubin, Jon D. Emery, Matthew Thompson, Stephen Sutton, Niek de Wit, Fiona M. Walter and Georgios Lyratzopoulos

Measuring patient experience of diagnostic care and acceptability of testing

Open Access
De Gruyter | Published online: January 28, 2021

Abstract

A positive patient experience has long been recognised as a key feature of a high-quality health service; however, assessments of patient experience often exclude diagnostic care. The experience of diagnostic services and the acceptability of diagnostic tests are often conflated, with little clarity about when and how either should be measured. These problems contrast with the growth in the development and marketing of new tests and investigation strategies. Building on an appraisal of current practice, we propose that the experience of diagnostic services and the acceptability of tests should be assessed separately, and we describe the distinct components of each. Such evaluations will enhance the delivery of patient-centred care and facilitate patient choice.

Introduction

Patient-centred care is a core aim of health systems, and includes regarding patients as integral participants in the diagnostic process [1]. However, the importance of patient experience of diagnostic services, and of the acceptability of investigations, is still inadequately considered. Where they are considered, the experience of diagnostic services is often conflated with the acceptability of diagnostic tests. We outline the key components and correlates of each concept, and illustrate how they can be assessed.

Conceptualising patient experience of diagnostic services

Recognition of the importance of patient experience, defined as ‘… the sum of all interactions, shaped by an organization’s culture, that influence patient perceptions across the continuum of care’, as a distinct dimension of care quality has increased over the last two decades [2]. Building on pioneering work by the Picker Institute [3], NICE (the National Institute for Health and Care Excellence, a UK evidence-based, patient-focused, guideline-producing organisation) distinguishes five dimensions of patient experience, which, with some modifications, are appropriate for guiding the assessment of experience of diagnostic services [4]. We also propose a sixth and a seventh dimension:

  1. Patient-centred care: opportunities to discuss concerns and preferences. For example, ‘I was given the chance to discuss my concerns about the test’;

  2. Essential requirements of care: physical and psychological needs assessed and addressed, as well as patients being treated with dignity, kindness and respect. For example, ‘I was treated with dignity and respect during testing’ or ‘my test result was delivered sensitively’;

  3. Tailored healthcare: care is designed around patients’ needs and preferences. For example, ‘I was given a choice about when I had my test done’;

  4. Continuity and coordination of care, including quality and safety. For example, ‘My doctor knew important information about my medical history’ or ‘I knew when I should have the results of my test’;

  5. Shared decision-making and informed choice about testing, including communication of uncertainty. For example, ‘I felt I had an understanding of the benefits of the test’;

Novel dimensions that should be considered are:

  6. Waiting times: consideration of time waiting between test ordering, performance (including waiting for an appointment) and results [5], [6]. For example, ‘I did not have to wait long before going in to my appointment’.

  7. Service environment: assessment of the quality of diagnostic service facilities [7], quality of transport links, and availability of parking, where required. For example, ‘I was satisfied with the cleanliness of the testing clinic’.

Although the nature of the test result chiefly shapes patients’ experience of their disease, it may also confound their reported experience of testing.

Good practice in measuring patient experience of diagnostic services

Evaluating the experience of diagnostic services should encompass all phases of the diagnostic care pathway, including referral, communication of diagnostic information, test performance, and assessment. Robust nationwide measurement of experience of diagnostic services can inform policy decisions and guide patient choice between diagnostic care providers. However, major patient survey initiatives such as the US Consumer Assessment of Healthcare Providers and Systems (CAHPS) program or the UK General Practice Patient Survey (GPPS) do not adequately address the experience of diagnostic testing. It is therefore important to develop psychometrically valid items covering the key concepts proposed here, and to incorporate them into patient surveys. As a starting point, service experience items from existing surveys such as CAHPS and GPPS could be modified for diagnostic care. Although single-item example questions are provided above, multiple items will be needed to cover all key facets of experience relating to each dimension.

Assessing patient experience of diagnostic services for quality improvement can be particularly helpful in elective (non-acute) care contexts, where the potential for informed decision-making and patient choice is greatest. For example, assessment of the ‘service environment’ of testing could identify that the testing location is difficult for patients to reach, prompting a change to where the service is offered and reducing non-attendance. Assessment of experience of diagnostic services can also help to uncover inequalities in experience of diagnostic care between patient groups [8] and to identify patients at greater risk of poorer experience, guiding the development of interventions.

Conceptualising patient acceptability of diagnostic tests

While examining the experience of diagnostic services is important, the acceptability of diagnostic tests should be considered separately. Previous work to assess test experience has focused on specific aspects such as physical and psychological discomfort [7], [9], [10], [11], or on global satisfaction [7], [11], [12], [13]. We propose that test experience can be measured more systematically and comprehensively through its ‘acceptability’, treating the test as a special type of healthcare intervention.

Sekhon et al. propose that acceptability of health care interventions is defined as ‘a reflection of the extent to which people receiving a healthcare intervention consider it to be appropriate, based on anticipated or experiential cognitive and emotional responses to the intervention’ [14]. They suggest that acceptability comprises seven facets: affective attitude, burden, ethicality, intervention coherence, opportunity costs, perceived effectiveness and self-efficacy [14]. According to this framework, acceptability can be measured prospectively (in advance of undergoing the test), retrospectively (following the test) and concurrently (while undergoing the test). Clarity is however required about applying facets of acceptability in the context of diagnostic tests (Table 1).

  1. ‘Affective attitude’ can represent a global measure of accepting the test e.g. ‘I would like to have the test’.

  2. ‘Test burden’ encompasses physical and psychological discomfort during and after the test, psychological distress experienced before it and, particularly in the case of a negative result, relief after it. For example, ‘I experienced pain after the test’. It can also incorporate time burden (travel time, time away from family).

  3. ‘Test coherence’ reflects the patient’s understanding of why the test is being done given their symptoms/context. This will depend on information provision and health literacy. For example, ‘I felt I had a good understanding of why the test was being done’.

  4. ‘Perceived test effectiveness’ assesses whether patients think the test is right for their situation and whether the test will give an accurate result. It will vary by the test’s purpose (e.g. whether for ruling in or out a condition, or to guide management or to assess prognosis), and by patient experience (e.g. how well uncertainty has been communicated). For example, ‘I felt the test would help give me an accurate diagnosis’.

  5. ‘Self-efficacy’ comprises the patient’s confidence that they can physically and cognitively complete the test. For example, ‘It was easy for me to do the test’.

Table 1:

Key features of acceptability of tests (based on Sekhon et al. [14], applied to diagnostic testing).

Affective attitude: how the patient feels about the test; a global measure of acceptability.
Burden: the perceived amount of effort required to have/do the test and any resulting side-effects, both physical and psychological.
Test coherence: the extent to which the patient understands why the test is being done given their symptoms and context.
Perceived test effectiveness: the extent to which the patient believes that the test is likely to achieve its purpose given their symptoms and context, and to give an accurate result.
Self-efficacy: the patient’s confidence that they can complete the test.
Financial opportunity costs: the extent to which there are costs associated with having/doing the test.
Ethicality: the extent to which the test is a good fit with the patient’s ideological, religious or political beliefs. Preferences for over-/under-diagnosis and whether values have to be forgone to have the test are also pertinent, along with the potential impact of results on relatives in the case of genetic testing.

    Features are not listed in order of importance. Italics indicate amendments to the original framework.

The concepts of ‘opportunity costs’ and ‘ethicality’ require further elaboration for the testing context:

  1. ‘Opportunity costs’ are defined by Sekhon et al. in line with the health economics literature [14]. In the context of testing, the concept should refer to the costs of having the test, including paying for it (fee-for-service) and having to take unpaid time off work. It may also capture individuals being unable to have the test because of cost. The applicability of this dimension will vary across healthcare systems and insurance schemes. For example, ‘I had to take unpaid time off work to have the test’.

  2. ‘Ethicality’ encompasses whether the test is consistent with patients’ ideological, religious or political beliefs. For example, some patients may object to genetic tests on grounds of religious or ideological belief, or because of the potential impact a result would have on relatives or on decisions to have children. Personal preferences for trade-offs between over- and under-diagnosis of a condition, attitudes to risk, and whether patients’ preferences for bodily privacy have to be forgone in undergoing the test are also relevant.

As for assessments of patient experience of diagnostic services, multiple items are needed to assess each facet of acceptability.
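To make the multi-item, multi-facet structure concrete, the scoring logic for such a questionnaire could be sketched as follows. This is a minimal illustration, not a validated instrument: the facet keys follow Sekhon et al.'s framework, but the number of items per facet, the 5-point Likert response scale, and the reverse-coding convention are our assumptions.

```python
# Hypothetical illustration: scoring a multi-item acceptability questionnaire.
# Assumes 5-point Likert responses (1 = strongly disagree, 5 = strongly agree),
# with higher scores indicating greater acceptability. Negatively worded items
# (e.g. burden, opportunity costs) are assumed to be reverse-coded beforehand.

from statistics import mean

# Example responses from one patient: each facet is measured by several items.
responses = {
    "affective_attitude": [4, 5],
    "burden": [2, 3, 3],
    "test_coherence": [5, 4],
    "perceived_effectiveness": [4, 4],
    "self_efficacy": [5, 5],
    "opportunity_costs": [3, 2],
    "ethicality": [4, 4],
}

def facet_scores(items_by_facet):
    """Mean score per facet. Facet-level scores are reported separately,
    reflecting the view that facets capture distinct aspects of
    acceptability rather than a single total."""
    return {facet: mean(scores) for facet, scores in items_by_facet.items()}

scores = facet_scores(responses)
```

Reporting a profile of facet scores, rather than collapsing them into one number, preserves the distinction between, say, high perceived effectiveness and high burden for the same test.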

Good practice in measuring patient acceptability of specific tests

Tests will be more patient-centred if assessments of acceptability are embedded into the development cycle of new tests [15]. This is particularly relevant in the context of elective diagnosis of conditions where risks are concentrated in the future, such as the use of genetic tests to assess susceptibility to a condition, or the assessment of premalignant conditions [16], [17]. Test-specific and generic measures of test acceptability will facilitate between-test comparisons and patient choice. For example, ratings of test acceptability may help patients with low-risk symptoms of bowel cancer choose between faecal immunochemical testing (FIT) [18], sigmoidoscopy, CT colonography or colonoscopy as a first test. Of course, informed decision-making will also need to involve information about the clinical utility of each test.

Although there is little evidence supporting (or refuting) the assertion that test results affect experience [9], [19], [20], the experience of having a positive test relates chiefly to the experience of the diagnosed illness and its likely treatment and prognosis; it is therefore less relevant to the acceptability of the test per se.

Unlike the experience of diagnostic services, which needs to be measured repeatedly over time, the acceptability of a test generally does not need routine re-evaluation once it has been validly evaluated, across the range of possible test users, during test development. Exceptions may apply where the intended purpose of the test, or the population undergoing it, is likely to vary.

Conclusions

Clinicians, service managers, test developers, and researchers all have a key role to play in ensuring that patients have a positive experience of diagnostic care, and that available tests are acceptable as healthcare interventions. We have presented a conceptual guide outlining the dimensions that are crucial for evaluations of the patient experience of diagnostic care and the acceptability of diagnostic tests, and highlighted contexts in which such evaluations can result in the greatest improvements to patient experience. We recommend that:

  1. the assessment of patient experience of elective diagnostic services should be incorporated into routine patient surveys to support quality improvement activities and patient choice;

  2. to facilitate the assessment of patient experience and test acceptability, robust survey items should be developed using the framework we set out in this paper;

  3. a comprehensive examination of the patient acceptability of specific tests should be embedded into test development;

  4. where possible, future evaluations should use comparable items assessing experience and acceptability across tests and health systems to facilitate international comparisons and patient choice.

Assessment of patients’ experiences of diagnostic services, and of the degree to which patients find specific tests acceptable, is a prerequisite for enabling services to improve the quality of their diagnostic care. Robust assessments can enhance the delivery of patient-centred care and facilitate patient choice.

Funding source: Cancer Research UK

Award Identifier / Grant number: C18081/A18180, C49896/A17429, C8640/A23385

Acknowledgments

The authors would like to thank Dr. Jo Waller for her feedback on an earlier draft of this article.

    Research funding: This paper arises from the CanTest Collaborative, which is funded by Cancer Research UK [C8640/A23385]. AF and GL are also supported by Cancer Research UK grants [C49896/A17429 and C18081/A18180 respectively].

    Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

    Competing interests: Authors state no conflict of interest.

References

1. Balogh, EP, Miller, BT, Ball, JR, editors. Improving diagnosis in health care. Washington (DC): National Academies Press (US); 2015.

2. Wolf, JA. State of patient experience 2015. Texas, USA: The Beryl Institute; 2015.

3. Picker Institute. Principles of patient centred care; 2018. Available from: http://www.picker.org/about-us/principles-of-patient-centred-care/.

4. NICE. Patient experience in adult NHS services: improving the experience of care for people using adult NHS services. London: NICE; 2012.

5. NHS England. Cancer patient experience survey; 2018. Available from: https://www.england.nhs.uk/statistics/statistical-work-areas/cancer-patient-experience-survey/.

6. Howse, J, Rubin, G. ACE wave 2 patient experience survey; 2018. Available from: https://www.cancerresearchuk.org/sites/default/files/the_ace_wave_2_patient_experience_survey_2018_-_final.pdf.

7. Evans, RE, Taylor, SA, Beare, S, Halligan, S, Morton, A, Oliver, A, et al. Perceived patient burden and acceptability of whole body MRI for staging lung and colorectal cancer; comparison with standard staging investigations. Br J Radiol 2018;91:20170731. https://doi.org/10.1259/bjr.20170731.

8. Vis, JY, van Zwieten, MC, Bossuyt, PM, Moons, KG, Dijkgraaf, MG, McCaffery, KJ, et al. The influence of medical testing on patients’ health: an overview from the gynecologists’ perspective. BMC Med Inf Decis Making 2013;13:117. https://doi.org/10.1186/1472-6947-13-117.

9. Ghanouni, A, Plumb, A, Hewitson, P, Nickerson, C, Rees, CJ, von Wagner, C. Patients’ experience of colonoscopy in the English bowel cancer screening programme. Endoscopy 2016;48:232–40. https://doi.org/10.1055/s-0042-100613.

10. Salmon, P, Shah, R, Berg, S, Williams, C. Evaluating customer satisfaction with colonoscopy. Endoscopy 1994;26:342–6. https://doi.org/10.1055/s-2007-1008988.

11. Kadri, SR, Lao-Sirieix, P, O’Donovan, M, Debiram, I, Das, M, Blazeby, JM, et al. Acceptability and accuracy of a non-endoscopic screening test for Barrett’s oesophagus in primary care: cohort study. BMJ 2010;341:c4372. https://doi.org/10.1136/bmj.c4372.

12. Buisson, A, Gonzalez, F, Poullenot, F, Nancey, S, Sollellis, E, Fumery, M, et al. Comparative acceptability and perceived clinical utility of monitoring tools: a nationwide survey of patients with inflammatory bowel disease. Inflamm Bowel Dis 2017;23:1425–33. https://doi.org/10.1097/mib.0000000000001140.

13. Dyrberg, E, Larsen, EL, Hendel, HW, Thomsen, HS. Diagnostic bone imaging in patients with prostate cancer: patient experience and acceptance of NaF-PET/CT, choline-PET/CT, whole-body MRI, and bone SPECT/CT. Acta Radiol 2018;59:1119–25. https://doi.org/10.1177/0284185117751280.

14. Sekhon, M, Cartwright, M, Francis, JJ. Acceptability of healthcare interventions: an overview of reviews and development of a theoretical framework. BMC Health Serv Res 2017;17:88. https://doi.org/10.1186/s12913-017-2031-8.

15. Walter, FM, Thompson, MJ, Wellwood, I, Abel, GA, Hamilton, W, Johnson, M, et al. Evaluating diagnostic strategies for early detection of cancer: the CanTest framework. BMC Canc 2019;19:586. https://doi.org/10.1186/s12885-019-5746-6.

16. Archer, S, de Villiers, CB, Scheibl, F, Carver, T, Hartley, S, Lee, A, et al. Evaluating clinician acceptability of the prototype CanRisk tool for predicting risk of breast and ovarian cancer: a multi-methods study. PloS One 2020;15:e0229999. https://doi.org/10.1371/journal.pone.0229999.

17. Fitzgerald, RC, di Pietro, M, O’Donovan, M, Maroni, R, Muldrew, B, Debiram-Beecham, I, et al. Cytosponge-trefoil factor 3 versus usual care to identify Barrett’s oesophagus in a primary care setting: a multicentre, pragmatic, randomised controlled trial. Lancet 2020;396:333–44. https://doi.org/10.1016/S0140-6736(20)31099-0.

18. Nicholson, BD, James, T, Paddon, M, Justice, S, Oke, JL, East, JE, et al. Faecal immunochemical testing for adults with symptoms of colorectal cancer attending English primary care: a retrospective cohort study of 14,487 consecutive test requests. Aliment Pharmacol Ther 2020;52:1031–41. https://doi.org/10.1111/apt.15969.

19. Robb, KA, Lo, SH, Power, E, Kralj-Hans, I, Edwards, R, Vance, M, et al. Patient-reported outcomes following flexible sigmoidoscopy screening for colorectal cancer in a demonstration screening programme in the UK. J Med Screen 2012;19:171–6. https://doi.org/10.1177/0969141313476629.

20. Ayanian, JZ, Zaslavsky, AM, Arora, NK, Kahn, KL, Malin, JL, Ganz, PA, et al. Patients’ experiences with care for lung cancer and colorectal cancer: findings from the Cancer Care Outcomes Research and Surveillance Consortium. J Clin Oncol 2010;28:4154–61. https://doi.org/10.1200/jco.2009.27.3268.

Received: 2020-08-12
Accepted: 2020-12-27
Published Online: 2021-01-28

© 2021 Alice S. Forster et al., published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.