The Contrast Sensitivity Function relates the spatial frequency and contrast of a spatial pattern to its visibility and thus provides a fundamental description of visual function. However, the current clinical standard of care typically restricts assessment to visual acuity, i.e. the smallest stimulus size that can be resolved at full contrast; conversely, tests of contrast sensitivity usually assess only the lowest visible contrast at a fixed letter size. This restriction to one-dimensional subspaces of a two-dimensional space was necessary when stimuli were printed on paper charts and simple scoring rules were applied manually. More recently, however, computerized testing and electronic screens have enabled more flexible stimulus displays and more complex test algorithms. For example, the quick CSF method uses a Bayesian adaptive procedure with an information-maximization criterion to select only informative stimuli, reducing the time needed to precisely estimate the whole contrast sensitivity function to 2-5 minutes. Here, we describe the implementation of the quick CSF method in a medical device, with several usability enhancements that make it suitable for use in clinical settings. A first usability study shows excellent results, with a mean System Usability Scale score of 86.5.
The current gold standard for the assessment of visual function goes back to 1862, when Herman Snellen developed a standard letter chart that has remained in use with only small modifications ever since. By showing progressively smaller letters at full contrast, the Snellen chart assesses visual acuity, i.e. the visual system’s resolving power. However, the visual system’s ability to detect a pattern depends not only on the size of the pattern but also on its contrast; the Contrast Sensitivity Function (CSF), which relates spatial frequency (size) to threshold contrast, is therefore a more comprehensive descriptor of visual function. Importantly, neurologic and ophthalmic pathologies may affect contrast sensitivity before they affect acuity [8, 13].
Despite its recognized clinical utility, routine assessment of contrast sensitivity in clinical trials and regular care is hampered by practical limitations. Compared to one-dimensional acuity testing, the two-dimensional CSF necessitates testing at many combinations of spatial frequency and contrast, for which paper charts lack resolution and flexibility. Simple scoring heuristics such as “stop at the last line with three out of five correct letter identifications” can be evaluated manually, but lack the statistical efficiency to precisely capture the probabilistic nature of psychometric responses. By randomizing the sequence of tested optotypes, computerized vision testing alleviates some of the shortcomings of paper charts, but test scoring typically follows the same simple rules as in non-computerized testing.
The quick CSF method was originally developed for behavioural testing in psychophysical laboratories; it uses a Bayesian adaptive procedure and an information-maximization rule to test only informative combinations of spatial frequency and contrast, based on the full trial history. In this paper, we describe the implementation of the quick CSF method in a medical device that may be used to rapidly and precisely estimate visual function in clinical settings.
2 The quick CSF method
The quick CSF method exploits the observation that the human CSF can be accurately described using only four parameters [10, 12]: peak frequency, peak gain, bandwidth, and a low-frequency truncation parameter. After each trial of a test, the probability distribution p(Θ) over a set of possible CSFs described by four-parameter tuples Θ (in our implementation, about 12 million CSFs) is updated given the full trial history. Before each trial, the expected information gain is calculated for all possible stimuli (here, 19 spatial frequencies at 128 contrast levels each; 2432 stimuli overall) over a sampled subset of Θ, and only informative stimuli are chosen for presentation. By avoiding uninformative regions of the search space, the original quick CSF method already estimated the whole CSF in 50-100 trials. For these results, the original authors used sinewave gratings of two possible orientations as stimuli. Gratings are very frequency-selective and correspond well to neuronal receptive fields in primary visual cortex, but the small number of response alternatives leads to a high guessing rate (50%) and thus lower statistical efficiency. Therefore, we implemented the use of ten Sloan letters, bandpass-filtered with a raised cosine window with peak frequency 4 cycles per letter, as proposed by Hou et al.; as a consequence, the number of trials can be further reduced to 25 for rapid testing times, or 50 for very high precision.
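To illustrate the underlying model, the sketch below implements one common four-parameter CSF form, the truncated log-parabola (peak frequency f_max, peak gain γ_max, bandwidth β, low-frequency truncation δ). This is a generic illustration of the model family; the exact parameterization used in the device may differ in detail.

```python
import numpy as np

def log_csf(f, f_max, gamma_max, beta, delta):
    """Truncated log-parabola CSF: returns log10 sensitivity at
    spatial frequency f (cycles/degree)."""
    kappa = np.log10(2.0)
    beta_prime = np.log10(2.0 * beta)
    # log-parabola centred on the peak (f_max, gamma_max)
    log_s = np.log10(gamma_max) - kappa * (
        (np.log10(f) - np.log10(f_max)) / (beta_prime / 2.0)) ** 2
    # low-frequency truncation: plateau delta log units below the peak
    floor = np.log10(gamma_max) - delta
    truncated = (f < f_max) & (log_s < floor)
    return np.where(truncated, floor, log_s)
```

With sensitivity at the peak equal to γ_max and a plateau on the low-frequency side, four numbers suffice to describe each candidate CSF in Θ.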
2.1 Usability enhancements
We implemented two additional usability enhancements of the quick CSF. First, in classical forced-choice tasks, observers are asked to guess a response even if they do not perceive the stimulus. Even highly experienced psychophysical observers can have strong biases when guessing, and guessing is particularly difficult for inexperienced observers such as patients; we therefore added an “I don’t know” option to the set of responses. Second, the lack of a (visible) stimulus on the screen can be confusing to observers; therefore, three letters are always presented simultaneously in a horizontal line, with the middle and left letters displayed at two and four times the contrast of the right letter, respectively. Because the contrast of the right letter is chosen by the quick CSF method and usually lies near threshold, it is very likely that at least one of the letters is easily recognizable, which helps observers determine the location and size of the test letters.
The hardware setup of our quick CSF implementation mainly comprises a small-form-factor PC (Intel i5 CPU, 4 GB RAM) and a large-format screen for stimulus display. At 46” diagonal with a resolution of 1920 by 1080 pixels and at a viewing distance of 400 cm, the screen allows the display of stimuli in a spatial frequency range from 1.4 to 36.2 cycles per degree (cpd), which includes the whole set of frequencies mandated by the FDA (1.5 to 18 cpd). Also in accordance with FDA standards, mean screen luminance is calibrated to 85 cd/m². All device functionality, including power control, is accessed through a handheld tablet device; an NFC reader provides an authentication mechanism through smartcards. External interfacing for data export is provided by an Ethernet and a USB port.
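The achievable spatial frequency range follows from the screen geometry. The sketch below illustrates the mapping for the figures given above (46” 16:9 screen, 1920 pixels wide, 400 cm viewing distance); the conversion from letter height in pixels to peak frequency assumes the 4-cycles-per-letter stimuli described earlier and is an illustration, not the device’s rendering code.

```python
import math

# Assumed geometry from the text: 46" 16:9 screen, 1920 px wide, 400 cm distance
DIAG_CM = 46.0 * 2.54
WIDTH_CM = DIAG_CM * 16.0 / math.hypot(16.0, 9.0)
PX_CM = WIDTH_CM / 1920.0                      # pixel pitch in cm

def peak_cpd(letter_px, view_cm=400.0):
    """Peak spatial frequency (cycles/degree) of a bandpass-filtered
    letter (4 cycles per letter) rendered letter_px pixels tall."""
    letter_deg = 2.0 * math.degrees(
        math.atan(letter_px * PX_CM / (2.0 * view_cm)))
    return 4.0 / letter_deg
```

Smaller letters carry higher peak frequencies; at this viewing distance the small-angle regime holds, so halving the letter size approximately doubles the peak frequency.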
4.1 Main computer
The small-form-factor PC runs a regular Linux operating system. However, the OS is transparent to the user because all user interaction is performed on the tablet remote control, and the vision test software permanently runs in fullscreen mode. The PC-side software is written in C++; the graphics display further uses OpenGL shaders and implements spatio-temporal dithering to increase the effective bit depth of the screen.
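The principle behind such dithering can be sketched in a few lines (a simplified illustration, not the actual shader code): a gray level with sub-bit precision is approximated by mixing the two nearest displayable 8-bit levels across neighbouring pixels and successive frames, so that their average matches the target.

```python
import math

def dither_levels(target, n=16):
    """Distribute a fractional 8-bit gray level over n samples
    (e.g. pixels and/or frames): k samples show the next-higher
    integer level, the rest the next-lower one, so the mean
    reproduces `target` to within 1/(2n) of a bit step."""
    lo = math.floor(target)
    k = round((target - lo) * n)       # samples rounded up
    return [lo + 1] * k + [lo] * (n - k)
```

For example, a target level of 100.3 spread over ten samples yields three samples at 101 and seven at 100, averaging exactly 100.3; the eye and the display's temporal response integrate these into an intermediate luminance.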
For convenience, the software automatically calculates several features of the CSF: threshold sensitivities at five individual spatial frequencies mandated by the FDA; CSF acuity, i.e. the intersection of the CSF with the x-axis (the spatial frequency where threshold contrast is 100%); and a summary statistic, the area under the log CSF (AULCSF) in the range from 1.5 to 18 cpd. For an example screenshot of the results display, see Fig. 3.
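A minimal sketch of the AULCSF computation, assuming trapezoidal integration of log sensitivity over log frequency (the device's exact numerical scheme is not spelled out here):

```python
import numpy as np

def aulcsf(log_sensitivity, f_lo=1.5, f_hi=18.0, n=256):
    """Area under the log CSF: integrate log10 sensitivity over
    log10 spatial frequency between f_lo and f_hi cpd.
    `log_sensitivity` maps an array of frequencies (cpd) to
    log10 sensitivity values."""
    log_f = np.linspace(np.log10(f_lo), np.log10(f_hi), n)
    s = np.clip(log_sensitivity(10.0 ** log_f), 0.0, None)  # clip negative log sensitivity
    # trapezoid rule on the log-log axes
    return float(np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(log_f)))
```

CSF acuity is obtained analogously as the zero crossing of the same log-sensitivity curve, i.e. the frequency where sensitivity drops to 1 (threshold contrast 100%).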
Raw data, such as the trial history and the full posterior distribution over the parameters fmax, γmax, β, and δ, are stored in a database and can be exported via a web-based database interface or the USB port.
4.2 Tablet remote control
The tablet remote control is based on a customized Android image that always runs the remote control app in the foreground. During the test, the patient reads out the letters on the screen. The examiner, who is presented with the ground truth, codes these responses as correct or incorrect; an “I don’t know” response can also be coded; see Fig. 2 for a screenshot of the user interface. In order to reduce spatial uncertainty, each stimulus presentation is preceded by markers indicating the spatial position and scale of the upcoming stimulus; the examiner can repeat this marker presentation using the “Prompt” button. After response entry, the examiner can initiate the next trial; if necessary, previous responses can be undone as well.
5 Usability evaluation
Prototype systems are currently in use in several clinics in the United States and Germany. After extended use, we asked the technicians who operate the device on a regular basis to anonymously fill out System Usability Scale (SUS) questionnaires. The SUS comprises ten questions (five positive, five negative statements) that require a rating on a scale from 1 (“strongly disagree”) to 5 (“strongly agree”).
Fig. 4 shows the response distributions for each of the ten SUS questions obtained from nine participants. For visualization purposes, we converted scores to a scale from ‘−−’ (worst: score 1 on odd-numbered questions, score 5 on even-numbered questions) to ‘++’ (best: scores 5 and 1 for odd- and even-numbered questions, respectively), so that darker colours in Fig. 4 indicate better ratings. Numerically, the mean SUS score was 86.5 (s.d. 10). The strongest agreement was found with item 3, “I thought the system was easy to use” (mean 4.78, s.d. 0.44).
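The standard SUS scoring rule that yields the 0-100 score reported above can be stated compactly (a restatement of Brooke's published scheme, not code from the device):

```python
def sus_score(responses):
    """System Usability Scale score from ten ratings in 1..5.
    Odd-numbered items are positively worded (contribution: rating - 1),
    even-numbered items negatively worded (contribution: 5 - rating);
    the summed contributions are scaled to a 0-100 range."""
    assert len(responses) == 10
    contrib = [(r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1
               for i, r in enumerate(responses)]
    return 2.5 * sum(contrib)
```

A uniformly neutral questionnaire (all ratings 3) thus scores 50, and perfect agreement with all positive and disagreement with all negative statements scores 100.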
Moore’s law and the increase in available computational power have made it possible to apply complex algorithms even during brief inter-trial intervals in behavioural testing. Adaptive testing that rests on the exact computation of probability distributions over large multi-dimensional spaces goes beyond manually scored heuristics and provides greater precision with reduced testing times. Here, we have described how an algorithm developed for psychophysics laboratories was translated into an easy-to-use medical device for clinical settings. We implemented usability enhancements for the patient, and integrated all device interaction into a tablet-based remote control for the technician’s convenience. A usability study demonstrated that users rated the device highly, with a mean score that corresponds to an “excellent” rating in a large-scale study of SUS scores.
A first clinical study has already shown that quick CSF test results correlate better with patient-reported outcomes than established far and near visual acuity measures. Further studies are ongoing to demonstrate the quick CSF’s sensitivity in tracking subtle changes in visual function due to disease progression or treatment effects.
Conflict of interest: The authors declare personal financial interests in and/or employment with Adaptive Sensory Technology, a company commercializing the technology presented here.
Informed consent: Informed consent is not applicable.
Ethical approval: The conducted research is not related to either human or animal use.
R Applegate, G Hilmantel, and H Howland. Area under the log contrast sensitivity function: A concise method of following changes in visual performance. OSA Technical Digest Series, 1:98–101, 1997. doi:10.1364/VSIA.1997.SaB.4
Aaron Bangor, Philip Kortum, and James Miller. Determining what individual SUS scores mean: Adding an adjective rating scale. Journal of Usability Studies, 4(3):114–123, 2009.
John Brooke. SUS – a ’quick and dirty’ usability scale. Usability Evaluation in Industry, 189(194):4–7, 1996.
Fang Hou, Luis Lesmes, Peter Bex, Michael Dorr, and Zhong-Lin Lu. Using 10AFC to further improve the efficiency of the quick CSF method. Journal of Vision, 15(2):1–18, 2015. doi:10.1167/15.9.2
Frank Jäkel and Felix A Wichmann. Spatial four-alternative forced-choice method is the preferred psychophysical method for naive observers. Journal of Vision, 6:1307–22, 2006. doi:10.1167/6.11.13
Luis Andres Lesmes, Zhong-Lin Lu, Jongsoo Baek, and Thomas D. Albright. Bayesian adaptive estimation of the contrast sensitivity function: The quick CSF method. Journal of Vision, 10(3), 2010. doi:10.1167/10.3.17
J P Stellmann, K L Young, J Pöttgen, M Dorr, and C Heesen. Introducing a new method to assess vision: Computer-adaptive contrast-sensitivity testing predicts visual functioning better than charts in multiple sclerosis patients. Multiple Sclerosis Journal: Experimental, Translational and Clinical, 1(2055217315596184):1–8, 2015. doi:10.1177/2055217315596184
R L Woods and J M Wood. The role of contrast sensitivity charts and contrast letter charts in clinical practice. Clinical & Experimental Optometry, 78(2):43–57, 1995. doi:10.1111/j.1444-0938.1995.tb00787.x
American National Standards Institute Committee Z80. American National Standard for Ophthalmics: Multifocal Intraocular Lenses. Optical Laboratories Association, Fairfax, VA, 2007.
© 2015 by Walter de Gruyter GmbH, Berlin/Boston
This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.