Tom A. W. Wolterink ORCID logo, Robin D. Buijs, Giampiero Gerini, A. Femius Koenderink ORCID logo and Ewold Verhagen ORCID logo

Localizing nanoscale objects using nanophotonic near-field transducers

Open Access
De Gruyter | Published online: March 12, 2021

Abstract

We study how nanophotonic structures can be used for determining the position of a nearby nanoscale object with subwavelength accuracy. When the nanoscale object perturbs the near-field environment of a metasurface transducer consisting of nano-apertures in a metallic film, its location is transduced into the transducer’s far-field optical response. By monitoring the scattering pattern of the nanophotonic near-field transducer and comparing it to measured reference data, we demonstrate the two-dimensional localization of the object accurate to 24 nm across an area of 2 × 2 μm. We find that adding complexity to the nanophotonic transducer allows localization over a larger area while maintaining resolution, as it enables encoding more information on the position of the object in the transducer’s far-field response.

1 Introduction

Nanoscale metrology is imperative for advances in nanoscience, biology and semiconductor technology. As many structural features of interest are smaller than the optical diffraction limit, they are not resolved through direct imaging with a conventional microscope. In fluorescence imaging, multiple superresolution techniques have been developed, such as photo-activated localization microscopy and stimulated emission depletion microscopy, that allow imaging of smaller features by relying on careful fitting or engineering of a point spread function [1], [2]. An alternative approach to construct images with subwavelength resolution is by detecting the evanescent optical fields close to the sample that contain high-frequency spatial information. In near-field scanning optical microscopy, a nanoscale probe is brought into close proximity of the sample surface, enabling coupling to the optical near field and access to high-resolution information [3], [4]. A drawback of such scanning microscopy techniques is the relatively long acquisition time needed for the physical translation of the probe during raster scanning. Therefore, the development of near-field techniques that do not rely on physical scanning, enabling rapid nanoscale-resolution sensing and imaging, would be of great benefit to nanoscale metrology.

Scattering-type near-field scanning optical microscopy (s-NSOM) [5], [6] maps the optical near field of a sample by moving a nanoscale scatterer through it and collecting scattered light. The intensity of the scattered signal as a function of position, usually measured on a bucket detector with a single degree of freedom, gives information on the permittivity distribution of the sample [7] or on the optical near field supported by the sample. In this work, we pursue the converse aim: to construct an optical near-field transducer in the form of a nanophotonic target structure that determines the position of a nanoscale perturbation located near the structure on the basis of collected scattered light. The ‘sample’ in s-NSOM terms becomes in our work a transducer that encodes the location of a scatterer – the known tip in s-NSOM, but here the unknown variable under study – into a far-field optical response. Our approach to retrieving the scatterer position purely from optical fields is not to use a bucket detector for the total scattered intensity, but instead to exploit the many degrees of freedom, including wavevector, polarization and wavelength, that can be detected in the scattered signal [8], [9]. These many degrees of freedom in the scattered far field potentially encode detailed subwavelength information on the location of the scatterer near the transducer [10]. So unlike s-NSOM, where spatial information is obtained from consecutive measurements while raster-scanning a detecting element, here we exploit the complexity and connection of near and far fields to obtain spatial information from a single measurement. Indeed, methods based on far-field scattering signals have been developed to localize a single scatterer in a carefully tailored illumination beam with subwavelength resolution [11], [12], [13], [14], [15].
In such a framework, one would expect the sensitivity, resolution and field of view to be controllable by the design of the complex nanophotonic scattering structures in terms of geometry and mode structure, as shown in Figure 1. While in this work we focus on sensing the location of a single scatterer, one could ultimately imagine a combination of multiplexed readouts of multiple degrees of freedom in the scattered field with optimized metasurface near-field transducers to obtain rapid nanoscale sensing and potentially even imaging without the physical movement of the transducer.

Figure 1: (a) A complex nanophotonic structure scatters incident light into many degrees of freedom in the far field. These contain rich subwavelength information on a sample positioned in the near field of the structure, which functions as a near-field transducer. A multiplexed readout of these degrees of freedom may enable rapid nanoscale sensing without the translation of the transducer. (b) SEM images of the near-field transducers used in this work, which consist of one or more apertures in a gold film.

In this work, we demonstrate a nanophotonic near-field transducer for detecting the position of a subwavelength object based on angle-resolved far-field scattering patterns, as a first step towards rapid nanoscale sensing. To this end, we experimentally investigate the dependence of the angle-resolved optical signal scattered from the transducer, containing many degrees of freedom, on the position of a nanoscale object in its near field. The near-field transducer consists of one or more apertures in a gold film (see Figure 1(b)), which upon excitation provides strong optical near fields in the direct vicinity of the apertures. The transducer is illuminated from the far field, while reflected light is collected to image its far-field scattering pattern. Introducing a nanoscale object influences the near-field environment of the transducer as it alters the permittivity distribution, resulting in changes in the radiation pattern. The way in which the radiation pattern is modified depends on the position of the object. Therefore, monitoring the transducer’s far-field radiation pattern enables retrieval of the position of the object, provided that the pattern is unique for each position. To experimentally verify this technique, we first build a library of radiation patterns, placing a nanoscale object at a grid of positions near a near-field transducer and recording the radiation pattern for each object position. Next, we reconstruct the object position with subwavelength resolution solely from a measured far-field radiation pattern, using a library-based approach exploiting singular value decomposition [16], [17]. We show that our technique greatly benefits from employing more complex nanostructures as near-field transducers, which enables accurate retrieval of the object position across the entire transducer area.

2 Experimental method

We consider the experimental system sketched in Figure 2(a). We measure the far-field radiation pattern of a near-field transducer, consisting of one or more nanoapertures in a metal film on a glass coverslip, and monitor how it changes when a nanoscale object is moved through its near field. The experimental setup is shown in Figure 2(b). Light from a supercontinuum white light laser (Fianium WhiteLase Micro), spectrally filtered to cover a wavelength range of 500–750 nm, is transmitted through a linear polarizer (LP) and focused to a diffraction-limited spot on the transducer using a microscope objective (60×, NA = 0.95, Nikon CFI Plan Apochromat Lambda). Reflected light is collected through the same objective, transmitted through a second LP, and detected on a camera (Basler acA1920-40 um), which images the back focal plane of the objective. To suppress spurious signals from the substrate, the polarization component orthogonal to the incident polarization is detected. Light scattered by the transducer can experience polarization conversion, while direct reflections from the substrate are expected to maintain the incident polarization. To test our near-field transducer, a small perturbation is required that we can place at a controlled position in the near field of the transducer. This role is played by a tapered optical fiber (tip radius 50 nm) [18]. The fiber is positioned at a height of approximately 10 nm above the transducer using shear-force feedback [19] and can be scanned transversally using closed-loop piezo actuators with strain gauges for position readout. We emphasize that the setup is, in terms of components, identical to a near-field scanning probe mounted on an inverted microscope, but with the unconventional addition of Fourier imaging. However, where the sharp fiber tip is usually viewed as the sensor that detects optical information, here the viewpoint is reversed.
The fiber tip is the object to be detected and localized, while the nanoaperture pattern is the transducer.

Figure 2: Experimental setup. (a) Minute changes in the near-field environment of an aperture influence its far-field radiation pattern. Monitoring the aperture’s radiation pattern enables the retrieval of the position of a nanoscale object. (b) Linearly polarized light is focused on the aperture using a microscope objective. Reflected light is collected through the same objective, filtered using a LP, and the cross-polarized signal is detected on a camera, which is imaging the back focal plane of the objective. A tapered optical fiber, positioned in the near field of the aperture using shear-force feedback, acts as the object. (c) Schematic of a near-field transducer containing a single aperture of 110 × 50 nm. The incident light is vertically polarized. The object is scanned across the aperture following a raster grid (black dots). (d) Measured cross-polarized radiation pattern of the aperture.

The near-field transducers consist of one or more apertures milled in gold. They are fabricated by first depositing a 150 nm gold film on a glass substrate using thermal evaporation. Subsequently, the apertures are defined by focused ion beam milling using 30 keV Ga ions (1.5 pA, dwell time 2 μs, pixel pitch 5 nm). The use of apertures in an opaque layer eliminates direct scattering of the illumination beam from the object; any observed interaction between the object and the light is mediated by the transducer. Figure 2(c) shows a schematic of a transducer containing a single slot aperture of 110 × 50 nm. The incident light is vertically polarized, while the aperture is oriented at 45°, allowing for excitation of modes polarized along either axis of the aperture, to obtain the polarization conversion required for cross-polarized detection. In addition to a component in the direction of incident polarization, the focal field will have a longitudinal component as well as a component in the orthogonal transverse direction due to depolarization effects. While any component of the focal field could interact with the aperture, the component in the direction of incident polarization has the largest magnitude and is expected to dominate the interaction with the aperture. A typical measured cross-polarized radiation pattern of a single aperture is shown in Figure 2(d). As is common in back-focal-plane microscopy, the raw camera image reports radiation patterns as a function of the normalized parallel momentum (kx, ky)/k0 of the radiated light, where k0 is the wave number in vacuum. Due to the cross-polarized detection, a four-lobe pattern appears at large angles (NA from approximately 0.85 to 0.95), a signature of the depolarization effects due to tight focusing. In the experiments, the object is raster scanned across the transducer while its radiation pattern is captured at each position.
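The pixel-to-momentum mapping of the back focal plane can be sketched as follows. This is a minimal sketch, not the authors' calibration procedure: the image size, center and the pixel radius of the NA = 0.95 rim are hypothetical calibration inputs.

```python
import numpy as np

def bfp_coordinates(shape, center, rim_radius_px, na_max=0.95):
    """Map camera pixels to normalized parallel momentum (kx, ky)/k0,
    assuming the rim of the objective pupil (NA = na_max) has been
    calibrated to a known pixel radius. Returns the coordinate arrays
    and a mask of pixels inside the NA rim."""
    yy, xx = np.indices(shape)
    kx = (xx - center[1]) / rim_radius_px * na_max
    ky = (yy - center[0]) / rim_radius_px * na_max
    mask = kx**2 + ky**2 <= na_max**2
    return kx, ky, mask

# Hypothetical 256 x 256 px back-focal-plane image, rim at 120 px radius.
kx, ky, mask = bfp_coordinates((256, 256), center=(128, 128), rim_radius_px=120)

# Select the high-angle annulus (NA 0.85 to 0.95) where the four-lobe
# pattern from polarization conversion appears.
annulus = (kx**2 + ky**2 >= 0.85**2) & mask
```

Such a mask could then be used to restrict the analysis to the collected angular range before any pattern comparison.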

3 Localization strategy

To demonstrate high-resolution localization of the object, we develop a strategy with two main ingredients. The first is that we determine object locations by comparing measured radiation patterns against a prerecorded library. Data collection thus has two stages: first to record the library, and subsequently to take the test data. The second main ingredient is that we use a highly efficient representation of the library of radiation patterns to facilitate the comparison between test and library data. This representation of the measured radiation patterns uses singular value decomposition, as previously used to localize a point source of light [17]. We note that, alternatively, the comparison between test and library data could be a task suitable to address with machine learning, provided that the library data set is sufficiently large – typically orders of magnitude larger than the number of parameter values to be distinguished.

We collect the library data in a matrix M that contains each of the normalized radiation patterns, one per object position, as a single row. Through a singular value decomposition, we decompose this matrix into M = UΣV, with U, V unitary matrices and Σ a diagonal matrix. Here, U forms a basis for object position and V for the radiation pattern. The entries of Σ, the singular values σi, reflect the importance of each component in describing the data set M and are sorted in order of decreasing magnitude. In our experiment, each column of V, called a principal component direction, represents a vector (basis element) of the orthogonal basis for the representation of the radiation patterns generated by the structure, for different positions of the object. In other words, all possible radiation patterns can be represented as a superposition of these elements. Each column of U provides the projection of the corresponding basis element on the total scattering pattern per object position. That is, it provides a coefficient that, multiplied by the corresponding singular value, expresses the contribution of the basis element to the total scattering pattern for each position, known as the principal component. To directly reflect the importance of a radiation pattern basis element at each position, we treat UΣ as a single entity.
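The library representation above can be sketched in a few lines of NumPy. The grid size, image size and the random data standing in for measured radiation patterns are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Illustrative library: one normalized radiation pattern (a flattened
# back-focal-plane image) per object position, stacked as the rows of M.
n_pos = 21 * 21          # 21 x 21 grid of object positions
n_pix = 64 * 64          # pixels per radiation pattern (assumed)
rng = np.random.default_rng(0)
patterns = rng.random((n_pos, n_pix))

# Normalize each radiation pattern individually, as in the text.
M = patterns / np.linalg.norm(patterns, axis=1, keepdims=True)

# Singular value decomposition M = U Sigma V. The rows of Vh are the
# principal component directions (radiation pattern basis elements);
# U Sigma gives the weight of each basis element at each object position.
U, s, Vh = np.linalg.svd(M, full_matrices=False)
US = U * s               # treat U Sigma as a single entity, as in the text

# Sanity check: the decomposition reproduces the library exactly.
assert np.allclose(US @ Vh, M)
```

Sorting by singular value comes for free: `np.linalg.svd` returns `s` in decreasing order, so truncating to the first few columns of `US` and rows of `Vh` keeps the most important components.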

Once we have efficiently summarized the library of radiation patterns by singular value decomposition, we explore the retrieval of object positions from radiation patterns taken in a second measurement run. We use the previously acquired library data set as a reference and project the new measurements onto the radiation pattern basis V of the reference data. Next, we calculate the match of newly acquired data A at positions i with the reference data at positions j, which we define as 1 − ‖(UΣ)j − AiV†‖/2. A high match indicates high similarity between radiation patterns [16], [17]. To obtain an estimate for the object position based on the measured radiation patterns, we take the reference position that is associated with the highest match as the retrieved position. Comparing this with the actual position of the object, known from the calibrated reading of the position of the piezo actuators that control the probe, allows for defining a reconstruction error as the distance between the actual and retrieved positions.
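The matching and retrieval step can be sketched as follows, using synthetic stand-in data; the pattern dimensions, grid size and noise level are assumptions for illustration, not measured values.

```python
import numpy as np

# Synthetic stand-in library of normalized radiation patterns, one row per
# object position (sizes and noise level are illustrative assumptions).
rng = np.random.default_rng(1)
n_pos, n_pix = 25, 200
lib = rng.random((n_pos, n_pix))
lib /= np.linalg.norm(lib, axis=1, keepdims=True)

# Library representation via singular value decomposition.
U, s, Vh = np.linalg.svd(lib, full_matrices=False)
US = U * s

# New (test-run) patterns: library patterns with a little noise, renormalized.
A = lib + 0.01 * rng.standard_normal((n_pos, n_pix))
A /= np.linalg.norm(A, axis=1, keepdims=True)

# Project the new data onto the radiation pattern basis and score each new
# pattern i against each reference position j with
#   match_ij = 1 - ||(U Sigma)_j - A_i V^dagger|| / 2.
proj = A @ Vh.conj().T
match = 1.0 - np.linalg.norm(proj[:, None, :] - US[None, :, :], axis=2) / 2.0

# Retrieved position: reference index with the highest match per new pattern.
retrieved = np.argmax(match, axis=1)
```

Since the rows are normalized, the distance between two patterns is at most 2, so the match lies between 0 (maximally dissimilar) and 1 (identical).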

Figure 3: Measured dependence of the radiation pattern on the object position for a (a–c) small aperture and (d–f) large aperture. (a and d) SEM image (top) and shear-force topography scan (bottom) of the transducer. (b and e) Singular value decomposition of a set of reference data for 21 × 21 probe points covering an area of 1 × 1 μm in 50 nm steps. Shown are the first eight principal component directions V (blue–red), forming a radiation pattern basis, and matching principal components UΣ (purple–green), representing the importance at each object position. (c and f) Match of signal data with reference data. Each of the images in the 21 × 21 grid shows the match of a newly acquired radiation pattern at this position with all positions of the reference data set. Three positions are highlighted below.

4 Results

First, we investigate the dependence of the radiation pattern on the object position for a transducer consisting of a single aperture in Figure 3. Figure 3(b) and (c) show the measurement results of 21 × 21 probe points covering an area of 1 × 1 μm in 50 nm steps, for a small aperture of 250 × 110 nm, of which an SEM image and shear-force topography scan are shown in Figure 3(a). The measured dependence of the cross-polarized radiation pattern of the aperture on the object position is presented in Figure 3(b) in the form of a singular value decomposition. Shown in the figure are the first eight principal component directions V (columns of the radiation pattern basis) and the matching principal components UΣ, which describe the importance of each radiation pattern basis element at each object position. To reconstruct the radiation pattern for a specific object position, one considers, for each component, the vector of the radiation pattern basis V (left), multiplies this pattern by the corresponding value of UΣ (right), and finally sums over all components. The first, most important, component of the data set shows little position dependence, and its radiation pattern closely resembles Figure 2(d). This component corresponds to the common denominator shared by all measured radiation patterns. It consists of four lobes of high intensity at far off-normal angles, resulting from polarization conversion in the direct reflection off the gold surface. Furthermore, a region of nonzero intensity is visible at near-normal angles in the center, corresponding to light reflected from the aperture. A strong position dependence is visible for the next two components. The second and third components share two similar features in their position dependence: a high signal centered on the aperture and a gradient across the entire scan area. Let us first consider the feature centered on the aperture, for either of these two components.
Comparing positions near the aperture to positions at larger distance shows that UΣ negatively peaks at the aperture, which means that the corresponding radiation pattern element V is mainly of importance for positions near the aperture, and with a negative sign. Taking the negative sign into account, this radiation pattern element shows a negative contribution to the intensity at near-normal angles in the center and positive at high angles. As every measured radiation pattern has been normalized individually, these relative contributions actually correspond to a decrease in the absolute intensity at low angles, while leaving the signal at high angles unaffected. Thus, the second and third components reveal that when the object is positioned near the aperture, less light is reflected back at low angles. This can be explained by the interaction with the object introducing an extra loss channel. The gradients in UΣ are the result of a slight drift in the aperture position relative to the microscope during the experiment, which is also observed in measurements without any nearby object. To remove major continuous drifts of this kind, we fit a two-dimensional plane to UΣ for each of the first three components and subtract their effect from the data before further analysis. We note that any instability in the aperture and object positions will also negatively influence the match of data between subsequent measurement runs. Further components also exhibit structure in position dependence across the scan area. However, their magnitude is much lower, indicating that they are of less importance in describing the measured radiation patterns.
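The plane-subtraction drift correction can be sketched as a least-squares fit. This is a simplified sketch: the grid size is assumed, and for brevity the plane is subtracted from a single UΣ column rather than propagated back through the full data set.

```python
import numpy as np

def remove_plane(component, nx=21, ny=21):
    """Fit a plane a*x + b*y + c to one UΣ component sampled on an
    nx-by-ny grid of object positions and subtract it, removing a slow
    linear drift gradient while leaving local structure intact."""
    x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    # Design matrix with columns [x, y, 1] for the least-squares plane fit.
    G = np.column_stack([x.ravel(), y.ravel(), np.ones(nx * ny)])
    coeffs, *_ = np.linalg.lstsq(G, component.ravel(), rcond=None)
    return component - (G @ coeffs).reshape(nx, ny)

# Example: a pure linear gradient (drift only, no signal) is removed entirely.
x, y = np.meshgrid(np.arange(21), np.arange(21), indexing="ij")
flat = remove_plane(0.3 * x - 0.1 * y + 2.0)
```

A sharply peaked feature centered on the aperture would survive this correction, since a plane cannot absorb localized structure.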

We now turn to matching the newly acquired test data set A with the reference data (Figure 3(c)). Each of the images in the 21 × 21 grid shows the match of the radiation pattern at this specific object position with all positions of the reference data set. Three positions are highlighted for which a zoom is provided. One can recognize that the measurement with the object centered on the aperture (left) matches best with the same position from the reference data, and poorly with all positions far away from the aperture. For an off-center object position (middle), the measurement matches well with most measurements taken at a similar distance from the aperture, visible as a bright circle in the image, but neither with the aperture (dark color, strong mismatch), nor with radiation patterns further out. Finally, for object locations far from the aperture (right), the radiation pattern matching essentially reports that the object is surely not at the aperture, without further specific information on the distance to the aperture center. The observation that, in the vicinity of the aperture, the radiation pattern mainly depends on just the radial distance from the object to the aperture matches the sharply peaked feature centered at the aperture that is visible in the most important components of Figure 3(b), and can be explained by considering that the small, subwavelength aperture acts as a single-mode filter. This subwavelength aperture has dimensions small enough that its transmission is dominated by a single evanescent spatial mode [20], of which we measure the far-field radiation pattern. Information about near-field perturbations of the environment that is scattered to the far field on the other side of the aperture has to be mediated via this single mode, resulting in a single degree of freedom for detection.
The far-field intensity radiated in this single mode reflects the strength of the perturbation, which due to the almost circular symmetry of the aperture translates to the distance from the object to the aperture.

Therefore, we investigate whether an aperture of larger size, which can support multiple modes, shows more components with a stronger dependence on object position. Figure 3(e) and (f) show the principal components of the library data set for a large aperture of 690 × 200 nm. Its geometry is shown in Figure 3(d). The singular value decomposition of the measured object position dependence of the radiation pattern is depicted in Figure 3(e). Similar to the small aperture, there is a strong component that corresponds to the presence of the object near the aperture, leading to less reflected light at low angles, now visible in the second component. Here, its spatial extent is larger, and elongated, matching the larger size of the aperture. Additionally, the vertical position of the object is encoded in a strong third component, redistributing intensity diagonally in the radiation pattern. This agrees with a picture of the large aperture supporting multiple modes, interacting either resonantly or below cutoff, as expected for elongated apertures [20]. The position dependence of further components is reminiscent of higher-order modes at the aperture, but care must be taken in drawing conclusions about their exact shape, as there is no guarantee that the singular value decomposition reveals features directly matching the physical modes of the aperture.

Shown in Figure 3(f) is the matching of subsequently recorded test data with the previously acquired reference data set for the large aperture. Similar to the small aperture, the general trend is that measurements match well with reference data taken with the object at the same distance from the aperture. The geometry of the aperture, elongated along the diagonal, is apparent in the center of the scan area: for instance, the region over which data taken with the object at the aperture (left) match well to the reference data has an elliptical shape. Interestingly, measurements with the object just above the aperture (middle) now match well with those positions, but worse with positions below the aperture. This is a result of the third principal component in Figure 3(e), which encodes the vertical object position, and which was not as important in the principal components of the smaller aperture’s radiation patterns.

Figure 4: Position estimates for signal data retrieved by the localization algorithm versus actual position for a (a and b) small aperture, (c and d) large aperture and (e and f) two-dimensional aperture array. The color scale shows the error between the reconstructed position and the known position. For incorrect estimates, arrows indicate the direction of the error. The reconstruction error is measured across an area of (a, c and e) 2 × 2 μm in 100 nm steps, and (b, d and f) 0.25 × 0.25 μm in 25 nm steps. In (a, c and e), the outline of the transducer is indicated (dashed).

Figure 5: Measured dependence of the radiation pattern on the object position for a two-dimensional aperture array. (a) SEM image (top) and shear-force topography scan (bottom) of the transducer. (b) Singular value decomposition of a set of reference data for 21 × 21 probe points covering an area of 2 × 2 μm in 100 nm steps. Shown are the first twelve principal component directions V (blue–red), forming a radiation pattern basis, and matching principal components UΣ (purple–green), representing the importance at each object position. (c) Match of signal data with reference data. Each of the images in the 21 × 21 grid shows the match of a newly acquired radiation pattern at this position with all positions of the reference data set. Some positions are highlighted to the right.

Comparing the reference position with the highest match to the actual object position gives the error in reconstructing the position, which is shown in Figure 4. Figure 4(a) displays the reconstruction error versus actual object position for a measurement on the small aperture covering an area of 2 × 2 μm in 100 nm steps. Arrows indicate the direction in which the error is made. The figure shows that it is possible to accurately retrieve the object position from its radiation pattern for positions within an area of approximately 0.5 μm in diameter around the aperture. At positions further away from the aperture, the typical reconstruction error increases, since the measured radiation patterns depend less strongly on position. This matches the observation in Figure 3(c), where the radiation pattern match essentially reveals no information on the object position, apart from indicating that the object is surely not located at the aperture. Nonetheless, the span over which retrieval is correct is limited neither to a single point at the aperture position nor to the aperture size, but extends over an area whose size is set by the wavelength λ, over which the near field has intricate spatial features. Zooming in on this central area, Figure 4(b) shows the results of a new set of measurements across an area of 0.25 × 0.25 μm in steps of 25 nm, centered on the aperture. Here, the average reconstruction error is 32 nm (≈λ/22).
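The reconstruction error metric can be illustrated with a short sketch. The grid mirrors the zoomed-in scan (0.25 × 0.25 μm in 25 nm steps), but the retrieved positions here are hypothetical, chosen only to show the computation.

```python
import numpy as np

# Scan grid of actual object positions: 11 x 11 points at 25 nm pitch.
step = 25.0                      # nm
nx = ny = 11
ix, iy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
actual = np.column_stack([ix.ravel(), iy.ravel()]) * step

# Hypothetical retrieval result: perfect except for one point assigned to
# the neighboring grid position, 25 nm away.
retrieved = actual.copy()
retrieved[60] += [step, 0.0]

# Reconstruction error: Euclidean distance between actual and retrieved
# positions; its mean over the grid is the average reconstruction error.
errors = np.linalg.norm(retrieved - actual, axis=1)
mean_error = errors.mean()       # nm
```

In the experiment, the `retrieved` array would come from the highest-match reference positions, and `actual` from the calibrated piezo readout.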

For the larger aperture, reconstruction errors are displayed in Figure 4(c) and (d). We find similar behavior to the small aperture. However, the area around the aperture where the object position is correctly retrieved is increased in size, revealing the larger dimension of the aperture along the bottom-left to top-right diagonal, and the average reconstruction error at the aperture has decreased. This improvement is attributed to the multimode nature intrinsic to the increased aperture size and the resulting additional strong position-dependent components in the radiation patterns. We note that the area over which successful reconstruction occurs must be related to both the extent of the transducer’s near field and the effective noise level.

Finally, in an approach to increase the field of view, i.e., the area of successful retrieval of the object position, we consider a two-dimensional array of slot apertures as a transducer, as shown in Figure 5(a). The apertures, each of size 110 × 50 nm, are arranged in a centered rectangular lattice of pitch 160/200 nm. Such a structure naturally supports many more modes than a single aperture and could exhibit a highly intricate spatial near-field distribution through multiple scattering of plasmon waves mediating coupling between the apertures. The diffraction-limited illumination spot is centered on the array and has a width of approximately 900 nm, smaller than the size of the array. Figure 5(b) displays the singular value decomposition of the dependence of the radiation pattern on the object position for a measurement covering an area of 2 × 2 μm in 100 nm steps. Multiple components show strong position-dependent features, not only near the illumination spot but extending across the entire array. This intricate position dependence of the radiation pattern may enable successful reconstruction of the object position across a large area.

Figure 5(c) shows the match of the radiation patterns from the subsequent test data acquisition run against the reference measurements. For positions around the center of the scan area (magnified panels), it is clear that the point of best match moves in step with the object position across the array. This demonstrates that the position of the object with respect to the aperture array is encoded in the radiation pattern. Comparing the position of the highest match with the actual object position gives the reconstruction error, which is shown in Figure 4(e). The object position is accurately retrieved across a large part of the aperture array where the near field has intricate spatial features, covering an area of approximately 2 × 2 μm. At positions away from the array, the reconstruction error increases. Figure 4(f) shows the reconstruction error for a measurement area of 0.25 × 0.25 μm in steps of 25 nm, centered on the array. Here, the average reconstruction error is 24 nm (≈λ/29). Similar reconstruction errors are observed at off-center object positions on the array. This shows that using extended complex nanostructures as a near-field transducer enables accurate retrieval of the object position across an area covering almost the entire transducer.

5 Conclusion

In summary, we have constructed a nanophotonic near-field transducer that encodes the position of a subwavelength object in its far-field radiation pattern. By monitoring the radiation pattern and using a library-based technique, we demonstrated retrieval of the object position accurate to 24 nm across an area of 2 × 2 μm. We find that introducing more complexity to the nanophotonic transducer allows encoding of more information about the object position in its rich far-field scattering signal.

A natural question is what limits the precision and the field of view of successful position retrieval, and how these can be enhanced. Ultimately, the localization precision relies on differences between radiation patterns, which contain a component that is modified by the presence of the object. One may expect this to depend not only on the signal-to-noise ratio of the measured radiation patterns, but also on gradients in the optical near field. The field of view is related to the spatial extent of a transducer with a complex multimode structure. It is therefore essential to optimize the geometry of the transducer, through for instance the aperture shape and array arrangement, to construct maximally localized near-field distributions, for instance using insights into the spatial mode structure of such aperture arrays and exploiting plasmon resonances of the apertures. As the optical near field of the transducer directly depends on the optical field used for excitation, further control over the near-field distribution could be attained via complex structured illumination with an engineered wavefront that is optimized in amplitude, phase and polarization. The aim is to tailor the near-field distribution such that scattering from the transducer is maximally sensitive to the position of the object. Indeed, related approaches using propagating illumination beams, in which engineering of a complex structured excitation field yields position-dependent scattering, have achieved localization of a single scatterer with subwavelength precision [11], [12], [13], [14], [15]. Besides intensity, using additional degrees of freedom such as phase, polarization and wavelength enables encoding of even more information in radiation patterns. This may depend on the ability of the transducer to independently address these degrees of freedom.
Therefore, fully resolving the field at the detector may enable the extraction of more information on the object position. It would be exciting to investigate what physics fundamentally limits the precision and field of view. In the current experiments, the library and test data have been obtained using the same transducer. Whether every realization of the transducer requires separate calibration is an interesting question; robustness of the library could be taken into account in the design of the transducer. Although in this work we focused on a multiplexed readout of multiple degrees of freedom in the signal scattered from a complex nanophotonic near-field transducer, an alternative route towards rapid nanoscale sensing without physical movement of the transducer would be to incorporate active reconfiguration of the illumination conditions, with the aim of shaping the transducer’s optical near field [21], [22], [23], [24], an approach we currently pursue [24]. Extraction of nanoscale information with a combination of complex illumination, transducer and detection opens up further avenues of improvement using compressive sensing methods [25], [26]. Such techniques could extend our method beyond the localization of a single object to enable the detection of multiple objects in the field of view and potentially even imaging, as can be achieved using s-NSOM. We note that, while in the current demonstration a microscope objective has been used for illumination and collection, our technique is compatible with multimode fiber-based imaging methods [27], [28]. This opens up the possibility of integrating nanostructured transducers at the end of high-numerical-aperture fibers, which could be advantageous in industrial applications. Promising prospects of nanophotonic near-field transducers could for instance be found in the detection of nanoscale defects and contaminants for large-area mask or wafer inspection in the semiconductor industry.
Our approach to encode subwavelength position information in radiation patterns using a near-field transducer may also extend to fluorescence microscopy, thereby impacting superresolution imaging in biophysical context [29].
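The qualitative precision argument in the discussion above — that localization precision is set by detector noise relative to near-field gradients — can be sketched with a simple error-propagation (Cramér–Rao-type) estimate. All quantities below are illustrative assumptions, not experimental parameters: a Gaussian pattern of width `w` stands in for the position-dependent signal.

```python
import numpy as np

# Error-propagation sketch: if the detected pattern S(x) shifts with object
# position x and each pixel carries Gaussian noise of std sigma, the
# best-case single-axis precision is roughly sigma / ||dS/dx||.
sigma = 0.01                       # per-pixel noise (arbitrary units, assumed)
w = 100e-9                         # near-field feature width, 100 nm (assumed)
dx = 1e-9                          # finite-difference step, 1 nm

u = np.linspace(-1e-6, 1e-6, 4096)             # detector coordinate (m)
S = lambda pos: np.exp(-((u - pos) ** 2) / (2 * w ** 2))

grad = (S(dx) - S(0.0)) / dx                   # dS/dx, per pixel
sigma_x = sigma / np.linalg.norm(grad)         # estimated precision (m)
```

The scaling makes the two dependencies explicit: halving the noise, or sharpening the near-field gradients (smaller `w`), directly improves the achievable localization precision.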

Funding source: Nederlandse Organisatie voor Wetenschappelijk Onderzoek

Award Identifier / Grant number: High Tech Systems and Materials 14669

Acknowledgement

The authors are grateful to Thomas Bauer for help with fabrication of the tapered optical fibers.

Author contribution: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

Research funding: This work is part of the research program of the Netherlands Organization for Scientific Research (NWO). It is part of the program High Tech Systems and Materials (HTSM) with Project No. 14669, which is (partly) financed by NWO.

Conflict of interest statement: The authors declare no conflicts of interest regarding this article.

References

[1] S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett., vol. 19, p. 780, 1994. https://doi.org/10.1364/ol.19.000780.

[2] E. Betzig, G. H. Patterson, R. Sougrat, et al., “Imaging intracellular fluorescent proteins at nanometer resolution,” Science, vol. 313, p. 1642, 2006. https://doi.org/10.1126/science.1127344.

[3] D. W. Pohl, W. Denk, and M. Lanz, “Optical stethoscopy: image recording with resolution λ/20,” Appl. Phys. Lett., vol. 44, p. 651, 1984. https://doi.org/10.1063/1.94865.

[4] A. Lewis, M. Isaacson, A. Harootunian, and A. Muray, “Development of a 500 Å spatial resolution light microscope: I. light is efficiently transmitted through λ/16 diameter apertures,” Ultramicroscopy, vol. 13, p. 227, 1984. https://doi.org/10.1016/0304-3991(84)90201-8.

[5] F. Zenhausern, M. P. O’Boyle, and H. K. Wickramasinghe, “Apertureless near-field optical microscope,” Appl. Phys. Lett., vol. 65, p. 1623, 1994. https://doi.org/10.1063/1.112931.

[6] R. Bachelot, P. Gleyzes, and A. C. Boccara, “Near-field optical microscope based on local perturbation of a diffraction spot,” Opt. Lett., vol. 20, p. 1924, 1995. https://doi.org/10.1364/ol.20.001924.

[7] F. Zenhausern, F. Martin, and H. K. Wickramasinghe, “Scanning interferometric apertureless microscopy: optical imaging at 10 angstrom resolution,” Science, vol. 269, p. 1083, 1995. https://doi.org/10.1126/science.269.5227.1083.

[8] I. Sersic, C. Tuambilangana, and A. F. Koenderink, “Fourier microscopy of single plasmonic scatterers,” New J. Phys., vol. 13, p. 083019, 2011. https://doi.org/10.1088/1367-2630/13/8/083019.

[9] J. A. Kurvits, M. Jiang, and R. Zia, “Comparative analysis of imaging configurations and objectives for Fourier microscopy,” J. Opt. Soc. Am. A, vol. 32, p. 2082, 2015. https://doi.org/10.1364/josaa.32.002082.

[10] D. Bouchet, R. Carminati, and A. P. Mosk, “Influence of the local scattering environment on the localization precision of single particles,” Phys. Rev. Lett., vol. 124, p. 133903, 2020. https://doi.org/10.1103/physrevlett.124.133903.

[11] M. Neugebauer, P. Woźniak, A. Bag, G. Leuchs, and P. Banzer, “Polarization-controlled directional scattering for nanoscopic position sensing,” Nat. Commun., vol. 7, p. 11286, 2016. https://doi.org/10.1038/ncomms11286.

[12] A. Bag, M. Neugebauer, P. Woźniak, G. Leuchs, and P. Banzer, “Transverse Kerker scattering for angstrom localization of nanoparticles,” Phys. Rev. Lett., vol. 121, p. 193902, 2018. https://doi.org/10.1103/physrevlett.121.193902.

[13] W. Shang, F. Xiao, W. Zhu, et al., “Unidirectional scattering exploited transverse displacement sensor with tunable measuring range,” Opt. Express, vol. 27, p. 4944, 2019. https://doi.org/10.1364/oe.27.004944.

[14] A. Bag, M. Neugebauer, U. Mick, S. Christiansen, S. A. Schulz, and P. Banzer, “Towards fully integrated photonic displacement sensors,” Nat. Commun., vol. 11, p. 2915, 2020. https://doi.org/10.1038/s41467-020-16739-y.

[15] S. Nechayev, J. S. Eismann, M. Neugebauer, and P. Banzer, “Shaping field gradients for nanolocalization,” ACS Photonics, vol. 7, p. 581, 2020. https://doi.org/10.1021/acsphotonics.9b01720.

[16] I. T. Jolliffe and J. Cadima, “Principal component analysis: a review and recent developments,” Philos. Trans. R. Soc. A, vol. 374, p. 20150202, 2016. https://doi.org/10.1098/rsta.2015.0202.

[17] R. D. Buijs, N. J. Schilder, T. A. W. Wolterink, G. Gerini, E. Verhagen, and A. F. Koenderink, “Super-resolution without imaging: library-based approaches using near-to-far-field transduction by a nanophotonic structure,” ACS Photonics, vol. 7, p. 3246, 2020. https://doi.org/10.1021/acsphotonics.0c01350.

[18] J. A. Veerman, A. M. Otter, L. Kuipers, and N. F. van Hulst, “High definition aperture probes for near-field optical microscopy fabricated by focused ion beam milling,” Appl. Phys. Lett., vol. 72, p. 3115, 1998. https://doi.org/10.1063/1.121564.

[19] K. Karrai and R. D. Grober, “Piezoelectric tip-sample distance control for near field optical microscopes,” Appl. Phys. Lett., vol. 66, p. 1842, 1995. https://doi.org/10.1063/1.113340.

[20] F. J. Garcia-Vidal, L. Martin-Moreno, T. W. Ebbesen, and L. Kuipers, “Light passing through subwavelength apertures,” Rev. Mod. Phys., vol. 82, p. 729, 2010. https://doi.org/10.1103/revmodphys.82.729.

[21] A. Sentenac and P. C. Chaumet, “Subdiffraction light focusing on a grating substrate,” Phys. Rev. Lett., vol. 101, p. 013901, 2008. https://doi.org/10.1103/physrevlett.101.013901.

[22] T. S. Kao, S. D. Jenkins, J. Ruostekoski, and N. I. Zheludev, “Coherent control of nanoscale light localization in metamaterial: creating and positioning isolated subwavelength energy hot spots,” Phys. Rev. Lett., vol. 106, p. 085501, 2011. https://doi.org/10.1103/physrevlett.106.085501.

[23] G. Roubaud, P. Bondareff, G. Volpe, S. Gigan, S. Bidault, and S. Grésillon, “Far-field wavefront control of nonlinear luminescence in disordered gold metasurfaces,” Nano Lett., vol. 20, p. 3291, 2020. https://doi.org/10.1021/acs.nanolett.0c00089.

[24] R. D. Buijs, T. A. W. Wolterink, G. Gerini, E. Verhagen, and A. F. Koenderink, “Nanophotonic compressed sensing with small dipole arrays,” in Optical Sensors and Sensing Congress, Optical Society of America, 2020, p. SM4B.4.

[25] J. Romberg, “Imaging via compressive sampling,” IEEE Signal Process. Mag., vol. 25, p. 14, 2008. https://doi.org/10.1109/msp.2007.914729.

[26] E. J. Candès and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag., vol. 25, p. 21, 2008. https://doi.org/10.1109/msp.2007.914731.

[27] T. Čižmár and K. Dholakia, “Shaping the light transmission through a multimode optical fibre: complex transformation analysis and applications in biophotonics,” Opt. Express, vol. 19, p. 18871, 2011.

[28] L. V. Amitonova and J. F. de Boer, “Endo-microscopy beyond the Abbe and Nyquist limits,” Light Sci. Appl., vol. 9, p. 81, 2020. https://doi.org/10.1038/s41377-020-0308-x.

[29] H. Aouani, O. Mahboub, E. Devaux, et al., “Plasmonic antennas for directional sorting of fluorescence emission,” Nano Lett., vol. 11, p. 2400, 2011. https://doi.org/10.1021/nl200772d.

Received: 2020-12-22
Revised: 2021-02-10
Accepted: 2021-02-16
Published Online: 2021-03-12

© 2021 Tom A. W. Wolterink et al., published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.