Advanced Optical Technologies

Volume 2, Issue 3


Super-resolution microscopy heads towards 3D dynamics

Klaus Weisshart / Thomas Dertinger / Thomas Kalkbrenner / Ingo Kleppe / Michael Kempe
Published Online: 2013-05-24 | DOI: https://doi.org/10.1515/aot-2013-0015

Abstract

Resolving fine details of subcellular structures is key to understanding the organization and function of cellular networks. Recent advances in far-field fluorescence microscopy provide the necessary tools to analyze these structures with resolutions well below the classical diffraction limit in all three dimensions. Technical improvements go hand-in-hand with new versions of switchable fluorophores that allow nonlinear optical effects to be used more efficiently to push the resolution limit down further. High contrast combined with the wide spectrum of available colors currently endows these fluorescence-based super-resolution techniques with the power to study the complexity of subcellular organelles and the relation of their constituting components down to the molecular level and under physiological conditions. In this way, they give us a far better understanding of the assembly of macromolecular complexes and their functions within a cell than was previously possible with conventional imaging methods. In this review, we give an overview of the technical state of the art of these technologies and their fundamental and technical trade-offs, and provide typical application examples in this exciting field.

Keywords: photo-activated localization microscopy; PSF engineering; reversible switchable fluorophore; structured illumination; super-resolution microscopy; OCIS code: 180.0180

1 Introduction

It is more than half a century since the first subcellular structures were resolved in detail by an electron microscope [1]. Although electron microscopy (EM) still provides the best resolution of cellular ultrastructures, preserving the native composition during fixation and providing three-dimensional (3D) imaging with high specificity to a target, such as a specific protein of interest, remains a challenge. For 3D reconstruction, sections have to be taken, which is laborious and can result in artifacts due to mechanical stress. In addition, EM is not compatible with live-cell imaging. It is, however, exactly these attributes that render fluorescence imaging so valuable and indispensable for cell biology. Since the advent of the green fluorescent protein (GFP) and its variants, fluorescent tags can be genetically targeted to any protein of interest and endogenously expressed in a living cell under physiological conditions [2]. Fluorescent labeling and imaging techniques are relatively easy to perform and have become standard procedures in most biological and biomedical laboratories; many research fields have therefore relied on fluorescence microscopy for decades. Despite all of these advantages, fluorescence imaging is limited in its spatial resolution to ~200 nm laterally and ~500 nm axially when using visible light. This diffraction limit of an optical microscope manifests itself in the fact that a point-like light emitter is imaged by the instrument as a blurred, finite-sized focal spot, the so-called point spread function (PSF). The resolution limit was first recognized and described by Ernst Abbe in terms of the resolvable spatial frequencies that an object to be imaged contains. He stated that a structure with a given spatial frequency can only be resolved if at least the zeroth and first diffraction orders of the light emanating from it are detected [3], which is closely related to the size of the PSF. Under conditions typically encountered in optical microscopes, this limit can be expressed as his famous diffraction theorem [Table 1, Eqs. (1a) and (1b)]. Abbe's law remains fundamental for all far-field imaging techniques and has precluded the visualization of fine details of structures with nanometer extensions. Owing to the nanoscale architecture of many biological structures, among them cytoskeletal components, nucleosomes, membranes, and suborganelle components, to name just a few, researchers have been keen to develop techniques to overcome the classical resolution limit of a microscope. Given the huge relevance of fluorescence for biological imaging, it is not particularly surprising that the focus of most technical improvements in resolution has been, and still is, devoted to this type of microscopy (Table 1) [4].
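
To make the numbers behind the diffraction limit tangible, the short Python sketch below evaluates the commonly quoted textbook forms of the lateral and axial limits for a typical high-NA oil-immersion objective. The exact prefactors, and the forms given as Eqs. (1a) and (1b) in Table 1, vary with the resolution criterion used, so this is an illustration rather than a reproduction of the table.

```python
# Minimal numerical sketch of the diffraction limit (standard textbook forms,
# assumed here as stand-ins for Eqs. (1a) and (1b) in Table 1):
#   lateral:  d_xy ~ wavelength / (2 * NA)
#   axial:    d_z  ~ 2 * n * wavelength / NA**2
wavelength_nm = 520.0   # emission wavelength of a green fluorophore
NA = 1.4                # numerical aperture of an oil-immersion objective
n = 1.518               # refractive index of the immersion oil

d_lateral = wavelength_nm / (2 * NA)
d_axial = 2 * n * wavelength_nm / NA**2

print(f"lateral limit ~{d_lateral:.0f} nm, axial limit ~{d_axial:.0f} nm")
# -> roughly 190 nm laterally, several hundred nm axially; consistent with the
#    ~200 nm / ~500 nm figures quoted in the text (axial estimates depend on
#    the exact formula and criterion used).
```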

Table 1

Fluorescence-based imaging methods with their theoretical and practical resolutions.

The axial resolution of microscopes is generally at least a factor of two worse than the lateral resolution, which prompted researchers in the early 1990s to first direct their efforts towards better performance in the z-direction. Techniques termed 4Pi microscopy and image interference microscopy (I5M) took advantage of two opposing objectives and interference to push the axial resolution to approximately 100 nm in confocal and wide-field systems, respectively [5]. Lateral resolution, in contrast, remained unimproved. As the sample is sandwiched between the opposed lenses, both techniques are generally limited to thin specimens. The difficulty of alignment is certainly another reason why these technologies have never found widespread use.

During the late 1990s it was realized that the minimum resolvable distance as defined in Eqs. (1a) and (1b) can become considerably smaller if optical nonlinearities of the light-sample interaction are exploited [6]. This prompted a rapid development of methodologies devoted in the first place to extending the lateral resolution of microscopic imaging, which led to a variety of far-field nanoscopic imaging techniques now collectively termed super-resolution microscopy (Box 1). Although their strategies are quite different, all nonlinear techniques ultimately draw on a spatial or temporal modulation of the transition between different states of a fluorophore, for example, switching between a bright and a dark state (Figure 1).

Figure 1

Various classified methods employed for resolution improvement based on nonlinear sample-light interactions.

Generally, the nonlinear interaction involves optical transitions between a fluorescing (‘on’ or ‘bright’) and a non-fluorescing (‘off’ or ‘dark’) state of the molecules. See Box 1 for abbreviations.

Box 1

Strategies for and categories of super-resolution techniques.

The only linear technique for resolution enhancement utilizes structured illumination. Although the basic idea of structured illumination microscopy (SIM) had already been suggested and demonstrated by Lukosz in the 1960s [7], it took until the end of the 1990s for the method to be successfully introduced to fluorescence microscopy by Heintzmann and Cremer [8] and by Gustafsson [9], achieving a twofold resolution enhancement in the lateral direction. As a linear interaction is sufficient for this effect, virtually any fluorophore can be used, which led to the success of SIM as a standard super-resolution method. In an extension, excitation saturation as well as photo-switchable fluorophores were employed in what is called saturated SIM (SSIM) or saturated pattern excitation microscopy (SPEM). Both techniques use these nonlinear interactions to improve resolution further [10, 11].

A different approach to resolution beyond the diffraction limit was taken by Hell [12]. He first turned to a photo-physical transition called stimulated emission, in which molecules in the excited state are brought back to their ground state by illuminating them with high intensities of light at a suitable wavelength. This led to the development of stimulated emission depletion (STED) microscopy [12]. In an attempt to reduce the extremely high intensities needed for STED, Hell sought other transitions that could potentially be used. Indeed, in ground state depletion (GSD) microscopy he found that the transition to the dark triplet state required lower laser powers [13]. A further considerable reduction in laser power was finally achieved in a technology that uses reversible switching of fluorophores between two different states for fluorescence inhibition [14]. This method was called reversible saturable (or switchable) optical fluorescence transitions (RESOLFT), although the term was later expanded to describe all of the above approaches.

A few years later, Harald Hess and Eric Betzig introduced the concept of photo-activated localization microscopy (PALM) [15]. They realized the power of photo-switchable fluorescent proteins (PS-FPs) for localization microscopy (LM). Localization methods were already in use at that time to determine, as their name implies, the localization of single molecules with a precision that is one order of magnitude smaller than the width of the PSF. This precision relies on the determination of the center of gravity of the PSF and on the a priori knowledge that only a single molecule is emitting. They were, however, of limited use for imaging, as samples could only be sparsely labeled in order for individual molecules to be singled out. Therefore, applications were restricted in most cases to spectroscopic analysis and distance measurements. The mechanism of photo-switching, however, opened up the possibility for dense labeling, as most of the fluorophores can be switched to and kept for a prolonged time in their dark state [16]. Using stochastic activation it is then possible to have only one among many emitters per PSF in its on-state and hence determine its localization with high precision [17]. By collecting a series of images that capture all fluorophores in the sample, the super-resolution image is constructed. First used in a total internal reflection fluorescence (TIRF) illumination scheme, the method was also introduced using epifluorescence in an implementation called fluorescence PALM (FPALM) [18]. In addition, other photo-switching mechanisms were employed, among these Förster resonance energy transfer (FRET) of dye pairs as in stochastic optical reconstruction microscopy (STORM) [19], the transition to a long-lived reduced quenched state of an organic dye as in direct STORM (dSTORM) [20], or the transition to the triplet state as in ground state depletion followed by individual molecule return microscopy (GSDIM) [21]. At present, PALM technology holds the potential for the best achievable practical resolution in light microscopy, and because the instrumental set-up is relatively simple, it has found widespread recognition. This might explain why many variants with many acronyms have been published and the list seems to be growing by the year [22].
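
As an illustration of why the center of an isolated PSF can be localized so much more precisely than the PSF width itself, the sketch below applies the often-quoted scaling of localization precision with the inverse square root of the number of detected photons; background and pixelation corrections are neglected, and the PSF width is an assumed typical value.

```python
import numpy as np

# Back-of-envelope localization precision for a single emitter, using the
# often-quoted scaling sigma_loc ~ sigma_psf / sqrt(N_photons); the full
# expressions add background and pixelation terms.
sigma_psf_nm = 250.0 / 2.355          # PSF standard deviation for ~250 nm FWHM
for n_photons in (100, 1000, 10000):
    sigma_loc = sigma_psf_nm / np.sqrt(n_photons)
    print(f"{n_photons:6d} photons -> ~{sigma_loc:5.1f} nm precision")
# Already a few hundred detected photons pin down the PSF center roughly an
# order of magnitude more precisely than the PSF width, as stated in the text.
```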

A conceptually related yet distinct method to enhance resolution is based on fluctuation analysis. Super-resolution optical fluctuation imaging (SOFI), as one representative, relies on higher-order statistical analysis of temporal fluctuations, caused, for example, by blinking molecules, recorded in a sequence of images [23]. The fluorescence fluctuations in each pixel are recorded as a function of time and subsequently correlated. As fluorophores have to be recorded over several frames, they have to display sufficiently slow fluctuations, in the range of tens of milliseconds, and be persistently observable over the full measurement time.

Until recently, live-cell and 3D approaches have eluded super-resolution imaging, hampering its full potential to reveal structural reorganization and dynamic processes. This was partly due to a lack of adapted technologies, low acquisition speeds, insufficient algorithms, and inefficient fluorophores. As a consequence, improvements in these areas have become the focus of current research activities [22, 24, 25]. Virtually all high-resolution technologies are now being expanded to the third dimension by modifying the illumination or detection schemes, adapting the evaluation algorithms, and employing faster processing tools [26]. Last but not least, fluorophore properties have been altered to better match experimental requirements, especially with regard to switching behavior [27]. The aim of this review is to shed light on these continuing efforts. To provide the necessary background to understand their impact and limitations in biomedical research, we refer to the basic principles of the different methods when required. Owing to space limitations, a certain selection of methodologies and references was inevitable, and we apologize to those not cited.

2 PSF engineering with sectioning capabilities and reduced light levels

A very direct way of achieving higher resolution is to sharpen the diffraction-limited spot (PSF). Nonlinear methods have proven to be successful strategies to this end (see Box 2). Although linear methods exist, for example, based on polarization and phase manipulation, they are less useful, as the narrowing of the PSF is typically small and accompanied by undesired and unavoidable side effects, including reduced contrast due to large side lobes. This is not surprising, as linear methods can only redistribute the spatial frequency spectrum captured by the optical system rather than extend the spatial frequency band, a prerequisite for improved resolution.

Box 2

Principle of PSF engineering approaches.

In PSF engineering methods, a diffraction-limited spot of fluorophores is excited in the sample but is locally modified by de-excitation using light (depletion). This de-excitation can be achieved by STED, which depopulates fluorophores from the excited to the ground state [12], by GSD, in which molecules are pumped into the dark triplet state [13], or by switching molecules to a metastable dark state, for example, by a reversible photo-switch of photochromic fluorophores or reversibly photo-switchable FPs [14]. Usually, a donut-shaped light distribution is used for depletion to narrow the PSF below the size given by the diffraction limit. All these methods utilize reversible saturable optical (fluorescence) transitions [29]. For them to work properly, special conditions have to be met: first, the fluorophores must be stable enough to withstand the intensities of the depletion laser to enable many switching cycles; and second, the depletion light must be optimally tuned to enable an efficient transition from the on-state to the off-state. These two requirements have prompted the search for fluorophores suitable for biomedical imaging and the development of adapted reversible switchable FPs allowing resolutions of approximately 80 nm [30], as well as the search for simpler laser solutions. Efforts along these lines include time-gating the detection in combination with readily available continuous wave (cw) lasers [31] and the development of single-wavelength STED. The latter has been made possible by employing two-photon excitation in combination with a fluorophore whose emission is shifted so far that the de-excitation can be stimulated with the same light source wavelength, and it has been applied to biomedical imaging [32].
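
The sketch below illustrates the square-root scaling of the effective spot size with the saturation factor that is commonly used to describe STED and RESOLFT; the numbers are illustrative, and the exact expression depends on the depletion pattern.

```python
import numpy as np

# Sketch of how the depletion intensity narrows the effective PSF in
# STED/RESOLFT, using the commonly cited square-root scaling
#   d_eff ~ wavelength / (2 * NA * sqrt(1 + I / I_sat))
# (an approximation; the exact form depends on the donut profile).
wavelength_nm, NA = 640.0, 1.4
for saturation_factor in (0, 10, 100, 1000):   # I / I_sat
    d_eff = wavelength_nm / (2 * NA * np.sqrt(1 + saturation_factor))
    print(f"I/I_sat = {saturation_factor:5d} -> d_eff ~ {d_eff:5.1f} nm")
# RESOLFT reaches a comparable saturation factor at far lower absolute
# intensity because switchable fluorophores have a much smaller I_sat than
# stimulated emission does.
```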

As the depletion area is confined to the lateral plane, STED was combined with other technologies to improve the z-resolution. For example, STED was used together with 4Pi microscopy [33] and on optical sections (Figure 2) [34] in order to enhance the axial resolution to approximately 80–100 nm. STED also took advantage of the z-sectioning capabilities of selective plane illumination microscopy (SPIM), leading to an improved axial resolution by nearly a factor of two [35]. Finally, the depletion mask was extended to the third dimension to improve the axial resolution as well [36].

Figure 2

3D-STED microscopy of neuronal cells.

Two-color imaging and 3D reconstruction of a neuronal network in 75 nm sections. Syntaxin-1 was labeled with ATTO 532 (false color coded in green) and synaptophysin by Atto 663 (false colored in red). (A) Confocal overview of a single section of the series. The dyes were excited at 514 and 633 nm, respectively. In addition, the autofluorescence of the resin, excited at 408 nm, was collected in a third detection channel to visualize the boundaries of the slice. The white square indicates the region imaged in the STED microscope. Scale bar, 10 μm. (B) Two-color STED of a typical slice of the 3D stack. Scale bar, 1 μm. Color bars code for intensities. The arrows indicate some of the beads used for the alignment of the stack. Reprinted, with permission, from [34].

Owing to the high laser powers required for STED, live-cell imaging remains a challenge for this type of technology due to photo-bleaching, photo-damage, and photo-toxicity. Therefore, STED has mostly been used with stable synthetic dyes such as Atto532 and Atto647. With depletion powers in the range of 100–500 MW/cm², resolutions down to 40 nm have been achieved using these dyes. Nevertheless, under favorable conditions, STED has been successfully applied to image living mammalian cells in two colors [37]. Acquisition speeds have been increased by parallelization using multiple depletion beams and wide-field observation [38].

To reduce the laser powers necessary for de-excitation, other methods were investigated. One of them is GSD microscopy (GSDM), which uses a single wavelength for excitation and for de-excitation to the triplet state [39]. As the laser powers were still comparatively high for live-cell imaging, and the larger versatility of STED generally appears to outweigh the benefits of this moderate reduction in laser power, GSDM has never seen extensive use in fluorescence imaging, despite the fact that suitable fluorophores were available. A significant reduction in laser power is made possible by using reversible transitions between structural states of fluorophores (RESOLFT). This method is therefore much better suited for live-cell studies using switchable FPs and conventional fluorophores [40]. Although many reversible photo-switchable FPs exist that can be shuttled between a bright and a dark state, only a few of them offer suitable switching rates. Among these is rsEGFP, which can be recycled thousands of times and was successfully used to image live dendritic spines [41]. An enhanced version with higher switching rates was engineered, rendering the new variant, rsEGFP2, the prime FP for live-cell imaging using the RESOLFT approach [30].

PSF engineering methods have the distinct feature that they physically sharpen the excitation spot through the nonlinear sample-light interaction. This opens up the possibility of sample manipulation at scales beyond the diffraction limit, which is not possible with other high-resolution techniques. In these cases, the manipulation itself can, but need not, be the nonlinear interaction used for PSF engineering, as occurs, for example, in photo-disruption [41]. The smaller volume accomplished with STED is also beneficial for fluorescence correlation spectroscopy (FCS) as a point-scanning method [42]. By varying the spot size, the characteristics of diffusion processes can be assessed.

3 Fast structuring of light in three dimensions

In structured (patterned) illumination microscopy, spatial information of the sample is shifted in the frequency domain, similarly to what is done with lock-in techniques in the temporal, that is, one-dimensional, domain (see Box 3) [9, 11]. Owing to the down-shift of frequencies in emission by the patterned illumination, object structures at high spatial frequencies are moved into the range imaged by the optical system as defined by Abbe's principle. There are two requirements associated with these methods. First, the frequency-shifted components have to be retrieved from the recorded image. Only after retrieval can these components be shifted back to their original positions and the object information be reconstructed. Therefore, as many images as there are components to be retrieved have to be recorded, each with a distinct phase. Second, the frequency shift and subsequent reconstruction have to be performed in all spatial directions and dimensions of relevance, as one mostly desires an isotropic lateral resolution, ideally even an isotropic resolution in all three dimensions. Hence, either the illumination pattern must contain frequencies in all directions of relevance, or it must be rotated to achieve patterning in all directions sequentially. This further increases the number of components and therefore the number of images required for reconstruction.

Box 3

Principle of structured illumination microscopy.

When a two-beam interference set-up is used to generate the illumination pattern, structuring is only present in the lateral plane (2D-SIM; TIRF-SIM). Using three-beam interference, however, a standing interference pattern at high frequency is also created in the axial dimension, improving the axial resolution twofold as well (3D-SIM) [43]. Typically, a phase grating with a zeroth and two first diffraction orders is used for structuring the coherent illumination in the latter case. This requires five images per direction and a total of 15 images to achieve a nearly isotropic lateral as well as axial resolution improvement. As SIM is based merely on linear effects, laser powers can be low and virtually any fluorophore can be used. These attributes, combined with its intrinsic 3D capability, have made SIM one of the most frequently used super-resolution techniques for live-cell and/or 3D multicolor imaging.
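
As a minimal illustration of the reconstruction step described above, the sketch below separates the frequency-shifted components for a single pattern orientation of 2D-SIM from three phase-shifted raw images by inverting the small mixing matrix; the phase steps are assumed to be equidistant, and the subsequent shifting back and recombination in frequency space are omitted.

```python
import numpy as np

# Component separation for one pattern orientation of 2D-SIM (three phases;
# 3D-SIM uses five phases per orientation as described above). Each raw image
# mixes the 0th and +/-1st frequency-shifted object components with known
# phase factors, so the components can be recovered by inverting the mixing
# matrix, pixel by pixel in Fourier space.
phases = np.array([0, 2 * np.pi / 3, 4 * np.pi / 3])    # assumed phase steps
mixing = np.stack([np.ones(3),
                   np.exp(1j * phases),
                   np.exp(-1j * phases)], axis=1)        # (3 phases, 3 components)
unmix = np.linalg.inv(mixing)

def separate_components(raw_stack):
    """raw_stack: (3, H, W) Fourier transforms of the phase-shifted images.
    Returns the 0th, +1st, and -1st order components, which still have to be
    shifted back to their true positions in frequency space and recombined."""
    return np.tensordot(unmix, raw_stack, axes=1)

# Usage (illustrative):
# components = separate_components(np.fft.fft2(images, axes=(-2, -1)))
```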

The concept of linear structured illumination has been extended to the nonlinear regime using either saturation [10, 11] or switchable FPs [44]. The former method is called saturated pattern excitation microscopy (SPEM) or saturated SIM (SSIM), the latter nonlinear SIM (NL-SIM). The basic principle underlying the nonlinear version of SIM is the generation of spatial frequencies in the illumination beyond the diffraction limit. Photo-switching has the enormous advantage over saturation of being more compatible with biological samples. The number of images necessary and the number of switching cycles of the fluorophore, however, increase dramatically with resolution. Thus, the fluorophore properties themselves are the most limiting factor of the technology. Nevertheless, Rego and coworkers demonstrated up to 50 nm resolution for imaging in TIRF mode (Figure 3) [44].

Figure 3

Nonlinear structured illumination of the nuclear pore.

The nuclear pore protein POM121 was fused to Dronpa, a photo-switchable fluorescent protein, and imaged by either conventional wide-field microscopy (A) or nonlinear structured-illumination microscopy (NL-SIM) (B). The localization pattern of POM121, an integral membrane protein, was markedly different when imaged by NL-SIM with one or two higher order harmonics (HOH) (3, 4) than when imaged by conventional (1) or even linear structured-illumination microscopy (2). This is confirmed by taking a line profile through a nuclear pore (C). Scale bar, 200 nm (1–4). Image kindly provided by Hesper Rego, Department of Immunology and Infectious Diseases, Harvard School of Public Health, Boston, MA 02115, USA.

Structuring is not limited to a periodic wide-field pattern, as the diffraction-limited spot in a confocal microscope lends itself to this kind of modulation, as demonstrated in an implementation termed image scanning microscopy (ISM) [45]. In contrast to conventional confocal microscopy, this kind of approach requires spatial sampling of the detection. Thus, it is crucial to project the diffraction-limited spot onto a camera or sensor array for each position instead of using a point detector. In analogy to SIM, ISM almost doubles the resolution in all three dimensions compared with conventional wide-field microscopy [45]. Although multiple images have to be read from the camera, a single scan of the spot pattern is sufficient, owing to its symmetry, to obtain an almost isotropic resolution enhancement. The increase in both resolution and contrast improves the often poor signal-to-noise ratio in live-cell data, which results from the deliberately low photon dose applied to the sample to avoid photo-toxicity. To overcome the relatively slow acquisition speeds of a point scanner, the method was multiplexed in a multifocal configuration [46]. The combination of resolution, optical sectioning, and acquisition speed made this approach suitable to study the cytoskeletal network in a transgenic zebrafish [46].
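
A schematic version of the pixel-reassignment step at the heart of ISM is sketched below; the factor of one half assumes equal excitation and detection PSFs, sub-pixel interpolation is omitted, and the sign convention and pixel scaling depend on the actual optical layout.

```python
import numpy as np

# Schematic pixel reassignment for ISM: the small camera image recorded at
# each scan position is added into the final image shifted by half of each
# detector pixel's offset from the optical axis (factor 0.5 assumes equal
# excitation and detection PSFs; detector pixels and scan steps are assumed
# to lie on the same grid).
def ism_reassign(scan_data, shift_factor=0.5):
    """scan_data: (Ny, Nx, Dy, Dx) -- a small detector image (Dy x Dx) for
    every scan position (Ny x Nx). Returns an (Ny, Nx) ISM image."""
    Ny, Nx, Dy, Dx = scan_data.shape
    out = np.zeros((Ny, Nx))
    cy, cx = Dy // 2, Dx // 2
    for dy in range(Dy):
        for dx in range(Dx):
            sy = int(round(shift_factor * (dy - cy)))
            sx = int(round(shift_factor * (dx - cx)))
            # add this detector element's image, shifted back toward the axis
            out += np.roll(np.roll(scan_data[:, :, dy, dx], -sy, axis=0), -sx, axis=1)
    return out
```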

Another special variant of structured illumination combines confocal filtering in one direction, accomplished by line scanning, with structured illumination along the line [47]. Owing to the elimination of some of the out-of-focus light before detection, such a configuration allows greater penetration depths. A different combination with similar advantages involves SPIM and structured illumination (SPIM-SI) [48]. In SPIM, as only the plane to be imaged is illuminated, typically at a right angle to the observation path, out-of-focus light is minimized and bleaching is reduced. Bleaching can become an issue especially for live-cell imaging, as conventional SIM needs several images for reconstruction, which might be affected by bleaching. In an implementation called digital scanned laser light-sheet fluorescence microscopy with structured illumination (DSLM-SI), structuring is achieved by temporally modulating the laser beam that generates the light sheet, and improved contrast has been demonstrated [49]. The high contrast achieved, combined with low bleaching rates, renders this method an exquisite tool to study dynamic processes in live animals. More recently, Bessel beam light-sheet illumination has been combined with SIM into a very impressive live-cell imaging technique that achieves a unique combination of resolution enhancement in 3D and photon efficiency, translating into live-cell compatibility as well as imaging speed and penetration depth [49]. The structured illumination pattern is generated by the naturally occurring minima and maxima of the Bessel beam, which is scanned and amplitude-modulated to achieve phase shifting of the pattern and homogeneous illumination across the sample. Although the absolute lateral resolution achieved here is slightly lower than that achieved with SIM in a wide-field configuration, it outperforms the latter in speed, penetration depth, and photo-toxicity, and thus may become a new standard for 3D live-cell imaging with resolution enhancement.

Yet another way to produce a structured illumination pattern, namely speckle, is based on the scattering of coherent light. As the structures in a speckle field are diffraction-limited, they can be used to provide resolution enhancement in biomedical fluorescence imaging. To achieve quasi-confocal, speckle-free images, many frames, on the order of >100, have to be acquired [50]. Owing to this disadvantage and the lack of reconstruction flexibility compared with deterministic illumination patterns, the method has not been used widely. However, it has the advantage that a priori knowledge of the speckle pattern is not strictly required as long as the pattern is relatively homogeneous throughout the sample. Unlike approaches with a known interfering pattern, usually periodic or focused, the sample in this blind-SIM approach is simply illuminated with several uncontrolled random speckle patterns from which the sample fluorescence density is retrieved [51]. Like conventional SIM, the method yields a twofold resolution enhancement in all directions. The approach is insensitive to aberrations induced by the illumination or the specimen and does not require any calibration steps, features that simplify the experimental set-up.

4 Unlocking 3D live-cell observation in single molecule localization microscopy (SMLM)

PALM and related methods were initially conceived as 2D techniques [15, 18, 19]. Prohibitive out-of-focus light and labeling densities that are more demanding in a volume than in a plane prevented the technology from advancing into the axial direction at the time of its introduction. However, recent technical advances have made this possible. Several optical implementations have been put forward that differ in the way they obtain axial information.

In one set of techniques, the z-information is encoded in the shape of the PSF. These techniques achieve, in general, axial resolutions between 50 and 70 nm. By introducing, for example, an astigmatic lens in the detection path, the PSF becomes elliptical above and below the focal plane. The extent and orientation of the ellipse, fitted by a 2D Gaussian function, yield the z-position of the emitter [52]. The astigmatic approach has been enhanced to 20 nm axial resolution using two opposed objectives, which helped to disentangle the intricate meshwork of the actin cytoskeleton [53]. A second approach based on PSF modification places a phase mask into the detection path, which yields a twisted, double-helical PSF [54]. The relative orientation of and distance between the two generated lobes encode the z-position. Lithographic techniques have been refined to render the phase masks more light-efficient than spatial light modulators (SLMs). A third approach uses a glass wedge covering half of the objective pupil, thereby acting as a phase ramp that also leads to a bi-lobed PSF whose lobe separation depends on defocus [55]. Proof of principle of this phase ramp imaging localization microscopy (PRILM) was achieved by looking at the microtubule network in 3D. Most of the PSF engineering concepts are limited to a capture range of around 1 μm. To allow for an extended depth of field, the focal plane can be shifted progressively and the final images stitched together. Related but conceptually different is the use of confined activation by temporal focusing via two-photon illumination, where the elliptical PSF is fitted to a 3D Gaussian function [56]. Various organelles have been imaged with an axial resolution below 100 nm in this way. Another strategy, also yielding axial resolutions in the 50–70 nm range, is splitting the signal onto two different portions of the detector (dual-view) or onto separate detectors (dual-camera) in a technology termed bi-plane PALM (BP-PALM) [57]. As the different portions (or cameras) are arranged at different focal planes, the z-position is encoded by the relative extension of the observed PSFs.
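
The astigmatism-based z read-out described at the start of this paragraph can be illustrated with a minimal look-up sketch: the widths of an elliptically fitted spot are converted into a z-position via a bead-based calibration curve. The calibration values below are invented placeholders for illustration only.

```python
import numpy as np

# Sketch of the astigmatic z look-up used in 3D SMLM: an elliptical Gaussian
# fit yields sigma_x and sigma_y for each spot, and a calibration curve
# recorded with beads on a piezo stage maps the ellipticity to z.
calib_z_nm = np.linspace(-400, 400, 17)           # stage positions (placeholder)
calib_ellipticity = np.tanh(calib_z_nm / 250.0)   # assumed monotonic calibration

def z_from_widths(sigma_x, sigma_y):
    """Interpolate the axial position from the fitted spot widths (in nm)."""
    ellipticity = (sigma_x - sigma_y) / (sigma_x + sigma_y)
    return np.interp(ellipticity, calib_ellipticity, calib_z_nm)

print(z_from_widths(sigma_x=180.0, sigma_y=140.0))   # spot above/below focus
```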

Interferometric PALM (iPALM) has so far proved to be the most powerful of the existing 3D point localization techniques, achieving a z-resolution below 10 nm [58]. This approach is based on a three-way interferometer in which photons collected by two opposing objective lenses are detected by three different cameras. The relative intensity on each camera is the read-out for the z-position of the emitter. In an elegant study, iPALM helped to decipher the stratified arrangement and orientation of focal adhesion components at cell-substrate contacts [59].
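
A minimal sketch of this three-camera read-out is given below, assuming mutual phase offsets of roughly 120° between the interferometer outputs and an illustrative, uncalibrated interference period; real iPALM calibrations are more involved, and the sign convention depends on the set-up.

```python
import numpy as np

# Sketch of the iPALM z read-out: the three-way beam splitter produces three
# outputs whose intensities vary sinusoidally with the axial position, offset
# by ~120 degrees from one another. The interference phase, and hence z modulo
# one interference period, can be recovered from the three camera intensities.
def z_from_intensities(I1, I2, I3, period_nm=250.0):   # period is a placeholder
    phases = np.deg2rad([0.0, 120.0, 240.0])
    # three-phase demodulation: the complex sum isolates the modulated term
    phasor = (I1 * np.exp(1j * phases[0]) +
              I2 * np.exp(1j * phases[1]) +
              I3 * np.exp(1j * phases[2]))
    phase = np.angle(phasor)                            # sign depends on layout
    return (phase / (2 * np.pi)) * period_nm            # z modulo one period
```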

iPALM per se is most powerful if the molecules organize into well-defined 3D structures in close vicinity to well-known reference structures, such as the membrane (Figure 4). If such a link is missing, it is of great help, or even necessary, to correlate the PALM image with EM. An excellent example of such a correlative light and electron microscopy (CLEM) approach is provided by the characterization of the organization of a mitochondrial nucleoid protein in relation to the mitochondrial cristae (Figure 5) [60]. The combination of iPALM with ion ablation and scanning EM (SEM) yielded <30 nm axial resolution.

Figure 4

iPALM image of lamellipodium in a U2OS cell.

Shown is the advancing edge (lamellipodium) of a U2OS cell. The plasma membrane is labeled with mEOS2-Farnesyl. The colors encode the heights of the top and bottom membranes. The boxes indicate cross-sections (A, B) in z. Image kindly provided by Harald Hess, Howard Hughes Medical Institute, Janelia Farms, Ashburn VA, USA.

Figure 5

Correlative 3D microscopy.

The red isosurface is from mEOS2-labeled TFAM, which associates with the mitochondrial DNA (nucleoid) and was detected with iPALM. The ultrastructural context of the mitochondria is provided by focused ion beam scanning electron microscopy (FIB-SEM). Shown is one slice, gray-colored. Membrane cristae are visible as grooves in the vicinity of the nucleoid. As a perspective view of the surface is shown, no scale bar is given; however, the mitochondrial nucleoid is approximately 400 nm across. Courtesy of Janelia Farms Research Campus, Howard Hughes Medical Institute, Ashburn VA, USA.

Point localization methods bear great potential for the study of live-cell dynamics as long as temporal information on single molecules, rather than on macromolecular complexes, is the prime focus. In the former case, SMLM can be used to activate and deactivate molecules for numerous cycles to track many subsets of molecules successively [61]. Unlike conventional single particle tracking (SPT) methods, single particle tracking PALM (sptPALM) generates large sampling statistics and high-density maps of 2D diffusion trajectories based on the time-dependent positions of single molecules. sptPALM has been used, among other applications, to analyze the dynamics of viral and cytoplasmic proteins [61]. The use of dual-color labeling opens up the possibility to probe the temporal inter-relationship between two or more proteins in space.

In principle, live-cell SMLM methods can also be used to track macromolecular structures over time. However, in this case spatial information is also required, necessitating the superposition of numerous image frames and implying acquisition times in the range of several minutes. Especially when using FPs, photon counts can be low, which requires slower acquisition speeds. Keeping this caveat in mind, one has to choose the dynamic process carefully so that the achievable acquisition speed matches the underlying process. For example, the biogenesis and rearrangement of focal adhesions occur in a suitable time domain (minutes), and Shroff et al. were able to assess these dynamic processes using fast PALM imaging at an acquisition speed of 25 s per PALM image [62].

The key to localization microscopy approaches is the isolation and localization of individual emitters, which shifts the complexity of the method from the hardware to sample preparation and image analysis, including localization algorithms. The image analysis task consists of finding the positions of up to several hundred molecules per camera frame in up to 20 000 or more frames, resulting in a total number of molecules of up to 10 million. The situation is even more demanding for 3D imaging. Typically, the position of an isolated molecule is determined by fitting a Gaussian mask or a 2D Gaussian to its isolated PSF (see Box 4). In the first PALM experiments, this took as long as 12 h with the then state-of-the-art personal computers, precluding feedback during image acquisition [15]. Computational power has increased as a general trend, and when live-cell imaging came into focus, many algorithms with increased processing speed became available, essentially serving convenience and throughput [63–65]. A super-resolution image can now be derived from the raw dataset in real time; therefore, the relevance of the dataset and its parameters can be fed back to the experiment immediately. As these new algorithms have mushroomed in recent years, it is beyond the scope of this review to cover all of them ([65] and references therein).
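
The conceptual core of the Gaussian-fitting step mentioned above can be sketched in a few lines: a 2D Gaussian is least-squares fitted to a small region of interest containing one candidate molecule. Production SMLM software typically uses maximum-likelihood estimators, calibrated PSF models, and GPU batching instead; the ROI handling, start values, and pixel size below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal least-squares localization of one isolated spot by fitting a 2D
# Gaussian to a small region of interest (ROI) cut from a camera frame.
def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return offset + amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

def localize_roi(roi, pixel_size_nm=100.0):
    """roi: small 2D array containing a single candidate molecule."""
    y, x = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
    p0 = (roi.max() - roi.min(), roi.shape[1] / 2, roi.shape[0] / 2, 1.5, roi.min())
    popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), roi.ravel(), p0=p0)
    amp, x0, y0, sigma, offset = popt
    return x0 * pixel_size_nm, y0 * pixel_size_nm   # position in nm within the ROI
```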

Box 4

Principle of localization microscopy.

As FPs have a low photon budget, researchers have looked for ways to introduce organic dyes with tenfold higher photon counts into living cells and in this way accelerate frame rates and shorten measurement times [66]. As immuno-staining with dye-conjugated antibodies is restricted to the surface of living cells, the field has resorted to employing self-labeling tags [67]. Such sequences can be introduced into a protein by genetic engineering, prominent examples being SNAP, Halo, TMP, and FlAsH tags [68]. Chemical tags can pass the cellular membrane when applied exogenously and bind to their specific sequence, thus giving specificity to the attached label, which can be a switchable fluorophore suited for SMLM. In part, these studies used double- or multi-color staining to speed up analysis while giving structural context to the precisely localized molecules. To widen this scope, FPs can be combined with chemical tags, as endogenous glutathione concentrations in cells are compatible with organic dye switching [69].

Stoichiometries and the formation of clusters can also be assessed by live-cell imaging [70] (Figure 6). However, such analysis can be impeded by multiple occurrences of the same molecule, because a molecule can remain in its on-state longer than the image frame time and hence appear in multiple frames. Owing to stochastic variations from frame to frame, it then appears as a cluster of points instead of a single point. A method called pair correlation PALM (PC-PALM) circumvents this problem of single-molecule assignment [71]. As its name implies, it uses pair correlation algorithms to correct for multiple appearances of the same molecule in the final image.
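
PC-PALM itself operates statistically on the pair correlation function of the rendered localizations; a simpler and widely used pre-processing step that addresses the same overcounting problem is to merge localizations of a molecule that recur in consecutive frames within a small radius, as sketched below with assumed radius and frame-gap parameters.

```python
import numpy as np

# Simple frame-linking sketch: localizations that reappear within `radius_nm`
# in consecutive frames are treated as the same molecule and merged. This is
# not the pair-correlation analysis of PC-PALM itself, only a common grouping
# heuristic that targets the same overcounting problem.
def merge_repeats(locs, radius_nm=50.0, max_gap_frames=1):
    """locs: list of (frame, x_nm, y_nm) tuples sorted by frame. Returns a
    list of merged molecules with averaged positions."""
    merged = []
    for frame, x, y in locs:
        for entry in merged:
            if (frame - entry["last_frame"] <= max_gap_frames and
                    np.hypot(x - entry["x"], y - entry["y"]) <= radius_nm):
                n = entry["n"]
                entry["x"] = (entry["x"] * n + x) / (n + 1)   # running average
                entry["y"] = (entry["y"] * n + y) / (n + 1)
                entry["n"] = n + 1
                entry["last_frame"] = frame
                break
        else:
            merged.append({"x": x, "y": y, "n": 1, "last_frame": frame})
    return merged
```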

Figure 6

3D cluster analysis.

The upper inserted image shows a 3D BP-PALM image of Jurkat T cells stained with antibodies against LAT (linker for activation of T cells). Scale bar, 5 μm; height color coding from 0 nm (blue) to 500 nm (red). T cell activation begins with the formation of signaling complexes of the T cell antigen receptor (TCR) with the adaptor protein LAT. Depicted is a 3D isosurface rendered as a virtual cluster map stack in Imaris at different time points after T cell activation. LAT clusters accumulate against the plasma membrane in the time course of the experiment. Image kindly provided by Katharina Gaus, University of New South Wales, Sydney, Australia.

A combination of 3D and live-cell imaging has been hampered above all by the need to achieve labeling densities that lead to true representations of the structures of interest; studies have therefore often looked at protein distributions rather than structures [72]. Indeed, such high labeling densities are often hard to achieve, or should not even be attained, in order to reduce the probability of two or more molecules emitting per PSF at the same time [73]. The demand for higher labeling densities was met with increased efforts to develop algorithms able to estimate localizations with high precision even in the presence of overlapping signals. Apart from increasing the acquisition rate of the camera while at the same time increasing the photon output per molecule, this seems another promising way to shorten acquisition times significantly. Most of the approaches are based on fitting multiple model PSFs instead of just one, using maximum-likelihood estimation in 2D (DAOSTORM, PALMER, and MFA [74–76], and DAOSTORM-3D [77]). More recently, compressed sensing algorithms known from signal processing have been employed for the task of localizing sparse but not necessarily isolated emitters [78]. As many of these algorithms have been implemented on graphics processing unit (GPU) architectures, analysis times could be reduced to minutes and less. These algorithms are able to fit an order of magnitude more molecules per PSF and bear great promise for live-cell applications, as acquisition times can be cut down by a similar factor. Even the algorithms that fit multiple emitter models to the intensity distribution of one camera frame fail once the density of emitters surpasses a certain threshold. This is because many different models of fluorophore locations, with different numbers of fluorophores and intensities, may fit the experimental data almost equally well. This limitation is overcome by Bayesian statistical approaches, as they analyze the whole dataset globally [79].

5 Fluctuation analysis in the presence of highly dense labels

A recently developed super-resolution method, termed super-resolution optical fluctuation imaging (SOFI), draws on the stochastic blinking behavior of fluorophores by evaluating the recorded signal using cumulant functions (see Box 5) [23]. Like all other super-resolution methods, SOFI relies on the ability of the fluorophores to exhibit at least two distinguishable fluorescence states, for example, an 'on' and an 'off' state. SOFI is typically performed on a camera-based system, although it can also be employed with sequential acquisition schemes, such as laser scanning microscopes.

Box 5

Principle of fluctuation analysis.

SOFI works in and close to the single-molecule regime, that is, the sensitivity of the system has to be high enough to detect the fluctuations of a single emitter. This in turn puts a constraint on the concentration range of molecules for which SOFI can be suitably applied. As a rule of thumb, SOFI works in the same regime as fluorescence correlation spectroscopy (FCS), which is the sub-micromolar range. Compared with localization-based methods such as PALM or STORM, SOFI thus works at much higher concentrations, as it does not rely on the localization of single fluorophores in a single frame. Instead, SOFI takes all recorded photons into account in order to extract super-resolution information. Owing to the cumulant approach, the PSFs of the fluorophores do not need to be optically separated within a single frame; many PSFs can overlap at the same time. Therefore, SOFI can be performed on less controllable samples, where 'on' and 'off' times might be difficult to tune for a localization scheme. SOFI also works in a lower signal-to-noise regime than any of the SMLM techniques [80].

SOFI is based on the evaluation of higher-order cumulants. The higher the order of the calculated cumulants, the better the achieved resolution. In the ideal case, using deconvolution on top of the correlation, the order of the cumulant is equivalent to the resolution gain over the original image. For example, the second-order cumulant will result in a SOFI image featuring twice the resolution of the original fluorescence image; a third-order SOFI image essentially increases the resolution by a factor of three, and so forth. This works in all three dimensions.
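
The second-order case can be written down in a few lines: the pixel-wise lagged autocovariance of the intensity fluctuations already yields a PSF narrowed by roughly the square root of two, with the full factor of two recovered after deconvolution. Real SOFI implementations additionally use cross-cumulants between pixels; the sketch below is the simplest variant.

```python
import numpy as np

# Second-order SOFI sketch: the pixel-wise second-order cumulant of the
# intensity fluctuations. Using a time lag of one frame (rather than zero lag)
# suppresses uncorrelated shot noise.
def sofi2(stack, lag=1):
    """stack: (T, H, W) image sequence. Returns an (H, W) second-order SOFI image."""
    fluct = stack - stack.mean(axis=0, keepdims=True)     # remove the mean per pixel
    return np.mean(fluct[:-lag] * fluct[lag:], axis=0)    # lag-1 autocovariance

# The resulting image has a PSF narrowed by ~sqrt(2); combined with
# deconvolution this corresponds to the factor-of-two gain described above.
```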

However, there are practical limitations. First of all, the signal quality determines how many cumulant orders can be calculated. This initially limited SOFI to quantum dots [23] and organic fluorophores [81], excluding FPs. The situation changed when reversible switchable FPs became available that allowed for sufficiently high contrast [82]. This so-called photochromic stochastic optical fluctuation imaging (pcSOFI) also paved the way for live-cell applications. SOFI can be used with various microscope types, such as lamp-based wide-field, TIRF, or spinning disk microscopes. Second, as each cumulant order probes different aspects of the underlying blinking distribution of the fluorophores, for example, skewness, kurtosis, etc., negative SOFI values are possible, which might prove disadvantageous for the display and interpretation of the super-resolution image. Third, bright molecules become disproportionally brighter at higher orders, leaving only the few brightest molecules visible, whereas the dimmer molecules remain almost invisible due to the increased dynamic range of the SOFI image. This issue has, however, been overcome with the introduction of balanced SOFI (bSOFI), and images of cellular structures labeled with conventional organic fluorophores have been generated up to the fifth order, featuring a resolution of ~80 nm [83]. The inherent sectioning capability of SOFI, together with its resolution enhancement in the axial direction, has been demonstrated on the cytoskeletal network of HeLa cells (Figure 7) [84].

Figure 7

3D SOFI image of microtubular network.

Microtubules in HeLa cells were stained via secondary antibodies coupled to QD800 (Invitrogen). The left image is the wide-field image, the right image the SOFI representation. Scale bar, 20 μm. An image z-stack consisting of 32 planes with a step size of 200 nm was taken in a wide-field microscope with LED illumination. Each plane represents 2000 frames. Image reprinted from [84] under the license policy of the journal 'Optical Nanoscopy' and with permission of the authors.

Even more subtle fluctuations than those needed for SOFI, occurring at fluorophore densities as high as those used for conventional imaging, can be modeled using Bayesian analysis. This method also relies on the stochastic blinking behavior of fluorophores in time but can be positioned in between the single-emitter localization methods and fluctuation analysis. In an approach called Bayesian blinking and bleaching (3B) analysis, an entire time series recorded in a standard wide-field illumination scheme is modeled as a set of blinking and bleaching fluorophores [79]. In the 3B approach, the entire time series is analyzed globally and a fluorophore location probability map is generated by taking a weighted average over all possible models. The analysis is only weakly constrained by prior information about the blinking and bleaching characteristics of the fluorophore, which can be estimated from separate experiments. The modeling includes the use of data from overlapping fluorophores as well as information from bleaching events, blinking events, and changes caused by fluorophores being added or removed. To account for the ambiguity in the modeled fluorophore distributions, an average over all possible models is taken, resulting in a probability map in which areas compatible with many different distribution models appear as regions of worse resolution. This strengthens the robustness of the approach against misinterpretation. A few seconds of data collection using a wide-field fluorescence microscope have been reported to yield an image with a spatial resolution approaching 50 nm together with a time resolution of 4 s, as demonstrated with the standard fluorescent protein mCherry [79]. Longer imaging times allow for better localization estimates, but at the expense of temporal resolution.

The 3B approach shifts the complexity of super-resolution from the optical set-up to post-processing analysis. Adequately sampling from the set of all possible models is a demanding computational task, and analyzing datasets requires several hours per square micrometer of data. But much as with the development of the localization-based approaches, the rapid development of computational hardware seems likely to improve this situation in the future.

6 Making fluorophores blink – transitions reloaded

A cornerstone of all nonlinear super-resolution techniques is the employment and control of fluorophores that can be photo-modulated or photo-switched between at least two different states, commonly referred to as the 'on' (active) and 'off' (inactive) states (see Box 6). Photo-switchable fluorophores come in two flavors: PS-FPs, sometimes also referred to as photo-modulatable FPs [26], and synthetic/organic fluorophores [85]. These two classes of fluorescent molecules are very different in their chemistry and require different staining and imaging conditions. The two most crucial factors that determine the suitability of a fluorophore are its brightness and its contrast ratio. The former determines the number of photons and hence the signal that can be obtained; the latter determines the background and therefore the achievable labeling density. Stability and switching rates are two additional relevant factors that influence the choice of fluorophore.

Box 6

Fluorescence transitions used by super-resolution techniques.

Typically, PS-FPs can be switched from a dark to a bright state, or from one spectral state to another, by illuminating the sample with low amounts of activation light, in most cases light from the violet part of the spectrum. They are converted back to the off-state by high powers of the excitation light used for imaging, for example, by photo-bleaching or by a reversible transition back to the dark state. PS-FPs can be assigned to one of three categories: photo-activatable (PA), photo-convertible, and photo-chromic FPs. PA-FPs are irreversibly switched from a dark to a bright state with violet light by chemical modifications within the fluorophore group and are inactivated by photo-bleaching. Photo-convertible or photo-shiftable FPs are irreversibly switched from one spectral state to another, based on backbone cleavage, and are likewise inactivated by photo-bleaching. Photo-chromic FPs, by contrast, are reversibly switched back and forth between a bright and a dark state, driven by a cis-trans isomerization reaction. By engineered modifications of amino acids in the chromophore group or nearby regions, many PS-FPs with new properties have been successfully created [86]. As PS-FPs are essentially non-fluorescent (or fluorescent in a different spectral emission band) at the start of the experiment before conversion, the number of molecules in the on-state can easily be fine-tuned by balancing the powers of the activation and imaging lasers.

Organic dyes mostly stay in their on-state, and the majority of molecules have to be turned into a relatively stable, reversible, non-fluorescent off-state by irradiation with sufficiently high laser powers of the imaging light [73]. The presence of a reducing agent helps to bring organic dyes into a long-lived dark quenched state, from which they can recover to the ground state under violet light and in the presence of molecular oxygen. Hence, cycling between these states can be fine-tuned by the intensities of the imaging and reactivation light as well as by the concentrations of reducing agent and oxygen. Organic dyes can be used as chemical tags for live-cell imaging, as glutathione concentrations within cells are fortunately compatible with useful switching rates [87]. Employing an oxygen scavenging system to deplete the oxygen leads to a profound stabilization of the lifetime of the reduced state; this can be accomplished by a buffering system with both reducing and oxidizing reagents (termed ROXS) [88].

Advantages of PS-FPs are their small size, in the range of 2 nm, allowing for high labeling densities, as well as the outstanding target specificity they provide as genetically encoded tags fused to a protein of interest. In favor of organic dyes stands their superior brightness: giving rise to thousands of photons per switching cycle, they are an order of magnitude brighter than their protein counterparts, which yield hundreds of photons [89]. Such a high photon flux allowed Löschberger et al., in an elegant study, to visualize the eightfold symmetry of a nuclear pore complex protein, a feature that had so far been resolved only by EM (Figure 8) [90].

Figure 8

Nuclear pore complex (NPC) isolated from nuclear lamina of Xenopus laevis oocyte.

(A) One color dSTORM image of integral membrane component gp210 labeled with antibody conjugated to Alexa 647 highlighting the eightfold symmetry of the NPC. (B) Dual color dSTORM image of gp210 (magenta) and wheat germ agglutinin (WGA), which labels the nucleoporin central channel (green). (C) Averaged image localization data analysis and reconstruction of gp210 and WGA revealing a double-ring structure of the NPC. Scale bars, 100 nm. Images kindly provided by Markus Sauer, University of Würzburg, Würzburg, Germany.

7 Outlook

A considerable variety of imaging techniques has emerged that have shown potential and utility for biomedical fluorescence imaging beyond the diffraction limit. The various methods have different advantages and disadvantages, and thus there is good reason to assume that several of them will continue to be used in parallel and that more may become available for addressing the puzzles of life in the future.

Unfortunately, in microscopy no gain comes without cost. All methods, as different as they may be, share a common fundamental drawback: spatial resolution can be increased only if temporal resolution is sacrificed. The sharpening of the PSF by physical shaping, as in PSF engineering approaches, or by numerical calculation, as in localization and fluctuation techniques, requires finer sampling in the spatial domain, in the same way as structured illumination needs finer sampling in the frequency domain. Thus, higher resolution comes at the cost of denser sampling and hence longer acquisition. Moreover, photon statistics set a limit on how fast imaging can occur. Each fluorophore has a finite excited-state lifetime and can therefore only emit photons at a certain rate. This in turn restricts the obtainable signal-to-noise ratio, as the higher the resolution, the fewer fluorophores contribute to the signal at a given time. Against all these odds, super-resolution techniques have started to emerge that provide speeds matched to dynamic processes in the sub-minute range while retaining sufficient sub-diffraction resolution. As most of the implementations have already technically approached fundamental optical limits, we foresee three areas with the potential to improve acquisition speeds significantly in the near future: faster detection systems, enhanced algorithms, and improved fluorophore characteristics.

Array sensors employed in wide-field and multifocal scanning approaches have been at the heart of detection in many super-resolution techniques. Owing to their outstanding sensitivity, electron-multiplying charge-coupled devices (EMCCDs) have been at the forefront, but they are in most cases the speed-limiting factor. Cameras with faster readout times are therefore highly desirable, and the introduction of scientific complementary metal-oxide-semiconductor (sCMOS) technology holds great promise for attaining faster imaging rates. In addition, higher emitter densities per frame can reduce the number of image frames needed by an order of magnitude; this has been made possible by the development of algorithms, among them Bayesian approaches, that accurately fit overlapping emitters. Processing times for this kind of algorithm will need to be optimized as the demand for higher throughput and real-time analysis increases. Last but not least, super-resolution will depend heavily on the development of better reversibly photo-switchable fluorophores. They hold the key to exploiting light-sample interactions more efficiently and are therefore indispensable for all nonlinear methods. It will be of utmost importance to control switching rates better and to match them to the technology used. Needless to say, the fluorophores have to be live-cell compatible and suited for in vivo labeling. More red-shifted versions will also be needed to meet the demand for multicolor staining while avoiding crosstalk as far as possible. Finally, more stable variants with higher photon yields per excitation cycle and enhanced contrast ratios, together with optimized environmental conditions, will help to lower the required light doses and thereby reduce or even prevent photo-bleaching, photo-damage, and photo-toxicity. As more insight into the structure and chemistry of fluorophores is gained, rational design and engineering will gain momentum.
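
To illustrate why sCMOS sensors are attractive for fast acquisition, a simple per-pixel signal-to-noise model can be compared for the two detector types: the EM register of an EMCCD roughly doubles the shot-noise variance (excess-noise factor of about √2), whereas an sCMOS pixel instead adds a small read-noise term. This is a hedged sketch only; the background and read-noise figures are generic textbook-style assumptions, not specifications of any particular camera.

```python
# Hedged sketch of per-pixel signal-to-noise for the two camera types
# discussed above. The EM register of an EMCCD roughly doubles the shot-noise
# variance (excess-noise factor ~ sqrt(2)), while an sCMOS pixel instead adds
# a small read-noise term. Background and read-noise values are generic
# textbook-style assumptions, not specifications of any particular camera.
import math

def snr_emccd(signal, background=2.0):
    # shot noise on signal + background, inflated by the EM excess-noise factor
    return signal / math.sqrt(2.0 * (signal + background))

def snr_scmos(signal, background=2.0, read_noise=1.5):
    # shot noise on signal + background plus read-noise variance
    return signal / math.sqrt(signal + background + read_noise**2)

for n in (5, 20, 100, 500):
    print(f"{n:4d} photons/pixel:  EMCCD SNR {snr_emccd(n):6.2f}   sCMOS SNR {snr_scmos(n):6.2f}")
```

With these assumptions the sCMOS model matches or exceeds the EMCCD once more than a handful of photons per pixel are collected, which is one reason the technology is attractive for fast super-resolution imaging; at extremely low signal levels, or with larger read noise, the balance can still tip toward the EMCCD.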

Improvements on all of these fronts will ultimately be necessary to answer many biological questions that are currently unresolved in detail. They will, among other things, help to elucidate nuclear architecture, the mechanics of molecular motors, and the formation and function of macromolecular complexes. Synergy with EM is expected, as correlative microscopy methods will provide high target specificity in an ultrastructural context. Finally, super-resolution techniques have started to leave, and will likely continue to leave, their mark on optogenetics, a rapidly evolving field that allows precise photo-manipulation of fluorescent molecules. As super-resolution techniques mature, we will undoubtedly witness significant further progress in our understanding of the inner workings of the cell as the fundamental building block of life: live and in 3D.

The authors would like to thank Jörg Enderlein, Katharina Gaus, Stefan Hell, Harald Hess, Hesper Rego, and Markus Sauer for providing images.

References

[1] K. R. Porter, A. Claude and E. F. Fullam, J. Exp. Med. 81, 233–246 (1945).
[2] J. Lippincott-Schwartz, Annu. Rev. Biochem. 80, 327–332 (2011).
[3] E. Abbe, Archiv. Mikroskop. Anat. 9, 413–468 (1873).
[4] L. Schermelleh, R. Heintzmann and H. Leonhardt, J. Cell Biol. 190, 165–175 (2010).
[5] M. G. Gustafsson, Curr. Opin. Struct. Biol. 9, 627–634 (1999).
[6] J. Vogelsang, T. Cordes, C. Forthmann, C. Steinhauer and P. Tinnefeld, Nano Lett. 10, 672–679 (2010).
[7] M. Lukosz and M. Marchand, J. Modern Opt. 10, 241–255 (1963).
[8] R. Heintzmann and C. Cremer, Proc. SPIE 2568, 185–196 (1999).
[9] M. G. Gustafsson, J. Microsc. 198, 82–87 (2000).
[10] R. Heintzmann, Micron 34, 283–291 (2003).
[11] M. G. Gustafsson, Proc. Natl. Acad. Sci. USA 102, 13081–13086 (2005).
[12] S. W. Hell and J. Wichmann, Opt. Lett. 19, 780–782 (1994).
[13] S. W. Hell and M. Kroug, Appl. Phys. B 60, 495–497 (1995).
[14] M. Hofmann, C. Eggeling, S. Jakobs and S. W. Hell, Proc. Natl. Acad. Sci. USA 102, 17565–17569 (2005).
[15] E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, et al., Science 313, 1642–1645 (2006).
[16] J. Vogelsang, C. Steinhauer, C. Forthmann, I. H. Stein, B. Person-Skegro, et al., ChemPhysChem 11, 2475–2490 (2010).
[17] R. E. Thompson, D. R. Larson and W. W. Webb, Biophys. J. 82, 2775–2783 (2002).
[18] S. T. Hess, T. P. Girirajan and M. D. Mason, Biophys. J. 91, 4258–4272 (2006).
[19] M. J. Rust, M. Bates and X. Zhuang, Nat. Methods 3, 793–796 (2006).
[20] M. Heilemann, S. van de Linde, M. Schüttpelz, R. Kasper, B. Seefeldt, et al., Angew. Chem. Int. Ed. Engl. 47, 6172–6176 (2008).
[21] J. Fölling, M. Bossi, H. Bock, R. Medda, C. A. Wurm, et al., Nat. Methods 5, 943–945 (2008).
[22] P. Sengupta, S. Van Engelenburg and J. Lippincott-Schwartz, Dev. Cell 23, 1092–1102 (2012).
[23] T. Dertinger, R. Colyer, G. Iyer, S. Weiss and J. Enderlein, Proc. Natl. Acad. Sci. USA 106, 22287–22292 (2009).
[24] I. Davis, Biochem. Soc. Trans. 37, 1042–1044 (2009).
[25] T. Ha and P. Tinnefeld, Annu. Rev. Phys. Chem. 63, 595–617 (2012).
[26] R. Henriques, C. Griffiths, E. H. Rego and M. M. Mhlanga, Biopolymers 95, 322–331 (2011).
[27] S. van de Linde, S. Wolter, M. Heilemann and M. Sauer, J. Biotechnol. 149, 260–266 (2010).
[28] S. W. Hell, M. Dyba and S. Jakobs, Curr. Opin. Neurobiol. 14, 599–609 (2004).
[29] J. Keller, A. Schönle and S. W. Hell, Opt. Express 15, 3361–3371 (2007).
[30] T. Grotjohann, I. Testa, M. Reuss, T. Brakemann, C. Eggeling, et al., eLIFE 1, e00248 (2012).
[31] G. Vicidomini, A. Schönle, H. Ta, K. Y. Han, G. Moneron, et al., PLoS One 8, e54421 (2013).
[32] P. Bianchini, B. Harke, S. Galiani, G. Vicidomini and A. Diaspro, Proc. Natl. Acad. Sci. USA 109, 6390–6393 (2012).
[33] M. Dyba, J. Keller and S. W. Hell, New J. Phys. 7, 1 (2005).
[34] A. Punge, S. O. Rizzoli, R. Jahn, J. D. Wildanger, L. Meyer, et al., Microsc. Res. Tech. 71, 644–650 (2008).
[35] M. Friedrich, Q. Gan, V. Ermolayev and G. S. Harms, Biophys. J. 100, L43–L45 (2011).
[36] D. Wildanger, R. Medda, L. Kastrup and S. W. Hell, J. Microsc. 236, 35–43 (2009).
[37] K. I. Willig, A. C. Stiel, T. Brakemann, S. Jakobs and S. W. Hell, Nano Lett. 11, 3970–3973 (2011).
[38] P. Bingen, M. Reuss, J. Engelhardt and S. W. Hell, Opt. Express 19, 23716–23726 (2011).
[39] S. Bretschneider, C. Eggeling and S. W. Hell, Phys. Rev. Lett. 98, 218103 (2007).
[40] I. Testa, N. T. Urban, S. Jakobs, C. Eggeling, K. I. Willig, et al., Neuron 75, 992–1000 (2012).
[41] T. Grotjohann, I. Testa, M. Leutenegger, H. Bock, N. T. Urban, et al., Nature 478, 204–208 (2011).
[42] C. Eggeling, C. Ringemann, R. Medda, G. Schwarzmann, K. Sandhoff, et al., Nature 457, 1159–1162 (2009).
[43] M. G. Gustafsson, L. Shao, P. M. Carlton, C. J. Wang, I. N. Golubovskaya, et al., Biophys. J. 94, 4957–4970 (2008).
[44] E. H. Rego, L. Shao, J. J. Macklin, L. Winoto, G. A. Johansson, et al., Proc. Natl. Acad. Sci. USA 109, E135–E145 (2012).
[45] C. B. Muller and J. Enderlein, Phys. Rev. Lett. 104, 198101 (2010).
[46] A. G. York, S. H. Parekh, D. D. Nogare, R. S. Fischer, K. Temprine, et al., Nat. Methods 9, 749–754 (2012).
[47] O. Mandula, M. Kielhorn, K. Wicker, G. Krampert, I. Kleppe, et al., Opt. Express 20, 24167–24174 (2012).
[48] T. Breuninger, K. Greger and E. H. Stelzer, Opt. Lett. 32, 1938–1940 (2007).
[49] L. Gao, L. Shao, C. D. Higgins, J. S. Poulton, M. Peifer, et al., Cell 151, 1370–1385 (2012).
[50] C. Ventalon and J. Mertz, Opt. Lett. 30, 3350–3352 (2005).
[51] E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. Le Moal, et al., Nat. Photonics 6, 312–315 (2012).
[52] B. Huang, W. Wang, M. Bates and X. Zhuang, Science 319, 810–813 (2008).
[53] K. Xu, H. P. Babcock and X. Zhuang, Nat. Methods 9, 185–188 (2012).
[54] S. Quirin, S. R. Pavani and R. Piestun, Proc. Natl. Acad. Sci. USA 109, 675–679 (2012).
[55] D. Baddeley, M. B. Cannell and C. Soeller, Nano Lett. 4, 589–598 (2011).
[56] A. G. York, A. Ghitani, A. Vaziri, M. W. Davidson and H. Shroff, Nat. Methods 8, 327–333 (2011).
[57] M. F. Juette, T. J. Gould, M. D. Lessard, M. J. Mlodzianoski, B. S. Nagpure, et al., Nat. Methods 5, 527–529 (2008).
[58] G. Shtengel, J. A. Galbraith, C. G. Galbraith, J. Lippincott-Schwartz, J. M. Gillette, et al., Proc. Natl. Acad. Sci. USA 106, 3125–3130 (2009).
[59] P. Kanchanawong, G. Shtengel, A. M. Pasapera, E. B. Ramko, M. W. Davidson, et al., Nature 468, 580 (2010).
[60] B. G. Kopek, G. Shtengel, C. S. Xu, D. A. Clayton and H. F. Hess, Proc. Natl. Acad. Sci. USA 109, 6136–6141 (2012).
[61] S. Manley, J. M. Gillette, G. H. Patterson, H. Shroff, H. F. Hess, et al., Nat. Methods 5, 155–157 (2008).
[62] H. Shroff, C. G. Galbraith, J. A. Galbraith and E. Betzig, Nat. Methods 5, 417–423 (2008).
[63] P. N. Hedde, J. Fuchs, F. Oswald, J. Wiedenmann and G. U. Nienhaus, Nat. Methods 6, 689–690 (2009).
[64] C. S. Smith, N. Joseph, B. Rieger and K. A. Lidke, Nat. Methods 7, 373–375 (2010).
[65] S. Wolter, A. Löschberger, T. Holm, S. Aufmkolk, M.-C. Dabauvalle, et al., Nat. Methods 9, 1040–1041 (2012).
[66] S. van de Linde, M. Heilemann and M. Sauer, Annu. Rev. Phys. Chem. 63, 519–540 (2012).
[67] G. Crivat and J. W. Taraska, Trends Biotechnol. 30, 8–16 (2012).
[68] R. Wombacher and V. W. Cornish, J. Biophoton. 4, 391–402 (2011).
[69] T. Klein, S. van de Linde and M. Sauer, Chembiochem 13, 1861–1863 (2012).
[70] P. Sengupta and J. Lippincott-Schwartz, Bioessays 34, 396–405 (2012).
[71] P. Sengupta, T. Jovanovic-Talisman, D. Skoko, M. Renz, S. L. Veatch, et al., Nat. Methods 8, 969–975 (2011).
[72] D. M. Owen, A. Magenau, D. J. Williamson and K. Gaus, Methods Mol. Biol. 950, 81–93 (2013).
[73] S. van de Linde, A. Löschberger, T. Klein, M. Heidbreder, S. Wolter, et al., Nat. Protoc. 6, 991–1009 (2011).
[74] S. J. Holden, S. Uphoff and A. N. Kapanidis, Nat. Methods 8, 279–280 (2011).
[75] Y. Wang, T. Quan, S. Zeng and Z. L. Huang, Opt. Express 20, 16039–16049 (2012).
[76] F. Huang, S. L. Schwartz, J. M. Byars and K. A. Lidke, Biomed. Opt. Express 2, 1377–1393 (2011).
[77] H. Babcock, Y. M. Sigal and X. Zhuang, Opt. Nanoscopy 1, 6 (2012).
[78] L. Zhu, W. Zhang, D. Elnatan and B. Huang, Nat. Methods 9, 721–723 (2012).
[79] S. Cox, E. Rosten, J. Monypenny, T. Jovanovic-Talisman, D. T. Burnette, et al., Nat. Methods 9, 195 (2012).
[80] S. Geissbuehler, C. Dellagiacoma and T. Lasser, Biomed. Opt. Express 2, 408–420 (2011).
[81] T. Dertinger, M. Heilemann, R. Vogel, M. Sauer and S. Weiss, Angew. Chem. Int. Ed. Engl. 49, 9441–9443 (2010).
[82] P. Dedecker, G. C. Mo, T. Dertinger and J. Zhang, Proc. Natl. Acad. Sci. USA 109, 10909–10914 (2012).
[83] S. Geissbühler, N. Bocchio, C. Dellagiacoma, M. Geissbühler, C. Berclaz, et al., Opt. Nanoscopy 1, 4 (2012).
[84] T. Dertinger, J. Xu, O. F. Naini, R. Vogel and S. Weiss, Opt. Nanoscopy 1, 2 (2012).
[85] M. Heilemann, S. van de Linde, A. Mukherjee and M. Sauer, Angew. Chem. Int. Ed. Engl. 48, 6903–6908 (2009).
[86] S. G. Olenych, N. S. Claxton, G. K. Ottenberg and M. W. Davidson, Curr. Protoc. Cell Biol. Chapter 21, Unit 21.5 (2007).
[87] R. Wombacher, M. Heidbreder, S. van de Linde, M. P. Sheetz, M. Heilemann, et al., Nat. Methods 7, 717–719 (2010).
[88] S. van de Linde, I. Krstic, T. Prisner, S. Doose, M. Heilemann, et al., Photochem. Photobiol. Sci. 10, 499–506 (2011).
[89] R. Henriques and M. M. Mhlanga, Biotechnol. J. 4, 846–857 (2009).
[90] A. Löschberger, S. van de Linde, M. C. Dabauvalle, B. Rieger, M. Heilemann, et al., J. Cell Sci. 125, 570–575 (2012).

About the article

Klaus Weisshart

Klaus Weisshart received his diploma degree in Biology at the University of Constance. He undertook his PhD thesis on viral DNA replication at the German Cancer Research Center in Heidelberg, Germany, work that he continued during postdoctoral fellowships at the Harvard Medical School in Boston, MA, USA and the Gene Center of the Ludwig-Maximilian University Munich, Munich, Germany. He joined Carl Zeiss Microscopy GmbH Jena in 2000 and holds the position of Senior Product Manager with responsibility for single-molecule technologies.

Thomas Dertinger

Thomas Dertinger studied at the University of Cologne and received his doctoral degree in Physics in 2007. In his doctoral thesis, carried out under the mentorship of Prof. Jörg Enderlein, he developed Dual-Focus FCS (2fFCS). Subsequently, Thomas Dertinger moved for 3 years to Los Angeles to become a postdoctoral fellow at the University of California Los Angeles (UCLA) in the group of Prof. Shimon Weiss. At UCLA he worked on super-resolution imaging techniques, specifically on SOFI. Currently, Thomas Dertinger is the head of SOFast GmbH and also works in the field of intellectual property management.

Thomas Kalkbrenner

Thomas Kalkbrenner received his diploma in Physics and his PhD at the University of Constance where he mainly worked in the field of near-field optics. He continued his research at the Department of Physical Chemistry at ETH Zürich and then joined the AMOLF Institute in Amsterdam where he moved into the field of biophysics, employing optical tweezers, FCS, and FRET to study the dynamics of DNA. He then worked at CyBio GmbH on the development of fluorescence and luminescence readers for high-throughput screening before he joined the Advanced Development Department of Carl Zeiss Microscopy GmbH in 2008. He works on the development of novel 3D imaging technologies.

Ingo Kleppe

Ingo Kleppe received his diploma in Physics from the University of Heidelberg, Heidelberg, Germany, and his PhD in Neuroscience from the University of Cambridge, Cambridge, UK, working on single synaptic memory. As a Junior Research Fellow of St. John’s College Cambridge, he continued his research at University College London, moving into the field of live-cell confocal microscopy, before joining Carl Zeiss Microscopy in 2006, where he works in the Advanced Development Department. His main focus is the assessment of new 3D imaging technologies for biomedical research.

Michael Kempe

Michael Kempe is Senior Principal at the Corporate Research and Technology Department of Carl Zeiss AG in Jena, Germany. He earned his diploma in Physics at Jena University and his PhD in Optical Sciences at the University of New Mexico in Albuquerque, NM, USA. After postdoctoral research positions at the City University in New York and the Physikalisch-Technische Bundesanstalt in Berlin he joined Carl Zeiss in 1999. His research work has concentrated on optical imaging technologies, in particular for biomedical application, ranging from research microscopy to medical diagnostics, which continues to be his main focus.


Corresponding author: Michael Kempe, Carl Zeiss AG, Carl Zeiss Promenade 10, 07745 Jena, Germany


Received: 2013-03-11

Accepted: 2013-04-20

Published Online: 2013-05-24

Published in Print: 2013-06-01


Citation Information: Advanced Optical Technologies, Volume 2, Issue 3, Pages 211–231, ISSN (Online) 2192-8584, ISSN (Print) 2192-8576, DOI: https://doi.org/10.1515/aot-2013-0015.


©2013 by THOSS Media & De Gruyter, Berlin/Boston.
