
Current Directions in Biomedical Engineering

Joint Journal of the German Society for Biomedical Engineering in VDE and the Austrian and Swiss Societies for Biomedical Engineering

Editor-in-Chief: Dössel, Olaf

Editorial Board: Augat, Peter / Buzug, Thorsten M. / Haueisen, Jens / Jockenhoevel, Stefan / Knaup-Gregori, Petra / Kraft, Marc / Lenarz, Thomas / Leonhardt, Steffen / Malberg, Hagen / Penzel, Thomas / Plank, Gernot / Radermacher, Klaus M. / Schkommodau, Erik / Stieglitz, Thomas / Urban, Gerald A.


CiteScore 2018: 0.47

Source Normalized Impact per Paper (SNIP) 2018: 0.377

Open Access | Online ISSN: 2364-5504

A framework for feedback-based segmentation of 3D image stacks

Johannes Stegmaier / Nico Peter / Julia Portl / Ira V. Mang / Rasmus Schröder / Heike Leitte / Ralf Mikut / Markus Reischl
Published Online: 2016-09-30 | DOI: https://doi.org/10.1515/cdbme-2016-0097

Abstract

3D segmentation has become a widely used technique. However, automatic segmentation does not deliver high accuracy in optically dense images, and manual segmentation drastically lowers throughput. We therefore present a workflow for 3D segmentation that forecasts segments based on user-given ground truth. The user can correct wrong forecasts and repeatedly insert new ground truth into the process. Our aim is to combine automated and manual segmentation and thereby to improve accuracy with a tunable amount of manual input.

Keywords: 3D imaging; accurate segmentation; automated segmentation

1 Introduction

3D microscopic imaging has numerous fields of application in biology and medicine, e.g. to analyze model organisms such as mouse [1], zebrafish [2] or fruit fly [3]. An important aim is to reconstruct 3D surfaces or volumes from a set of 2D images called a stack. As manual annotation is time-consuming, automated image processing is applied to identify and quantify specific structures known as segments. For data sets with homogeneous segments, high contrast and clear edges, there are plenty of sophisticated methods and tools to automatically annotate and quantify these segments (e.g. [4], [5], [6], [7]).

However, if image quality is low or if connected structures change rapidly across the slices of a stack, accurate automated segmentation is impossible. Although there are interactive software packages for segmentation, they either require good image quality [7] or contain only a few automatic segmentation methods [8]. Thus, effort is required to either perform the segmentation manually or to correct inaccurate automatic segmentation results, which limits accurate segmentation of 3D image stacks in high-throughput settings.

Figure 1 shows an exemplary electron microscopy (EM) image of a neuromuscular junction in mouse. Automatic segmentation and forecasting of the edges is impossible due to the variable contrast, the filigree structures of interest and the non-smooth edge transitions across the slices. However, a highly accurate segmentation is needed to visualize and analyze the folded membrane in 3D and to derive new insights about the 3D structure and the signal transmission at the neuromuscular junction.

Figure 1: Electron microscopic image of the neuromuscular junction in mouse. Colored lines are results of a manual and a semi-automatic LiveWire segmentation. Regions below and above the segmentation lines are the pre- and postsynapse, respectively (adapted from [9]).

An idea to increase segmentation accuracy is to support automatic algorithms with expert-given ground truth. In [9], we presented such a workflow, introducing a semi-automatic method based on the LiveWire technique [10], [11] (Figure 1): the original grayscale image is filtered using an objectness filter [12], and a subsequent binarization optimizes the edges so that a semi-automatic segmentation approach can be optimally supported. The user manually clicks a few points along the structures of interest in the grayscale image, and the LiveWire algorithm automatically connects these points by searching for the shortest path between neighboring points in the filtered binary image.
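The shortest-path connection between click points can be sketched as a Dijkstra search on a pixel cost image that is cheap on edge pixels and expensive elsewhere. This is only an illustrative NumPy sketch in the spirit of the LiveWire technique; the function name and the toy cost image are assumptions, not the implementation used in the paper.

```python
import heapq
import numpy as np

def livewire_path(cost, start, goal):
    """Cheapest 8-connected pixel path between two click points (Dijkstra).
    `cost` should be low on edge pixels so the path snaps to edges."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                    dist[nr, nc] = d + cost[nr, nc]
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    # backtrack from goal to start
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# toy binarized edge image: one cheap horizontal edge in an expensive field
cost = np.ones((5, 9))
cost[2, :] = 0.01
path = livewire_path(cost, (2, 0), (2, 8))
```

As expected for LiveWire, the path follows the cheap edge pixels between the two click points rather than cutting through the background.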

In the present paper, we use the discussed semi-automatic segmentation and extend it to support and accelerate 3D segmentation. We add a minimal amount of manual input to extract high-quality 3D surfaces of structures of interest from large EM images. Semi-automatically annotated segments are used to forecast the segmentation in adjacent slices by automatically projecting a subset of the contour pixels to the next slice. Projected pixels are again automatically connected using the LiveWire approach. Wrong propagations can be corrected by using a higher number of click points or by manually moving erroneous segments to the correct positions. Corrected segmentations yield further ground truth, which can then be propagated to adjacent slices. There is no need for parameter modifications, which allows non-experts to operate the tool.

2 Methods

An overview of the workflow for the proposed user-guided semi-automatic segmentation is depicted in Figure 2.

Figure 2: Workflow for automatic segment propagation.

In a first step, a combination of rigid and elastic registration (for initialization and refinement, respectively) is applied using the open-source software Fiji to align all images automatically [13]. This method worked well in practice, but any other registration approach may be used if more appropriate for a given data set.

The registered grayscale image is filtered to obtain emphasized edges. We therefore implemented a filter pipeline in XPIWIT [14] and call it from MATLAB. The filter pipeline consists of an objectness filter that uses the eigenvalues of the Hessian matrix at each pixel location in order to emphasize line-like structures in 2D images [12]. Due to the non-smooth edge transitions between neighboring slices, we apply the edge enhancement on all slices separately, instead of using a plane-enhancement algorithm directly in 3D. A subsequent binarization is used to equalize the intensity of all edges and the enhanced image can then be used as input for the semi-automatic segmentation. The filter steps and the corresponding XPIWIT pipeline are shown in Figure 3. Next, the user is asked to manually annotate edges in the grayscale image by clicking pixels belonging to an edge of a structure of interest. A LiveWire algorithm connects the selected points and delivers a mathematical model for the edge as described in [9].
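The Hessian-based edge enhancement can be sketched with plain NumPy: the two eigenvalues of the 2×2 Hessian are obtained in closed form from finite differences, and the ridge response is binarized with a relative threshold. This is a simplified stand-in for the XPIWIT objectness filter and binarization; the function name, the toy image and the threshold value are illustrative assumptions.

```python
import numpy as np

def ridge_enhance(img, thresh=0.1):
    """Emphasize dark line-like structures via the eigenvalues of the 2D
    Hessian (finite differences), then binarize to equalize edge intensity."""
    # second derivatives by repeated central differences
    Hrr = np.gradient(np.gradient(img, axis=0), axis=0)
    Hcc = np.gradient(np.gradient(img, axis=1), axis=1)
    Hrc = np.gradient(np.gradient(img, axis=0), axis=1)
    # eigenvalues of a symmetric 2x2 matrix in closed form
    tr = (Hrr + Hcc) / 2.0
    det = np.sqrt(((Hrr - Hcc) / 2.0) ** 2 + Hrc ** 2)
    l1 = tr + det  # larger eigenvalue; large and positive on dark ridges
    response = np.maximum(l1, 0.0)
    # relative threshold as a stand-in for the binarization step
    return (response > thresh * response.max()).astype(np.uint8)

# toy image: a dark horizontal line on a bright background
img = np.ones((9, 9))
img[4, :] = 0.0
mask = ridge_enhance(img)
```

On the toy image the binary mask recovers exactly the dark line, which is the input the LiveWire step expects. A production pipeline would smooth with a Gaussian at several scales first, as the multi-scale objectness filter [12] does.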

Figure 3: Preprocessing steps for optimal edge enhancement of a raw input image (left) and the corresponding processing pipeline implemented in XPIWIT (right).

To propagate the model to the next slice (extrapolation), we search for pixels similar to the click points in adjacent slices, optimize their positions and repeat the LiveWire algorithm on these virtual click points, as detailed in the next section. As the automatic extrapolation step might be error-prone in some cases, a possibility for manual correction is provided as well. Once segments are annotated in all slices, modeling and rendering allow deriving features from the segmented structures of interest and generating interactive 3D visualisations.

2.1 Segmentation propagation

To further speed up the semi-automatic segmentation, we developed a prediction strategy that automatically propagates the manually segmented contours to adjacent slices. For each pixel of a manually segmented contour, we calculate the surface normal using finite differences. A few equidistant points along the contour are selected for propagation to the adjacent slices. For each selected propagation point, we perform a line search along the normal direction within a radius of a few pixels, defined by the maximally expected distance between corresponding structures in neighboring slices. The best match along the normal line is identified using computer vision techniques such as template matching, with normalized cross correlation or the correlation coefficient as similarity measures, and descriptor matching using FREAK descriptors [15]. Furthermore, we developed a combined distance measure using the FREAK descriptor matching distance, the distance to the ground truth contour and the intensity information of the image. Propagated points are then connected via the LiveWire approach, yielding a prediction of the ground truth segment in the adjacent slice.
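A minimal sketch of this propagation step, assuming NumPy and plain NCC patch matching only (the FREAK-based and combined measures are omitted); the function name, the toy slices and all parameter values are illustrative:

```python
import numpy as np

def propagate_points(contour, slice_a, slice_b, n_points=4, radius=3, patch=2):
    """Project a few contour points to the adjacent slice: for each selected
    point, search along the contour normal (finite-difference tangent rotated
    by 90 degrees) within `radius` pixels and keep the offset whose patch in
    slice_b best matches (NCC) the patch around the point in slice_a."""
    def ncc(a, b):
        a = a - a.mean(); b = b - b.mean()
        d = np.linalg.norm(a) * np.linalg.norm(b)
        return (a * b).sum() / d if d > 0 else 0.0

    def get_patch(img, r, c):
        return img[r - patch:r + patch + 1, c - patch:c + patch + 1]

    idx = np.linspace(0, len(contour) - 1, n_points).astype(int)
    out = []
    for i in idx:
        r, c = contour[i]
        # tangent via finite differences, normal = tangent rotated by 90 deg
        rp, cp = contour[min(i + 1, len(contour) - 1)]
        rm, cm = contour[max(i - 1, 0)]
        n = np.array([-(cp - cm), rp - rm], float)
        n /= np.linalg.norm(n) + 1e-9
        tmpl = get_patch(slice_a, r, c)
        best, best_score = (r, c), -np.inf
        for s in range(-radius, radius + 1):  # line search along the normal
            rr, cc = int(round(r + s * n[0])), int(round(c + s * n[1]))
            cand = get_patch(slice_b, rr, cc)
            if cand.shape != tmpl.shape:
                continue  # search window left the image
            score = ncc(tmpl, cand)
            if score > best_score:
                best_score, best = score, (rr, cc)
        out.append(best)
    return out

# toy stack: a dark line at row 4 that shifts to row 6 in the next slice
slice_a = np.ones((12, 12)); slice_a[4, :] = 0.0
slice_b = np.ones((12, 12)); slice_b[6, :] = 0.0
contour = [(4, c) for c in range(2, 10)]
points = propagate_points(contour, slice_a, slice_b)
```

The propagated points land on the shifted line in the adjacent slice and would then be reconnected by the LiveWire step.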

2.2 Interactive graphical user interface

To control the presented segmentation framework, we developed a prototypical GUI (Figure 4). Manually or semi-automatically segmented structures of interest can be completed using the proposed propagation methods. In cases where the automatic propagation fails to reconstruct the correct structures, the user can modify segments by repeating the manual segmentation or by manipulating the propagated segments. All propagated segments can in turn be used to perform further predictions, i.e., the segmentation consists of multiple repetitions of semi-automatic segmentation, prediction and correction.

Figure 4: Screenshot of the MATLAB-based GUI prototype, consisting of a main segmentation window (left), previews of adjacent slices and controls for the propagation algorithm (right).

3 Results

We evaluated the proposed segment propagation on a small detail cropped from a large electron microscopy stack. A randomly selected structure was segmented manually in five adjacent slices and served as the ground truth reference. The same structure was segmented using the semi-automatic LiveWire approach to measure the achievable time reduction. The LiveWire segmentation of the first slice was subsequently propagated to the remaining four slices using template matching with normalized cross correlation (NCC) or the correlation coefficient (CorrCoeff), FREAK descriptors, and the combined measure consisting of the FREAK descriptor matching distance, the distance to the original contour and the intensity of the original image. For all slices and all methods, we calculated the maximum and mean distance to the ground truth contour in pixels as well as the root mean squared deviation (RMSD). Furthermore, the time required for the segmentation was measured in seconds. The quantitative results are summarized in Table 1 and visualized in Figure 5. The best segmentation quality was achieved with the LiveWire approach (semi-automatic) and the combined approach (automatic), both with an average distance of less than one pixel to the ground truth. Both template matching approaches (NCC and CorrCoeff) failed to predict the first slice and are thus not practically usable in this scenario. The plain FREAK algorithm performed slightly worse than the combined strategy. Compared to a manual segmentation of the five slices, which took 214 s, the processing time decreased by 78% for the LiveWire approach and by 93% for all propagation-based methods. Thus, the combined strategy turns out to be the best choice with respect to quality and time consumption.
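The three reported error measures can be sketched as nearest-neighbor distances between a predicted contour and the ground truth contour. Whether the paper matches contour points exactly this way is an assumption; the function and the toy contours below are purely illustrative.

```python
import numpy as np

def contour_errors(pred, gt):
    """For each predicted contour point, take the distance to the nearest
    ground-truth contour pixel; return (max, mean, RMSD) in pixels."""
    pred = np.asarray(pred, float)
    gt = np.asarray(gt, float)
    # pairwise distances (|pred| x |gt|), then the nearest gt point per pred point
    d = np.sqrt(((pred[:, None, :] - gt[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d.max(), d.mean(), np.sqrt((d ** 2).mean())

gt = [(r, 10) for r in range(5)]
pred = [(r, 11) for r in range(5)]  # contour shifted by one pixel
mx, mean, rmsd = contour_errors(pred, gt)
```

For a uniform one-pixel shift all three measures coincide; they diverge once single outlier points appear, which is why the maximum distance is reported alongside the mean and RMSD.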

Table 1: Comparison of different segmentation strategies.

Figure 5: Qualitative comparison of the different segmentation methods, including manual (ground truth) and semi-automatic (LiveWire) annotations as well as automatic propagation of a LiveWire segmentation of the first slice using normalized cross correlation (NCC), the correlation coefficient (CorrCoeff), FREAK descriptors and the combined measure described in Section 2.1.

4 Conclusion and outlook

In the present contribution, we introduced a new approach for improved semi-automatic segmentation of large-scale 3D image stacks. The method does not require tedious parameter tuning and can help to significantly speed up segmentation tasks in scenarios where entirely automated processing is impossible. By propagating manual annotations to adjacent slices and by using sophisticated correction, a close to error-free segmentation can be achieved.

Further work will focus on developing a graphical user interface that condenses all involved workflow steps and on extending the quantification of the speed and quality improvements of the proposed segmentation approach. To improve the interactivity and usability of the framework, we plan to develop an efficient C++-based application with highly interactive visualisation, editing and prediction capabilities as well as parallel execution of segment propagations in the background. The envisioned application will provide a powerful tool for detailed analyses of large-scale 3D microscopic image data sets.

Author’s Statement

Research funding: The project is funded by the Helmholtz Association (Program BioInterfaces; NP, RM, MR), the German Research Foundation DFG (JS, RM, Grant No. MI 1315/4), the German Federal Ministry of Education and Research (BMBF) (IVM, Grant No. 13GW0044), the University of Heidelberg (RS, JP, IVM), the University of Kaiserslautern (HL) and the Heidelberg Karlsruhe Research Partnership HEiKA (JS, MR, HL). Conflict of interest: Authors state no conflict of interest. Informed consent: Informed consent has been obtained from all individuals included in this study. Ethical approval: The conducted research is not related to either human or animal use.

References

[1] Dhenain M, Ruffins SW, Jacobs RE. Three-dimensional digital mouse atlas using high-resolution MRI. Dev Biol. 2001;232:458–70.

[2] Mikut R, Dickmeis T, Driever W, Geurts P, Hamprecht F, Kausler BX, et al. Automated processing of zebrafish imaging data – a survey. Zebrafish. 2013;10:401–21.

[3] Peng H, Chung P, Long F, Qu L, Jenett A, Seeds AM, et al. BrainAligner: 3D registration atlases of Drosophila brains. Nat Methods. 2011;8:493–8.

[4] Berning M, Boergens K, Helmstaedter M. SegEM: efficient image analysis for high-resolution connectomics. Neuron. 2015;87:1193–206.

[5] Genovesio A, Liedl T, Emiliani V, Parak W, Coppey-Moisan M, Olivo-Marin J. Multiple particle tracking in 3D+t microscopy: method and application to the tracking of endocytosed quantum dots. IEEE Trans Image Process. 2006;15:1062–70.

[6] Kreshuk A, Straehle C, Sommer C, Koethe U, Cantoni M, Knott G, et al. Automated detection and segmentation of synaptic contacts in nearly isotropic serial electron microscopy images. PLoS One. 2011;6:e24899.

[7] Sommer C, Straehle C, Kothe U, Hamprecht F. ilastik: interactive learning and segmentation toolkit. In: Proc. IEEE International Symposium on Biomedical Imaging: From Nano to Macro, IEEE; 2011. p. 230–3.

[8] Saalfeld S, Cardona A, Hartenstein V, Tomancak P. CATMAID: collaborative annotation toolkit for massive amounts of image data. Bioinformatics. 2009;25:1984–6.

[9] Portl J, Stegmaier J, Mang I, Schröder R, Reischl M, Leitte H. Visualization for error-controlled surface reconstruction from large electron microscopy image stacks. In: Proc. IEEE Visualisation Conference; 2015.

[10] Bradski G. The OpenCV Library. Dr. Dobb's Journal of Software Tools; 2000.

[11] Mortensen EN, Barrett WA. Intelligent scissors for image composition. In: Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, ACM; 1995. p. 191–8.

[12] Antiga L. Generalizing vesselness with respect to dimensionality and shape. The Insight Journal. 2007;3:1–14.

[13] Saalfeld S, Fetter R, Cardona A, Tomancak P. Elastic volume reconstruction from series of ultra-thin microscopy sections. Nat Methods. 2012;9:717–20.

[14] Bartschat A, Hübner E, Reischl M, Mikut R, Stegmaier J. XPIWIT – an XML pipeline wrapper for the Insight Toolkit. Bioinformatics. 2016;32:315–7.

[15] Alahi A, Ortiz R, Vandergheynst P. FREAK: fast retina keypoint. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2012. p. 510–7.

About the article

Published Online: 2016-09-30

Published in Print: 2016-09-01


Citation Information: Current Directions in Biomedical Engineering, Volume 2, Issue 1, Pages 437–441, ISSN (Online) 2364-5504, DOI: https://doi.org/10.1515/cdbme-2016-0097.


©2016 Markus Reischl et al., licensee De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (CC BY-NC-ND 4.0).
