BY-NC-ND 3.0 license Open Access Published by De Gruyter September 12, 2015

Integrating multimodal information for intraoperative assistance in neurosurgery

  • U. Eisenmann, R. Metzner, C.R. Wirtz and H. Dickhaus


Computer-assisted planning of complex neurosurgical interventions benefits from a variety of specific functions and tools. However, commercial planning and neuronavigation systems are rather restrictive concerning the availability of innovative methods such as novel imaging modalities, fiber tracking algorithms or electrical dipole mapping. In this respect there is a demand for modular neurosurgical planning systems offering flexible interfaces for easy enhancement. Furthermore, all relevant planning information should be available within neuronavigation. In this work we present a planning system providing these capabilities and demonstrate its suitability in a clinical setting. Our Multimodal Planning System (MOPS 3D) offers a variety of tools such as the definition of trajectories for minimally invasive surgery, segmentation of ROIs, and integration of functional information from atlas maps or magnetoencephalography. It also supplies plugin interfaces for future extensions. For intraoperative application, MOPS is coupled with the neuronavigation system Brainlab Vector Vision Cranial/ENT (VVC). We evaluated MOPS in the Department of Neurosurgery at the University Hospital Heidelberg. Surgical planning and navigation were performed in 5 frequently occurring clinical cases. The time necessary for planning was between 5 and 15 minutes, including data import, segmentation and planning tasks. The additional information intraoperatively provided by MOPS 3D was highly appreciated by the neurosurgeons and the performance was satisfactory.

1 Introduction

Neurosurgical interventions have to be prudently planned considering all available clinical information. In this respect computer-assisted planning systems play an important role, as they offer a variety of specific supporting functions and useful tools [1]. As innovative information sources such as new imaging modalities, novel algorithms for fiber tracking or electrical dipole mapping are becoming readily available, it is crucial to investigate them at an early stage in a clinical setting to judge their impact on specific types of interventions.

Commercial providers are usually rather hesitant with respect to the integration of new functionalities, as they have to conform to laws and regulations like the MDA (Medical Device Amendments) in the U.S. or the MPG (German Medical Devices Act). Thus, there is a need for modular scientific planning systems that offer flexible interfaces for the rapid and efficient integration of new methods and functions. Several scientific systems follow this strategy, e.g. 3D Slicer [2] and our Multimodal Planning System (MOPS 3D) [3], offering sophisticated functionalities for different planning stages such as visualizing morphological and functional data, atlas structures and segmentation objects within the surrounding tissue.

Nowadays neuronavigation is commonly used in daily clinical routine and highly appreciated as a supportive tool. Consequently, the availability of multimodal information within neuronavigation is highly desirable [4].

An integration of flexible scientific planning functionalities into a certified commercial navigation system seems to be a promising approach. Intraoperatively, it would be desirable to interact only with the familiar neuronavigation system while still incorporating the additional information of the scientific planning system. The additional workload for the OR team should be kept as low as possible.

Therefore we present an approach that establishes the integration of the described functionalities and its application in a clinical setting. The implementation is based on our flexible scientific planning system MOPS 3D and the commercially available neuronavigation system Vector Vision Cranial/ENT (VVC) (BrainLAB, Munich, Germany).

2 System design

MOPS 3D is developed in C++ using specific open source libraries such as the Visualization Toolkit (VTK), the Insight Segmentation and Registration Toolkit (ITK) and the Volume Rendering Engine (Voreen) [5].

2.1 Data acquisition and import

A plugin mechanism was established to allow flexible extension with novel planning information without changing the core functionality. As the plugins are directly linked with the corresponding visualization and dialog routines, the whole planning pipeline can be customized rather easily.
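
The paper does not publish the plugin API, but the mechanism it describes can be sketched as a registry of import plugins. All names here (`ImportPlugin`, `PluginRegistry`, `PlanningItem`) are hypothetical illustrations, not the actual MOPS 3D classes:

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

// Hypothetical sketch of an import-plugin interface: each plugin announces
// the data format it handles and converts a file into planning items that
// the core then hands to the visualization and dialog routines.
struct PlanningItem {
    std::string name;   // shown in the structure browser
    std::string type;   // e.g. "image", "fMRI", "atlas"
};

class ImportPlugin {
public:
    virtual ~ImportPlugin() = default;
    virtual std::string formatName() const = 0;
    virtual std::vector<PlanningItem> importFile(const std::string& path) = 0;
};

// The registry decouples the core from concrete plugins: new data sources
// are added by registering another plugin, not by changing core code.
class PluginRegistry {
public:
    void add(std::unique_ptr<ImportPlugin> p) {
        std::string key = p->formatName();
        plugins_[key] = std::move(p);
    }
    ImportPlugin* find(const std::string& format) {
        auto it = plugins_.find(format);
        return it == plugins_.end() ? nullptr : it->second.get();
    }
private:
    std::map<std::string, std::unique_ptr<ImportPlugin>> plugins_;
};
```

The point of the design is that the core only ever sees the abstract interface, which is what allows new formats to be investigated "at an early stage" without touching certified core functionality.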

So far we developed import routines for data sources being used in the neuroradiology and neurosurgery departments at the Heidelberg University Hospital. These comprise various formats for imaging modalities, functional data and elastically matched atlas data [6].

Additional processing features provide rigid and elastic matching of different imaging modalities.

2.2 Planning features

All imported data is automatically listed in the so-called structure browser. This tree-like widget is the central element for controlling which planning items are visualized during the different phases of the planning process and for customizing specific items. This customization pertains to the type of visualization (2D contour, 3D, clipping, colour, transparency, etc.) and to options such as the colour coding for fMRI.

The visualization component offers typical interaction possibilities e.g. rotation, panning and zooming into the scene. Objects may be visualized as 3D structures or as 2D overlays onto the three orthogonal MRI cross-sections (see Figure 1).

Figure 1 Surgical planning view: tumour (3D, blue), fMRI activations (2D, orange) and elastically matched atlas objects (2D, red and green)

Furthermore, MOPS 3D offers a segmentation toolbox comprising thresholding, region growing, interactive segmentation tools and filter algorithms. It is also possible to import segmented structures from BrainLAB's VVC system; as the surgeon is usually accustomed to using this system in daily routine, he may prefer this approach. In addition, segmented structures may be imported from VTK-compatible formats to support scientific prototypes.
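
Of the toolbox methods named above, region growing is the easiest to illustrate: starting from a seed voxel, neighbours are accepted as long as their intensity stays within a threshold band. The following 2D version is an illustrative sketch, not the MOPS 3D implementation (which operates on 3D image volumes):

```cpp
#include <queue>
#include <utility>
#include <vector>

// Illustrative 2D region growing: grow from a seed pixel, accepting
// 4-connected neighbours whose intensity lies in [lo, hi].
std::vector<std::vector<bool>> regionGrow(
    const std::vector<std::vector<int>>& img,
    int seedRow, int seedCol, int lo, int hi)
{
    const int rows = static_cast<int>(img.size());
    const int cols = static_cast<int>(img[0].size());
    std::vector<std::vector<bool>> mask(rows, std::vector<bool>(cols, false));
    std::queue<std::pair<int, int>> frontier;
    frontier.push({seedRow, seedCol});
    mask[seedRow][seedCol] = true;
    const int dr[] = {-1, 1, 0, 0}, dc[] = {0, 0, -1, 1};
    while (!frontier.empty()) {
        auto [r, c] = frontier.front();
        frontier.pop();
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
            if (mask[nr][nc] || img[nr][nc] < lo || img[nr][nc] > hi) continue;
            mask[nr][nc] = true;      // pixel joins the segmented region
            frontier.push({nr, nc});
        }
    }
    return mask;
}
```

The interactive part of such a tool typically amounts to letting the surgeon place the seed and adjust the intensity band until the segmented region matches the target structure.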

The surgeon may interactively add annotations such as measured distances between functional areas and a tumour or other critical structures. He can also define trajectories indicating the planned approach during the operation. EEG/MEG information is displayed as dipoles; fMRI data can be selectively displayed as 3D activity clusters or 2D activation maps.

2.3 Defining intervention modes

Planning a neurosurgical intervention comprises distinct tasks (e.g. planning the trepanation) that require specific planning items such as fMRI activations or segmentation objects. The neurosurgeon may choose and configure the appropriate items using the structure browser. The final visualization composition may be stored as an “intervention mode” (IM) and recalled during the planning procedure or later during the intervention. Thus, the neurosurgeon can, while planning, define the IMs he may need during the intervention. This approach can significantly reduce the number of system interactions during the intervention.

An IM comprises a name and a description specified by the surgeon, all currently visualized structures and the related settings (2D/3D, clipping, transparency, etc.). IMs are stored in an open and flexible XML format and can therefore be reused by other surgical systems.
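
The paper does not specify the XML schema; a stored intervention mode of the kind described could look roughly as follows, with all element and attribute names hypothetical:

```xml
<interventionMode name="Trepanation" description="Approach planning">
  <structure id="tumour"     mode="3D" colour="#3050ff" transparency="0.2"/>
  <structure id="fmri_motor" mode="2D" colour="#ff8000"/>
  <structure id="atlas_M1"   mode="2D" colour="#ff0000" clipping="true"/>
</interventionMode>
```

Storing only references to structures plus their display settings keeps an IM small and, as the text notes, makes it readable by other surgical systems that can parse the format.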

2.4 Intraoperative assistance

To cope with the limited space inside the OR and the high workload of the staff, three main aims were postulated for the integration task:

  • All planning information from MOPS 3D shall be usable for neuronavigation

  • Intraoperative user interaction shall be minimized

  • The MOPS 3D workstation may be located outside the OR

The Brainlab Vector Vision Cranial/ENT (VVC) system was selected as the primary neuronavigation system because the related “Vector Vision Link” library (VVL) offers suitable functionality for interoperating over a TCP/IP network. VVL is developed in C++ and is based on VTK data structures.

Nevertheless, MOPS 3D should provide an open and flexible solution that can also be adapted to other neuronavigation systems. Therefore, a generic interface was designed in which multiple software drivers for specific navigation systems or tracking hardware can be implemented; for example, a driver for the NDI Polaris tracking system was implemented for use in the laboratory.
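
Such a generic driver interface might look like the following sketch. The class and member names are hypothetical (the actual MOPS 3D interface is not published); the essential idea is that the planning core only sees tool poses, not the hardware behind them:

```cpp
#include <array>
#include <string>
#include <vector>

// Hypothetical pose of one tracked tool in patient coordinates.
struct ToolPose {
    std::string toolName;
    std::array<double, 3> position;    // mm
    std::array<double, 4> orientation; // rotation as a quaternion (w, x, y, z)
};

// Hypothetical generic driver interface: concrete drivers (e.g. for VVC via
// VVL, or for an NDI Polaris in the laboratory) implement it, so the core
// remains unaware of the specific navigation system or tracking hardware.
class NavigationDriver {
public:
    virtual ~NavigationDriver() = default;
    virtual bool connect() = 0;
    // Return the latest pose of every currently tracked tool.
    virtual std::vector<ToolPose> poll() = 0;
};
```

Swapping the VVC driver for a laboratory tracking driver then requires no change to the navigation logic, which is exactly what makes bench testing outside the OR practical.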

Figure 2 shows the communication channels established for integrating MOPS 3D and the VVC system. VVC sends the tracking data, including the position and rotation of each tracked tool. MOPS 3D processes this data to perform navigation on all multimodal planning data. The resulting navigated views are sent back to VVC as 2D images (vtkImageData objects).

Figure 2 Communication channels between MOPS 3D and VVC. Top: Tracking data is sent from VVC to MOPS 3D. Center: Navigated views are sent from MOPS 3D to VVC. Bottom: Optionally, GUI commands may be sent from VVC to MOPS 3D.

Additionally, it must be possible to interact with MOPS 3D intraoperatively to select the navigated views and to change the intervention modes. Therefore a separate communication channel was established to remotely control MOPS 3D from VVC as follows:

  1. MOPS 3D dynamically generates a GUI representation (2D image) containing all required functions (e.g. intervention modes) that shall be remotely controlled and sends it to VVC.

  2. VVC monitors the mouse events inside the remote GUI visualization and reports the related information to MOPS 3D.

  3. MOPS 3D processes the received mouse events and executes the associated function.
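
Step 3 implies mapping a reported click position in the transmitted GUI image back to a function. A minimal sketch of such a dispatch table (all names hypothetical, not the actual MOPS 3D code):

```cpp
#include <functional>
#include <utility>
#include <vector>

// Hypothetical sketch of step 3: the GUI image sent to VVC is backed by a
// list of rectangular hot regions; a reported click is hit-tested against
// them and the associated function is executed.
struct HotRegion {
    int x, y, w, h;                // pixel rectangle in the GUI image
    std::function<void()> action;  // e.g. activate an intervention mode
};

class RemoteGui {
public:
    void addRegion(HotRegion r) { regions_.push_back(std::move(r)); }

    // Called when VVC reports a mouse click at pixel (px, py).
    bool handleClick(int px, int py) {
        for (const auto& r : regions_) {
            if (px >= r.x && px < r.x + r.w && py >= r.y && py < r.y + r.h) {
                r.action();
                return true;
            }
        }
        return false;  // click fell outside every region
    }
private:
    std::vector<HotRegion> regions_;
};
```

Because the GUI image is generated dynamically, adding a new remotely controllable function only means rendering another button and registering its region, with no change on the VVC side.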

Utilizing these communication channels, MOPS planning data, including navigation information, can be reused in VVC. The MOPS 3D workstation can be located outside the OR as long as an Ethernet connection exists, since the most important commands can be issued remotely via the GUI displayed in VVC.

3 Evaluation

3.1 Planning component

We evaluated the system’s planning component at the University Hospital Heidelberg. Surgical planning was performed for 30 patients by 5 different users. The average planning time was about 10 minutes, including all data import, segmentation and planning tasks, but varied with the type of pathology and the number of planning items. The planning module’s performance was monitored: even in complex planning scenarios, the frame rate did not drop below 30 frames per second (quad-core CPU at 2.4 GHz, Nvidia GTX 460, Windows 7). The neurosurgeons appreciated the comprehensive planning procedure, comprising an easy selection of multimodal information and a choice of different visualization modes. However, preparation steps like data import, registration and segmentation have so far had to be carried out with the assistance of a technician because of their complexity.

3.2 Intraoperative coupling

The coupling between MOPS 3D and VVC was evaluated in 5 neurosurgical interventions involving 4 neurosurgeons. The intraoperative performance of the link feature depends strongly on the network speed and the number of views to be exported. Usually 4 views are displayed simultaneously in MOPS 3D, showing the three orthogonal MRI cross-sections and an additional 3D view.

One exported MOPS 3D view (512×512 pixels) can be displayed in VVC at 20 frames per second (fps) if Gigabit Ethernet is available; the measured network load was about 110 Mbit/s. When exporting all four views from MOPS 3D, an average of 9 fps can be obtained (network load: about 250 Mbit/s). The neurosurgeons rated this performance as adequate for neurosurgical interventions because fast movements of tracked instruments or of the operating microscope are typically not required.
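
These loads are in the range one would expect for uncompressed image transfer. A back-of-the-envelope check, assuming uncompressed 24-bit RGB frames (the paper does not state the pixel encoding):

```cpp
// Back-of-the-envelope estimate of the network load for transferring
// uncompressed frames: width * height * bytes/pixel * 8 bits * views * fps.
double mbitPerSecond(int width, int height, int bytesPerPixel,
                     int views, double fps) {
    return width * height * bytesPerPixel * 8.0 * views * fps / 1e6;
}
// One 512x512 view at 20 fps gives ~126 Mbit/s, the same order as the
// measured ~110 Mbit/s; four views at 9 fps give ~226 Mbit/s vs. ~250 Mbit/s.
```

The rough agreement suggests the views were sent essentially uncompressed, which also explains why Gigabit Ethernet is needed for acceptable frame rates.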

The system worked reliably during the surgical interventions, which lasted 2 to 4 hours. The remote control for adjusting settings and choosing intervention modes in MOPS 3D was used regularly and highly appreciated. It integrates smoothly into the intraoperative workflow as it can be operated via VVC’s touch screen.

The cases used for evaluation comprised complex tumour resections and the clipping of aneurysms. In the tumour resections, the comprehensive 3D visualization of supplementary data like fMRI activations and elastically matched atlas structures provided valuable additional information.

Interventions on aneurysms are commonly not assisted by neuronavigation. However, the 3D visualization of high-resolution rotational angiography (RA) data matched onto the CTA was helpful during the intervention for dissecting the aneurysm and optimally positioning the clip. The clipping procedure is challenging because supplying vessels must not be inadvertently occluded. This can be supported by indicating the current position of the tool in relation to the aneurysm and the surrounding vessels (see Figure 3).

Figure 3 Left: Navigated MOPS view in VVC: 3D visualization of rotational angiography (RA). The green sphere corresponds to the tracked focus point of the operating microscope. The message “Uncontrolled external view content” is generated by VVC to indicate an external data source. Right: Corresponding microscope view after clipping of the aneurysm. The white cross corresponds to the tracked focus point.

4 Discussion and outlook

MOPS 3D was successfully used to assist neurosurgeons in planning and performing interventions. The concept of integrating a flexible planning system with a commercial neuronavigation system is feasible; however, the development effort is rather high.

In recent years, different standards for coupling devices in the OR have been proposed, such as OpenIGTLink [7]. So far only a few vendors have implemented corresponding interfaces. Brainlab offers an OpenIGTLink-conformant interface for some of its navigation systems. Technical evaluations indicate that the performance for displaying remote information could be considerably improved compared to VVL. However, some important features, e.g. for implementing a remote GUI, are no longer supported.

If more vendors of OR equipment supported comparable communication standards, this could sustainably stimulate scientific developments in the operating theatre.

Author's Statement

Conflict of interest: Authors state no conflict of interest. Informed consent: All individuals were informed before the intervention about the evaluation inside the OR. Ethical approval: The certified BrainLab VVC system was used for evaluation; additional information from MOPS 3D appearing therein was clearly tagged with a warning note. As this supplementary visual presentation is based on the same underlying data, no therapeutic decision relied on it; a formal approval by an ethics committee was therefore not mandatory.


[1] Stadie AT, Kockro RA. Mono-stereo-autostereo: the evolution of 3-dimensional neurosurgical planning. Neurosurgery 2013;72 Suppl 1:63–77. doi:10.1227/NEU.0b013e318270d310.

[2] Gering DT, Nabavi A, Kikinis R, Hata N, O’Donnell LJ, Grimson WE, et al. An integrated visualization system for surgical planning and guidance using image fusion and an open MR. J Magn Reson Imaging 2001;13:967–75. doi:10.1002/jmri.1139.

[3] Metzner R, Eisenmann U, Wirtz CR, Dickhaus H. Pre- and intraoperative processing and integration of various anatomical and functional data in neurosurgery. Stud Health Technol Inform 2006;124:989–94.

[4] Tanaka Y, Nariai T, Momose T, Aoyagi M, Maehara T, Tomori T, et al. Glioma surgery using a multimodal navigation system with integrated metabolic images. J Neurosurg 2009;110:163–72. doi:10.3171/2008.4.17569.

[5] Meyer-Spradow J, Ropinski T, Mensmann J, Hinrichs K. Voreen: a rapid-prototyping environment for ray-casting-based volume visualizations. IEEE Comput Graph Appl 2009;29:6–13. doi:10.1109/MCG.2009.130.

[6] Ganser KA, Dickhaus H, Metzner R, Wirtz CR. A deformable digital brain atlas system according to Talairach and Tournoux. Med Image Anal 2004;8:3–22.

[7] Tokuda J, Fischer GS, Papademetris X, Yaniv Z, Ibanez L, Cheng P, et al. OpenIGTLink: an open network protocol for image-guided therapy environment. Int J Med Robot Comput Assist Surg 2009;5:423–34. doi:10.1002/rcs.274.

Published Online: 2015-9-12
Published in Print: 2015-9-1

© 2015 by Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
