Open Access, licensed under CC BY-NC-ND 3.0. Published by De Gruyter, September 12, 2015.

Closed-loop approach for situation awareness of medical devices and operating room infrastructure

  • Max Rockstroh, Stefan Franke and Thomas Neumuth


In recent years, approaches for information and control integration in the digital operating room have emerged. A major step towards an intelligent operating room and a cooperative technical environment would be autonomous adaptation of medical devices and systems to the surgical workflow. The OR staff should be freed from information seeking and maintenance tasks. We propose a closed-loop concept integrating workflow monitoring, processing and (semi-)automatic interaction to bridge the gap between OR integration of medical devices and workflow-related information management.

Four steps were identified for the implementation of workflow-driven assistance functionalities. The processing steps in the closed loop of workflow-driven assistance could either be implemented with centralized responsible components or in a cooperative agent-based approach. However, both strategies require a common framework and terminology to ensure interoperability between the components, the medical devices (actors) and the OR infrastructure.

1 Motivation

Surgical interventions tend to be complex processes with high inter-patient variability. Reliable and effective technical assistance for surgical procedures is challenging to implement. In recent years, approaches for information and control integration in the digital operating room (OR) have emerged. Industry and academia developed integrated OR solutions [1–3]. However, the next step towards an intelligent operating room and a cooperative technical environment would be autonomous adaptation of medical devices and systems to the surgical workflow. This requires an integration of methods for surgical workflow recognition and management. Thus, an effective support of the surgeon also depends on machine-readable information about the surgical procedure and situation. The aim is to unburden the surgeon and the OR staff from information seeking and maintenance tasks so that they can focus on the surgical activities. The (semi-)automatic assistance functionalities require a dynamic system behaviour. To this end, the medical devices and systems need a general understanding of the surgical situation they are operated in. The aim of transforming the operating room into a cooperative technical environment for the surgeon defines the requirements for the monitoring of surgical processes and the subsequent processing.

In the present work, we propose a closed-loop concept for the integration of workflow monitoring, processing and (semi-)automatic interaction, thus, bridging the gap between technical integration of medical devices and workflow-related information.

Figure 1: Schematic representation of the closed loop of workflow-driven surgical assistance.

2 Methods

The prerequisites for workflow-driven assistance are investigated in the following subsections.

2.1 Workflow-driven assistance

Workflow-driven assistance systems can be seen as the actors of a closed loop (see Figure 1). The surgical process is the entity influenced by the technical actors (medical devices and systems) and the monitored entity at the same time. The monitoring gathers information on the state of the process and the participating entities (surgeon, OR staff, patient, technical resources etc.) in a structured manner. These data can be analysed to recognize performed activities. A situation description is generated from those data by interpreting them using a knowledge base. Thus, the information can be enriched to provide a comprehensive representation of the operating context for the medical devices and systems. They need to decide about automatic actions based on the contextual information.
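The loop sketched above can be illustrated as a single pass from monitoring to action. This is a minimal, illustrative sketch only; all function and key names (monitor, analyse, interpret, act) are hypothetical placeholders and not part of any published implementation.

```python
# Minimal sketch of one pass through the closed loop described above.
# All function and key names are illustrative placeholders.

def monitor(raw_state):
    """Gather structured data on the process state and entities."""
    return {"signals": raw_state}

def analyse(data):
    """Recognize the performed activity from the monitored data."""
    return {"activity": data["signals"]["activity"]}

def interpret(recognized, knowledge_base):
    """Enrich the recognized activity into a situation description."""
    phase = knowledge_base.get(recognized["activity"], "unknown")
    return {"activity": recognized["activity"], "phase": phase}

def act(situation, actors):
    """Let each technical actor decide on an automatic action."""
    return [actor(situation) for actor in actors]

# Toy knowledge base mapping an activity to a phase, and one actor.
kb = {"use_pointer": "registration"}
display = lambda s: "show navigation" if s["phase"] == "registration" else "idle"
actions = act(interpret(analyse(monitor({"activity": "use_pointer"})), kb), [display])
```

The pipeline shape mirrors the figure: the actors at the end of the chain influence the very process that is monitored at its start.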

The following subsections describe the four major steps in the workflow-driven assistance loop in more detail.

2.2 Situation interpretation and assistance functionality

We consider a specific atomic action, performable by a technical system, that is envisaged for automation. The responsible system needs to decide on its execution based on the current operating context. Thus, the machine-readable situation description must contain all the information relevant to safely distinguish situations where the action should be performed from situations where it is not applicable. This decision may depend on the current situation, previous situations, the abstract intentions behind performed activities and estimations of the further process course. Especially the latter aspects cannot be derived from observations of the current process instance but require some kind of knowledge base. The most common implementations are based on surgical process models at different granularity levels [4, 5]. The interpretation combines the knowledge base with situational information; these information entities should be represented using a vocabulary compatible with the knowledge base.
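A phase-level surgical process model, as used as a knowledge base here, can be sketched as follows. The phase sequence follows the neurosurgical use case in Section 3; the activity-to-phase mapping is an assumption chosen for illustration.

```python
# Hedged sketch of a knowledge base as a coarse, phase-level surgical
# process model. The phase sequence follows the use case in Section 3.

PHASE_ORDER = ["preparation", "craniotomy", "resection", "closure"]

def current_phase(recognized_activities, activity_to_phase):
    """Map the most recently recognized activity to a phase."""
    return activity_to_phase.get(recognized_activities[-1], "unknown")

def expected_next_phase(phase):
    """Estimate the further process course from the model."""
    i = PHASE_ORDER.index(phase)
    return PHASE_ORDER[i + 1] if i + 1 < len(PHASE_ORDER) else None

mapping = {"open_dura": "craniotomy"}  # illustrative activity-phase mapping
phase = current_phase(["open_dura"], mapping)
```

Such a model supplies exactly the information that cannot be observed in the current process instance: the phase context of an activity and an estimate of what comes next.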

2.3 Process monitoring and data analysis

A data analysis step is required to transform the available data on the process state into a vocabulary that fulfils the requirements of the situation interpretation. Depending on the type of data monitored, numerous approaches and methods are available to extract information on current surgical activities. Most workflow recognition techniques are based on a set of predefined categories (e.g. a vocabulary of surgical work steps). The recognition task has been studied at various granularity levels, from the whole procedure down to the surgeon’s hand motions, but efforts have focused especially on phase and activity recognition.

Surgical processes need to be continuously monitored to gather information on the current activities. Intrinsic and extrinsic data sources can be used for the monitoring task. Intrinsic data sources include video signals from endoscopes, microscopes and room cameras [6–9]. Data provided by medical devices such as anaesthesia or tracking systems have also been used. Several approaches introduced additional sensors into the OR (e.g. [10, 11]) to enrich the signals for the monitoring of surgical interventions; RFID [6] and inertial sensors are the most common. The monitored data need to be structured and easily accessible for further processing. An ideal monitoring technology should also not influence the surgical process, i.e. it should not introduce any additional workload.
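A structured, easily accessible form for such monitored data could look like the event record below. This is a sketch under assumptions; the field names are illustrative and not taken from any specific OR integration standard.

```python
# Illustrative structure for monitored data. Field names are assumptions
# and not taken from any specific OR integration standard.
from dataclasses import dataclass, field
import time

@dataclass
class MonitoringEvent:
    source: str     # e.g. "endoscope", "rfid", "device_network"
    payload: dict   # structured sensor or device data
    timestamp: float = field(default_factory=time.time)

event = MonitoringEvent("device_network", {"navigation_status": "registered"})
```

Keeping every data source behind one uniform event type is what makes the subsequent analysis step independent of whether the data are intrinsic (device network, video) or extrinsic (added sensors).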

3 Applications

We applied the described approach to an example use case and examined the requirements for each processing step. The removal of intracranial tumours was chosen as the example clinical procedure because of its surgical and technical complexity, which indicates a need for workflow-driven assistance and automation. The procedure can be split into four major phases: the preparation of patient and devices, the craniotomy to open the skull and dura mater for access to the tumour, the resection of the tumour, and a closure phase. The commonly used technical devices include a microscope, a neuro navigation system, a trephine and an HF device, among others. Depending on the clinical case, ultrasound and electrophysiology might also be used during the intervention.

The technical setup in the OR includes several information sources that provide their data via video signals (microscope, ultrasound, navigation system, PACS viewer etc.). A centralized display for the different video sources may help to reduce ergonomic problems [12] and the information seeking workload. The selection of an appropriate video source is an action worth considering for automation, as the relevance of a video source highly depends on the surgical situation. We consider the atomic action of switching the centralized display to the neuro navigation system. Assume the video signal of the navigation system should be displayed automatically in two different situations:

  1. During landmark-based registration in preparation and

  2. While checking the tumour location during surgery.

In both situations, we can assume that the pointer is used by the surgeon. However, the pointer might also be moved on other occasions (e.g. while sorting the instrument table). Hence, contextual information is required to reliably distinguish the desired situations 1 and 2 from irrelevant incidents. This implies that the operating context of the video switching system needs to include information on the current activity (at least with respect to the pointer) as well as on whether it is the surgeon’s intention to either register the navigation or check the tumour location. A possible rule set to estimate the latter information could be:

  1. In preparation phase AND patient dataset was loaded AND pointer is in use

  2. Not in preparation phase AND navigation was successfully registered AND pointer is in use

The intervention phase can be derived from the knowledge base, i.e. a surgical process model including phases using the recognized activities. Thus, the process data analysis needs to provide the current activity and the status of the navigation system.
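The two rules above can be transcribed directly into predicates over the operating context. The context keys (phase, dataset_loaded, navigation_registered, pointer_in_use) are hypothetical names for the required information entities, not identifiers from an actual implementation.

```python
# The two rules above, transcribed into predicates. The context keys
# (phase, dataset_loaded, navigation_registered, pointer_in_use) are
# hypothetical names for the required information entities.

def show_navigation(ctx):
    rule_1 = (ctx["phase"] == "preparation"
              and ctx["dataset_loaded"]
              and ctx["pointer_in_use"])
    rule_2 = (ctx["phase"] != "preparation"
              and ctx["navigation_registered"]
              and ctx["pointer_in_use"])
    return rule_1 or rule_2

# Pointer used during resection with a registered navigation system:
# this matches rule 2, so the display would switch to the navigation view.
ctx = {"phase": "resection", "dataset_loaded": True,
       "navigation_registered": True, "pointer_in_use": True}
```

Pointer movement while the preconditions of both rules are unmet (e.g. sorting the instrument table before the dataset is loaded) correctly leaves the display unchanged.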

The monitoring needs to provide data that allow for the identification of activities relevant for the recognition of the phase. Additionally, the monitoring component must have access to the status of the navigation system. With the advent of integrated OR technologies such as OR.NET [3], we assume that the required data will be available directly and intrinsically through the OR network. Hence, the discussed automation use case can be implemented in a digital operating room.

The simplified strategy for the autonomous selection of the navigation system as the primary information source for the surgeon covered 95.3% of the 190 uses of the navigation system in the recorded brain tumour removal cases used as a test set.

4 Discussion

In the present work, we discussed the four steps necessary for the implementation of workflow-driven assistance functionalities and examined the cascades of requirements.

The application of the closed-loop concept was demonstrated with a simplified example use case. More complex system behaviour will require additional rules and information entities to optimally support the surgeon and the OR staff. For instance, the autonomous selection of video sources for the centralized display needs to implement the loop for a variety of devices and should include priorities as well as a manual override functionality.

The processing steps in the closed loop of workflow-driven assistance could either be implemented with centralized responsible components or in a cooperative agent-based approach. However, both strategies require a common framework and terminology in the implementation to ensure interoperability between the components, the medical devices (actors) and the OR infrastructure. The development process for autonomous adaptation of devices and assistance functionalities must also take risk analysis and management aspects into account. In future work, it will be valuable to examine strategies to integrate risk management tools early in the development process.

The implementation of autonomous medical device configuration and adaptation to situation-dependent surgical requirements will reduce the maintenance overhead for the surgeon and the OR staff and thus contribute to safe and efficient patient care.

Author's Statement

  1. Conflict of interest: Authors state no conflict of interest.

  2. Informed consent: Informed consent has been obtained from all individuals included in this study.

  3. Ethical approval: The research related to human use complied with all relevant national regulations and institutional policies, was conducted in accordance with the tenets of the Helsinki Declaration, and was approved by the authors’ institutional review board or equivalent committee.


[1] K. Cleary, A. Kinsella, and S. K. Mun, “OR 2020 workshop report: Operating room of the future,” Int. Congr. Ser., vol. 1281, pp. 832–838, May 2005. doi:10.1016/j.ics.2005.03.279

[2] H. U. Lemke and M. W. Vannier, “The operating room and the need for an IT infrastructure and standards,” Int. J. Comput. Assist. Radiol. Surg., vol. 1, no. 3, pp. 117–121, Nov. 2006. doi:10.1007/s11548-006-0051-7

[3] B. Andersen, H. Ulrich, A.-K. Kock, J.-H. Wrage, and J. Ingenerf, “Semantic interoperability in the OR.NET project on networking of medical devices and information systems – A requirements analysis,” in 2014 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), 2014, pp. 428–431. doi:10.1109/BHI.2014.6864394

[4] P. Jannin and X. Morandi, “Surgical models for computer-assisted neurosurgery,” NeuroImage, vol. 37, no. 3, pp. 783–791, Sep. 2007. doi:10.1016/j.neuroimage.2007.05.034

[5] T. Neumuth, N. Durstewitz, M. Fischer, G. Strauss, A. Dietz, J. Meixensberger, P. Jannin, K. Cleary, H. U. Lemke, and O. Burgert, “Structured recording of intraoperative surgical workflows,” 2006, vol. 6145, p. 61450A. doi:10.1117/12.653462

[6] T. Blum, H. Feußner, and N. Navab, “Modeling and segmentation of surgical workflow from laparoscopic video,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2010, T. Jiang, N. Navab, J. P. W. Pluim, and M. A. Viergever, Eds. Springer Berlin Heidelberg, 2010, pp. 400–407. doi:10.1007/978-3-642-15711-0_50

[7] M. Unger, C. Chalopin, and T. Neumuth, “Vision-based online recognition of surgical activities,” Int. J. Comput. Assist. Radiol. Surg., pp. 1–8, Mar. 2014. doi:10.1007/s11548-014-0994-z

[8] F. Lalys, L. Riffaud, D. Bouget, and P. Jannin, “A framework for the recognition of high-level surgical tasks from video images for cataract surgeries,” IEEE Trans. Biomed. Eng., vol. 59, no. 4, pp. 966–976, Apr. 2012. doi:10.1109/TBME.2011.2181168

[9] F. Lalys, L. Riffaud, X. Morandi, and P. Jannin, “Surgical phases detection from microscope videos by combining SVM and HMM,” in Medical Computer Vision. Recognition Techniques and Applications in Medical Imaging, B. Menze, G. Langs, Z. Tu, and A. Criminisi, Eds. Springer Berlin Heidelberg, 2011, pp. 54–62. doi:10.1007/978-3-642-18421-5_6

[10] O. Weede, F. Dittrich, H. Worn, B. Jensen, A. Knoll, D. Wilhelm, M. Kranzfelder, A. Schneider, and H. Feussner, “Workflow analysis and surgical phase recognition in minimally invasive surgery,” in 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2012, pp. 1080–1074. doi:10.1109/ROBIO.2012.6491111

[11] N. Padoy, T. Blum, H. Feussner, M.-O. Berger, and N. Navab, “On-line recognition of surgical activity for monitoring in the operating room,” in Proceedings of the 20th National Conference on Innovative Applications of Artificial Intelligence – Volume 3, Chicago, Illinois, 2008, pp. 1718–1724.

[12] M. Rockstroh, S. Franke, and T. Neumuth, “A workflow-driven surgical information source management,” in Int. J. Comput. Assist. Radiol. Surg., Heidelberg, 2013, vol. 8 (1), pp. 189–191.

Published Online: 2015-9-12
Published in Print: 2015-9-1

© 2015 by Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
