
Current Directions in Biomedical Engineering

Joint Journal of the German Society for Biomedical Engineering in VDE and the Austrian and Swiss Societies for Biomedical Engineering


Surgical navigation with QR codes

Marker detection and pose estimation of QR code markers for surgical navigation

Manuel Katanacho (corresponding author), Wladimir De la Cadena, Sebastian Engel

Fraunhofer Institute for Production Systems and Design Technology IPK, Pascalstrasse 8 – 9, 10587 Berlin, Germany
Published Online: 2016-09-30 | DOI: https://doi.org/10.1515/cdbme-2016-0079

Abstract

The presented work is an alternative to established measurement systems in surgical navigation. The system is based on camera-based tracking of QR code markers. The application uses a single video camera, integrated in a surgical lamp, that captures the QR markers attached to surgical instruments and to the patient.

Keywords: marker detection; pose estimation; QR codes; surgical navigation

1 Introduction

Surgical navigation systems are used to visualize the position of medical instruments relative to the patient on three-dimensional computed tomography (CT) or magnetic resonance imaging (MRI) data during surgery. With this assistance, the surgeon can navigate accurately to the operating field and perform the operation without harming sensitive structures around that region. Two established measurement principles exist for localizing the patient and the instruments: optical tracking [1] and electromagnetic tracking [2]. Optical tracking systems use an external stereoscopic camera that measures the position of reflector spheres arranged on the instruments and the patient; in this way the relative transformation between the patient and the instruments is determined. The main disadvantages are the space required in the operating room and the restriction on the medical personnel's access to the patient, as the camera's line of sight must not be blocked. Electromagnetic tracking systems use a magnetic field to measure the position of a sensing coil on the instrument or the patient. The key disadvantages are the lower accuracy compared to optical tracking and the susceptibility to electrically conductive and ferromagnetic materials.

The developed system, based on QR codes, is a novel approach to tracking for surgical navigation systems. The proposed system uses a video camera and flat markers for localization [3].

2 System description

A single high-resolution camera integrated in a surgical light is used to measure the position and orientation of QR codes attached to the patient and to the surgical instruments to be tracked. Because the system is highly integrated, the line-of-sight problem is minimized and the QR code markers naturally lie in the camera's field of view. A special feature is that QR codes of different sizes can be used flexibly for various instruments, since the QR code size required for position determination, as well as the marker-to-tip transformation, can be stored in the QR code itself. A further benefit of using a QR code as a marker for pose estimation is that the QR code structure provides many feature points that can be used for precise detection. Figure 1 shows a system setup with a patient marker and an instrument marker.

Figure 1: System description of the surgical navigation measurement system based on QR codes. A high resolution camera integrated in the surgical light acquires images of QR codes attached to a pointer instrument and the patient. The transformation between the QR codes and the camera is computed by detecting the marker position and orientation in the images and reading the QR size that is encoded in the QR codes.

2.1 Image processing algorithms

Surgical navigation requires a highly accurate estimation of the patient and instrument pose. In this work, this is achieved by detecting geometrical features and computing the homography transformation between the QR code and the camera.

The work flow of the image processing is shown in Figure 2. First, the region of interest (ROI) is detected for every QR code in the acquired images. Then each detected QR code is decoded, i.e. the information stored in the QR code is extracted. As input for the pose estimation, the code structure is detected with high precision, i.e. corresponding feature points are computed. Finally, the position and orientation are estimated.

Figure 2: Work flow of the image processing algorithms.

2.1.1 Region of interest (ROI) detection

QR codes are characterized by three finder patterns (FIPs) [4]. The FIPs are located in three corners of the QR code. Each finder pattern consists of three detectable contours, denoted i, j and k (see Figure 3). To achieve a robust recognition of the FIPs of each QR code, three geometrical features are considered: hierarchy, concentricity and perimeter proportionality.

Figure 3: Finder pattern (FIP) structure.

  • Hierarchy: Contour i encloses contours j and k

  • Concentricity: The three contours have the same or nearly the same center of mass

  • Perimeter proportionality: The contours have the following specific length ratios:

Perimeter_i = (7/5) · Perimeter_j   (1)

Perimeter_j = (5/3) · Perimeter_k   (2)

When the above conditions are fulfilled, a FIP is unequivocally detected.
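The three criteria above can be sketched as a simple contour test. This is an illustrative reimplementation, not the authors' code: the contour representation, the `square` helper, and all tolerances are assumptions.

```python
# Illustrative FIP test: concentricity plus the 7:5:3 perimeter
# proportionality of the three nested contours (hierarchy is assumed
# to be established beforehand, e.g. from the contour tree).
import numpy as np

def perimeter(contour):
    """Perimeter of a closed polygon given as an (N, 2) point array."""
    edges = np.diff(np.vstack([contour, contour[:1]]), axis=0)
    return float(np.hypot(edges[:, 0], edges[:, 1]).sum())

def centroid(contour):
    return np.asarray(contour, float).mean(axis=0)

def is_fip(ci, cj, ck, ratio_tol=0.15, center_tol=2.0):
    """Check contours i, j, k for concentricity and perimeter ratios."""
    pi, pj, pk = perimeter(ci), perimeter(cj), perimeter(ck)
    concentric = (np.linalg.norm(centroid(ci) - centroid(cj)) < center_tol
                  and np.linalg.norm(centroid(cj) - centroid(ck)) < center_tol)
    proportional = (abs(pi / pj - 7 / 5) < ratio_tol
                    and abs(pj / pk - 5 / 3) < ratio_tol)
    return concentric and proportional

def square(cx, cy, half):
    """Axis-aligned square contour, for testing only."""
    return np.array([[cx - half, cy - half], [cx + half, cy - half],
                     [cx + half, cy + half], [cx - half, cy + half]], float)

# Concentric squares with side lengths in ratio 7:5:3 qualify as a FIP:
print(is_fip(square(50, 50, 3.5), square(50, 50, 2.5), square(50, 50, 1.5)))
```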

To establish the desired ROIs it is necessary to group the detected FIPs. This means that, in a frame with multiple markers, the detected FIPs have to be assigned to the corresponding QR code in order to create a ROI for each QR code. The assignment can be treated as a classification problem. To perform the classification, the following features are considered for each FIP: perimeter, area and distance to the origin. The product of these features is a robust measure for classifying the FIPs. The feature values are sorted and partitioned according to the number of detected QR codes. If, for instance, there are three QR codes, there are nine FIPs and the feature vector has nine elements. In the sorted vector, the first three elements correspond to the first QR code, the 4th, 5th and 6th elements to the second QR code, and so on.
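The sort-and-slice assignment can be sketched as follows; the dictionary field names are hypothetical, chosen only for illustration:

```python
# Sketch of FIP-to-QR-code assignment: sort FIPs by the product
# perimeter * area * distance-to-origin, then slice into triples,
# one triple per QR code.
import numpy as np

def group_fips(fips):
    """Group detected FIPs into sets of three, one set per QR code."""
    score = lambda f: f['perimeter'] * f['area'] * np.hypot(*f['center'])
    ordered = sorted(fips, key=score)
    return [ordered[i:i + 3] for i in range(0, len(ordered), 3)]

# Two synthetic markers: a small one near the image origin ('A') and a
# larger one farther away ('B'); their feature products separate cleanly.
fips = [
    {'qr': 'A', 'perimeter': 40, 'area': 100, 'center': (90, 90)},
    {'qr': 'A', 'perimeter': 41, 'area': 105, 'center': (110, 90)},
    {'qr': 'A', 'perimeter': 40, 'area': 100, 'center': (90, 110)},
    {'qr': 'B', 'perimeter': 80, 'area': 400, 'center': (800, 600)},
    {'qr': 'B', 'perimeter': 82, 'area': 410, 'center': (820, 600)},
    {'qr': 'B', 'perimeter': 80, 'area': 400, 'center': (800, 620)},
]
groups = group_fips(fips)
print([{f['qr'] for f in g} for g in groups])  # → [{'A'}, {'B'}]
```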

The ROI creation process is a geometric calculation based on all points obtained in the FIP detection step. Figure 4 shows the involved points and how the ROI is created from the geometry. The goal is to obtain the four points P1, P2, P3 and Q. The points P1, P2 and P3 are obtained from the FIPs, and the point Q is the intersection of two lines defined by the two outer finder patterns.

Figure 4: ROI creation based on the detected finder patterns. The points P1, P2, P3 are computed from the detected finder patterns; the point Q is defined by the intersection of two lines defined by the two outer finder patterns.
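The paper does not spell out exactly which two lines define Q, so the sketch below assumes the parallelogram construction: the line through P2 parallel to P1P3 intersected with the line through P3 parallel to P1P2.

```python
# Line-intersection sketch for the fourth ROI corner Q.
import numpy as np

def intersect_lines(p1, d1, p2, d2):
    """Intersection of lines p1 + t*d1 and p2 + s*d2 in the plane."""
    p1, d1, p2, d2 = (np.asarray(v, float) for v in (p1, d1, p2, d2))
    t, _ = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t * d1

# For an ideal, axis-aligned marker the construction recovers the
# fourth corner of the square spanned by P1, P2, P3:
P1, P2, P3 = np.array([0., 0.]), np.array([10., 0.]), np.array([0., 10.])
Q = intersect_lines(P2, P3 - P1, P3, P2 - P1)
print(Q)  # → [10. 10.]
```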

2.1.2 Marker detection and pose estimation

For pose estimation, the size of the marker, which is encoded in the QR code, is required. Since each QR code holds its own size information, various QR code sizes can be used for different applications without providing this knowledge to the system in advance. The pose describes the transformation between the marker plane coordinate system and the camera coordinate system as a rotation matrix R and a translation vector t. Two methods are proposed for pose estimation:

  • A method based on point correspondences: a conventional method, refined here for higher accuracy

  • A method based on the optimization of the homography through template matching
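Before either method runs, the decoded payload supplies the marker size and marker-to-tip transformation mentioned above. A minimal sketch, assuming a JSON payload: the paper does not specify the encoding, so the format and field names here are hypothetical.

```python
# Hypothetical payload parser: the paper stores marker size and
# marker-to-tip transform in the QR code, but does not define a format.
import json

def parse_marker_payload(payload):
    """Extract marker size and tip offset from an assumed JSON payload."""
    data = json.loads(payload)
    return data['size_mm'], data['tip_offset_mm']

size, tip = parse_marker_payload('{"size_mm": 50, "tip_offset_mm": [0, 0, 120]}')
print(size, tip)  # → 50 [0, 0, 120]
```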

2.1.3 Pose estimation based on point correspondences

The used QR code consists of 21 × 21 elements. Each element is detected as a pattern with four corners by fitting lines to all edges that lie in a row (see Figure 5). By computing the intersections of all lines, 484 points are obtained and can be used for pose estimation. The input image is filtered with a Sobel edge detector in the x and y directions. For line fitting, all edges that lie in a row are used to fit one line; for this purpose a mask is created for each of these edge points. The mask is applied iteratively and lines are fitted to all edge contours. Finally, the intersections of all detected lines in both directions are computed.
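The fit-and-intersect step can be sketched as follows, using a total-least-squares line fit in place of the paper's mask-based fitting (an assumption made for brevity):

```python
# Fit lines to edge points and intersect them to recover grid points.
import numpy as np

def fit_line(points):
    """Total-least-squares fit: returns (point on line, unit direction)."""
    pts = np.asarray(points, float)
    mean = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - mean)   # first right singular vector
    return mean, vt[0]                     # = direction of the point cloud

def line_intersection(l1, l2):
    (p1, d1), (p2, d2) = l1, l2
    t, _ = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t * d1

# Edge points along one horizontal and one vertical module boundary;
# their fitted lines cross at the grid point (3, 3):
horizontal = fit_line([[0, 3], [1, 3], [2, 3], [5, 3]])
vertical = fit_line([[3, 0], [3, 1], [3, 4]])
print(line_intersection(horizontal, vertical))  # → [3. 3.]
```

Applying this to the 22 horizontal and 22 vertical boundary lines of a 21 × 21 code yields the 484 intersection points used for pose estimation.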

Figure 5: Detection of points in the QR code by line fitting and computing the line intersections.

2.1.4 Pose estimation based on optimization of the homography

This novel method aims to obtain the pose of a QR code marker in space with high accuracy. It performs an iterative template matching between a generated QR code template and the QR code in the acquired image until an error function is minimized [5]. The error function depends on the position and orientation of the marker, and its minimization is formulated as a non-linear unconstrained optimization problem [6]. The algorithm is outlined in Figure 6.

Figure 6: Algorithm to compute the homography by performing an iterative template matching between a created QR code template and the QR code in the acquired image.

The method starts by creating a template of the QR code. Using the previously computed ROI of the marker in the acquired image, a subimage f of the marker is created. An initial homography matrix H0 can then be computed from four edge points (xi, yi) of the template and the corresponding edge points (ui, vi) of the ROI by setting up a system of equations based on the following formula, with a scalar si [7]:

s_i (u_i, v_i, 1)^T = H (x_i, y_i, 1)^T   (3)
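Equation (3) with four correspondences yields eight linear constraints on the eight free parameters of H. A standard direct linear transform (DLT) solution, shown here as a generic sketch rather than the authors' implementation, looks like:

```python
# DLT: solve s_i (u_i, v_i, 1)^T = H (x_i, y_i, 1)^T for H from four
# point correspondences.
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)       # null-space vector of the 8x9 system
    return H / H[2, 2]             # fix the scale ambiguity

# Template corners mapped to ROI corners (a pure translation here):
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(2, 3), (3, 3), (3, 4), (2, 4)]
H0 = homography_from_points(src, dst)
```

This H0 then serves as the starting point for the iterative refinement described next.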

After that, a new image g is created by transforming the QR code template into the ROI with the current homography Hi. The images f and g are compared to compute their difference, which is evaluated as an error value. Finally, depending on the error, the homography is iteratively refined until the images are as similar as possible, i.e. until the error function is minimized. With this optimized homography matrix, the pose can be calculated according to [8].
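Extracting the pose from the optimized homography follows Zhang's decomposition [8]: for a planar marker, H is proportional to K [r1 r2 t], where K holds the camera intrinsics. A sketch, with an assumed example intrinsic matrix:

```python
# Decompose a plane-to-image homography H = s * K [r1 r2 t] into the
# rotation R and translation t (Zhang's method).
import numpy as np

def pose_from_homography(H, K):
    B = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(B[:, 0])   # r1 must be a unit vector
    r1, r2 = lam * B[:, 0], lam * B[:, 1]
    r3 = np.cross(r1, r2)                 # complete the rotation basis
    return np.column_stack([r1, r2, r3]), lam * B[:, 2]

# Round trip with a synthetic camera and marker pose:
K = np.array([[800., 0., 640.], [0., 800., 512.], [0., 0., 1.]])
th = 0.1
R_true = np.array([[1., 0., 0.],
                   [0., np.cos(th), -np.sin(th)],
                   [0., np.sin(th), np.cos(th)]])
t_true = np.array([10., -20., 500.])
H = K @ np.column_stack([R_true[:, 0], R_true[:, 1], t_true])
R, t = pose_from_homography(H, K)
```

In practice the recovered r1 and r2 are only approximately orthonormal because of noise, so the result is usually projected onto the nearest true rotation (e.g. via SVD) as described in [8].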

3 Results and conclusion

To evaluate the precision of the pose estimation procedure, reference measurements are performed with the Optotrak Certus tracking system [9]. With fixed poses of the Optotrak camera and the video camera, the transformations “Optotrak to video camera” and “video camera to QR code” are recorded and evaluated. Measurements are taken at five distance ranges (200–300 mm, 300–500 mm, 500–650 mm, 650–750 mm, 750–900 mm) for angles from 0° to 40° in all directions. The RMS error is computed from 20 measurements in each distance range. The camera used for image acquisition is an HD camera from the XIMEA xiQ series with a resolution of 1280 × 1024 px at 60 frames per second.

The accuracy of the iterative homography method varies between 6 mm and 14 mm for the translation at distances from 200 mm to 900 mm and QR code sizes of 50 × 50 mm and 100 × 100 mm. The rotation accuracy lies between 2° and 5°. The pose estimation based on point correspondences, obtained by fitting lines to the QR code structure, shows better accuracy than the iterative homography method: the translation accuracy ranges from 6 mm to 8 mm over the same distances and QR code sizes, and the rotation accuracy ranges from 1° to 4°.

The current setup achieves detection and pose estimation at one to five frames per second, which can be accelerated by optimizations on the CPU and GPU. QR codes can be detected at tilting angles of up to 60°, depending on the marker size and distance to the camera. The number of QR codes tracked in one image is not limited.

The presented measurement system for surgical navigation purposes is a novel approach with several advantages over conventional optical or electromagnetic tracking systems. In particular, the high level of integration achieved by using a single video camera built into a surgical light benefits a potential navigation application. The line of sight between the video camera and the QR codes is less likely to be blocked, because the camera is positioned above the patient and the light of the surgical lamp is naturally kept unobstructed for optimal illumination of the surgical area. The marker size and the marker-to-tip transformation can be encoded in the QR code, so QR markers of any size can be attached to various instruments without adapting the system. The presented algorithm detects several QR codes in an image by detecting the QR code finder patterns and classifying them according to certain features. A region of interest is created for each QR code in the image, which is the basis for exact QR code detection and pose estimation. The algorithms have been tested under several conditions and show robust results even for distorted, rotated, and tilted QR code markers.

Visual inspection showed that the detection process for computing the point correspondences is still error-prone, resulting in mismatches. The system accuracy therefore does not yet fulfil the requirements of a surgical navigation application, but can be improved in future work.

Author’s Statement

Research funding: The authors state that no funding was involved. Conflict of interest: The authors state no conflict of interest. Material and Methods: Informed consent: Informed consent has been obtained from all individuals included in this study. Ethical approval: The research related to human use complies with all relevant national regulations and institutional policies, was performed in accordance with the tenets of the Helsinki Declaration, and has been approved by the authors’ institutional review board or equivalent committee.

References

[1] Balachandran R, Fitzpatrick M, Labadie R. Accuracy of image-guided surgical systems at the lateral skull base as clinically assessed using bone-anchored hearing aid posts as surgical targets. Otol Neurotol. 2008;29:1050–5.

[2] Baszynski M, Moron Z, Tewel N. Electromagnetic navigation in medicine – basic issues, advantages and shortcomings, prospects of improvement. J Phys Conf Ser. 2010;238:012056.

[3] Engel S, Katanacho M, Keeve E, Uhlmann E. Verfahren, Anordnung und Computerprogrammprodukt zur Lageerfassung eines zu untersuchenden Objekts [Method, arrangement and computer program product for detecting the pose of an object under examination]. 2015, German patent DE 10 2015 212 352.9.

[4] Kong S. QR code image correction based on corner detection and convex hull algorithm. J Multimed. 2013;8:662–8.

[5] Briechle K, Hanebeck U. Template matching using fast normalized cross correlation. Institute of Automatic Control Engineering, Technische Universitaet Muenchen, 2001.

[6] King D. Dlib-ml: a machine learning toolkit. J Mach Learn Res. 2009;10:1755–8.

[7] Abawi D, Bienwald J, Dörner R. Accuracy in optical tracking with fiducial markers: an accuracy function for ARToolKit. In: Proceedings of the Third IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR), 2004.

[8] Zhang Z. A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell. 2000;22:1330–4.

[9] Mazumder MMG, Kim S, Park SJ. Precision and repeatability analysis of Optotrak Certus as a tool for gait analysis utilizing a 3D robot. J Eng Technol Res. 2011;3:37–43.

About the article


Published in Print: 2016-09-01


Citation Information: Current Directions in Biomedical Engineering, Volume 2, Issue 1, Pages 355–358, ISSN (Online) 2364-5504, DOI: https://doi.org/10.1515/cdbme-2016-0079.


©2016 Manuel Katanacho et al., licensee De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (CC BY-NC-ND 4.0).
