Abstract
Previous research reported catheter pose-dependent virtual angioscopy images for endovascular aortic repair (EVAR) in phantom studies, but without any validation against video images. This study focused on conducting this validation using a video graphics array (VGA) camera. The spatial relationship between the coordinate systems of the virtual camera and the VGA camera was computed with a Hand-Eye calibration so that both cameras produced similar images. With a purpose-designed three-dimensional (3D) printed chessboard, a re-projection error of 3.18 pixels was obtained for the virtual camera and 2.14 pixels for the VGA camera. Similar images of the vessel (a 3D printed aorta) were acquired with both cameras, differing only in perceived depth. Virtual angioscopy images provide information from inside the vessel that may facilitate understanding of the tip position of endovascular tools while performing EVAR.
Introduction
Aortic aneurysms are a potentially lethal disease. Dilatation of the aortic diameter to more than 1.5 times its normal size weakens the vessel wall architecture, with the potential risk of rupture. Endovascular aortic repair (EVAR), a minimally invasive technique to treat aortic aneurysms, involves the deployment of a stent graft along the aneurysm with the aid of guide wires and catheters to cover the aneurysm and establish a channel for blood flow. The guidance of the endovascular tools from the femoral or brachial arteries to the aortic aneurysm is conducted under fluoroscopy and conventional digital subtraction angiography. These two-dimensional (2D) imaging modalities involve radiation exposure and may cause acute kidney injury due to the administration of contrast agent.
To reduce EVAR limitations, a recent study reported the integration of a multicore fiber with fiber Bragg gratings and two electromagnetic (EM) sensors into a stent graft system to locate the shape of the endovascular tool with respect to the patient’s anatomy (in that case, a patient-specific aortic phantom) [1]. In addition, virtual angioscopy images were rendered based on the pose (position and orientation) of an EM-tracked catheter tip [2]. These images of the vessel lumen were visualised on a virtual 2D canvas superimposed on the real scenario with the augmented reality glasses Microsoft HoloLens (first generation). These previous studies [1], [2] are part of our ongoing research project Nav EVAR.
The process for obtaining virtual angioscopy images is similar to that used to generate virtual endoscopies, both being based on preoperative computed tomography (CT) scans. EM-tracked bronchoscopies have also been reported in the literature [3], [4]; these studies showed both virtual images and video images acquired with an endoscope. However, no validation with video images was conducted in [2], so it remained unverified whether the point of view of the virtual camera was similar to that of a video camera.
The objective of this study was to continue our work from [2] by comparing virtual angioscopy images with video images using an EVAR patient-specific phantom. The spatial relationship between the EM sensor (coordinate system of the virtual camera) and a video graphics array (VGA) camera was computed with a Hand-Eye calibration so that the virtual camera and the VGA camera produced similar images.
Materials and methods
An Aurora Mini 6DOF Sensor (1.8 mm diameter × 9 mm length, Northern Digital Inc.) was attached to an Optikron camera (M-series VGA, USB, 0° view, LED) (Figure 1). The EM field generator chosen for this study was the Aurora Tabletop Field Generator (Northern Digital Inc.) since its thin barrier reduces the distortions produced by ferromagnetic or conductive materials located below it. The VGA camera's specifications are as follows: housing 5.95 mm diameter × 8 mm length, matrix size 640 × 480 pixels, pixel size 2.5 × 2.5 µm, maximum frame rate 30 frames per second, horizontal field of view approximately 60°, illumination by six light-emitting diodes (LEDs). This camera was selected for its small dimensions, which allow it to move along the phantom vessel.

Figure 1: Electromagnetic (EM) sensor at the top of the VGA camera.
The virtual angioscopy images based on the pose of the EM sensor are generated from the vessel segmentation of a preoperative CT scan with the NavEvar application developed for our research project. This application was implemented using the medical image processing and visualization software MeVisLab (MeVis Medical Solutions AG). The EM data are sent from the EM tracking system to the NavEvar application via the open-source software PLUS Toolkit (https://plustoolkit.github.io) [2]. The virtual camera produces images of 512 × 512 pixels, and its field of view (in degrees) can be adjusted. After receiving each pose of the EM sensor, the application also captures an image from the VGA camera.
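Given the 512 × 512 image size and an adjustable field of view, the corresponding pinhole intrinsic matrix follows from basic trigonometry. A minimal sketch in pure Python (illustrative only; the function name and the centred principal point are assumptions, not the NavEvar implementation):

```python
import math

def focal_from_fov(image_size_px, fov_deg):
    """Pinhole focal length in pixels from image size and field of view."""
    return (image_size_px / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

# Virtual camera in this study: 512 x 512 pixels; 80 degrees was used for the aorta
f = focal_from_fov(512, 80.0)
cx = cy = 512 / 2.0  # principal point assumed at the image centre
K = [[f,   0.0, cx],
     [0.0, f,   cy],
     [0.0, 0.0, 1.0]]
```

A wider field of view shortens the focal length, which is why 80° shows more of the vessel lumen than the camera's native 60°.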
The same point of view can be acquired with the virtual camera and the VGA camera after finding the transformation from the coordinate system of the EM sensor to that of the VGA camera by means of a Hand-Eye calibration (Figure 2).

Figure 2: Transformations used in Hand-Eye calibration.
Both the VGA camera calibration and the Hand-Eye calibration were implemented using the Open Source Computer Vision Library (OpenCV). This library includes five different Hand-Eye calibration methods [6]: Tsai's (1989), Park's (1994), Horaud's (1995), Andreff's (1999) and Daniilidis's (1999). The solution chosen was the one that fulfilled two criteria: first, a stable transformation between the EM sensor and the VGA camera, and second, a minimum RMS error of the inner corners of the calibration pattern for all VGA camera poses. Two RMS errors were computed in this case using the errors defined in Eqs (2) and (3),
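All of these methods solve the classical Hand-Eye equation AX = XB, where A and B are relative motions of the camera and the EM sensor between poses, and X is the sought sensor-to-camera transform. A pure-Python sketch that constructs a consistent motion pair from a known (hypothetical) X, which can serve as a sanity check for a calibration pipeline:

```python
import math

def matmul(A, B):
    """Product of two 4x4 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def inv_rigid(T):
    """Inverse of a rigid 4x4 transform: rotation part R^T, translation -R^T t."""
    R = [row[:3] for row in T[:3]]
    t = [T[i][3] for i in range(3)]
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    ti = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return [Rt[i] + [ti[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

def rot_z(deg, tx, ty, tz):
    """Rigid transform: rotation about z by `deg` degrees plus a translation."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0.0, tx], [s, c, 0.0, ty],
            [0.0, 0.0, 1.0, tz], [0.0, 0.0, 0.0, 1.0]]

# Hypothetical sensor-to-camera transform X (the unknown in a real calibration)
X = rot_z(30.0, 5.0, 0.0, 2.0)
# Hypothetical relative motion of the EM sensor between two poses
B_motion = rot_z(40.0, 1.0, 2.0, 0.0)
# The corresponding camera motion must satisfy A . X == X . B
A_motion = matmul(matmul(X, B_motion), inv_rigid(X))
```

In a real pipeline the A and B motions come from the VGA camera calibration and the EM tracking data, and X is recovered by one of the five OpenCV methods rather than assumed.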
where the 3D ground truth pg was obtained by placing the tip of an EM-tracked pointer at each inner corner of the calibration pattern, and Mc maps from the VGA camera coordinate system to its image plane. This last transformation accounts for the VGA camera intrinsic matrix and the distortion coefficients.
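The RMS re-projection error described above can be sketched in pure Python: 3D ground-truth corners (already expressed in camera coordinates) are mapped onto the image plane and compared with the detected 2D corners. Lens distortion, which Mc also models, is omitted here, and all values are hypothetical:

```python
import math

def project(K, p_cam):
    """Project a 3D point in camera coordinates to pixels with intrinsics K."""
    x, y, z = p_cam
    return (K[0][0] * x / z + K[0][2], K[1][1] * y / z + K[1][2])

def rms_reprojection_error(K, pts_cam, pts_img):
    """RMS distance between re-projected 3D points and detected 2D corners."""
    se = 0.0
    for p3, (u_det, v_det) in zip(pts_cam, pts_img):
        u, v = project(K, p3)
        se += (u - u_det) ** 2 + (v - v_det) ** 2
    return math.sqrt(se / len(pts_cam))

# Hypothetical intrinsics and a single corner, for illustration only
K = [[100.0, 0.0, 320.0], [0.0, 100.0, 240.0], [0.0, 0.0, 1.0]]
err = rms_reprojection_error(K, [(1.0, 0.0, 10.0)], [(333.0, 244.0)])  # 5.0 pixels
```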
After computing the Hand-Eye calibration, the virtual angioscopy images are generated by placing the virtual camera at the pose of the VGA camera, obtained by concatenating the 4 × 4 transformation matrix reported by the EM tracking system (pose of the EM sensor) with the 4 × 4 Hand-Eye transformation matrix from the EM sensor to the VGA camera. After including this transformation chain in the NavEvar application, the settings were assessed by acquiring a designed 3D printed chessboard (Figure 3, left) with both cameras.
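The concatenation of the tracked EM sensor pose with the Hand-Eye result amounts to a single 4 × 4 matrix product. A minimal pure-Python sketch with hypothetical translation-only transforms:

```python
def matmul4(A, B):
    """Product of two 4x4 rigid transforms stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Hypothetical translation-only transforms, for illustration
T_tracker_sensor = [[1, 0, 0, 1], [0, 1, 0, 2], [0, 0, 1, 3], [0, 0, 0, 1]]
T_sensor_camera = [[1, 0, 0, 10], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
# Pose of the virtual camera in EM tracker coordinates
T_tracker_camera = matmul4(T_tracker_sensor, T_sensor_camera)
```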

Figure 3: 3D printed chessboard (left) and rigid 3D printed aorta (right); both phantoms with markers for registration.
After evaluating the settings, an EVAR patient-specific phantom (specifically, a rigid 3D printed aorta with iliac arteries) (Figure 3, right) was acquired with the virtual and VGA cameras. The virtual angioscopy images based on the pose of the EM sensor were generated with the lumen segmentation of a CT scan of the rigid aorta. This 3D image was acquired with the Siemens SOMATOM Definition AS+ CT scanner and the following acquisition parameters: voltage 80 kVp, exposure 8 mAs, slice thickness 0.6 mm and pixel size 0.59 × 0.59 mm. In this study, a field of view of 80° was chosen to show more details from inside the vessel.
Results
The VGA camera calibration (chessboard printed on paper) provided an RMS re-projection error of 0.36 pixels. Horaud's Hand-Eye calibration method was selected since it yielded the most stable transformation between the EM sensor and the VGA camera.
Regarding the assessment of the settings for generating similar images with both cameras, the RMS re-projection error was 3.18 pixels for the virtual camera and 2.14 pixels for the VGA camera (Figure 4). For both cameras, the errors were computed from the 2D coordinates of the 54 inner corners over 18 poses of the EM sensor/VGA camera while acquiring the 3D printed chessboard.

Figure 4: Pointer tip positions re-projected on the images of the virtual camera (left, image cropped) and the VGA camera (right); re-projected 3D coordinates (red) and ground truth (blue).
Figure 5 shows images from inside the vessel (rigid 3D printed aorta) acquired with both cameras. In this case, the RMS fiducial registration error was 0.42 mm.

Figure 5: Virtual angioscopy images (left) and images from the VGA camera (right); vessel lumen (top) and vessel bifurcation (bottom).
Discussion
A 3D printed chessboard was designed in this study to assess the results of the Hand-Eye calibration, which estimated the transformation between the coordinate systems of the virtual camera (specifically, the EM sensor) and the VGA camera. In some cases, the inner corners of this 3D chessboard could not be identified in the VGA images due to illumination problems (specifically, shadows); decreasing the cube height may overcome this problem. The re-projection errors were similar to those reported in [3], slightly larger for the virtual camera, probably due to its camera calibration process (sources of error: the 3D printing process, top-layer segmentation and registration). Both re-projection errors also include the errors of the Hand-Eye calibration (in our opinion the largest source of error), the EM sensor and the pointer tip.
Similar images of the vessel (rigid 3D printed aorta) were acquired with both cameras except for a difference in perceived depth. Small differences can be identified, probably due to the lumen segmentation. A limitation of these catheter pose-dependent virtual angioscopy images is that the endovascular tool often touches the vessel wall, so images at such poses will be useless; combining them with the point of view along the vessel centreline is therefore recommended. Virtual angioscopy images are generated from a preoperative CT scan that may not resemble the actual anatomy during EVAR due to different patient positioning, tissue deformation, respiratory motion and vessel pulsation. Despite this mismatch, these images provide information from inside the vessel, like intravascular ultrasound images (although the latter provide no depth information), that may facilitate understanding of the position of endovascular tools while performing an EVAR procedure or during EVAR training. Intraoperative imaging may be used to update the preoperative data with the current anatomy.
Funding source: German Federal Ministry of Education and Research
Award Identifier / Grant number: 13GW0228
Funding source: Ministry of Economic Affairs, Employment, Transport and Technology of Schleswig-Holstein
Research funding: This study was supported by the German Federal Ministry of Education and Research (BMBF, Nav EVAR project, number 13GW0228) and the Ministry of Economic Affairs, Employment, Transport and Technology of Schleswig-Holstein.
Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
Conflict of interest: Authors state no conflict of interest.
Informed consent: Not applicable.
Ethical approval: Not applicable.
References
1. Jäckle, S, García-Vázquez, V, Eixmann, T, Matysiak, F, von Haxthausen, F, Sieren, MM, et al. Three-dimensional guidance including shape sensing of a stentgraft system for endovascular aneurysm repair. Int J Comput Assist Radiol Surg 2020;15:1033–42. https://doi.org/10.1007/s11548-020-02167-2.
2. von Haxthausen, F, Jäckle, S, Strehlow, J, Ernst, F, García-Vázquez, V. Catheter pose-dependent virtual angioscopy images visualized on augmented reality glasses. Curr Dir Biomed Eng 2019;5:289–92. https://doi.org/10.1515/cdbme-2019-0073.
3. Liu, SX, Gutiérrez, LF, Stanton, D. Quantitative evaluation for accumulative calibration error and video-CT registration errors in electromagnetic-tracked endoscopy. Int J Comput Assist Radiol Surg 2011;6:407–19. https://doi.org/10.1007/s11548-010-0518-4.
4. Samavati, M, Ahmadian, A, Abtahi, H, Golnabi, A, Arjmandi Asl, R. A hybrid method for real-time bronchoscope tracking using contour registration and synchronous EMT data. Iran J Radiol 2019;16:e66994. https://doi.org/10.5812/iranjradiol.66994.
5. Jakob, W. Calibration best practices. Available from: https://calib.io/blogs/knowledge-base/calibration-best-practices [Accessed 12 May 2020].
6. OpenCV team. Camera calibration and 3D reconstruction. Available from: https://docs.opencv.org/4.1.1/d9/d0c/group__calib3d.html [Accessed 12 May 2020].
© 2020 Verónica García-Vázquez et al., published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.