Abstract
Fluoroscopy-guided percutaneous biopsy interventions are mostly performed with the traditional free-hand technique. The practical experience of the surgeon influences the duration of the intervention and the radiation exposure for both the patient and the surgeon. Especially when heavy and long instruments must be placed at double-oblique angles, manual techniques quickly reach their technical limits. The system presented herein automates the needle positioning using only two 2D scans, while the robotic platform guides the intervention. These two images were used to plan the needle pathway and to estimate the pose of the robot via a custom-made end-effector with embedded registration fiducials. The estimated pose was subsequently used to transfer the planned needle path into the robot's coordinate system and, finally, to compute the movement parameters needed to align the robot with this plan. To evaluate the system, two phantoms with 11 targets in total were developed. The targets were punctured, and the application accuracy was measured quantitatively. The solution achieved sub-millimetric accuracy for needle placement (min. 0.23 mm, max. 1.04 mm). Our approach combines the advantages of fluoroscopic imaging with automatic needle alignment at a greatly reduced X-ray dose. The proposed system shows promising potential as a guidance platform that is easy to combine with available fluoroscopic imaging systems and provides valuable help to the physician in more difficult interventions.
Introduction
Fluoroscopic image guidance is the predominantly applied method in clinical workflows compared with computed tomography [1] or ultrasound [2]. It offers high-resolution real-time imaging, greater safety, low radiation dose and effectiveness [3]. In particular, when a needle must be placed percutaneously and accurately near critical regions, real-time fluoroscopy guidance is a highly valuable tool. Image-guided biopsy interventions such as radiofrequency ablation (RFA), brachytherapy, soft/hard tissue biopsies, or drug delivery require high-accuracy positioning of the associated instruments. With free-hand techniques it is particularly challenging to follow the planned trajectory, for instance when positioning heavy drills, long biopsy guns or ablation devices in minimally invasive biopsy surgeries. A further drawback of ultrasound guidance is that surgeons position the needle with one hand while holding the transducer with the other. These technical limitations can lead to inaccuracies that jeopardize the intervention or make several iterative fine-tunings necessary during needle insertion. Furthermore, the continual X-ray exposure of fluoroscopy may result in long-term complications such as radiation injuries for physicians and patients. The approach presented in this work may help to reduce the X-ray exposure and shorten the intervention time, while providing high-accuracy needle placement in various biopsy interventions. In addition, the needle or instrument alignment is automated by a robotic platform that aligns the needle to the planned pathway using only two 2D fluoroscopic images.
Methods
Robotic platform
The iSYS1 robotic biopsy platform (iSYS Medizintechnik GmbH, Austria) is a certified automatic surgical instrument positioning system for CT-guided interventions [4]. It additionally supports manual needle alignment under real-time fluoroscopic closed-loop feedback for RFA.
The Robot Positioning Unit (RPU) defines its world coordinate system (4 DOF) with two modules: the lower part (POS) positions the mounted instrument along the longitudinal axis X and the transversal axis Y, while the upper part (ANG) provides the angulations A (rotation around the X axis) and B (rotation around the Y axis) within a certain working volume. The Z axis marks the instrument's rotational origin when the robot is in its default (home) position (Figure 1).

iSYS1 robotic platform working principle.
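To make the 4-DOF parameterization concrete, the following Python sketch builds a homogeneous transform of the needle guide for a given (X, Y, A, B) setting. It is a minimal illustration only: the rotation order and the assumption that both angulation axes pass through the translated guide origin are our own simplifications, not the documented iSYS1 kinematics.

```python
import numpy as np

def rot_x(a):
    """Rotation matrix about the X axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

def rot_y(b):
    """Rotation matrix about the Y axis by angle b (radians)."""
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]], dtype=float)

def needle_guide_pose(x_mm, y_mm, a_rad, b_rad):
    """Homogeneous transform of the needle guide w.r.t. the robot's home frame.
    Simplified model: translation by the POS module (X, Y), then angulation by
    the ANG module (A about X, B about Y) around the translated guide origin."""
    T = np.eye(4)
    T[:3, :3] = rot_x(a_rad) @ rot_y(b_rad)
    T[:3, 3] = [x_mm, y_mm, 0.0]
    return T

# In the home position the needle axis coincides with +Z; after alignment it is
# the third column of the rotation block.
pose = needle_guide_pose(12.0, -5.0, np.deg2rad(8.0), np.deg2rad(-4.0))
needle_axis_in_robot_frame = pose[:3, 2]
```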
Custom end-effector and phantom patients
To register the robot with the patient images, a rigid-body patient-to-image registration is necessary. This was achieved by developing a custom-made 3D-printed end-effector (Figure 2) that embeds five spherical metal markers (⊘ 5 mm), distributed so as to increase the application accuracy and to prevent marker overlap on the 2D images when they are scanned from extreme positions or orientations. The Needle Guide (NG) on the end-effector can hold biopsy needles of various diameters and lengths (gauge 8–20, length 30–300 mm). It is attached to the RPU's ANG and POS modules through joints a) and b). A marker slot c) contains a concentrically fixed marker whose position relative to the robot's home position is mechanically known. The NG is locked concentrically to the front side of the end-effector d). The needle e) is slid manually by the physician through the NG in the Z direction, and its position is secured with a plastic screw mounted on the side of the NG f).

End-effector design.
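For the pose-estimation sketch given later, it is convenient to have the known marker geometry of the end-effector available as a point set. The coordinates below are hypothetical placeholders (the exact layout is not published here); only the structure matters: five non-coplanar 3D points expressed in the robot's home frame.

```python
import numpy as np

# Hypothetical 3D positions (mm) of the five spherical fiducials in the robot's
# home coordinate frame -- placeholder values, not the real end-effector layout.
MODEL_MARKERS_MM = np.array([
    [  0.0,   0.0,  0.0],   # marker in slot c), concentric with the home origin
    [ 35.0,  10.0,  5.0],
    [-30.0,  20.0, 12.0],
    [ 15.0, -25.0, 18.0],
    [-20.0, -15.0, 25.0],
])
```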
Two custom-built phantoms were used to validate the accuracy of the automatic needle alignment: a 3D-printed model screwed directly to the robot from the bottom side, containing six targets (⊘ 2 mm spherical metal balls fixed to the model) (Figure 3, left), and a 3D-printed spine phantom placed in a transparent acrylic glass box (20 cm³). The latter model was filled with transparent candle wax and contained five fixed targets of the same size as in the first phantom (Figure 3, right). The wax surrounding the spine phantom was added to provide haptic feedback and because it can deflect the needle during insertion, as commonly happens in liver, lung, prostate, breast or other soft tissues of the human body.

Left: 3D-printed phantom with an aligned biopsy needle (17 gauge, ⊘ 1.52 mm, length 20 cm). Right: spine phantom positioned under the robot. For both phantoms, the target-to-entry distance ranged from 4 cm to 8 cm.
Data acquisition
Two 2D single-plane DICOM images (1024 × 1024 px), each showing the patient (target markers) and the end-effector from a different scan direction, were acquired with a Ziehm Vision RFD 3D mobile C-arm (Ziehm Imaging GmbH, Germany). The robot is positioned above the patient in its homed state, and both remain in a fixed position relative to the C-arm's flat-panel detector. Figure 4 shows two sample fluoroscopy acquisitions of the 3D-printed phantom, scanned from the SI (superior/inferior) plane (left) and the lateral (sagittal) plane (right), each including the end-effector together with the patient. The images were acquired in pulsed cervical-spine mode (total scan time: 1 s each) with the following parameters: 44 kV, 3.2 mA, dose area product 0.60 cGy·cm², pulse width 58%.

Trajectory planning on the acquired 2D images from the lateral (left) and anterior (right) positions. The tail (blue circle) of the orange arrow (pathway) marks the user-defined entry point (E), while the tip indicates the selected target marker (T). The green line through the arrow indicates the current position of the robot after alignment. The two parallel cyan lines show the epipolar lines resulting from the 3D pose estimation. The green rectangle represents the 2D limit of the robot's positional movements in X and Y, within which the entry point can be set, while the transparent red area shows the angulation limits in A and B. Both indicate the reachability of a planned trajectory by the robot after registration.
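Before pose estimation, the C-arm geometry of each acquired shot has to be recovered from the DICOM data. The sketch below, using pydicom, shows one plausible way to assemble a pinhole-style intrinsic matrix from standard X-ray angiography attributes; which tags the Ziehm Vision RFD 3D actually populates, and whether the principal point sits at the detector centre, are assumptions of this sketch.

```python
import numpy as np
import pydicom

def fluoro_camera_model(path):
    """Pinhole intrinsics for one fluoro image from standard DICOM attributes.
    Assumptions: DistanceSourceToDetector and ImagerPixelSpacing are present,
    and the principal point lies at the image centre."""
    ds = pydicom.dcmread(path)
    sid_mm = float(ds.DistanceSourceToDetector)                   # source-to-detector distance
    row_sp, col_sp = (float(v) for v in ds.ImagerPixelSpacing)    # detector pixel pitch (mm)
    rows, cols = int(ds.Rows), int(ds.Columns)
    fx, fy = sid_mm / col_sp, sid_mm / row_sp                     # focal lengths in pixels
    cx, cy = (cols - 1) / 2.0, (rows - 1) / 2.0
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    # Rough extrinsics can be derived from the positioner angles; they are only
    # approximate and are refined later by the Levenberg-Marquardt step.
    primary_deg = float(getattr(ds, "PositionerPrimaryAngle", 0.0))
    secondary_deg = float(getattr(ds, "PositionerSecondaryAngle", 0.0))
    return K, primary_deg, secondary_deg
```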
Automatic marker localization and pose estimation
In order to translate the pathway planned in image space into the coordinate space of the robot, the relative position (pose) of the robot with respect to the fluoro images is required. To compute this pose, we first need the geometric parameters of the C-arm for each individual image. These can be represented by a standard camera model from computer vision, i.e., the intrinsics, which define the relation from the X-ray source to the flat detector, and the extrinsics, which define the pose of the system w.r.t. a common coordinate frame for each individual image. We obtain this information from the DICOM tags provided by the C-arm vendor. The pose of the robot w.r.t. the common coordinate frame of the fluoro images is determined from the five automatically localized registration fiducials: for each individual image, a Laplacian-of-Gaussian blob detection [5] is applied, and the results are filtered for blobs whose sizes match the magnification factor of the C-arm system up to a certain tolerance. For each image and each blob detected within it, we compute the line between the X-ray source and the blob center. Next, each such line from one image is checked for an intersection with all lines from all other images, up to a certain distance tolerance; the tolerance is needed because the extrinsic parameters given in the DICOM tags are only a rough estimate. Each such intersection marks a candidate for a marker position on the end-effector. The next step of the algorithm uses an iterative closest-point (ICP) algorithm to match the known 3D points on the end-effector to a subset of the candidates. This procedure is repeated several times, with the subset chosen in a RANSAC fashion [6]; afterward, we pick the matching/transformation that yields the smallest matching error in terms of the root-mean-square (RMS) error. In the final step of our algorithm, since the extrinsics from the DICOM tags are rather rough, we run a Levenberg-Marquardt optimization [7] that refines the pose of the images by minimizing the reprojection error between the projected, matched 3D marker positions and the corresponding detections in the images.
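The following sketch condenses the candidate generation and matching steps: rays from the X-ray source through the blob centers of two images are intersected within a tolerance to obtain marker candidates, and a rigid Kabsch fit over candidate subsets selects the correspondence with the smallest RMS error. It is a simplification of the method described above; exhaustive permutations stand in for the ICP/RANSAC scheme and only scale to a handful of candidates, and all names are illustrative.

```python
import itertools
import numpy as np

def ray_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays (origin o, unit
    direction d) and the segment length used as the intersection tolerance.
    Assumes the rays are not parallel."""
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1, p2 = o1 + s * d1, o2 + t * d2
    return 0.5 * (p1 + p2), float(np.linalg.norm(p1 - p2))

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src onto dst, plus RMS error."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    if np.linalg.det(Vt.T @ U.T) < 0:      # guard against reflections
        Vt[-1] *= -1
    R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    rms = np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1)))
    return R, t, rms

def candidate_positions(rays_img1, rays_img2, tol_mm=2.0):
    """All near-intersections between source-to-blob rays of two images."""
    cands = []
    for o1, d1 in rays_img1:
        for o2, d2 in rays_img2:
            p, gap = ray_midpoint(o1, d1, o2, d2)
            if gap <= tol_mm:              # tolerance absorbs the rough extrinsics
                cands.append(p)
    return np.array(cands)

def match_markers(model_pts, candidates):
    """Correspondence and rigid pose with the smallest RMS matching error."""
    best = (None, None, np.inf)
    for subset in itertools.permutations(range(len(candidates)), len(model_pts)):
        R, t, rms = rigid_fit(model_pts, candidates[list(subset)])
        if rms < best[2]:
            best = (R, t, rms)
    return best
```

The subsequent pose refinement could, for instance, be run with scipy.optimize.least_squares(method='lm') over the per-image extrinsics, using the marker reprojection residuals as the cost, mirroring the Levenberg-Marquardt step described above.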
Evaluation
The overall accuracy of the system was validated by puncturing the target markers in both phantoms and quantitatively measuring the 3D positional deviation between the needle and target-marker centroids after needle insertion. The following steps were executed for each of the 11 targets and repeated five times per target. First, the robot was positioned on the phantom and two images from different scan directions were acquired (Tables 1 and 2). Second, the registration fiducials on the end-effector were localized, and the image-to-patient registration and pose estimation were performed automatically. Next, a trajectory pathway was planned by defining entry and target points on both images (Figure 4), and the robot was triggered to align the needle to the defined trajectory. Afterward, the needle was inserted by sliding it through the NG until its tip touched the surface of the target marker. Finally, the needle was fixed at its current position and a 3D isocentric CBCT scan of the current setup was performed, so that both the patient (target) and the needle were contained in the acquired DICOM dataset. The 3D positional deviations between needle and target-marker centroids were measured manually on the workstation (Figure 5).
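The accuracy value reported per puncture is the Euclidean distance between the two manually picked centroids in the CBCT volume. A minimal sketch, assuming isotropic voxel spacing equal to the 0.6 mm slice thickness reported in the Results (the in-plane spacing is an assumption), could look as follows.

```python
import numpy as np

def targeting_error_mm(needle_tip_vox, target_vox, spacing_mm=(0.6, 0.6, 0.6)):
    """3D distance between the needle-tip and target-marker centroids,
    given as voxel indices picked manually in the CBCT dataset."""
    diff = (np.asarray(needle_tip_vox, float) - np.asarray(target_vox, float)) * np.asarray(spacing_mm)
    return float(np.linalg.norm(diff))

# Hypothetical picks for one puncture:
print(targeting_error_mm((112, 240, 256), (113, 239, 255)))  # ~1.04 mm
```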
Registration and measured system accuracy in mm for all 30 measurements on the 3D-printed phantom. A scan direction of, e.g., 0°/−90° indicates that the first image was acquired in the SI plane and the second in the lateral plane.
| Scan direction | T1 | T2 | T3 | T4 | T5 | T6 | Mean accuracy ± STD | Reg. RMS |
|---|---|---|---|---|---|---|---|---|
| 0°/−90° | 0.56 | 0.06 | 0.31 | 0.01 | 0.40 | 0.07 | 0.23 ± 0.22 | 0.33 |
| 0°/−75° | 0.25 | 0.37 | 0.64 | 0.74 | 0.34 | 0.01 | 0.39 ± 0.26 | 0.29 |
| −60°/45° | 0.44 | 0.30 | 0.25 | 0.72 | 0.05 | 0.74 | 0.41 ± 0.27 | 0.29 |
| 0°/−45° | 0.39 | 0.01 | 0.01 | 0.25 | 0.01 | 1.03 | 0.28 ± 0.39 | 0.31 |
| 45°/−100° | 1.32 | 0.39 | 1.11 | 0.04 | 0.68 | 0.38 | 0.65 ± 0.48 | 0.71 |
| Mean accuracy ± STD | 0.59 ± 0.42 | 0.22 ± 0.17 | 0.46 ± 0.42 | 0.35 ± 0.35 | 0.29 ± 0.27 | 0.44 ± 0.43 | 0.39 ± 0.32 | 0.38 ± 0.18 |
T1–T6: Target 1 to Target 6; Reg. RMS: registration root-mean-square error; STD: standard deviation.
Overall results in mm for the 25 measurements on the spine phantom.
| Scan direction | T1 | T2 | T3 | T4 | T5 | Mean accuracy ± STD | Reg. RMS |
|---|---|---|---|---|---|---|---|
| 0°/−90° | 0.59 | 1.77 | 0.87 | 0.57 | 0.73 | 0.90 ± 0.49 | 0.33 |
| 0°/−75° | 0.77 | 0.55 | 0.77 | 0.48 | 0.35 | 0.58 ± 0.18 | 0.29 |
| −60°/45° | 0.52 | 0.30 | 0.50 | 0.67 | 0.43 | 0.48 ± 0.13 | 0.29 |
| 0°/−45° | 0.44 | 1.44 | 1.82 | 0.71 | 0.81 | 1.04 ± 0.56 | 0.31 |
| 45°/−100° | 1.54 | 0.77 | 0.65 | 1.00 | 0.59 | 0.91 ± 0.38 | 0.71 |
| Mean accuracy ± STD | 0.77 ± 0.44 | 0.96 ± 0.61 | 0.92 ± 0.52 | 0.68 ± 0.19 | 0.58 ± 0.19 | 0.78 ± 0.34 | 0.38 ± 0.18 |

A CBCT volumetric scan. The crosshairs placed on the target marker mark the marker centroid. The orange line through the needle center in the coronal and sagittal views indicates the alignment deviation; the distance between these two points yields the overall targeting accuracy in 3D.
Results
The placement of the registration fiducials is an important factor with a major influence on the resulting accuracy [8]; since in our case their configuration in the images changed with the scan direction of each 2D image, the best accuracy results were obtained when the fiducials on the end-effector were localized far away from each other. Tables 1 and 2 list the measured accuracies for each individual target on both phantoms. As expected, the soft tissue replicated with candle wax in the spine phantom increased the overall error for each puncture. The 3D positional deviations between the needle and target centroids were measured in the acquired volumetric CBCT images; each dataset contained 320 slices with 0.6 mm slice thickness.
Discussion and conclusion
The sub-millimetric targeting accuracies obtained in our internal tests, independent of the scan direction, are promising under laboratory conditions. For the first phantom, the best targeting accuracy of 0.23 mm was obtained when the images were acquired from the SI and lateral planes (0°/−90°). For the second phantom, the −60°/45° directions delivered the best overall needle-placement accuracy with 0.48 mm. Sub-millimetric registration RMS errors were observed for both phantoms independent of the selected target (min. 0.29 mm, max. 0.71 mm). Automatically aligning the instruments using only two scans, which enables single or multiple needle interventions without re-positioning the robot or patient, facilitates percutaneous biopsy or RFA interventions and reduces the technical limitations in this area, while keeping the radiation exposure for patients and physicians low.
Research funding: None declared.
Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
Competing interests: Authors state no conflict of interest.
Informed consent: Informed consent is not applicable.
Ethical approval: The conducted research is not related to either human or animals use.
References
1. Ming-De, H, Yuan-Hsiung, T. Accuracy and complications of CT-guided pulmonary core biopsy in small nodules: a single-center experience. Canc Imag 2019;19:51. https://doi.org/10.1186/s40644-019-0240-6.
2. Saad, WE, Waldman, DL. Safety and efficacy of fluoroscopic versus ultrasound guidance for core liver biopsies in potential living related liver transplant donors: preliminary results. J Vasc Interv Radiol 2006;17:1307–12. https://doi.org/10.1097/01.rvi.0000233497.60313.c9.
3. Kurban, LA, Gomersall, L, Weir, J, Wade, P. Fluoroscopy-guided percutaneous lung biopsy: a valuable alternative to computed tomography. Acta Radiol 2008;49:876–82. https://doi.org/10.1080/02841850802225893.
4. Kettenbach, J, Kara, L, Toporek, G, Fuerst, M, Kronreif, G. A robotic needle-positioning and guidance system for CT-guided puncture: ex vivo results. Minim Invasive Ther Allied Technol 2014;23:271–8. https://doi.org/10.3109/13645706.2014.928641.
5. Lindeberg, T. Feature detection with automatic scale selection. Int J Comput Vis 1998;30:79–116. https://doi.org/10.1023/A:1008045108935.
6. Fischler, MA, Bolles, RC. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Comm ACM 1981;24:381–95. https://doi.org/10.1145/358669.358692.
7. Levenberg, K. A method for the solution of certain problems in least squares. Quart Appl Math 1944;2:164–8. https://doi.org/10.1090/qam/10666.
8. West, JB, Fitzpatrick, JM, Toms, SA, Maurer, CR, Maciunas, RJ. Fiducial point placement and the accuracy of point-based, rigid body registration. Neurosurgery 2001;48:810–6. https://doi.org/10.1227/00006123-200104000-00023.
© 2020 Yusuf Özbek et al., published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.