The recent developments in image guidance, treatment planning and delivery techniques in radiotherapy have sparked a growing trend towards applying higher radiation doses in fewer treatment fractions, a method referred to as extreme hypofractionation or radiosurgery. The high doses used for radiosurgery increase the demand for exact target localization to maximize the curative effect and minimize the dose to the surrounding healthy tissue. Especially soft-tissue targets moving with respiration, e.g. in the liver, are difficult to treat with high precision. The first radiosurgery system that implemented real-time target tracking for motion compensation was the robotic CyberKnife (Accuray Inc., Sunnyvale, CA). By using a correlation model between a continuous external breathing curve monitored via surface LEDs and discrete internal positions acquired via stereo X-ray (kV) imaging, the most probable target position can be approximated. Abdominal targets require the implantation of radiopaque gold markers around the tumour for X-ray visibility. This method allows for real-time motion compensation and beam guidance. Yet it does not offer any information about the actual target position, since the model yields only the most probable position based on surrogate markers.
In order to overcome these drawbacks, our aim was to include an image guidance system that is real-time capable without using ionizing radiation – ultrasound. With fast four-dimensional (4D) systems using 2D matrix array transducers now becoming widely available, direct target localization becomes possible. Yet, the integration of ultrasound into an existing radiosurgery platform requires two localization steps. First, the target needs to be localized inside the ultrasound volume. Additionally, the probe has to be detected to transform the target position into treatment coordinates.
In this study, we demonstrate that customized marker geometries can be localized with high accuracy using the CyberKnife’s integrated X-ray system. We describe a geometric calibration method of the kV-imaging system that considerably increases the tracking volume by making use of the full X-ray detector area and allows for localization of ultrasound probes with six degrees of freedom.
2 Material and methods
The real-time motion data provided by an ultrasound station can be integrated into the current tracking system using a coordinate transformation to treatment room coordinates (see Figure 1). To determine the transformation matrix CKTUS from CyberKnife to ultrasound coordinates, the probe must be localized with six degrees of freedom (6DOF). This requires an additional marker geometry to be attached to the probe since the orientation of the probe itself cannot be determined directly from X-ray projections due to missing landmarks.
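The coordinate transformation described above can be illustrated with a minimal sketch. This is not the clinical implementation; the pose values and the names T_ck_us and p_us are hypothetical, and the example assumes the localized probe pose maps ultrasound coordinates into treatment-room coordinates via a single homogeneous 4×4 matrix.

```python
import numpy as np

def to_homogeneous(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical probe pose: no rotation, probe origin shifted 100 mm along x
# in treatment-room coordinates (stands in for the localized 6DOF marker pose).
T_ck_us = to_homogeneous(np.eye(3), np.array([100.0, 0.0, 0.0]))

# A target found at (10, 20, 30) mm inside the ultrasound volume, homogeneous coordinates.
p_us = np.array([10.0, 20.0, 30.0, 1.0])

# One matrix multiplication maps the target into treatment-room coordinates.
p_ck = T_ck_us @ p_us
```

The same 4×4 convention conveniently chains with further transforms (e.g. probe-to-marker offsets) by plain matrix multiplication.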
We use a CyberKnife G4 system with a stereoscopic kV-imaging system consisting of two X-ray tubes and opposing in-floor flat-panel detectors. Since the on-board software cannot localize multiple marker geometries, we developed a software package for the detection, reconstruction and 6DOF-localization of customized marker geometries using the raw X-ray images from the system. The reconstruction of 3D objects from multiple views is a frequent problem in computer vision commonly solved by determining the geometric setup of the imaging system followed by stereo-triangulation.
2.1 Stereo X-ray calibration
Generally, an optical imaging system is approximated by an extended pinhole camera model containing intrinsic and extrinsic camera parameters. The product of the intrinsic matrix K, containing the focal length (fx, fy), the offset of the image centre (cx, cy) and the pixel scaling factors (sx, sy), and the extrinsic matrix [R | t], describing the rotation and translation of the camera, is called homography or projection matrix P,

P = K [R | t],

and defines the camera’s imaging behaviour. Matrix multiplication will project a point P̃W = (XW, YW, ZW, 1)T in world coordinates onto the image plane, P̃ = (x, y, 1)T.
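As a concrete illustration of the pinhole model, the following sketch assembles a projection matrix from intrinsic and extrinsic parts and projects a world point to pixel coordinates. The numeric focal lengths and image centre are arbitrary example values, not the calibrated CyberKnife parameters.

```python
import numpy as np

# Intrinsic matrix K (example values): focal lengths fx, fy and image centre cx, cy in pixels.
fx, fy, cx, cy = 1000.0, 1000.0, 512.0, 512.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Extrinsic matrix [R | t]: here the camera sits at the world origin with no rotation.
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])

# Projection matrix P = K [R | t].
P = K @ Rt

# Project a homogeneous world point (XW, YW, ZW, 1) and dehomogenize to (x, y, 1).
X_w = np.array([0.1, 0.2, 2.0, 1.0])
x = P @ X_w
x = x / x[2]
```

Dividing by the third component is the dehomogenization step that turns the projective result into actual pixel coordinates.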
A well-established solution to the camera calibration problem is the pattern-based method by Zhang. Several images showing a chessboard with known dimensions in different spatial orientations are acquired. By extracting point correspondences, a system of nonlinear equations can be constructed. It is solved for the projection matrix P by minimizing the reprojection error, defined as the mean difference between the measured image coordinates and a simulated projection of the chessboard model using the current projection matrix. For stereo calibration, camera A is chosen as the reference coordinate system. The transformation R, t from projection centre PCA to PCB,

PCB = R · PCA + t,

can be estimated by stereo calibration. The given X-ray stereo setup can be described with this fundamental camera model. Yet, the X-ray “cameras” present a unique geometry which differs from that of optical cameras (see Figure 2). The location of the image plane behind the projection centre, the tilt of approximately 45 degrees towards the X-ray beam and the long focal length, corresponding to the height of the X-ray tube in the room, challenge standard calibration procedures.
Algorithms from the open-source library OpenCV were adapted for X-ray camera calibration, and a special calibration phantom was designed, containing a regular grid of 7 × 6 lead markers (2 mm diameter) fixed to a plastic board and representing the corners of a chessboard pattern (see Figure 3). The calibration procedure was evaluated based on the reprojection errors. Furthermore, the resulting image resolution and tracking volume size were compared to those of the images processed by the CyberKnife software.
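The reprojection-error criterion used to evaluate the calibration can be sketched as follows. The 10 mm marker spacing, the 2000 mm depth and the toy projection matrix are illustrative assumptions, not the phantom's actual dimensions or the calibrated parameters.

```python
import numpy as np

# A 7 x 6 grid of lead markers standing in for chessboard corners
# (homogeneous coordinates; spacing and depth are assumed example values).
nx, ny, spacing, depth = 7, 6, 10.0, 2000.0
grid = np.array([[i * spacing, j * spacing, depth, 1.0]
                 for j in range(ny) for i in range(nx)])

def reprojection_error(P, object_pts, measured_px):
    """Mean Euclidean distance (pixels) between measured image points and the
    phantom model projected through the current estimate of P (3x4)."""
    proj = (P @ object_pts.T).T
    proj = proj[:, :2] / proj[:, 2:3]   # dehomogenize
    return float(np.mean(np.linalg.norm(proj - measured_px, axis=1)))

# With a toy projection matrix and noise-free "measurements", the error vanishes;
# real calibration iteratively adjusts P to minimize this quantity.
P = np.array([[1000.0, 0.0, 512.0, 0.0],
              [0.0, 1000.0, 512.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
measured = (P @ grid.T).T
measured = measured[:, :2] / measured[:, 2:3]
err = reprojection_error(P, grid, measured)
```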
2.2 Marker geometry localization
With known camera parameters, it is possible to reconstruct a 3D world point PW from its coordinates pA and pB in the projection images. The complexity of the correspondence problem is reduced to one dimension since the camera setup leads to co-planar, rectified image pairs (see Results).
With given pixel coordinates pA and pB corresponding to the same marker point in both X-ray images, its spatial coordinates PM,i can be reconstructed by computing the intersection point of the two projection lines. Each projection line runs from a projection centre through the measured pixel coordinate and has to be converted into a mutual coordinate system (here: X-ray tube A). Due to noise and feature-extraction uncertainty, the lines will generally not intersect exactly. In this case, PM,i is computed as the point with the smallest Euclidean distance to both projection lines.
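The closest-point construction described above can be sketched as a generic midpoint-of-the-common-perpendicular computation for two skew lines (a standard formulation, not necessarily the authors' exact implementation):

```python
import numpy as np

def closest_point_to_lines(p1, d1, p2, d2):
    """Midpoint of the common perpendicular between lines p1 + t*d1 and p2 + s*d2,
    i.e. the point with the smallest Euclidean distance to both projection lines."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b              # zero only for parallel lines
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = p1 + t * d1                   # closest point on line 1
    q2 = p2 + s * d2                   # closest point on line 2
    return 0.5 * (q1 + q2)

# Two skew lines: the x-axis and a line parallel to y, offset 1 unit in z.
# Their common perpendicular runs along z, so the midpoint lies at z = 0.5.
P_M = closest_point_to_lines(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                             np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0]))
```

When the two lines do intersect, the construction degenerates gracefully: both closest points coincide and the midpoint is the intersection itself.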
To localize the ultrasound probe in the system’s X-ray reference frame, the 6DOF orientation of the attached marker geometry is determined from the reconstructed 3D coordinates of the single markers, which form a point cloud, using singular value decomposition. Two different X-ray marker geometries with 20 mm and 50 mm base lengths were designed and attached to the ultrasound transducer (see Figure 3).
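An SVD-based least-squares fit of this kind can be sketched as a Kabsch-style alignment of the known marker model to the reconstructed point cloud. The four marker coordinates and the test pose below are hypothetical, chosen only to mimic a small rigid geometry:

```python
import numpy as np

def fit_rigid_transform(model, measured):
    """Least-squares 6DOF fit of R (3x3) and t (3,) mapping the marker model
    onto the reconstructed point cloud, via SVD of the cross-covariance matrix."""
    cm, cs = model.mean(axis=0), measured.mean(axis=0)
    H = (model - cm).T @ (measured - cs)       # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cs - R @ cm
    return R, t

# Hypothetical marker model with 20 mm base length (four non-coplanar points).
model = np.array([[0.0, 0.0, 0.0], [20.0, 0.0, 0.0],
                  [0.0, 20.0, 0.0], [0.0, 0.0, 20.0]])

# Simulate a reconstructed point cloud: rotate 30 degrees about z, then translate.
ang = np.deg2rad(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 12.0])
cloud = model @ R_true.T + t_true

R_est, t_est = fit_rigid_transform(model, cloud)
```

With noise-free input the fit recovers the pose exactly; with noisy reconstructions it returns the least-squares optimum, which is why non-coplanar, well-separated markers improve the rotational conditioning.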
In order to evaluate the developed localization method, the marker geometries were mounted on a 6-axis industrial robot (Viper s850, Adept Technology Inc., Livermore, CA) with a manufacturer-reported precision of 0.03 mm. Image pairs at 150 random positions per marker geometry were acquired within the tracking volume. The tracking accuracy of our method and of the CyberKnife’s on-board tracking software was evaluated by comparing the calculated marker locations to the known robot positions.
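The evaluation metric, the mean translational error over the 150 test poses, can be expressed compactly. The simulated positions and the 0.2 mm noise level below are purely illustrative, not the measured data:

```python
import numpy as np

# Simulated ground-truth robot positions (mm) for 150 random test poses and
# localized positions perturbed by ~0.2 mm isotropic noise (illustrative values).
rng = np.random.default_rng(0)
true_pos = rng.uniform(-100.0, 100.0, size=(150, 3))
est_pos = true_pos + rng.normal(0.0, 0.2, size=(150, 3))

# Mean translational error: average Euclidean distance between estimate and ground truth.
errors = np.linalg.norm(est_pos - true_pos, axis=1)
mean_error = float(errors.mean())
```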
3 Results

The mean reprojection error for the geometric camera calibration is 0.09 mm and 0.12 mm for X-ray tube A and B, respectively. The optimized focal lengths are 2.66 m (A) and 2.68 m (B). The rotation matrix R between the projection centres A and B nearly equals the identity matrix, i.e. the camera geometry can be considered rectified.
The developed calibration method makes use of the raw image data while the on-board CyberKnife software pre-processes the images to mimic an image plane with 512 × 512 pixels perpendicular to the X-ray beam before tracking. By using the full detector information (1024 × 1024 pixels), the newly developed method increases the tracking resolution by 28–32% horizontally and almost 80% vertically (see Figure 4). Additionally, the tracking volume is more than doubled from ∼ 3800 cm3 with the on-board software to more than 8000 cm3 with our extended tracking framework.
Using the unprocessed data for tracking and the 20 mm marker leads to a mean translational error of 0.22 mm compared with 0.20 mm for the on-board localization (see Table 1). The translational accuracy for the larger marker is slightly lower with 0.37 mm for the CyberKnife software and 0.38 mm for the extended method. Moreover, the rotational error is reduced from 0.21° to 0.05° and from 0.27° to 0.08° using the developed technique for the 50 mm and 20 mm markers, respectively.
4 Discussion and conclusion
This study demonstrates that customized marker geometries can be localized with high accuracy using the raw image data of the CyberKnife’s integrated X-ray system. Our extended tracking framework is based on the geometric calibration of a stereoscopic X-ray setup with modified camera calibration tools. The low reprojection error and the convergence behaviour suggest that the calibration procedure was successful while the algorithms have a high generalizability and could likely be modified for different camera setups.
The success of the geometric calibration is further validated by the marker localization results, which are well in accordance with the values calculated by the CyberKnife and with literature values. Moreover, the rotational accuracy could be increased by a factor of four, staying below 0.1°, which is especially important when the marker position is used for coordinate transformation of a distant target location in ultrasound.
One of the major advantages of the extended tracking framework is the ability to increase the tracking resolution and volume size substantially without changing the existing X-ray hardware. The CyberKnife’s internal image processing reduces the raw images to a smaller subimage with decreased resolution before localizing the targets. This was most likely implemented to imitate earlier versions of the system (G1–G3), where the X-ray detectors were located on stands above the ground, perpendicular to the X-ray beam. Our calibration-based approach reflects the current setup, incorporating the full detector area and image content. This leads to an increased tracking resolution, also noticeable in the vertical translational accuracy. Simultaneously, the tracking volume was more than doubled, which increases the range of detectable marker locations and makes full use of the applied X-ray dose. A more efficient use of imaging dose is of direct benefit for the patient and is in accordance with the central ALARA (As Low As Reasonably Achievable) principle in radiation protection.
While this study only covered the first localization step, we have already demonstrated the ability of the 4D ultrasound system to track 3D structures in real-time with high accuracy. The combination of both steps, i.e. probe localization in X-ray images and target detection in ultrasound volumes, has recently been tested in a phantom experiment. In this pilot study, the ultrasound signal was used as the continuous signal component of the CyberKnife’s correlation model, substituting the superficial LED markers, and achieved results comparable to the conventional model.
In summary, we have shown that geometric calibration of a stereo X-ray system leads to an improved tracking framework with similar accuracy, a higher resolution and a larger field of view compared to the current on-board solution. The extended tracking method allows for accurate probe localization, thus paving the way for integration of real-time 4D ultrasound into existing radiosurgery systems.
Research funding: The authors state no funding was involved. Conflict of interest: The authors state no conflict of interest. Informed consent: Informed consent is not applicable. Ethical approval: The conducted research is not related to either human or animal use.
Shirato H, Seppenwoolde Y, Kitamura K, Onimura R, Shimizu S. Intrafractional tumor motion: lung and liver. Semin Radiat Oncol. 2004;14:10–8.
Schweikard A, Glosser G, Bodduluri M, Murphy MJ, Adler JR. Robotic motion compensation for respiratory movement during radiosurgery. Comput Aided Surg. 2000;5:263–77.
Bell MA, Byram BC, Harris EJ, Evans PM, Bamber JC. Liver tracking with a high volume rate 4D ultrasound scanner and a 2D matrix array probe. Phys Med Biol. 2012;57:1359–74.
Zhang Z. A flexible new technique for camera calibration. IEEE T Pattern Anal. 2000;22:1330–34.
Bradski G, Kaehler A. Learning OpenCV, 1st ed. Köln: O’Reilly Media, Inc.; 2008.
Hartley R, Zisserman A. Multiple view geometry in computer vision, 2nd ed. Cambridge: Cambridge University Press; 2004.
Horn BKP. Closed-form solution of absolute orientation using unit quaternions. J Opt Soc Am A. 1987;4:629–42.
Fu D, Kuduvalli G. A fast, accurate, and automatic 2D-3D image registration for image-guided cranial radiosurgery. Med Phys. 2008;35:2180–94.
Bruder R, Ernst F, Schläfer A, et al. A framework for real-time target tracking in IGRT using three-dimensional ultrasound. Proc 25th Int Congress on Comp Assist Rad Surg (CARS). 2011;306–307.
Blanck O, Jauer P, Ernst F, et al. Pilot phantom study on ultrasound-guided robot-assisted radiosurgery [in German]. 44. Jahrestagung der DGMP. 2013;122–23.
Published Online: 2016-09-30
Published in Print: 2016-09-01
Citation Information: Current Directions in Biomedical Engineering, Volume 2, Issue 1, Pages 409–413, ISSN (Online) 2364-5504, DOI: https://doi.org/10.1515/cdbme-2016-0091.
©2016 Svenja Ipsen et al., licensee De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (CC BY-NC-ND 4.0).