Recently, 3D ultrasound has been gaining relevance in many biomedical applications. One main limitation, however, is that typical ultrasound volumes are either very poorly resolved or cover only small areas. We have developed a GPU-accelerated method for live fusion of freehand 3D ultrasound sweeps into one large volume. The method has been implemented in CUDA and is capable of generating an output volume with 0.5 mm resolution in real time while processing more than 45 volumes per second, with more than 300,000 voxels per volume. First experiments indicate that both large structures like a whole forearm and high-resolution volumes of smaller structures like the hand can be combined efficiently. We anticipate that this technology will be helpful in pediatric surgery, where X-ray or CT imaging is not always possible.
In recent years, 3D ultrasound (US) has become a widely available imaging tool, a fact already postulated some time ago. US volumes can be generated in three ways: using a dedicated 3D probe (typically done in cardiac imaging or in prenatal exams), using a sweep of 2D scans and stitching them together, or by using US computed tomography (USCT).
These methods are, however, not always ideal: the volume generated by 3D probes is either relatively small (probes for cardiac imaging) or the resolution (both temporal and spatial) is relatively poor (prenatal exam probes). USCT systems are currently not available commercially and are typically intended for very special applications like breast cancer detection.
In this paper, we present a simple and fast extension to an existing cardiac ultrasound station that can generate large ultrasound volumes from freehand 3D ultrasound sweeps. The main advantages over 2D sweeps are as follows:
– acquisition speed: up to 60 US volumes can be recorded per second
– image quality: we combine the unprocessed raw polar volumes instead of the heavily processed 2D image output
– fusion speed: using GPU acceleration, the volumes can be automatically combined in real time
Fusion of ultrasound volumes has been investigated before, using either tracking and registration or registration combined with orientation tracking, but not with the simplicity and speed of our approach. Literature on automatic bone segmentation from 3D US – an important use case for large ultrasound volumes – also states that typical volume sizes are still too small and that acquisition is too slow.
Ultrasound is a diagnostic tool that, in certain cases, has been shown to be a viable method for diagnosis of fractures in pediatric surgery [1, 7, 14]. Computed tomography (CT) imaging is mostly avoided and X-ray imaging is reduced to the necessary minimum. Consequently, diagnosis often relies on magnetic resonance imaging (MRI), which might not be readily available and often requires sedating the patient. This shows that fast and reliable generation of high-resolution, large-volume ultrasound will be an immensely valuable additional imaging modality. Furthermore, being non-invasive and portable, without any adverse effects and not using ionizing radiation, it is clear that using 3D US will help improve patient care. The moderate cost of US stations in comparison to CT, MRI, cone-beam computed tomography (CBCT) or fluoroscopy devices is also an important element potentially increasing surgeons’ acceptance.
Data was acquired using a GE Vivid7 Dimension station (GE Healthcare) and the 3V 4D transducer. The transducer was equipped with a 3D-printed marker block and three infrared-reflecting spheres to make it detectable by an optical tracking system (Polaris Vicra, Northern Digital, Inc.). An experimental setup is shown in Figure 1.
We previously developed an add-on system to an existing 3D ultrasound machine. The US station was extended to allow exporting the raw acquired volumes over Gigabit Ethernet in real time. Additionally, fast conversion of the exported volumes (in polar coordinates) to Cartesian coordinates was shown using GPU acceleration. Finally, to be able to locate the position of the individual volumes with respect to each other, the transducer was calibrated, both intrinsically and to an optical tracking system. This method returns the position and orientation of any volume in the frame of the tracking marker mounted on the transducer using a homogeneous transformation matrix C.
Given C as a constant transformation of each recorded ultrasound volume, the registration

Ti = M0⁻¹ · Mi · C

can be determined using the optical tracking system as shown in Figure 2, where M0 is the initial transformation of the first transducer positioning and Mi describes the position and orientation of the transducer in the camera frame during acquisition of the i-th volume.
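Given the frame definitions above, one plausible composition of the registration can be sketched as follows (an assumption for illustration; the paper's exact frame convention may differ):

```python
import numpy as np

def registration(M0, Mi, C):
    """Compose the registration T_i = M0^{-1} @ M_i @ C.

    M0, Mi: 4x4 homogeneous transducer poses in the camera frame
    C:      4x4 calibration matrix (volume frame -> marker frame)

    Maps voxel coordinates of volume i into the reference frame
    defined by the first transducer positioning.
    """
    return np.linalg.inv(M0) @ Mi @ C
```

A quick sanity check of this composition: if the transducer has not moved between the first positioning and volume i (Mi = M0), the registration reduces to the calibration C alone.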
The same registration Ti applies to every single voxel of the ultrasound volume recorded at time stamp i, as does the conversion of every volume point from polar coordinates into Cartesian coordinates. This allows massively parallel computation on the GPU. The reconstruction is implemented entirely in CUDA, in contrast to our earlier GLSL implementation, and runs on consumer GPU hardware (NVIDIA GeForce GTX 980). The reconstruction algorithm consists of the following steps:
– Convert the polar coordinates (d, α, β)T of a voxel into Cartesian coordinates (x, y, z)T
– Transform the Cartesian coordinates by Ti and determine the corresponding voxel in the target (large reconstruction) volume
– Accumulate the intensity value from the beam voxel into that target voxel.
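The three steps above can be sketched as a simplified NumPy model; the actual implementation is a CUDA kernel, and the probe's exact angle convention is an assumption here:

```python
import numpy as np

def polar_to_cartesian(d, alpha, beta):
    """Convert one beam sample (d, alpha, beta) to Cartesian (x, y, z).
    The angle convention is probe-specific; this sketch assumes alpha
    tilts the beam laterally and beta elevationally."""
    return np.array([d * np.sin(alpha) * np.cos(beta),
                     d * np.sin(beta),
                     d * np.cos(alpha) * np.cos(beta)])

def accumulate(target, counts, Ti, d, alpha, beta, intensity, voxel_size):
    """Transform one beam voxel by the registration Ti and average its
    intensity into the nearest voxel of the large target volume."""
    p = Ti @ np.append(polar_to_cartesian(d, alpha, beta), 1.0)
    idx = tuple(np.round(p[:3] / voxel_size).astype(int))
    if all(0 <= i < n for i, n in zip(idx, target.shape)):
        counts[idx] += 1
        # running mean keeps intensities in the original value range
        target[idx] += (intensity - target[idx]) / counts[idx]
```

In the CUDA implementation each beam voxel is handled by its own thread; on the GPU the accumulation step additionally needs atomic updates, which this sequential sketch omits.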
For every single beam voxel of a recorded 3D US volume at time stamp i, this reconstruction algorithm runs in an individual GPU thread. With about 300,000 beam points per volume, the advantage of the CUDA implementation over sequential execution on a CPU becomes obvious.
Figure 3 shows a reconstructed volume of a hand. This volume has a resolution of 300 × 1000 × 600 voxels, with an isotropic voxel size of 0.5 mm. It was reconstructed from 814 individual 3D ultrasound volumes within 17.4 s, where every single 3D volume has a size of 185 × 84 × 22 beam points, with a resolution of 0.3 mm in beam direction and an angular resolution of 0.45° × 0.88°.
Figure 4 shows a reconstruction of a full, but clipped, volume of a left forearm. This volume has a resolution of 300 × 1000 × 600 voxels, with an isotropic voxel size of 0.5 mm. It was reconstructed from 1014 individual 3D ultrasound volumes within 21.7 s. Every single 3D volume has a size of 185 × 84 × 22 beam points, with a resolution of 0.3 mm in beam direction and an angular resolution of 0.44° × 0.89°. Visualisation was done using Voreen.
Thus, the reconstruction speed is roughly 46 volumes per second. Given the station’s acquisition speed of at most 30 US volumes per second, reconstruction is sufficiently fast for a real-time application.
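The reported throughput can be checked with quick arithmetic from the numbers given above:

```python
# Reconstruction throughput recomputed from the reported figures
hand_rate    = 814 / 17.4           # hand sweep: ~46.8 volumes/s
forearm_rate = 1014 / 21.7          # forearm sweep: ~46.7 volumes/s
voxel_rate   = hand_rate * 300_000  # ~14 million beam voxels/s
```

Both sweeps independently yield about 46–47 reconstructed volumes per second, consistent with the stated real-time margin over the station's acquisition rate.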
Typically, however, individual high-resolution 3D US volumes are much smaller: just about 58 mm × 52 mm × 26 mm in size. This allows detailed analysis or diagnosis, but only of a small area of a larger organ. Figure 4 shows the fully reconstructed left forearm, clipped at the midsection; for size comparison, one of the 1014 individual input volumes is highlighted in blue.
The reconstruction of a large US volume from a free-hand 3D sweep raises the possibility of reducing ultrasonic imaging artifacts, e.g. acoustic shadows, reverberations or noise (randomly scattered reflections) and speckle. Because acoustic shadows and reverberations depend on the transducer’s position and orientation with respect to the target, these artifacts can be reduced by recording the same structure from different points of view.
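Because overlapping sweep positions observe the same anatomy repeatedly, incremental (running-mean) compounding of per-voxel intensities suppresses such view-dependent artifacts. A minimal sketch with hypothetical intensity values:

```python
# Intensities observed for the same anatomical voxel from four
# different transducer orientations (hypothetical values);
# shadowing and speckle perturb each single view differently
views = [70.0, 90.0, 75.0, 85.0]

# Incremental (running-mean) compounding, suitable when volumes
# arrive one by one during a live sweep
mean, n = 0.0, 0
for v in views:
    n += 1
    mean += (v - mean) / n
# the per-view deviations of up to ±10 average out (mean ≈ 80.0)
```

The running-mean form matches the accumulation used during reconstruction: it needs no second pass over the data, so the compounded volume is available while the sweep is still being acquired.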
Currently, this method depends on recording structures which are not moving. If we want to reconstruct moving targets, e.g. the beating heart or abdominal organs during free breathing, we will have to measure or estimate the motion and compensate for it, e.g. using gating techniques. It is conceivable to use electrocardiography or optical surface markers for this task.
Using the proposed method will, for example, help in diagnosing fractures as well as determining the severity and configuration of bone fragment dislocation. This will support the surgeon in deciding on a therapy that can be tailored even better to the patient’s specific anatomy and condition. The real-time character of the method is of paramount importance since it allows using the method even during surgery. Possible applications are monitoring of bone fragment repositioning and proper placement of material for osteosynthesis. Additionally, being able to exactly visualize the surface and volume of an extremity may aid in monitoring swelling or possible post-operative complications (e.g. compartment syndrome). In the long term, it might also prove useful to determine the mineralization status and the existence of callus, allowing the surgeon to more easily evaluate the firmness and stability of the affected bone and decide about mobilization progress.
Conflict of interest: Authors state no conflict of interest. Informed consent: Informed consent has been obtained from all individuals included in this study. Ethical approval: The research related to human use complied with all the relevant national regulations and institutional policies, was conducted in accordance with the tenets of the Helsinki Declaration, and has been approved by the authors’ institutional review board or equivalent committee.
V. Beltrame, R. Stramare, N. Rebellato, F. Angelini, A. C. Frigo, and L. Rubaltelli. Sonographic evaluation of bone fractures: a reliable alternative in clinical practice? Clinical Imaging, 36(3):203–208, 2012. 10.1016/j.clinimag.2011.08.013.
R. Bruder, F. Ernst, A. Schlaefer, and A. Schweikard. A framework for real-time target tracking in radiosurgery using three-dimensional ultrasound. In Proceedings of the 25th International Congress and Exhibition on Computer Assisted Radiology and Surgery (CARS’11), Berlin, Germany. Published in International Journal of Computer Assisted Radiology and Surgery, 6(S1):S306–S307, 2011.
R. Bruder, F. Griese, F. Ernst, and A. Schweikard. High-accuracy ultrasound target localization for hand-eye calibration between optical tracking systems and three-dimensional ultrasound. In H. Handels, J. Ehrhardt, T. M. Deserno, H.-P. Meinzer, and T. Tolxdorff, editors, Bildverarbeitung für die Medizin 2011, Berlin, Heidelberg. Springer. Published in Informatik aktuell, pages 179–183, 2011. 10.1007/978-3-642-19335-4_38.
R. Bruder, P. Jauer, F. Ernst, L. Richter, and A. Schweikard. Real-time 4D ultrasound visualization with the Voreen framework. In ACM SIGGRAPH 2011 Posters, New York, NY, USA. ACM. Published in SIGGRAPH ’11, page 74:1, 2011. 10.1145/2037715.2037798.
R. Dalvi, I. Hacihaliloglu, and R. Abugharbieh. 3D ultrasound volume stitching using phase symmetry and Harris corner detection for orthopaedic applications. In Medical Imaging 2010: Image Processing, San Diego, CA, USA. Published in Proceedings of SPIE, 7623:762330–762330-8, 2010. 10.1117/12.844608.
E. Dyer, U. Zeeshan Ijaz, R. Housden, R. Prager, A. Gee, and G. Treece. A clinical system for three-dimensional extended-field-of-view ultrasound. The British Journal of Radiology, 85(1018):e919–e924, 2012. 10.1259/bjr/46007369. PMID: 22972979.
K. Eckert, O. Ackermann, B. Schweiger, E. Radeloff, and P. Liedgens. Sonographic diagnosis of metaphyseal forearm fractures in children: A safe and applicable alternative to standard X-rays. Pediatric Emergency Care, 28(9):851–854, 2012. 10.1097/PEC.0b013e318267a73d.
A. Gee, R. Prager, G. Treece, C. Cash, and L. Berman. Processing and visualizing three-dimensional ultrasound data. The British Journal of Radiology, 77(S2):S186–S193, 2004. 10.1259/bjr/80676194. PMID: 15677360.
H. Gemmeke, R. Dapp, T. Hopp, M. Zapf, and N. Ruiter. An improved 3D ultrasound computer tomography system. In 2014 IEEE International Ultrasonics Symposium (IUS), pages 1009–1012, 2014. 10.1109/ULTSYM.2014.0247.
I. Hacihaliloglu, R. Abugharbieh, A. J. Hodgson, R. N. Rohling, and P. Guy. Automatic bone localization and fracture detection from volumetric ultrasound images using 3-D local phase features. Ultrasound in Medicine & Biology, 38(1):128–144, 2012. 10.1016/j.ultrasmedbio.2011.10.009.
S. G. Kachewar and D. S. Kulkarni. Utility of diagnostic ultrasound in evaluating fracture healing. Journal of Clinical and Diagnostic Research, 8(3):179–180, 2014. 10.7860/JCDR/2014/4474.4159.
J. Meyer-Spradow, T. Ropinski, J. Mensmann, and K. H. Hinrichs. Voreen: A rapid-prototyping environment for ray-casting-based volume visualizations. IEEE Computer Graphics and Applications, 29(6):6–13, 2009. 10.1109/MCG.2009.130.
E. Neri, E. Barbi, I. Rabach, C. Zanchi, S. Norbedo, L. Ronfani, V. Guastalla, A. Ventura, and P. Guastalla. Diagnostic accuracy of ultrasonography for hand bony fractures in paediatric patients. Archives of Disease in Childhood, 99(12):1087–1090, 2014. 10.1136/archdischild-2013-305678.
© 2015 by Walter de Gruyter GmbH, Berlin/Boston
This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.