
3D camera-based markerless navigation system for robotic osteotomies

3D-kamerabasiertes markerloses Navigationssystem für robotische Osteotomien
Tim Übelhör, Jonas Gesenhues, Nassim Ayoub, Ali Modabber and Dirk Abel

Abstract

A markerless system for the registration of a bone’s pose is presented which reduces the setup time and the damage to the bone to a minimum. For the registration, a particle filter is implemented which estimates a bone’s pose from depth images. In a phantom study, the pose of 3D-printed bones was estimated at a rate of 90 Hz and with a precision of a few millimeters. The particle filter is stable under partial occlusions and only diverges when the bone is fully occluded. During a cadaver study, the preoperatively planned cutting edges were projected as augmented reality (AR) templates onto the hip bones of five cadavers. By cutting manually along the AR templates, surgeons were able to extract ten transplants in the same time as with conventional osteotomy templates. Using the presented navigation system can save the hours otherwise spent on the construction and production of conventional templates. In conclusion, this work represents one step towards a broader acceptance of robotic osteotomies.

Zusammenfassung

A markerless system for registering a bone’s pose is presented which reduces the setup effort and the damage to the bone to a minimum. For the registration, a particle filter is implemented which can estimate the pose of a bone from depth images. In a phantom study, the pose of 3D-printed iliac crest bones was estimated at a frame rate of 90 Hz and with a precision of a few millimeters. The particle filter is stable under partial occlusions and only diverges once the bone is fully occluded. During a cadaver study, the preoperatively planned cutting edges were projected onto the bones as augmented reality (AR) templates. By cutting manually along the AR templates, surgeons were able to extract ten transplants just as quickly as with conventional osteotomy templates. The use of the presented navigation system can thereby save many hours of construction and production of the conventional templates. In summary, this work represents a step towards a broader acceptance of robotic osteotomies.
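The registration approach summarized above follows the general pattern of a depth-image particle filter. As a rough illustration of that pattern only, and not of the implementation used in this work, the following sketch assumes a hypothetical render_depth(pose) helper that renders a synthetic depth image of the bone model at a candidate pose; the Euler-angle pose parameterization and the noise and sensor parameters are likewise illustrative.

import numpy as np

def depth_log_likelihood(observed_depth, predicted_depth, sigma=0.01):
    """Pixel-wise Gaussian depth measurement model, evaluated in the log domain."""
    # Compare only pixels where both images carry a valid depth value.
    mask = np.isfinite(observed_depth) & np.isfinite(predicted_depth)
    residual = observed_depth[mask] - predicted_depth[mask]
    return -0.5 * np.sum((residual / sigma) ** 2)

def particle_filter_step(particles, observed_depth, render_depth,
                         trans_noise=0.002, rot_noise=0.01):
    """One predict-update-resample cycle over 6-DOF pose particles.

    particles:    (N, 6) array of poses [x, y, z, roll, pitch, yaw].
    render_depth: hypothetical helper mapping a pose to a synthetic
                  depth image of the bone model.
    """
    n = len(particles)
    # Predict: diffuse the particles with Gaussian motion noise.
    particles = particles.copy()
    particles[:, :3] += np.random.normal(0.0, trans_noise, (n, 3))
    particles[:, 3:] += np.random.normal(0.0, rot_noise, (n, 3))
    # Update: weight each particle by how well its rendered depth image
    # matches the observation.
    log_w = np.array([depth_log_likelihood(observed_depth, render_depth(p))
                      for p in particles])
    log_w -= log_w.max()                 # shift for numerical stability
    weights = np.exp(log_w)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx]

Evaluating the per-particle weights in the log domain, as above, is a common way to avoid numerical underflow when many depth pixels contribute to each weight.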

Funding statement: Funded by the Excellence Initiative of the German federal and state governments. Grant Number: OPSF410.

About the authors

Tim Übelhör

M. Sc. Tim Übelhör is a research associate in the biomedical systems group at the Institute of Automatic Control, RWTH Aachen University. His main research focus is robotic perception in medical environments, including the application and combination of probabilistic and learning-based methods.

Jonas Gesenhues

Dr.-Ing. Jonas Gesenhues is head of the biomedical systems group at the Institute of Automatic Control, RWTH Aachen University. Besides creating novel technical solutions for medical applications based on advanced control methods, his research focuses on promoting digitization in healthcare by developing new concepts and strategies to stimulate innovation and to simplify the transfer of new ideas and technologies into products.

Nassim Ayoub

Dr. med. Dr. med. dent. Nassim Ayoub is a senior consultant at the Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen. The implementation of computer-assisted surgery in the field of oral and maxillofacial surgery is his major research focus.

Ali Modabber

Priv.-Doz. Dr. med. Dr. med. dent. Ali Modabber, MBA, FEBOMFS, is deputy head and managing senior consultant in the Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, specializing in oral, maxillofacial, and facial plastic surgery. He is a senior lecturer; his main research topics are computer-assisted and robotic craniofacial surgery, microvascular head and neck reconstruction, and regenerative oro-facial medicine.

Dirk Abel

Univ.-Prof. Dr.-Ing. Dirk Abel is head of the Institute of Automatic Control, RWTH Aachen University. His main fields of activity are model predictive control, robust control, nonlinear control, identification and simulation of dynamic systems, analysis and synthesis of discretely controlled systems, and rapid control prototyping.


Received: 2020-03-03
Accepted: 2020-04-30
Published Online: 2020-09-23
Published in Print: 2020-10-25

© 2020 Walter de Gruyter GmbH, Berlin/Boston
