
Paladyn, Journal of Behavioral Robotics

Editor-in-Chief: Schöner, Gregor

1 Issue per year

Open Access
Online ISSN: 2081-4836

VisGraB: A Benchmark for Vision-Based Grasping

Gert Kootstra / Mila Popović
  • Cognitive Vision Lab, The Mærsk Mc-Kinney Møller Institute, University of Southern Denmark, Campusvej 55, DK-5230 Odense, Denmark
/ Jimmy Alison Jørgensen
  • Robotics Lab, The Mærsk Mc-Kinney Møller Institute, University of Southern Denmark, Campusvej 55, DK-5230 Odense, Denmark
/ Danica Kragic
  • Computer Vision and Active Perception Lab, CSC, Royal Institute of Technology (KTH), Stockholm, Sweden
/ Henrik Gordon Petersen
  • Robotics Lab, The Mærsk Mc-Kinney Møller Institute, University of Southern Denmark, Campusvej 55, DK-5230 Odense, Denmark
/ Norbert Krüger
  • Cognitive Vision Lab, The Mærsk Mc-Kinney Møller Institute, University of Southern Denmark, Campusvej 55, DK-5230 Odense, Denmark
Published Online: 2012-05-17 | DOI: https://doi.org/10.2478/s13230-012-0020-5

Abstract

We present VisGraB, a database and software tool for benchmarking methods for vision-based grasping of unknown objects, i.e., grasping without prior object knowledge. The benchmark combines a real-world and a simulated experimental setup. The database contains stereo images of real scenes with several objects in different configurations. The user provides a method that generates grasps from this real visual input; the grasps are then planned, executed, and evaluated in the provided grasp simulator, which scores them with several grasp-quality measures. This setup makes it possible to execute and evaluate a large number of grasps while still confronting the dynamics of grasp execution and the noise and uncertainty present in real-world images. VisGraB thereby enables a fair comparison among different grasping methods. Furthermore, the user does not need to deal with robot hardware and can focus on the vision methods instead. Benchmark results of our own grasp strategy are included as a baseline.

Keywords: grasping of unknown objects; vision-based grasping; benchmark
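The division of labour described in the abstract — the user supplies a grasp-generation method operating on real stereo data, and the simulator executes and scores the resulting grasp hypotheses — can be sketched as follows. This is purely illustrative: VisGraB's actual interface is defined by its software package and the RobWorkSim simulator, and the names `GraspHypothesis` and `generate_grasps` below are hypothetical.

```python
# Illustrative sketch of a benchmark user's side: produce grasp
# hypotheses from 3-D points reconstructed from the stereo images.
# The data structure and function names are assumptions, not VisGraB's API.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GraspHypothesis:
    position: Tuple[float, float, float]     # tool-centre point (x, y, z), metres
    orientation: Tuple[float, float, float, float]  # hand pose quaternion (w, x, y, z)
    preshape: str                            # e.g. a two- or three-finger preshape

def generate_grasps(points: List[Tuple[float, float, float]]) -> List[GraspHypothesis]:
    """Toy grasp generator: one top-down grasp at the point-cloud centroid.

    A real method would exploit structure (edges, surfaces) extracted from
    the stereo images; here we merely average the reconstructed 3-D points.
    """
    if not points:
        return []
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    # Identity quaternion taken here, by convention, as a straight-down approach.
    return [GraspHypothesis((cx, cy, cz), (1.0, 0.0, 0.0, 0.0), "two-finger")]

# The hypotheses would then be handed to the simulator for execution
# and evaluation under its grasp-quality measures.
grasps = generate_grasps([(0.1, 0.0, 0.05), (0.1, 0.02, 0.07)])
print(len(grasps), grasps[0].preshape)
```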


About the article

Received: 2011-12-10

Accepted: 2012-04-28

Published Online: 2012-05-17

Published in Print: 2012-06-01


Citation Information: Paladyn, Journal of Behavioral Robotics, ISSN (Online) 2081-4836, DOI: https://doi.org/10.2478/s13230-012-0020-5.


© Gert Kootstra et al. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License (CC BY-NC-ND 3.0).

Citing Articles

Crossref-listed publications in which this article is cited:

[1]
Gert Kootstra, Mila Popović, Jimmy Alison Jørgensen, Kamil Kuklinski, Konstantsin Miatliuk, Danica Kragic, and Norbert Krüger
The International Journal of Robotics Research, 2012, Volume 31, Number 10, Page 1190
[2]
Berk Calli, Aaron Walsman, Arjun Singh, Siddhartha Srinivasa, Pieter Abbeel, and Aaron M. Dollar
IEEE Robotics & Automation Magazine, 2015, Volume 22, Number 3, Page 36
[3]
Javier Perez, Jorge Sales, Antonio Penalver, David Fornas, Jose Javier Fernandez, Juan Carlos Garcia, Pedro J. Sanz, Raul Marin, and Mario Prats
IEEE Robotics & Automation Magazine, 2015, Volume 22, Number 3, Page 85
