
Paladyn, Journal of Behavioral Robotics

Editor-in-Chief: Schöner, Gregor


Open Access

Identifying relevant feature-action associations for grasping unmodelled objects

Mikkel Tang Thomsen, Dirk Kraft, and Norbert Krüger

The Maersk Mc-Kinney Moller Institute, Faculty of Engineering, University of Southern Denmark, Niels Bohrs Allé 1, DK-5230 Odense M, Denmark
Published Online: 2015-03-30 | DOI: https://doi.org/10.1515/pjbr-2015-0006


Action affordance learning based on visual sensory information is a crucial problem in the development of cognitive agents. In this paper, we present a method for learning action affordances based on basic visual features, which can vary in their granularity, order of combination, and semantic content. The method is provided with a large and structured set of visual features, motivated by the visual hierarchy in primates, and finds relevant feature-action associations automatically. We apply our method in a simulated environment on three different object sets for the case of grasp affordance learning. When presented with novel objects, we achieve a success probability of 0.90 for box objects, 0.80 for round objects, and up to 0.75 for open objects. In this work, we demonstrate, in particular, the effect of choosing appropriate feature representations: we show a significant performance improvement from increasing the complexity of the perceptual representation. By that, we present important insights into how the design of the feature space influences the actual learning problem.

Keywords: Human Vision; Affordance Learning; Cognitive Robotics
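The core idea of the abstract — associating visual feature observations with grasp outcomes and using the learned associations to predict success on novel objects — can be illustrated with a minimal sketch. This is not the authors' implementation: the class name, the grid-based discretization, and the smoothed probability estimate are all illustrative assumptions standing in for the paper's structured feature hierarchy and association-finding method.

```python
# Illustrative sketch only: associate discretized visual feature vectors
# with grasp outcomes and estimate a success probability for new features.
from collections import defaultdict


class FeatureActionAssociation:
    """Track grasp successes and trials per discretized feature cell."""

    def __init__(self, bin_width=0.25):
        self.bin_width = bin_width
        self.successes = defaultdict(int)  # cell -> number of successful grasps
        self.trials = defaultdict(int)     # cell -> total grasp attempts

    def _cell(self, feature):
        # Discretize a continuous feature vector into a grid cell.
        return tuple(int(x // self.bin_width) for x in feature)

    def record(self, feature, success):
        # Store the outcome of one grasp attempt for this feature observation.
        cell = self._cell(feature)
        self.trials[cell] += 1
        if success:
            self.successes[cell] += 1

    def success_probability(self, feature, prior=0.5, prior_weight=2.0):
        # Smoothed estimate: unseen or sparsely seen cells fall back toward
        # the prior instead of returning 0 or 1 from a single sample.
        cell = self._cell(feature)
        n = self.trials[cell]
        s = self.successes[cell]
        return (s + prior * prior_weight) / (n + prior_weight)


# Usage: train on (feature, outcome) pairs from simulated grasp attempts,
# then query the model for a novel feature observation.
model = FeatureActionAssociation()
for feature, outcome in [((0.1, 0.2), True), ((0.15, 0.22), True), ((0.9, 0.9), False)]:
    model.record(feature, outcome)
p = model.success_probability((0.12, 0.21))  # falls in the same cell as the two successes
```

The smoothing step mirrors the general need, noted in the abstract's evaluation, to generalize from limited simulated experience to novel objects rather than memorizing individual observations.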



About the article

Received: 2014-07-15

Accepted: 2015-01-04

Published Online: 2015-03-30

Citation Information: Paladyn, Journal of Behavioral Robotics, Volume 6, Issue 1, ISSN (Online) 2081-4836, DOI: https://doi.org/10.1515/pjbr-2015-0006.


© 2015 Mikkel Tang Thomsen et al. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License (CC BY-NC-ND 3.0).
