R. Detry, D. Kraft, O. Kroemer, L. Bodenhagen, J. Peters, N. Krüger, J. Piater
June 16, 2011
We address the issue of learning and representing object grasp affordance models. We model grasp affordances with continuous probability density functions (grasp densities) that link object-relative grasp poses to their probability of success. The underlying representation is nonparametric, relying on kernel density estimation to provide a continuous model. Grasp densities are learned and refined through exploration, by letting a robot "play" with an object in a sequence of grasp-and-drop actions: the robot uses visual cues to generate a set of grasp hypotheses, which it then executes, recording the outcome of each. When a sufficient amount of grasp data is available, an importance-sampling algorithm turns it into a grasp density. We evaluate our method in a largely autonomous learning experiment run on three objects of distinct shapes. The experiment shows how learning increases success rates. It also measures the success rate of grasps chosen to maximize the probability of success under reaching constraints.
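The core idea of a kernel-density grasp model can be sketched as follows. This is a minimal, hypothetical illustration in one dimension with Gaussian kernels; the paper's grasp densities are defined over full object-relative 6-DOF poses, and the function names, bandwidth value, and toy data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def grasp_density(samples, successes, bandwidth=0.1):
    """Build a 1-D kernel density estimate from recorded grasp outcomes.

    samples   -- array of object-relative grasp parameters (1-D here for
                 simplicity; real grasp densities live on 6-DOF poses)
    successes -- boolean array: whether each executed grasp succeeded
    bandwidth -- Gaussian kernel width (illustrative value)
    """
    kept = samples[successes]  # retain only the successful grasps

    def density(x):
        # Sum of Gaussian kernels centred on the successful grasp samples.
        diffs = (x[:, None] - kept[None, :]) / bandwidth
        k = np.exp(-0.5 * diffs ** 2) / np.sqrt(2 * np.pi)
        return k.sum(axis=1) / (len(kept) * bandwidth)

    return density

# Toy exploration data: grasps near parameter value 0.5 tend to succeed.
rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, 200)
successes = np.abs(samples - 0.5) < 0.15

density = grasp_density(samples, successes)
x = np.linspace(0.0, 1.0, 101)
d = density(x)
best = x[np.argmax(d)]  # grasp parameter maximizing estimated success density
```

A grasp planner would then draw from (or maximize) this density, possibly subject to reaching constraints that rule out part of the pose space.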