Publication Date
2002
Abstract
This paper is a preliminary account of current work on a visual system that learns to aid in robotic grasping and manipulation tasks. Localized features of the visual scene are learned that correlate reliably with the orientation of a dextrous robotic hand during haptically guided grasps. On the basis of these features, hand orientations are recommended for future grasping operations. The learning process is instance-based, on-line, and incremental, and the interaction between visual and haptic systems is loosely anthropomorphic. It is conjectured that critical spatial information can be learned on the basis of features of visual appearance, without explicit geometric representations or planning.
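The instance-based, on-line scheme described above can be sketched roughly as follows. This is an illustrative minimal sketch only, not the paper's actual method: it assumes a hypothetical `OrientationMemory` class that stores (feature, orientation) instances from haptically guided grasps and recommends an orientation for a new visual feature by nearest-neighbour lookup.

```python
import math

class OrientationMemory:
    """Illustrative instance-based memory (not the paper's implementation):
    stores (feature, orientation) pairs observed during haptically guided
    grasps and recommends an orientation for a new feature vector by
    nearest-neighbour lookup."""

    def __init__(self):
        self.instances = []  # list of (feature_vector, orientation) pairs

    def observe(self, feature, orientation):
        # On-line, incremental update: each successful grasp adds one instance.
        self.instances.append((list(feature), float(orientation)))

    def recommend(self, feature):
        # Recommend the orientation stored with the most similar feature.
        if not self.instances:
            return None
        nearest = min(self.instances,
                      key=lambda inst: math.dist(inst[0], feature))
        return nearest[1]

# Usage: observe two grasps, then query with a novel visual feature.
memory = OrientationMemory()
memory.observe([0.0, 1.0], 0.5)   # feature seen with hand orientation 0.5 rad
memory.observe([1.0, 0.0], 1.2)   # feature seen with hand orientation 1.2 rad
print(memory.recommend([0.9, 0.1]))  # closest to the second instance
```

Nearest-neighbour lookup is one natural reading of "instance-based"; the actual system's feature representation and matching criterion are described in the paper itself.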
Recommended Citation
Piater, Justus H., "Learning Visual Features to Predict Hand Orientations" (2002). Computer Science Department Faculty Publication Series. 148.
Retrieved from https://scholarworks.umass.edu/cs_faculty_pubs/148
Comments
This paper was harvested from CiteSeer