Tight integration of sensing hardware and control is key to mastering manipulation in cluttered, occluded, or dynamic environments. Vision-based tactile sensors, which directly capture high-spatial-resolution images of the contact surface and are synergistic with computer vision and recent image-based deep learning techniques, are a promising variant. This work describes the development of a high-resolution tactile-sensing finger, GelSlim, for robot grasping. The finger, inspired by previous GelSight sensing techniques (Johnson and Adelson 2009), features an integration that is slimmer, more robust, and more homogeneous in output than previous vision-based tactile sensors. We also demonstrate that the following low-level features encoded in the sensor's tactile imprints are useful for various manipulation tasks: 1) contact location and geometry can be used to evaluate grasp stability and thus inform a good regrasp policy; 2) the motion flow of the markers printed on the sensor surface is key both to reconstructing the force distribution applied to the sensor surface and to detecting incipient slip; 3) the small rotation of the object during contact, encoded in the tactile image sequence, is useful for localizing the external contact and plays a key role in insertion tasks.
Contact: Siyuan Dong
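To make the marker-flow feature in item 2 concrete, here is a minimal sketch of how displacement flow between consecutive tactile frames could be computed and summarized into a crude slip cue. It assumes grayscale tactile images and uses OpenCV's Farneback dense optical flow; the function names and the variance-based heuristic are illustrative assumptions, not the method described in the work itself.

import cv2
import numpy as np

def marker_flow(prev_gray, curr_gray):
    """Dense optical flow between two consecutive tactile frames.

    Returns an (H, W, 2) array of per-pixel (dx, dy) displacements,
    which tracks how the printed markers move as contact evolves.
    """
    # Positional args: pyr_scale, levels, winsize, iterations,
    # poly_n, poly_sigma, flags (standard Farneback parameters).
    return cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)

def incipient_slip_score(flow):
    """Heuristic slip cue (an assumption, not the paper's detector).

    A sticking, rigid contact moves markers nearly uniformly; a wide
    spread of displacement magnitudes suggests part of the contact
    patch has started to slip while the rest still sticks.
    """
    mag = np.linalg.norm(flow, axis=2)
    return float(np.std(mag))

# Usage sketch: compare successive frames from the sensor stream and
# flag incipient slip when the spread exceeds a tuned threshold.
prev_frame = np.random.randint(0, 255, (120, 160), dtype=np.uint8)
curr_frame = np.random.randint(0, 255, (120, 160), dtype=np.uint8)
flow = marker_flow(prev_frame, curr_frame)
if incipient_slip_score(flow) > 0.5:  # threshold is hypothetical
    print("possible incipient slip")

The same flow field is also the natural input for force-distribution reconstruction, since marker displacements reflect shear and normal loading on the gel surface; how that mapping is learned or calibrated is specific to the sensor and is not shown here.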