Manipulation tasks can often be decomposed into sub-tasks and primitive actions, and this decomposition is frequently driven by tactile cues and signatures. We, as humans, are capable of inferring these sub-tasks and producing policies to efficiently and reliably execute them. In this line of work, we are interested in emulating this behavior in robotic systems. More precisely: how do we decompose tasks? How do we infer useful abstractions such as "this door is locked" or "this door is open"? These higher-level encodings of a task provide important cues for what to do. For example, if the door is locked, then the appropriate action is to find a key, irrespective of the underlying state of the robot. We are interested in building mechanisms that autonomously produce these abstractions and policies in the physical world.
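As a toy illustration of this idea (a minimal sketch, not the method developed in this work), the Python snippet below shows how a discrete abstraction inferred from tactile cues could select a sub-policy independently of the robot's full underlying state. All names, signals, and thresholds here are hypothetical.

```python
from enum import Enum, auto

# Hypothetical discrete abstractions inferred from tactile cues.
class DoorState(Enum):
    LOCKED = auto()
    UNLOCKED = auto()
    OPEN = auto()

def infer_abstraction(tactile_signature: dict) -> DoorState:
    """Map a raw tactile signature to a discrete abstraction.

    Placeholder classifier with made-up features; a real system
    would learn this mapping from contact data.
    """
    if tactile_signature["handle_resistance"] > 0.8:
        return DoorState.LOCKED
    if tactile_signature["latch_engaged"]:
        return DoorState.UNLOCKED
    return DoorState.OPEN

# Sub-policies keyed by abstraction: the choice of action depends
# only on the abstract state, not on the robot's full state.
SUB_POLICIES = {
    DoorState.LOCKED: lambda: "find_key",
    DoorState.UNLOCKED: lambda: "turn_handle_and_pull",
    DoorState.OPEN: lambda: "move_through_doorway",
}

def act(tactile_signature: dict) -> str:
    """Infer the abstraction, then dispatch the matching sub-policy."""
    state = infer_abstraction(tactile_signature)
    return SUB_POLICIES[state]()

# High handle resistance -> inferred LOCKED -> "find_key"
print(act({"handle_resistance": 0.95, "latch_engaged": True}))
```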
Contact: Nima Fazeli