We introduce Omnipush, a dataset capturing a wide variety of planar pushing behavior.
The dataset contains 250 pushes for each of 250 objects, all recorded with RGB-D video and high-precision state tracking.
The objects are constructed to explore two key factors that affect pushing, the shape of the object and its mass distribution, which have not been broadly explored in previous datasets and which allow the study of generalization in model learning.
Paper: Omnipush: accurate, diverse, real-world dataset of pushing dynamics with RGBD images. (PDF).
Maria Bauza, Ferran Alet, Yen-Chen Lin, Tomás Lozano-Pérez, Leslie P. Kaelbling, Phillip Isola, and Alberto Rodriguez.
Contact: Maria Bauza (email@example.com)
We have used 250 new objects to record this dataset. In the picture below, you can see some examples of the shapes considered for these objects.
Each object is built by magnetically attaching different sides to a central piece. Additionally, the sides can carry extra weights, which allows us to alter the mass distribution.
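As a purely illustrative sketch of how modular sides translate into many distinct shapes, the snippet below counts side assignments to a four-sided central piece, treating rotations of the same object as identical. The side names and the four-sided layout are assumptions for illustration, not the exact Omnipush construction:

```python
from itertools import product

def count_distinct_shapes(side_types, n_sides=4):
    """Count assignments of side types to a central piece,
    treating rotations of the same object as identical."""
    seen = set()
    for assignment in product(side_types, repeat=n_sides):
        # Canonical form: lexicographically smallest rotation.
        rotations = [assignment[i:] + assignment[:i] for i in range(n_sides)]
        seen.add(min(rotations))
    return len(seen)

# Hypothetical side catalogue; the real dataset's side set may differ.
sides = ["concave", "triangular", "circular", "rectangular"]
print(count_distinct_shapes(sides))  # 70 distinct shapes up to rotation
```

Adding optional weights on each side multiplies this count further, which is how a small catalogue of parts yields hundreds of distinct objects.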
We also provide a rendering script in Python to visualize different shapes: visualization code.
We also provide processing code that shows how to parse the bag files and visualize both the RGB images and the ground-truth poses from state tracking. Below is an example of the videos that can be generated with this code:
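As an illustration of one step such processing involves, the sketch below matches each RGB frame to the nearest tracked pose by timestamp. The data layout (sorted timestamp lists, `(x, y, theta)` pose tuples) is an assumption for this sketch; the actual bag-file parsing in our code may differ:

```python
import bisect

def match_poses(frame_times, pose_times, poses):
    """For each frame timestamp, return the pose with the closest
    tracker timestamp (both timestamp lists assumed sorted ascending)."""
    matched = []
    for t in frame_times:
        i = bisect.bisect_left(pose_times, t)
        # Compare the neighbors on either side of the insertion point.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(pose_times)]
        best = min(candidates, key=lambda j: abs(pose_times[j] - t))
        matched.append(poses[best])
    return matched

# Toy example: tracker runs at 250 Hz, camera at ~30 Hz.
pose_times = [k * 0.004 for k in range(10)]       # 0.000 .. 0.036 s
poses = [(k * 1.0, 0.0, 0.0) for k in range(10)]  # (x, y, theta)
print(match_poses([0.0, 0.033], pose_times, poses))
```

Nearest-timestamp matching is a common way to align a high-rate tracker with a lower-rate camera before overlaying poses on images.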
This dataset allows studying the effect of the mass distribution on the motion of pushed objects.
| Zero: Normal(0,1) | - | 4.25 | .997 | 21.9 mm |
| Upper bound on Bayes error | - | < -2.15 | .165 | 3.6 mm |
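For context on the first row, a baseline that always predicts zero motion with unit-Gaussian uncertainty can be scored by its negative log-likelihood on the observed displacements. The sketch below shows that computation under assumed conventions (per-sample NLL summed over dimensions); the exact units and normalization behind the table's numbers may differ:

```python
import math

def gaussian_nll(observed, mean=0.0, std=1.0):
    """Negative log-likelihood of one sample under an isotropic
    Gaussian, summed over its dimensions."""
    nll = 0.0
    for x in observed:
        nll += 0.5 * math.log(2 * math.pi * std ** 2) + (x - mean) ** 2 / (2 * std ** 2)
    return nll

# Toy displacement (dx, dy, dtheta) for one push; values illustrative.
sample = [0.5, -0.2, 0.1]
print(round(gaussian_nll(sample), 3))
```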
As is well known in the robotics community, collecting accurate robotic data is hard and demanding. Below we detail
some of our experiences and conclusions from collecting the Omnipush dataset:
1. It takes time: collecting the final version of this dataset took 12 hours a day for 2 weeks, i.e., more than 150 hours. Planning as accurately as possible how much time collection will take prevents you from aiming at something infeasible. Be realistic: unexpected things happen. In our case, data collection for each object takes around 15 min, so figuring out how to work efficiently on other things while also collecting data became really important.
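A back-of-the-envelope budget like the one below helps with that planning. The discard rate, daily overhead, and working hours are illustrative assumptions, not measurements from our collection:

```python
def collection_hours(n_objects, minutes_per_object,
                     discard_rate=0.1, daily_overhead_h=1.0, hours_per_day=12):
    """Estimate total collection time: raw pushing time, inflated by the
    fraction of runs expected to be discarded, plus per-day setup overhead."""
    raw_h = n_objects * minutes_per_object / 60
    with_redo_h = raw_h / (1 - discard_rate)
    days = with_redo_h / (hours_per_day - daily_overhead_h)
    return with_redo_h + days * daily_overhead_h

# 250 objects at ~15 min each is already ~62.5 h of pure pushing time,
# before any failures, setup, or calibration are accounted for.
print(round(250 * 15 / 60, 1))
print(round(collection_hours(250, 15), 1))
```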
2. Check everything: to make the system as autonomous as possible we had to continuously check, for instance, that all sensors worked (RGB-D camera, tracking system, robot), that the recording frequency was correct, and that there was enough storage. Collect small amounts of data a few times before collecting a large amount; even subtle errors can force you to recollect everything.
Similarly, make sure to rule out any robot collision or any chance of the object falling outside the workspace, even when the sensors are inaccurate.
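A minimal sketch of the kind of automated sanity checks we mean, with assumed thresholds (this is not our actual monitoring code):

```python
import shutil

def check_frequency(timestamps, expected_hz, tolerance=0.05):
    """Verify the mean rate of a timestamp stream (in seconds) is within
    a fractional `tolerance` of the expected frequency."""
    if len(timestamps) < 2:
        return False
    span = timestamps[-1] - timestamps[0]
    measured_hz = (len(timestamps) - 1) / span
    return abs(measured_hz - expected_hz) / expected_hz <= tolerance

def check_disk(path=".", min_free_gb=10.0):
    """Verify enough free space remains to keep recording."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes / 1e9 >= min_free_gb

# A clean 30 Hz stream passes; a stream actually running at 25 Hz fails.
print(check_frequency([i / 30 for i in range(100)], expected_hz=30))
print(check_frequency([i / 25 for i in range(100)], expected_hz=30))
```

Running checks like these between pushes lets the system halt early instead of silently recording hours of unusable data.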
3. Automate as much as possible: minimizing human intervention matters more than you might expect. Human intervention can introduce unexpected errors, such as setting up the wrong object or moving calibrated cameras. For the Omnipush dataset, we verified that all objects had been properly built by checking the first collected image of each object. We found that 5 objects had been wrongly assembled and had to recollect their data.
4. Process some data beforehand: collecting the whole dataset before using any of the data is a bad idea. Some questions can be answered almost from the beginning: Does the recording frequency match the desired one? How frequent are errors in the data? How many data files do you expect to discard because of errors? Include those answers in your estimates of how long data collection will take.
5. Hardware improvements save time: it is worth thinking carefully about what could go wrong or require human intervention, and finding ways to change the setup to prevent it. In our system, we added a circular spot that the pusher can engage, so the robot can relocate the object to the center of the workspace every time it gets close to the edges. Also, hardware can break; keep replacements on hand.
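As an illustration of that recentering logic, the sketch below flags when a tracked object pose drifts too close to the boundary of a square workspace. The workspace size and margin are made-up values, not our setup's:

```python
def needs_recentering(x, y, half_width=0.2, margin=0.05):
    """Return True when the object is within `margin` of the edge of a
    square workspace of side 2 * half_width centered at the origin."""
    return max(abs(x), abs(y)) > half_width - margin

print(needs_recentering(0.0, 0.0))    # False: object is near the center
print(needs_recentering(0.18, 0.05))  # True: too close to the +x edge
```

When this check fires, the pusher can engage the circular spot and drag the object back to the center before resuming data collection.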