Abstract
Current sim-to-real methods process sensory data uniformly, which is computationally inefficient and hampers sim-to-real transfer, as policies tend to overfit to scenes rather than learn robust features. Drawing inspiration from the human selective gaze mechanism, we present a novel method, informed point cloud sampling, to address these issues in reinforcement learning with point clouds. Our method can be applied within a Teacher-Student framework to prioritize task-relevant regions. By incorporating an auxiliary distance estimation head during training, our system identifies object centers by combining distance estimates with the current end-effector position. These estimates can be further exploited to generate object-centric observations, removing irrelevant information and increasing robustness across settings. We apply the proposed method to robotic grasping in the real world. Experimental results demonstrate that our method matches baseline performance while using a significantly reduced point cloud density, improving computational efficiency and yielding robust sim-to-real transfer. Its effectiveness is validated through comprehensive simulation and real-world experiments, showing promise for robust robotic grasping.
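As a rough illustration of how the auxiliary head can be used at inference time, the sketch below combines an assumed end-effector-to-object offset prediction with the measured end-effector position to estimate the object center, then crops an object-centric observation around it. All function names, the offset parameterization, and the crop radius are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def estimate_object_center(ee_pos, pred_offset):
    # Assumption: the auxiliary head regresses a 3D offset from the
    # end-effector to the object. Adding it to the measured end-effector
    # position yields an estimate of the object center.
    return np.asarray(ee_pos) + np.asarray(pred_offset)

def object_centric_crop(points, center, radius=0.15):
    # Keep only points within `radius` metres of the estimated center,
    # discarding scene content that is irrelevant to the grasp.
    mask = np.linalg.norm(points - center, axis=1) <= radius
    return points[mask]
```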
Method
Uniform Sampling (left) vs. Object Tracking with Auxiliary Head and Informed Sampling (right)
As a visual example, we show the common uniform sampling approach alongside our informed sampling method on a point cloud. We use only the object-position estimate produced by the policy to set the object center and sample the point cloud with our proposed method. The policy is able to track the object on the table.
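A minimal sketch of the two sampling schemes compared above, assuming the informed variant weights points by a Gaussian falloff around the estimated object center; the weighting function, `sigma`, and all names are assumptions for illustration, not the paper's exact sampling distribution.

```python
import numpy as np

def uniform_sample(points, n_samples, rng=None):
    # Baseline: every point in the cloud is equally likely to be kept.
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(points), size=n_samples,
                     replace=len(points) < n_samples)
    return points[idx]

def informed_sample(points, center, n_samples, sigma=0.1, rng=None):
    # Informed variant: bias sampling towards points near the estimated
    # object center, so the downsampled cloud keeps dense coverage of
    # the task-relevant region with far fewer points overall.
    rng = rng or np.random.default_rng()
    dists = np.linalg.norm(points - center, axis=1)
    weights = np.exp(-0.5 * (dists / sigma) ** 2)  # Gaussian falloff (assumed)
    probs = weights / weights.sum()
    idx = rng.choice(len(points), size=n_samples,
                     replace=len(points) < n_samples, p=probs)
    return points[idx]
```

In a tracking loop, `center` would be refreshed at each step from the policy's object estimate, which is what lets the sampled cloud follow the object across the table.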
Quantitative Experiments
To evaluate the proposed methods, we perform an ablation study in simulation to assess our extensions to the baseline. Further, we evaluate the efficiency of the proposed point cloud sampling method. In real-world experiments, we investigate the trained policies in terms of grasping success and robustness to deviations such as different scenes, perturbations, and camera positions.
Grasping Experiments
Grasping results for all twelve objects (successful grasps out of five attempts each). Bold marks the better result where both methods were evaluated; the shared-objects totals cover the nine objects for which Wang et al. report results.\[ \begin{array}{l | c c | c } \textbf{Object} & \textbf{Success (Ours)} & \textbf{Avg. Grasp Time (Ours)} & \textbf{Success (Wang et al.)} \\ \hline Screwdriver & 5/5 & 9.00\,s & N/A\\ Can & \textbf{4/5} & 9.75\,s & 3/5\\ Mug & \textbf{5/5} & 8.20\,s & 4/5\\ Banana & 5/5 & 14.20\,s & N/A \\ Brick & 5/5 & 9.60\,s & 5/5\\ Soup Can & 3/5 & 15.70\,s & 3/5\\ Sugar Box & \textbf{5/5} & 8.60\,s & 4/5\\ Cracker Box & 2/5 & 17.50\,s & \textbf{3/5}\\ Mustard & 4/5 & 12.50\,s & 4/5\\ Ball & 4/5 & 22.25\,s & N/A\\ Bowl & \textbf{5/5} & 9.80\,s & 4/5\\ Bleach Cleanser & 4/5 & 13.50\,s & 4/5\\ \hline Shared Objects & \textbf{37/45} & - & 34/45\\ Success Rate & \textbf{82.2\,\%} & - & 75.6\,\%\\ \hline All Objects & 51/60 & 12.54\,s & -\\ Success Rate & 85.0\,\% & - & - \\ \end{array} \]