DOI: 10.1177/02783649231193710

Robust grasping across diverse sensor qualities: The GraspNet-1Billion dataset

Hao-Shu Fang, Minghao Gou, Chenxi Wang, Cewu Lu

Robust object grasping in cluttered scenes is vital to all robotic prehensile manipulation. In this paper, we present the GraspNet-1Billion benchmark, which contains richly annotated, real-world cluttered scenes. The benchmark addresses two critical problems for parallel-finger grasping in cluttered scenes: the shortage of real-world training data and the lack of an evaluation benchmark. We first contribute a large-scale grasp pose detection dataset, captured with two depth cameras based on structured-light and time-of-flight technologies. The dataset contains 97,280 RGB-D images with over one billion grasp poses. In total, 190 cluttered scenes are collected, of which 100 form the training set and 90 the test set. We also build a general, user-friendly evaluation system that reports the quality of a predicted grasp pose by analytic computation, so it can evaluate any grasp representation without exhaustively labeling ground truth. We further divide the test set into three difficulty levels to better evaluate algorithms' generalization ability. Our dataset, access API, and evaluation code are publicly available at www.graspnet.net.
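As a concrete sketch of the kind of grasp representation such a benchmark evaluates, a parallel-jaw grasp is commonly parameterized by a 3x3 rotation, a 3D translation, and a jaw opening width. The snippet below is a minimal illustration of that parameterization, not the dataset's official API; the choice of the gripper's local y-axis as the closing direction is an assumption made here for illustration.

```python
import numpy as np

def fingertip_positions(R, t, width):
    """Return the two fingertip centers of a parallel-jaw grasp.

    R     : 3x3 rotation matrix (gripper orientation in the world frame)
    t     : 3-vector (grasp center in the world frame)
    width : jaw opening in meters

    Assumption (illustrative, not necessarily the dataset's convention):
    the jaws close along the gripper's local y-axis.
    """
    R = np.asarray(R, dtype=float)
    t = np.asarray(t, dtype=float)
    closing_dir = R[:, 1]  # local y-axis expressed in the world frame
    left = t + 0.5 * width * closing_dir
    right = t - 0.5 * width * closing_dir
    return left, right

# Example: identity orientation, grasp centered at the origin, 8 cm opening.
left, right = fingertip_positions(np.eye(3), np.zeros(3), 0.08)
```

An analytic evaluator can then score a pose like this directly (e.g., by checking collision and force closure against the scene geometry), which is what lets the benchmark assess arbitrary predicted grasps without a lookup table of labeled ground-truth poses.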
