Subsecond 3D Mesh Generation for Robot Manipulation

Qian Wang, Omar Abdellall*, Tony Gao*, Xiatao Sun, Daniel Rakita
Yale University
*Equal contribution

Overview

Fig. 1: Our system for sub-second 3D mesh generation from RGB-D input. The pipeline combines three stages: (1) open-vocabulary segmentation using Florence-2 and SAM2, with depth enhancement via Depth Anything v2 (0.2s); (2) accelerated mesh generation using FlashVDM-distilled Hunyuan3D 2.0 (0.5s); and (3) object registration via RANSAC and ICP to align the generated mesh with the observed point cloud (0.15s). The 0.85s total runtime marks a critical step toward real-time robotic applications.

Abstract

3D meshes are a fundamental representation widely used in computer science and engineering. In robotics, they are particularly valuable because they capture objects in a form that aligns directly with how robots interact with the physical world, enabling core capabilities such as predicting stable grasps, detecting collisions, and simulating dynamics. Although automatic 3D mesh generation methods have shown promising progress in recent years, potentially offering a path toward real-time robot perception, two critical challenges remain. First, generating high-fidelity meshes is prohibitively slow for real-time use, often requiring tens of seconds per object. Second, mesh generation by itself is insufficient. In robotics, a mesh must be contextually grounded, i.e., correctly segmented from the scene and registered with the proper scale and pose. Moreover, unless these contextual grounding steps remain efficient, they simply introduce new bottlenecks. In this work, we introduce an end-to-end system that addresses these challenges, producing a high-quality, contextually grounded 3D mesh from a single RGB-D image in under one second. Our pipeline integrates open-vocabulary object segmentation, accelerated diffusion-based mesh generation, and robust point cloud registration, each optimized for both speed and accuracy. We demonstrate its effectiveness in a real-world manipulation task, showing that it enables meshes to be used as a practical, on-demand representation for robotic perception and planning.
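To make the registration stage concrete: the pipeline's final step aligns the generated mesh with the observed point cloud using RANSAC followed by ICP refinement (Fig. 1). The sketch below is a minimal point-to-point ICP in NumPy, shown only to illustrate the idea; the function names and the brute-force nearest-neighbor search are illustrative and do not reflect the system's actual implementation, which would use an optimized library on real sensor data.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping paired points src -> dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # correct an improper (reflection) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=30, tol=1e-7):
    """Point-to-point ICP: alternate nearest-neighbor matching and rigid refitting."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # Brute-force nearest neighbors (fine for small illustrative clouds only)
        d = np.linalg.norm(cur[:, None] - dst[None], axis=2)
        nn = d.argmin(axis=1)
        err = d[np.arange(len(cur)), nn].mean()
        R, t = best_fit_transform(cur, dst[nn])
        cur = cur @ R.T + t
        # Compose the incremental transform into the running total
        R_total, t_total = R @ R_total, R @ t_total + t
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

In practice the RANSAC stage supplies the coarse initial alignment that ICP needs, since plain ICP only converges to the correct pose from a nearby starting guess.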

Video

Qualitative Results

Fig. 2: Qualitative Comparison of Generated Meshes and Registration Results. Our method (left) achieves high geometric quality nearly identical to the slow H3D baseline (middle), while the fast SF3D baseline (right) produces significant artifacts.

BibTeX

@article{wang2025subsecond,
  title={Subsecond 3D Mesh Generation for Robot Manipulation},
  author={Wang, Qian and Abdellall, Omar and Gao, Tony and Sun, Xiatao and Rakita, Daniel},
  journal={arXiv preprint arXiv:2512.24428},
  year={2025}
}