Hybrid Diffusion Policies with Projective Geometric Algebra for Efficient Robot Manipulation Learning

Xiatao Sun1, Yuxuan Wang2, Shuo Yang2, Yinxing Chen1, Daniel Rakita1
1Yale University, 2University of Pennsylvania
Architecture
Fig. 1: Overview of the hPGA-DP network architecture.

Abstract

Diffusion policies are a powerful paradigm for robot learning, but their training is often inefficient. A key reason is that networks must relearn fundamental spatial concepts, such as translations and rotations, from scratch for every new task. To alleviate this redundancy, we propose embedding geometric inductive biases directly into the network architecture using Projective Geometric Algebra (PGA). PGA provides a unified algebraic framework for representing geometric primitives and transformations, allowing neural networks to reason about spatial structure more effectively. In this paper, we introduce hPGA-DP, a novel hybrid diffusion policy that capitalizes on these benefits. Our architecture leverages the Projective Geometric Algebra Transformer (P-GATr) as a state encoder and action decoder, while employing established U-Net or Transformer-based modules for the core denoising process. Through extensive experiments and ablation studies in both simulated and real-world environments, we demonstrate that hPGA-DP significantly improves task performance and training efficiency. Notably, our hybrid approach achieves substantially faster convergence compared to both standard diffusion policies and architectures that rely solely on P-GATr.

Fusing Projective Geometric Algebra with Diffusion

Hybrid backbone: hPGA-DP wraps diffusion with PGA components. A P-GATr state encoder converts observations (robot links + task objects) into multivector latents; a P-GATr action decoder maps denoised latents back to actions. Both preserve geometric structure and E(3)-equivariance.

Geometric inductive bias via PGA: Points, planes, and rigid motions are all represented as multivectors in 𝔾(3,0,1), so Euclidean transforms become native operations rather than patterns the network must relearn. This encodes spatial relations compactly and consistently across tasks.
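To make the "transforms are native" claim concrete: the motors (rigid motions) of 𝔾(3,0,1)'s even subalgebra are isomorphic to unit dual quaternions, so a minimal sketch can build a motor and apply it to a point via that correspondence. The function names (`motor`, `apply_motor`) are our own illustrative choices, not the paper's API; a real P-GATr layer operates on full 16-component multivectors.

```python
import numpy as np

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def motor(axis, angle, t):
    """Motor = rotation about `axis` by `angle`, followed by translation `t`,
    encoded as a dual quaternion (q_r, q_d) with q_d = 0.5 * t * q_r."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    qr = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    qd = 0.5 * qmul(np.concatenate([[0.0], np.asarray(t, float)]), qr)
    return qr, qd

def apply_motor(m, p):
    """Sandwich-style action of a motor on a 3D point."""
    qr, qd = m
    pq = np.concatenate([[0.0], np.asarray(p, float)])
    rotated = qmul(qmul(qr, pq), qconj(qr))[1:]   # rotate the point
    trans = 2.0 * qmul(qd, qconj(qr))[1:]         # recover the translation
    return rotated + trans
```

For example, a 90° rotation about z followed by a unit lift along z sends (1, 0, 0) to (0, 1, 1); because the whole motion is one algebraic object, composing motions is just multiplying motors.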

Standard denoiser in the middle: Between encoder and decoder, a conventional U-Net or Transformer handles the diffusion denoising steps. This keeps the proven generative power of standard backbones while operating on geometry-informed latents.

Training recipe: Noise is added to action latents; the denoiser predicts it conditioned on encoded observations. The decoder is supervised only on later denoising steps (loss masking with threshold η) so it learns from well-denoised latents instead of pure noise. Total loss combines noise-prediction MSE and a masked decoder reconstruction term.
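The masked loss in the recipe above can be written in a few lines. This is a sketch under our own assumptions (names and the convention that small timestep indices k mean well-denoised latents are ours); the paper's exact weighting may differ.

```python
import numpy as np

def hybrid_loss(eps_pred, eps_true, a_dec, a_true, k, eta, w=1.0):
    """Noise-prediction MSE plus a decoder reconstruction term that is
    masked to the later (well-denoised) steps, k <= eta.

    eps_pred/eps_true: predicted vs. true noise, shape (batch, latent_dim)
    a_dec/a_true:      decoded vs. ground-truth actions, shape (batch, act_dim)
    k:                 per-sample diffusion timestep indices, shape (batch,)
    eta:               masking threshold; w: decoder-loss weight
    """
    noise_mse = np.mean((eps_pred - eps_true) ** 2)
    mask = (k <= eta).astype(float)                 # 1 only near-clean latents
    dec_mse = np.mean(mask[:, None] * (a_dec - a_true) ** 2)
    return noise_mse + w * dec_mse
```

When every sample in the batch is at an early, noisy step (k > η), the decoder term vanishes and training reduces to standard noise prediction, which is exactly the intent of the mask.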

Evaluations


Fig. 2: Top: simulation tasks in robosuite, with colored 3D bounding boxes indicating task-relevant objects. Bottom left: success rates of diffusion policies with different network backbones across tasks, and mean epoch training time (MET) for each network over all tasks. Bottom right: success-rate curves of state-based policies with U-Net, Transformer, hPGA-U, and hPGA-T backbones over 100 training epochs on the Stack task.

Our experiments show that by "priming" the network with geometric knowledge, hPGA-DP reaches peak success rates in significantly fewer training epochs compared to vanilla Diffusion Policies.


Fig. 3: Top left: the dual-arm system for real-world experiments. Top right: top and bottom row show the block stacking task and drawer interaction task respectively. Bottom: results for real-world experiments. SR: success rate, CT: cumulative training time measured in minutes.

In the real-world tasks, hPGA-DP achieves higher success rates within the same epoch budget than the U-Net and Transformer baselines. Although each of its epochs takes slightly longer, the hybrid reaches target performance in fewer epochs, yielding lower cumulative training time overall.

BibTeX

@article{sun2025hybrid,
  title={Hybrid diffusion policies with projective geometric algebra for efficient robot manipulation learning},
  author={Sun, Xiatao and Wang, Yuxuan and Yang, Shuo and Chen, Yinxing and Rakita, Daniel},
  journal={arXiv preprint arXiv:2507.05695},
  year={2025}
}