SPIDER
Scalable Physics-Informed DExterous Retargeting
Abstract
Learning dexterous and agile policies for humanoid and dexterous hand control requires large-scale
demonstrations, but collecting robot-specific data is prohibitively expensive.
In contrast, abundant human motion data is readily available from motion capture, videos, and virtual
reality.
Due to the embodiment gap and missing dynamic information like force and torque, these demonstrations
cannot be directly executed on robots.
We propose Scalable Physics-Informed DExterous Retargeting (SPIDER), a
physics-based retargeting
framework that transforms and augments kinematic-only human demonstrations into dynamically feasible robot
trajectories at scale.
Our key insight is that human demonstrations should provide the global task structure and objective, while
large-scale physics-based sampling with curriculum-style virtual contact guidance
refines the trajectories to ensure dynamic feasibility and correct contact sequences.
SPIDER scales across 9 diverse humanoid and dexterous hand embodiments and 6
datasets, improving
success rates by 18% compared to standard sampling while running 10× faster than reinforcement
learning (RL) baselines, and enables the generation of a 2.4M-frame dynamically feasible robot dataset
for policy learning.
By aligning human motion and robot feasibility at scale, SPIDER offers a
general, embodiment-agnostic
foundation for humanoid and dexterous hand control.
As a universal retargeting method, SPIDER works with data of diverse quality,
including single-RGB-camera
video, and can be applied to real-robot deployment as well as other downstream learning methods such as RL,
enabling
efficient closed-loop policy learning.
Framework Overview
Method: SPIDER is a framework for mapping
robot-infeasible human motion to feasible
dexterous robot actions via massive physics-based sampling with virtual contact guidance. After
physics-based retargeting, we can generate large-scale, dynamically feasible datasets that are directly
deployable on robots in the real world.
Pipeline
Pipeline: SPIDER takes in human motion and object
motion with their meshes, and generates
dynamically feasible robot trajectories that can be executed on the robot.
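The sampling step of the pipeline can be sketched as a small MPPI-style loop: perturb a nominal action sequence, roll each candidate through the dynamics, and weight candidates by how well they track the human-derived reference. This is an illustrative toy (single-integrator dynamics, made-up hyperparameters), not the authors' implementation:

```python
import numpy as np

def rollout(x0, actions, step):
    """Roll toy dynamics forward under an action sequence; returns the state trajectory."""
    xs, x = [], x0
    for u in actions:
        x = step(x, u)
        xs.append(x)
    return np.array(xs)

def sample_retarget(x0, reference, step, n_samples=256, sigma=0.1,
                    temperature=0.05, iters=20, seed=0):
    """MPPI-style sampling: perturb a nominal action sequence, score rollouts by
    tracking error against the (human-derived) reference, and take the
    exponentially weighted average of the candidates."""
    rng = np.random.default_rng(seed)
    horizon, dim = len(reference), x0.shape[0]  # toy: action dim == state dim
    nominal = np.zeros((horizon, dim))
    for _ in range(iters):
        noise = rng.normal(0.0, sigma, size=(n_samples, horizon, dim))
        candidates = nominal[None] + noise
        costs = np.array([
            np.sum((rollout(x0, acts, step) - reference) ** 2)
            for acts in candidates
        ])
        weights = np.exp(-(costs - costs.min()) / temperature)
        weights /= weights.sum()
        nominal = np.einsum("n,nhd->hd", weights, candidates)
    return nominal
```

On a toy tracking problem (`step = lambda x, u: x + u` with a linearly increasing reference), the loop converges to actions whose rollout closely follows the reference; the real system would replace the toy dynamics with a physics simulator and add contact-guidance terms to the cost.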
Direct Deployment on Robots
Being dynamically feasible, the generated trajectories can be directly executed on the robot. The rollout data is augmented with domain randomization.
Pick Spoon from Bowl
Play Guitar
Rotate Bulb
Unplug
Pick Cup
Pick Spoon
Rotate Cube
Pick Duck
Pick Lego
Pick Toy
Retargeting for Dexterous Hands
Interactive Visualization
Click on the tabs to view the retargeted trajectories. Switch from
log_time to sim_time to view the generated
motion in simulation time.
Simulation Videos
Allegro Hand
Inspire Hand
Schunk Hand
XHand
Ability Hand - Tea
Ability Hand - Board Wiping
Inspire Hand - Board Lifting
Retargeting for Humanoid Robots
Interactive Visualization
Click on the tabs to view the retargeted trajectories. Switch from
log_time to sim_time to view the generated
motion in simulation time.
Data Augmentation
As a physics-based retargeting method, SPIDER can diversify a single demonstration into multiple feasible trajectories with
new objects and environments.
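One way to realize such augmentation is to re-run the physics-based retargeting of a single demonstration under randomized environments and keep only the variants that remain feasible. The sketch below is hypothetical: the parameter names, ranges, and `retarget_fn` interface are illustrative, not the paper's API:

```python
import numpy as np

def randomize_env(rng):
    """Sample a perturbed environment (illustrative parameters and ranges)."""
    return {
        "object_scale": rng.uniform(0.9, 1.1),
        "friction": rng.uniform(0.5, 1.2),
        "object_pos_offset": rng.uniform(-0.02, 0.02, size=3),
    }

def augment(demo, retarget_fn, n_variants=8, seed=0):
    """Re-run retargeting of one demo under randomized environments, keeping
    only variants that the retargeting step reports as dynamically feasible."""
    rng = np.random.default_rng(seed)
    variants = []
    for _ in range(n_variants):
        env = randomize_env(rng)
        traj, feasible = retarget_fn(demo, env)  # assumed (trajectory, bool) return
        if feasible:
            variants.append((env, traj))
    return variants
```

Filtering on feasibility is what distinguishes this from plain domain randomization: every kept variant is a trajectory the physics actually admits.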
Contact Guidance
Contact guidance is used to ensure that the desired contact sequence is achieved.
With Contact Guidance - Allegro
Without Contact Guidance - Allegro
With Contact Guidance - Constraint G1
Without Contact Guidance - Constraint G1
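The virtual contact guidance above can be sketched as an extra cost term that attracts fingertips to reference contact points, with a curriculum weight that anneals to zero so the virtual force vanishes in the final trajectory. The form below is a minimal illustration under assumed names, not the paper's exact formulation:

```python
import numpy as np

def contact_guidance_cost(fingertips, contact_targets, stage, n_stages=5):
    """Curriculum-style virtual contact cost: penalize squared distance between
    fingertips and reference contact points, scaled by a weight that decays
    linearly over curriculum stages (so late stages rely on real contact only)."""
    weight = max(0.0, 1.0 - stage / n_stages)  # anneal guidance to zero
    dists = np.linalg.norm(fingertips - contact_targets, axis=-1)
    return weight * np.sum(dists ** 2)
```

Early in the curriculum the guidance pulls the hand into the desired contact sequence; by the last stage the weight is zero, so the optimized trajectory is feasible without any virtual forces.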
BibTeX
@misc{pan2025spiderscalablephysicsinformeddexterous,
title={SPIDER: Scalable Physics-Informed Dexterous Retargeting},
author={Chaoyi Pan and Changhao Wang and Haozhi Qi and Zixi Liu and Homanga Bharadhwaj and Akash Sharma and Tingfan Wu and Guanya Shi and Jitendra Malik and Francois Hogan},
year={2025},
eprint={2511.09484},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2511.09484},
}