
Learning Realistic Expressions for Humanoid Face Robots

Yongji Fu, et al.

ICRA 2026 (under review), 2025

Abstract

Humanoid face robots need expressions that look natural to a human observer while remaining physically executable on a motor-driven face with bounded actuation. This work learns a mapping from target facial signals to the robot’s low-level actuator commands that reproduces human-like micro-dynamics — not just static key poses — and stays stable under the hardware’s mechanical limits. The pipeline covers data collection, retargeting, and a learned controller that balances visual fidelity against physical feasibility.

Motivation

Humanoid face platforms tend to fail in one of two ways: either the motion looks mechanical (physically valid but visually dead), or the controller, chasing visual realism, drives the hardware outside its safe operating envelope. We want expressions that both look human and run on real motors.

Approach

  • Retargeting from human reference. Target facial signals are mapped into the robot’s actuator space under explicit constraints on torque, range, and coupling between adjacent degrees of freedom (see the retargeting sketch after this list).
  • Learned controller. A controller predicts actuator trajectories that reproduce both the key pose and the micro-dynamics of a target expression; it is trained with objectives that jointly score perceptual fidelity and physical feasibility (see the objective sketch after this list).
  • Closed-loop evaluation. The model is deployed on the physical face hardware and evaluated on both objective motion metrics and subjective human judgment.
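
The page does not give implementation details for the retargeting step; the following is a minimal sketch, assuming a linear map from facial signals (e.g. blendshape weights) into actuator space followed by a projection onto range, coupling, and rate limits. All names and limit values (retarget_to_actuators, RANGE_MIN, MAX_ADJACENT_DIFF, MAX_STEP) are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical limits for a 4-actuator example; real values depend on the hardware.
RANGE_MIN = np.array([-1.0, -1.0, -0.5, -0.5])   # per-actuator lower position limits
RANGE_MAX = np.array([1.0, 1.0, 0.5, 0.5])       # per-actuator upper position limits
MAX_ADJACENT_DIFF = 0.4                          # coupling bound between adjacent actuators
MAX_STEP = 0.05                                  # per-tick command change, a crude torque proxy

def retarget_to_actuators(face_signal, mapping, q_prev):
    """Map a target facial signal to actuator commands, then project the result
    onto the feasible set defined by range, coupling, and rate limits."""
    q = mapping @ face_signal                    # linear retargeting into actuator space
    q = np.clip(q, RANGE_MIN, RANGE_MAX)         # enforce per-actuator range limits
    for i in range(1, len(q)):                   # couple adjacent degrees of freedom
        q[i] = np.clip(q[i], q[i - 1] - MAX_ADJACENT_DIFF, q[i - 1] + MAX_ADJACENT_DIFF)
    return q_prev + np.clip(q - q_prev, -MAX_STEP, MAX_STEP)   # bound per-step change

# Usage: retarget an 8-dimensional facial signal onto 4 actuators from a rest pose.
rng = np.random.default_rng(0)
command = retarget_to_actuators(rng.uniform(size=8), 0.1 * rng.normal(size=(4, 8)), np.zeros(4))
```

Projecting every command onto the feasible set this way keeps whatever the controller outputs executable on the motors, at the cost of possibly distorting the expression near the limits.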
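Likewise, the training objective is only described at a high level ("jointly score perceptual fidelity and physical feasibility"), so the loss below is a hedged sketch of one plausible form: a fidelity term on facial-feature trajectories (pose plus frame-to-frame dynamics) and a feasibility term penalizing range violations and abrupt commands. The forward_model argument, the weights, and all names are assumptions for illustration, not the authors' actual loss.

```python
import torch
import torch.nn.functional as F

def expression_loss(pred_traj, target_feats, forward_model, q_min, q_max,
                    w_fid=1.0, w_feas=0.1):
    """Illustrative joint objective balancing visual fidelity and physical feasibility.

    pred_traj:     (T, A) predicted actuator trajectory
    target_feats:  (T, D) target facial features (e.g. landmarks or blendshapes)
    forward_model: differentiable map from actuator commands to facial features
    """
    # Fidelity: track the target features and their micro-dynamics, not just the static pose.
    pred_feats = forward_model(pred_traj)
    pose_err = F.mse_loss(pred_feats, target_feats)
    dyn_err = F.mse_loss(pred_feats[1:] - pred_feats[:-1],
                         target_feats[1:] - target_feats[:-1])

    # Feasibility: penalize commands outside actuator ranges and abrupt steps
    # that a real motor could not follow safely.
    range_violation = (F.relu(pred_traj - q_max) + F.relu(q_min - pred_traj)).mean()
    smoothness = (pred_traj[1:] - pred_traj[:-1]).pow(2).mean()

    return w_fid * (pose_err + dyn_err) + w_feas * (range_violation + smoothness)
```

Weighting the two terms makes the trade-off from the abstract explicit: raising w_feas yields safer, flatter motion, while lowering it recovers more of the human micro-dynamics at the risk of pushing against actuator limits.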

Video