
Learning Lifelike Expressions for Humanoid Facial Robots

Yongji Fu, et al.

ICRA 2026 (submitted), under review, 2025

Abstract

Expressions on a humanoid facial robot must balance looking natural to an observer against being executable on a mechanical face with limited actuation. This paper learns a mapping from target facial signals to the robot's low-level actuator commands that reproduces not only static key expressions but also human-like micro-dynamics, while remaining stable within the hardware's mechanical limits. The work covers data collection, retargeting, and a learned controller that trades off visual realism against physical feasibility.

Motivation

Humanoid face platforms tend to fail in one of two ways: either the motion looks mechanical (physically valid but visually dead), or the learned controller drives the hardware outside its safe operating envelope in pursuit of visual realism. We want expressions that both look human and run on real motors.

Approach

  • Retargeting from human reference. Target facial signals are mapped into the robot’s actuator space with explicit constraints for torque, range, and coupling between adjacent degrees of freedom.
  • Learned controller. A controller predicts actuator trajectories that reproduce both the key pose and the micro-dynamics of a target expression, trained with objectives that jointly score perceptual fidelity and physical feasibility.
  • Closed-loop evaluation. The model is deployed on the physical face hardware and evaluated on both objective motion metrics and subjective human judgment.
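The retargeting and objective structure above can be sketched in a few lines. This is a minimal illustration, not the paper's actual method: the linear map `W`, the normalized actuator range, the rate limit, and the weight `lam` are all hypothetical placeholders standing in for the real constraints (torque, range, coupling) and the learned perceptual objective.

```python
import numpy as np

# Illustrative limits; real values come from the hardware spec.
ACT_MIN, ACT_MAX = 0.0, 1.0  # normalized per-actuator position range
RATE_LIMIT = 0.1             # max allowed change per control step

def retarget(target, W):
    """Map a target facial signal into actuator space (here a simple
    linear map), clamped to the hardware's admissible range."""
    cmd = W @ target
    return np.clip(cmd, ACT_MIN, ACT_MAX)

def feasibility_penalty(traj):
    """Penalize per-step actuator velocities beyond the rate limit,
    a stand-in for the paper's torque/range/coupling constraints."""
    vel = np.abs(np.diff(traj, axis=0))
    return float(np.sum(np.maximum(vel - RATE_LIMIT, 0.0) ** 2))

def objective(traj, ref_traj, lam=0.5):
    """Joint score: perceptual fidelity (tracking error against a
    retargeted reference) traded off against physical feasibility."""
    fidelity = float(np.mean((traj - ref_traj) ** 2))
    return fidelity + lam * feasibility_penalty(traj)
```

A controller trained against such an objective is pushed toward trajectories that track the reference micro-dynamics while staying inside the feasible envelope, which is the trade-off the bullets above describe.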

Video