Continually Learning Interactive Robot
October 1, 2025
Motivation
Most deployed robots freeze at the end of pretraining — their skills, object vocabulary, and language grounding stop evolving once the data mix is locked. But a genuinely helpful household or lab robot needs to keep picking up new tasks, new objects, and new ways of being instructed, over weeks and months of use. This project treats continual learning as a first-class property of the system, not a training trick.
What the system does
- Online skill acquisition. New manipulation and navigation skills are learned from a small number of demonstrations or language-only corrections, and integrated without catastrophic forgetting of earlier skills.
- Grounded language updates. When a user refers to a new object or property (“the wrinkled one”, “the side that feels rough”), the agent updates its language–perception alignment on the fly, rather than waiting for a retraining cycle.
- Self-iteration loop. The agent replays recent successes and failures, distills them into compact update targets, and uses them to refine its own policy and world model between sessions.
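To make the anti-forgetting and replay ideas above concrete, here is a minimal sketch of one common recipe for this kind of continual update: rehearsal, where new demonstrations are interleaved with examples replayed from a reservoir-sampled memory of earlier skills. Everything here is illustrative, not the project's actual implementation: the names `ReservoirBuffer` and `rehearsal_update` are hypothetical, and a scalar linear model stands in for the real policy.

```python
import random


class ReservoirBuffer:
    """Fixed-size memory of past experience. Reservoir sampling keeps
    every example ever seen with (approximately) equal probability,
    so the buffer stays representative of all earlier skills."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        return self.rng.sample(self.items, min(k, len(self.items)))


def rehearsal_update(weights, new_demos, buffer, lr=0.05,
                     replay_ratio=1.0, steps=200):
    """One learning session: fit the new demonstrations while mixing in
    replayed examples from earlier skills, then bank the new demos.
    The 'policy' here is just y = w*x + b trained by per-sample SGD."""
    w, b = weights
    for _ in range(steps):
        # Interleave new data with a replayed slice of old experience.
        batch = list(new_demos)
        batch += buffer.sample(int(replay_ratio * len(new_demos)))
        for x, y in batch:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    # New demonstrations become replayable memory for future sessions.
    for demo in new_demos:
        buffer.add(demo)
    return (w, b)
```

A session-by-session loop would then call `rehearsal_update` once per batch of new demonstrations, passing the returned weights forward; the replay ratio trades plasticity on the new skill against stability on the old ones.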
Where it fits
This project is the practical anchor of my broader research goal: robot and agent systems that continuously learn and iteratively self-improve through interaction with the physical world. It feeds directly into the visuo-tactile world-model and tactile–language work on the publications page.