Overview of EfficientZero-Multitask (EZ-M).
EZ-M on HumanoidBench-Hard: stand, walk, run, pole, slide, hurdle, sit, reach
Developing generalist robots capable of mastering diverse skills remains a central challenge in embodied AI. While recent progress emphasizes scaling model parameters and offline datasets, such approaches are limited in robotics, where learning requires active interaction. We argue that effective online learning should scale the number of tasks, rather than the number of samples per task.
This regime reveals a structural advantage of model-based reinforcement learning (MBRL). Because physical dynamics are invariant across tasks, a shared world model can aggregate multi-task experience to learn robust, task-agnostic representations. In contrast, model-free methods suffer from gradient interference when tasks demand conflicting actions in similar states. Task diversity therefore acts as a regularizer for MBRL, improving dynamics learning and sample efficiency.
We instantiate this idea with EfficientZero-Multitask (EZ-M), a sample-efficient multi-task MBRL algorithm for online learning. Evaluated on HumanoidBench, a challenging whole-body control benchmark, EZ-M achieves state-of-the-art performance with significantly higher sample efficiency than strong baselines, without extreme parameter scaling. These results establish task scaling as a critical axis for scalable robotic learning.
EZ-M extends EfficientZero-v2 to online multi-task RL with: (1) Task-sharing model architecture — shared representation, dynamics, reward, and value heads conditioned on learnable task embeddings; (2) Path consistency — regularizer aligning reward and value predictions along imagined rollouts; (3) Temporal consistency — latent state alignment between predicted and encoded transitions; (4) Independent experience replay — per-task buffers to mitigate data imbalance.
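The four components above can be illustrated with a minimal NumPy sketch. All dimensions, weight matrices, function names, and the tanh networks here are illustrative placeholders, not the paper's implementation; in practice these would be trained neural networks with stop-gradients on the consistency targets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper).
NUM_TASKS, EMB_DIM, OBS_DIM, LATENT_DIM, ACT_DIM = 4, 8, 16, 32, 6

# (1) Task-sharing architecture: one encoder/dynamics pair shared across
# all tasks, conditioned on a learnable per-task embedding.
task_embeddings = rng.normal(size=(NUM_TASKS, EMB_DIM))
W_enc = rng.normal(size=(OBS_DIM + EMB_DIM, LATENT_DIM)) * 0.1
W_dyn = rng.normal(size=(LATENT_DIM + ACT_DIM, LATENT_DIM)) * 0.1

def encode(obs, task_id):
    """Shared encoder conditioned on the task embedding."""
    x = np.concatenate([obs, task_embeddings[task_id]])
    return np.tanh(x @ W_enc)

def dynamics(z, action):
    """Shared latent dynamics model (task info is carried by z)."""
    return np.tanh(np.concatenate([z, action]) @ W_dyn)

# (3) Temporal consistency: align the predicted next latent with the
# encoding of the actually observed next state.
def temporal_consistency_loss(obs, action, next_obs, task_id):
    z_pred = dynamics(encode(obs, task_id), action)
    z_target = encode(next_obs, task_id)  # stop-gradient in practice
    cos = z_pred @ z_target / (np.linalg.norm(z_pred) * np.linalg.norm(z_target))
    return 1.0 - cos  # 0 when perfectly aligned, 2 when opposed

# (4) Independent experience replay: one buffer per task, sampled evenly,
# so data-poor tasks are not crowded out by data-rich ones.
buffers = {t: [] for t in range(NUM_TASKS)}

def store(task_id, transition):
    buffers[task_id].append(transition)

def sample_batch(per_task=2):
    """Draw the same number of transitions from every non-empty buffer."""
    batch = []
    for t, buf in buffers.items():
        if buf:
            idx = rng.integers(len(buf), size=per_task)
            batch.extend(buf[i] for i in idx)
    return batch
```

Path consistency (2) follows the same pattern as the temporal loss, but compares reward and value predictions along an imagined rollout against their targets rather than latent states.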
EZ-M achieves SOTA on HumanoidBench-Medium (9 tasks) and HumanoidBench-Hard (14 tasks) within 1M environment steps. With only 16M parameters, EZ-M outperforms BRC (1B parameters) while requiring ~4× less training time.
HumanoidBench-Hard. Normalized task-average scores on 14 contact-rich tasks (with hand control). EZ-M achieves SOTA on 10/14 tasks within 1M environment steps, outperforming BRC (1B parameters) and TD-MPC2.
Task Scaling. Performance improves as more tasks (N=1,4,9) are trained simultaneously under the same step budget. Validates that task diversity acts as a dynamics regularizer.
Ablation. Independent experience replay (IER) is the most critical component; path consistency (PathCons) provides substantial gains; the task embedding (TE) is indispensable for the dynamics and reward heads.
Gradient Similarity. Left: EZ-M maintains high similarity for related tasks (walk-run) vs. low for distant tasks (walk-crawl). Right: EZ-M has higher similarity than BRC, indicating better knowledge transfer via shared dynamics.
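The gradient-similarity diagnostic in this figure reduces to a cosine measure over flattened per-task gradient vectors. A minimal sketch (the function name is ours, not the paper's):

```python
import numpy as np

def grad_cosine_similarity(grads_a, grads_b):
    """Cosine similarity between two tasks' gradients w.r.t. shared
    parameters. Values near 1 indicate aligned updates (knowledge
    transfer); values near 0 or negative indicate interference."""
    g_a = np.concatenate([np.ravel(g) for g in grads_a])
    g_b = np.concatenate([np.ravel(g) for g in grads_b])
    return float(g_a @ g_b / (np.linalg.norm(g_a) * np.linalg.norm(g_b)))
```

Applied to the shared world-model parameters, high similarity between related tasks (e.g. walk and run) and low similarity between distant ones (e.g. walk and crawl) is what the left panel reports.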
@article{liu2026scaling,
title={Scaling Tasks, Not Samples: Mastering Humanoid Control through Multi-Task Model-Based Reinforcement Learning},
author={Liu, Shaohuai and Ye, Weirui and Du, Yilun and Xie, Le},
journal={arXiv preprint arXiv:2603.01452},
year={2026}
}