HeadlinesBriefing.com

Robots learn from data, not rules: the $6.1 billion shift

MIT Technology Review AI
Robotics once chased science‑fiction fantasies but delivered mostly assembly‑line arms. That gap kept Silicon Valley wary until a surge of capital arrived: $6.1 billion poured into humanoid projects in 2025, quadruple the 2024 sum. The shift stems from a new learning paradigm: instead of hand‑crafting rules, engineers let machines discover behavior through massive trial‑and‑error simulations, training in virtual twins of kitchens, factories, and homes.
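The trial‑and‑error idea can be illustrated with a minimal sketch: a toy "simulator" scores a control parameter, and the learner keeps only the random perturbations that improve the score. The task, reward function, and parameter ranges here are all invented for illustration; real systems use full physics simulators and reinforcement learning, not this hill climb.

```python
import random

def simulated_reward(param: float) -> float:
    """Toy stand-in for a physics simulator: scores how close a
    control parameter is to an unknown optimum (hypothetical value)."""
    optimum = 0.73
    return -abs(param - optimum)

def trial_and_error(episodes: int = 500, seed: int = 0) -> float:
    """Hill-climb by random perturbation: keep a change only when the
    simulated reward improves. No hand-written rules describe the task;
    behavior emerges from repeated trials."""
    rng = random.Random(seed)
    best = rng.random()
    best_reward = simulated_reward(best)
    for _ in range(episodes):
        candidate = best + rng.gauss(0, 0.1)  # random tweak
        r = simulated_reward(candidate)
        if r > best_reward:                   # keep only improvements
            best, best_reward = candidate, r
    return best

learned = trial_and_error()
```

After a few hundred simulated episodes the learned parameter lands near the optimum, despite the learner never being told where it is.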

Early attempts like Cynthia Breazeal’s 2014 Jibo tried to embed personality with scripted dialogue, raising $3.7 million but faltering against Siri‑style assistants. By 2018, labs such as OpenAI had replaced scripts with simulated environments, training the Dactyl hand on millions of randomized worlds to bridge the reality gap. Domain randomization let the system handle variations in lighting, friction, and material stretch, and even aided cloth manipulation.
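Domain randomization itself is simple in outline: each training episode samples a fresh world with perturbed physics and rendering parameters, so the policy never overfits to one simulator configuration. The parameter names and ranges below are illustrative guesses, not OpenAI's actual settings.

```python
import random

def randomize_world(rng: random.Random) -> dict:
    """Sample one simulated training world with randomized physics and
    rendering parameters (hypothetical names and ranges)."""
    return {
        "light_intensity": rng.uniform(0.2, 1.5),  # lighting variation
        "friction": rng.uniform(0.3, 1.2),         # surface friction
        "object_mass_kg": rng.uniform(0.05, 0.5),  # object mass
        "texture_id": rng.randrange(1000),         # visual texture swap
    }

def make_training_worlds(n: int, seed: int = 0) -> list:
    """Generate n randomized worlds; a policy trained across all of them
    must cope with the full spread, which is what bridges the reality gap."""
    rng = random.Random(seed)
    return [randomize_world(rng) for _ in range(n)]

worlds = make_training_worlds(1_000_000 // 1000)  # small sample for demo
```

The real system scales this to millions of worlds; the physical world then looks like just one more sample from the training distribution.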

Google’s robotics team built on that foundation with RT‑1 and its successor RT‑2, feeding 700 recorded tasks and internet‑scale images into a transformer that maps language to motor commands. The models execute familiar actions with 97% success and generalize to unseen instructions at 76% accuracy. Industry now treats these models as templates: large‑scale data, not handcrafted code, drives practical robot learning.
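One concrete mechanism behind "language to motor commands" in RT‑style models is representing actions as tokens: each continuous action dimension is discretized into a fixed number of bins so the transformer can emit motor commands the same way it emits text. The sketch below shows that encode/decode step only; the action layout, ranges, and bin count are assumptions for illustration, not Google's published configuration.

```python
import numpy as np

N_BINS = 256  # assumed bin count per action dimension

def action_to_tokens(action, low, high):
    """Quantize a continuous action vector (e.g. end-effector deltas plus
    gripper) into integer tokens a transformer vocabulary can hold."""
    action = np.clip(action, low, high)
    scaled = (action - low) / (high - low)            # map to [0, 1]
    return np.minimum((scaled * N_BINS).astype(int), N_BINS - 1)

def tokens_to_action(tokens, low, high):
    """Decode predicted tokens back to motor commands (bin centers)."""
    return low + (tokens + 0.5) / N_BINS * (high - low)

# Hypothetical action space: xyz deltas in meters, gripper open fraction.
low = np.array([-0.1, -0.1, -0.1, 0.0])
high = np.array([0.1, 0.1, 0.1, 1.0])

action = np.array([0.05, -0.02, 0.0, 1.0])
tokens = action_to_tokens(action, low, high)
recovered = tokens_to_action(tokens, low, high)
```

The round trip loses at most half a bin of precision per dimension, which is why a language model can double as a controller: predicting the next token is predicting the next motor command.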