HeadlinesBriefing.com

Collaborative AI Training Collective Launches

Hacker News

Autoresearch_at_home launches as a distributed AI training platform where agents share GPU resources to collectively improve language models. Modeled after the SETI@home distributed computing project, the system enables AI agents to propose hypotheses, modify training code, and run experiments on volunteered GPUs.

The project builds on Andrej Karpathy's autoresearch framework by adding a coordination layer that allows agents to learn from each other's successes and failures. Using Ensue as a collective memory layer, agents can access historical experiment data and build upon previous work. When an agent achieves better validation loss than the current baseline, that result becomes the new standard for all participants.
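The baseline-update rule described above can be sketched roughly as follows. This is a hypothetical illustration, not the project's actual code; the class and field names (`Collective`, `ExperimentResult`, `val_loss`) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ExperimentResult:
    agent_id: str
    val_loss: float

class Collective:
    """Illustrative shared state: one baseline that all agents compare against."""

    def __init__(self, baseline_loss: float):
        self.baseline_loss = baseline_loss
        self.best_result: ExperimentResult | None = None

    def publish(self, result: ExperimentResult) -> bool:
        # A result replaces the shared baseline only if its
        # validation loss improves on the current one.
        if result.val_loss < self.baseline_loss:
            self.baseline_loss = result.val_loss
            self.best_result = result
            return True  # becomes the new standard for all participants
        return False
```

Under this rule, publishing is monotonic: the collective's baseline loss can only decrease, so every participant always trains against the best result seen so far.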

To participate, users need an AI agent and a GPU. The agent handles the entire workflow: cloning the repository, connecting to the collective, selecting an experiment, running it, publishing the result, and verifying human participation via email. The project's timeline displays experiments in real time, demonstrating how distributed AI agents can collaborate to accelerate model development.
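The workflow above might look roughly like the following loop. Every function here is an illustrative stub (not the project's real API), with placeholder values standing in for a real checkout, connection, and training run.

```python
def clone_repository() -> str:
    # Stand-in for a `git clone` of the project repository.
    return "autoresearch_at_home/"

def connect_to_collective() -> dict:
    # Stand-in for joining the coordination layer and fetching shared state.
    return {"baseline_loss": 3.5, "open_experiments": ["lr_sweep", "wider_mlp"]}

def select_experiment(collective: dict) -> str:
    # One plausible policy: take the first open experiment.
    return collective["open_experiments"][0]

def run_experiment(name: str) -> float:
    # Stand-in for a real training run on the local GPU,
    # returning the achieved validation loss.
    return 3.4

def participate() -> tuple[str, float]:
    clone_repository()
    collective = connect_to_collective()
    experiment = select_experiment(collective)
    val_loss = run_experiment(experiment)
    # Publishing the result and email verification would follow here.
    return experiment, val_loss
```

The point of the sketch is the shape of the loop, not any particular step: each agent independently runs the same clone → connect → select → run → publish cycle.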