Hone connects independent compute across the globe into a single training run. Contributors earn rewards proportional to their work—honest participation is enforced by design, not trust. The result: large language models trained by the many, owned by no one.
No central coordinator. Nodes discover each other, exchange gradients, and converge on a shared model through direct peer-to-peer communication.
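The text doesn't specify Hone's convergence protocol, so the following is a minimal sketch of one common decentralized approach, pairwise gossip averaging: each round, two random peers average their gradient vectors, and every peer's copy converges to the global mean with no coordinator. All names here (`gossip_round`, `gossip_average`, the toy gradients) are illustrative, not Hone's API.

```python
import random

random.seed(0)  # deterministic demo

def gossip_round(grads):
    # Two random peers average their gradients; the global sum is preserved,
    # so repeated rounds drive every peer toward the global mean.
    i, j = random.sample(range(len(grads)), 2)
    avg = [(a + b) / 2 for a, b in zip(grads[i], grads[j])]
    grads[i] = avg
    grads[j] = list(avg)

def gossip_average(grads, rounds=500):
    for _ in range(rounds):
        gossip_round(grads)
    return grads

# Four peers, each holding a local gradient for a 2-parameter model.
peers = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0], [1.0, 3.0]]
gossip_average(peers)
# every peer's gradient approaches the global mean [1.0, 1.5]
```

Because each exchange touches only two peers, no node ever needs a global view of the network, which is what makes the coordinator unnecessary.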
Quality contributions are rewarded; freeloaders are penalized. The scoring mechanism ensures every participant has skin in the game.
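The source doesn't describe Hone's actual scoring rule, so here is one hypothetical way such a mechanism can work: score each submitted gradient by its cosine similarity to the coordinate-wise median of all submissions, so a junk gradient from a freeloader disagrees with the robust consensus and earns a negative adjustment. The `score` function and threshold below are assumptions for illustration.

```python
import math

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def cosine(u, v):
    # Cosine similarity; 0.0 for a zero vector.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def score(submissions, threshold=0.5):
    # Robust consensus: coordinate-wise median across all submissions.
    ref = [median(col) for col in zip(*submissions)]
    # Reward agreement with the consensus, penalize outliers.
    return [1.0 if cosine(g, ref) >= threshold else -1.0 for g in submissions]

honest = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]
lazy = [0.0, -5.0]             # freeloader submitting junk
print(score(honest + [lazy]))  # → [1.0, 1.0, 1.0, -1.0]
```

The median reference is the key design choice: unlike a mean, it cannot be dragged far by a minority of adversarial submissions, so honest peers keep positive scores even when freeloaders are present.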
Every gradient, every score, every weight update is visible on-chain. Anyone can audit the training process in real time.
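As a sketch of what that audit looks like, one can model each on-chain record (gradient, score, or weight update) as an entry in a hash chain: every entry commits to its predecessor's hash, so replaying the log and recomputing hashes detects any tampering with history. This is a generic hash-chain illustration under stated assumptions, not Hone's on-chain format.

```python
import hashlib
import json

def entry_hash(prev_hash, payload):
    # Canonical JSON so the hash is stable across key orderings.
    blob = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def append(chain, payload):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload,
                  "hash": entry_hash(prev, payload)})

def audit(chain):
    # Anyone can replay the log: each entry must link to its predecessor
    # and its stored hash must match a fresh recomputation.
    prev = "0" * 64
    for entry in chain:
        if entry["prev"] != prev or entry["hash"] != entry_hash(prev, entry["payload"]):
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"node": "a", "type": "gradient", "step": 1})
append(log, {"node": "b", "type": "score", "step": 1})
print(audit(log))               # → True
log[0]["payload"]["step"] = 99  # tamper with history
print(audit(log))               # → False
```

Because each hash commits to everything before it, an auditor only needs the log itself, not trust in whoever produced it.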