Earn on every token your GPU serves. A globally distributed AI compute network.
Connect a model-serving endpoint and the network routes paying inference traffic to your hardware. Or call the network from any OpenAI SDK and pay a fraction of flagship-API prices. Same account does both.
How it works
Two sides, one network.
Run a model. Use a model. Or both — same account, same credits.
Run a model. Earn credits.
1. Register your model-serving endpoint from the dashboard.
2. The network routes inference requests to your instance.
3. Credits accrue per token served — spend them, or save them.
Use the network. Spend per token.
1. Sign up and grab an API key from your dashboard.
2. Point any OpenAI SDK at our endpoint — change the base URL, that's it.
3. Start with $1 in free credits. Need more? Connect a model and earn them.
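The steps above amount to a one-line configuration change in any OpenAI client. Here is a minimal sketch of what the request looks like on the wire, using only the Python standard library; the base URL, API key, and model id are placeholders — your dashboard provides the real values:

```python
import json
import urllib.request

# Placeholder values -- your dashboard provides the real ones.
BASE_URL = "https://api.example-network.ai/v1"  # the only thing you change
API_KEY = "sk-your-key-from-the-dashboard"

# The wire format is the standard OpenAI chat-completions request, which is
# why any OpenAI SDK works once its base URL points here, e.g.:
#   client = openai.OpenAI(base_url=BASE_URL, api_key=API_KEY)
#   client.chat.completions.create(model="...", messages=[...])
payload = {
    "model": "example/model-name",  # placeholder model id
    "messages": [{"role": "user", "content": "Hello"}],
}

request = urllib.request.Request(
    url=f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; omitted here so the
# sketch runs without network access or a real key.
print(request.full_url)
```

Everything except `BASE_URL` is exactly what you would write against the OpenAI API itself — that is what "drop-in compatible" means in practice.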
For providers
Turn idle GPU cycles into income.
If you already run a model, the network can route paying traffic to it. You earn credits per token served — spend them on your own inference, or save them.
Earn on every token
Every successful request pays the operator who served it — revenue flows to the GPU that did the work, not a flat seat fee or a capped credit you'll never spend.
Priced to win on demand
Token prices sit below the open-network market today, and the gap widens as the network grows: lower prices for consumers, higher take-home for providers.
A fraction of flagship-API prices
The open models we route cost a small fraction of what closed flagships like GPT-4 or Claude charge for comparable capability — that's why demand exists, and why your GPU has something to serve.
Your hardware. Their tokens. Your income.
Register an instance
Features
Built like the internet, not like a vendor.
OpenAI-compatible
Drop-in compatible. Point any OpenAI SDK at our endpoint, swap the base URL — everything else stays the same.
Compute is the currency
Contribute GPU cycles, earn credits per token served, then spend those same credits on your own inference. One account, both sides.
Built-in reliability
Multiple providers serve each model. The network routes around failures and retries automatically.
No single vendor
A marketplace of providers, not a closed silo. Models live across independent operators.
Beta access
Start with $1. Earn the rest.
Sign up for the beta and get $1 in credits to try the network — no card required. Want more? Connect a model and earn credits on every token your GPU serves, then spend them on your own inference.
Free to sign up. Bring a model to earn more credits when you’re ready.