I just read the most comprehensive piece I’ve ever seen on the state of decentralized AI. 🤯

👍 It’s a blog post by Vinny at EV3 Research, which you can read here.

👍 Another post by my friend Sal from Commune AI covers more technical aspects of compute verification here.

💡At Koii, we’ve been working to make decentralized computing not only possible but profitable, and today I want to give you the secret sauce we’ve learned so far.

🧵To Catch You Up…
Here’s the gist, in a few words:

  1. We all know that AI takes a lot of resources (compute, energy, and data)

  2. In the future, we might be able to crowdsource those resources from the global pool of existing consumer devices, which could cut costs significantly

  3. When we use large pools of devices, we can also sometimes run things in parallel, which makes them faster

  4. There is also the potential to use existing consumer devices, thereby including billions of people in the AI supply chain, which could help offset the jobs that robots are already displacing (more on this, and statistics)

  5. The problem? If we don’t own the computers, how can we know that the code is being run on them as intended?

Currently, the vast majority of compute in the world goes unnoticed, running in the background behind products you use every day, such as Google, Instagram, TikTok, or X. These companies all rent massive buildings full of computers, and they control them very closely for security reasons.

🙏 Vinny and Sal’s posts both cover a lot of the deep technical tradeoffs for distributed AI systems, but what I’d like to focus on here is the economics that underpin these systems.

🥧 SPECIFICALLY, I’d like to discuss how token holders and DePIN node operators can get a piece of the action while the computer scientists work out a GTO (game theory optimal) solution.

⚔️ The dAI Arms Race
It’s this last point (verification) that is really important. Right now, there’s an arms race underway, with well-funded companies competing to be the first to unlock this consumer capacity. Unlike other types of compute capacity, dAI (decentralized artificial intelligence) is not a deterministic operation, meaning it is not always repeatable. In a deterministic system, 1+1=2, and you can check the math quickly and easily by repeating the process.

The big value-add of AI is generating new concepts and ideas (think ChatGPT), which is hard to verify directly and can lead to some extreme edge cases in computer science. IN FACT, it’s not unreasonable to say that decentralizing AI inference is one of the biggest unsolved problems in computer science.

In the blog, Vinny goes on to list several prominent projects and details how each is approaching this complex problem: Bittensor, Sector 9, and Gensyn, to name a few.

Before we get into that, let’s step back for a moment and understand why this problem is so complicated.

🧑‍🏫What is Decentralized AI?
AI is complicated, so it might help to imagine something that’s not an AI. As an example, consider racing on three small bicycles instead of just one large one. Three sets of handlebars: tricky, right?

When it’s one vs. three, it might be manageable, and we might actually win the race, especially if we can race the track in three smaller parts in parallel instead of in series. We can also probably carry more stuff along the way. If we can run our compute workloads in parallel, we can cut the total time by roughly the number of parallel processes (for certain types of jobs, this is equal to the number of devices).
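To put rough numbers on that, here’s a minimal sketch, assuming a job that splits perfectly into independent chunks (real workloads rarely split this cleanly, and the figures below are made up):

```python
# Hedged sketch: an "embarrassingly parallel" job that divides evenly across devices.
# Real jobs have coordination overhead, so treat this as an upper bound on speedup.

def wall_clock_time(total_work_hours: float, num_devices: int) -> float:
    """Time to finish if the work splits evenly across devices."""
    return total_work_hours / num_devices

total_work = 100.0  # hypothetical: 100 hours of compute on one big machine

for devices in (1, 3, 100):
    print(f"{devices:>3} device(s): {wall_clock_time(total_work, devices):.1f} hours")

#   1 device(s): 100.0 hours
#   3 device(s): 33.3 hours
# 100 device(s): 1.0 hours
```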

Okay so here’s where it gets a bit crazy…

WHAT IF we want to use ONE HUNDRED BICYCLES instead of just one? (yes, I’m serious)


As you can imagine, it’s harder to ride 100 small bicycles than one big one. That’s the rub. But if they’re faster and cheaper, and we can race the track in 100 small parts at the same time, we will absolutely win compared to one big machine.


Don’t get caught cheating.

Fighting the Bad Guys
When you have lots of riders, each one needs to be monitored, verified, and rewarded separately. That last part is really important: if we have prize money for our bicycle race, we need to distribute it among our 100 riders, and if some of them cheat, the honest ones get smaller rewards (and we might not finish the race on time!).

Typically in distributed computing, we call the monitoring an ‘Audit’ and the reward a ‘Bounty’. Properly configured, Audits and Bounties provide a fairly good proxy for directly overseeing the work, so if it’s cheaper to get 100 small machines than one large one, we can save a lot of money.

🥷 That’s assuming none of the riders are cheating; otherwise, all 100 of us will lose the race to the competitors who just have one bicycle.
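To make the Audit/Bounty idea concrete, here’s a toy sketch (not Koii’s actual protocol; all names and numbers are invented) of how undetected cheaters dilute everyone’s payout, and how audits claw that back:

```python
# Toy sketch: with no audits, cheaters claim a share of the bounty without doing
# the work, so honest riders earn less; with audits, flagged cheaters are slashed
# and honest riders split the whole pot. Parameters are purely illustrative.

def payout_per_honest_rider(bounty: float, num_riders: int,
                            num_cheaters: int, audits_enabled: bool) -> float:
    honest = num_riders - num_cheaters
    if audits_enabled:
        # cheaters are caught and forfeit their share
        return bounty / honest
    # cheaters slip through and everyone splits the pot equally
    return bounty / num_riders

print(payout_per_honest_rider(1000.0, 100, 10, audits_enabled=False))  # 10.0
print(payout_per_honest_rider(1000.0, 100, 10, audits_enabled=True))   # ~11.1
```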

Audit Replication
One thing that might not be apparent from the bicycle example is the effective replication of work required to audit a compute job. Put simply, the easiest way to check whether something was done properly is to do it again and confirm the result. Obviously we don’t want to do everything twice, so we prefer to randomly check some subset of the work and then check more if we detect issues. In most current systems, each device (node) spends about ⅓ of its cycles verifying others, which means around 66% efficiency for the system as a whole.

The ratio of cycles spent auditing to cycles spent on useful work can be referred to as the audit replication factor, which represents the overhead we take on to distribute a job across many devices. For contrast, a centralized option (one large machine) has an efficiency of 100%, while splitting the job across two or three machines might have an efficiency of 91%, with each machine randomly auditing 3% of the work done by the other machines.

What we’ve been working on at Koii is reducing the overall replication factor to around 9%, which is about 1 in 10 compute cycles spent on audits on each device.

R_audit = Audit Cycles / Work Cycles

It’s not a complex concept, but it can heavily influence the overall value of decentralization: by default, the cost savings from decentralizing must be large enough to cover the efficiency lost to audits and verification.

Note: Check out the Koii Whitepaper for more thoughts on this, as well as some relevant math papers.

For those less mathematically inclined, efficiency is simply whatever is left over after the audit overhead:

e_audit = 1 − R_audit = 1 − (Audit Cycles / Work Cycles)
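Here’s a quick sketch of that formula in code, using the rough figures from above (the exact audit schedules vary by network, so treat these as ballpark numbers):

```python
# Minimal sketch of the efficiency formula: e_audit = 1 - R_audit.
# The replication factors below are the approximate figures discussed in this post.

def audit_efficiency(r_audit: float) -> float:
    """Fraction of total cycles left over for useful work."""
    return 1.0 - r_audit

print(audit_efficiency(1 / 3))   # ~0.67 (roughly 1 audit cycle per 3 work cycles)
print(audit_efficiency(0.09))    # 0.91  (Koii's target: ~1 audit cycle per 10 work cycles)
```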

Broadly speaking, there are a few ways to reduce audit cycles and improve efficiency.

  1. Reputation
    Reduce the rate of audits for machines with a proven track record, while maintaining a consistent audit rate for new machines that have yet to ‘vest in’ (a rough sketch of this idea follows the list; see also Eric Friedman, Paul Resnick, Rahul Sami, “Chapter 27: Manipulation-Resistant Reputation Systems”, https://www.cs.cmu.edu/~sandholm/cs15-892F13/algorithmic-game-theory.pdf)

  2. Modular & Deterministic Operations
    Some specific things a computer does can be audited more efficiently than they can be replicated, such as transformations and operations for which a succinct proof can exist, though these are still mostly research topics and have not been formally shown to improve efficiency (see zk-SNARK and zkVM research).
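Here’s a minimal sketch of the reputation idea from point 1: new nodes are audited at a full baseline rate, and the rate decays as a node builds a clean track record. The rates, vesting threshold, and decay schedule below are invented for illustration and are not Koii’s actual parameters.

```python
import random

# Hypothetical reputation-weighted audit schedule (illustrative parameters only).
BASELINE_AUDIT_RATE = 0.33   # new nodes: audit roughly 1 in 3 submissions
MIN_AUDIT_RATE = 0.05        # never drop below 5%, even for trusted nodes
VESTING_ROUNDS = 50          # clean rounds required before the rate starts to decay

def audit_rate(clean_rounds: int) -> float:
    """Audit probability for a node with `clean_rounds` consecutive clean audits."""
    if clean_rounds < VESTING_ROUNDS:
        return BASELINE_AUDIT_RATE
    # linear decay after vesting, clamped at the floor
    decay = (clean_rounds - VESTING_ROUNDS) * 0.005
    return max(MIN_AUDIT_RATE, BASELINE_AUDIT_RATE - decay)

def should_audit(clean_rounds: int) -> bool:
    """Randomly decide whether to audit this submission."""
    return random.random() < audit_rate(clean_rounds)

print(audit_rate(0))     # 0.33  (brand-new node, full audit rate)
print(audit_rate(100))   # ~0.08 (well-vested node, reduced audit rate)
```

The net effect is that trusted nodes spend fewer cycles being re-checked, which lowers the system-wide replication factor without removing oversight of newcomers.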

A Note on Zero-Knowledge Proofs
While a lot of people are currently proposing ZK verification for distributed inference (read: zero-knowledge artificial intelligence), most of these ‘succinct’ proofs save verification time at the cost of increasing compute time. They have strong use cases in high-security applications such as healthcare, where we want to verify the compute before we trust the result, but they are not useful for reducing the overall compute cost. That means they won’t help with common products like chatbots, search, or social newsfeed algorithms, where we need higher efficiency on the compute side to justify decentralization. Since it’s unlikely that these high-security applications will decentralize their hosting any time soon, these ZK primitives are a long way from market utility for most applications.

A Path Forward for DePIN Profitability
To reiterate, the question for the DePIN and Web3 industry is how to make decentralized computing economically viable. This is our 🐺 werewolf, and a lot of people have been selling silver bullets since 2015.

At Koii, our approach is not to try to find a silver bullet, but instead to identify many lead bullets that, in concert, can kill the werewolf. As an example, consider the process of training an AI, which might include several components: ingesting web data, tagging and transforming it, storing it at the edge, synthesizing models, and running inference.

In this model, we may not be able to decentralize all of these components economically right now. Doing so could require new technology primitives which might take years to develop. Incrementally building out decentralized capacity is an economic problem, not a computer science problem. Right now, ingesting web data, tagging and transforming it, and storing it on the edge are all easy to verify and can harmlessly be deduplicated or repeated as necessary.

🤘This is economically viable RIGHT NOW. 🤘

What this means is that if you decentralize JUST these components, you can cut a massive share of the cost of the overall AI development pipeline, while leaving synthesis and inference centralized for now.

This is where we think things will happen, and at Koii we are focused on rewarding as many people as possible by helping them run sybil-resistant nodes for these activities.

💸 One More Thought on Economics 💸
If you have a sybil-resistant system, everyone involved is rewarded more. If your system can be botted, everyone loses. This means it is critical to separate components into individually sybil-resistant modules instead of trying to do it all in one system.

In the best case, tokens should exist for each layer of the stack, with automated market makers to exchange between them. Anything less creates an overly centralized point of failure and could jeopardize user experience and enterprise SLAs.

This industry has been waiting on computer scientists for too long. Let’s get on with building things and decentralizing what we can. There’s no reason to wait; we’re just leaving money on the table while we compete in a big-brain measuring contest.