Cloudless: Computing at the Edge
New use cases will naturally drive more computing away from centralized cloud platforms to the edge. The future is cloudless.
Doc Searls sent me a link to this piece from Chris Anderson on cloudless computing. Like the term zero data that I wrote about a few weeks ago, cloudless computing is a great name that captures a profound idea.
Cloudless computing uses cryptographic identifiers, verifiable data, and location-independent compute¹ to move apps to the data wherever it lives, performing whatever computation needs to be done at the edge. The genius of the name cloudless computing is that it gets us out of the trenches of dapps, web3, blockchain, and other specific implementations and speaks to an idea or concept. The abstraction can make it difficult to get a firm hold on the ideas, but it's important for getting past the how so we can speak to the what and why.
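To make the "move apps to the data" idea concrete, here is a minimal sketch of the pattern in Python. Everything here is an illustrative assumption, not an API from any cloudless project: an in-process `EdgeNode` stands in for a device that owns some data, and instead of pulling the raw data across the network, a small computation is shipped to the node and only the result comes back.

```python
# A minimal sketch of "move the compute to the data": instead of pulling a
# dataset across the network, ship a small computation to the node that
# holds the data and return only the (much smaller) result.
# EdgeNode and its data are illustrative assumptions, not a real API.

from typing import Any, Callable

class EdgeNode:
    """Stands in for a device that owns some local data."""

    def __init__(self, name: str, data: list[float]):
        self.name = name
        self._data = data  # the raw data never leaves the node

    def run(self, computation: Callable[[list[float]], Any]) -> Any:
        # The computation travels to the data; only the result travels back.
        return computation(self._data)

node = EdgeNode("sensor-7", [21.0, 21.4, 22.1, 21.8])

# Ship a tiny aggregate instead of fetching thousands of raw readings.
avg = node.run(lambda readings: sum(readings) / len(readings))
print(f"{node.name}: average = {avg:.2f}")
```

In a real system the computation would cross a network boundary and the node would verify who sent it, which is where the cryptographic identifiers and verifiable data come in; the sketch only shows the direction of movement.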
You might rightly be skeptical that any of this can happen. Why would companies move from the proven cloud model to something else? In this talk, Peter Levine speaks directly to that question.
One of the core arguments for why more and more computing will move to the edge is the sheer size of modern computing problems. Consider one example: Tesla Full Self Driving (FSD). I happen to be a Tesla owner and I bought FSD. At first it was just because I was very curious about it and couldn't stand not having first-hand experience with it. But now I like it so much that I use it all the time and can't imagine driving without an AI assist. That's beside the point, though. To understand why FSD drives computing to the edge, consider that the round-trip time to get an answer from the cloud is just too great. The car needs to make decisions onboard for this to work. To put this in the cloudless perspective, the computation has to move to where the data from the sensors is. You move the compute to the data, not the other way around.²
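A back-of-the-envelope calculation makes the latency argument vivid: how far does a car travel while waiting for a decision? The speeds and latencies below are illustrative assumptions (not Tesla specifications), but the gap between a cloud round trip and onboard inference is the point.

```python
# Back-of-the-envelope: distance a car travels while waiting for a decision.
# All numbers below are illustrative assumptions, not Tesla specifications.

def distance_traveled_m(speed_kmh: float, latency_ms: float) -> float:
    """Meters traveled during one decision-latency window."""
    speed_m_per_s = speed_kmh * 1000 / 3600  # km/h -> m/s
    return speed_m_per_s * (latency_ms / 1000)

highway_speed = 105      # km/h (~65 mph), assumed
cloud_round_trip = 100   # ms: assumed network RTT plus server inference
onboard_inference = 10   # ms: assumed local model latency

print(f"Cloud:   {distance_traveled_m(highway_speed, cloud_round_trip):.2f} m")
print(f"Onboard: {distance_traveled_m(highway_speed, onboard_inference):.2f} m")
```

Under these assumptions the car covers roughly ten times the distance waiting on the cloud as it does waiting on an onboard model, and that is before accounting for dead zones where the network round trip never completes at all.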
And that's just one example. Levine makes the point, as I and others have done, that the Internet of Things leads to trillions of nodes on the Internet. This is a difference in scale that has real impact on how we architect computer systems. While today's CompuServe of Things still relies largely on the cloud and centralized servers, that model can't last in a true Internet of Things.
The future world will be more decentralized than the current one. Not because of some grand ideal (although those certainly exist) but simply because the problems will force it to happen. We're using computers in more dynamic environments than the more static ones (like web applications) of the past. The data is too large to move and the required latency too low. Cloudless computing is the future.
Notes
1. Anderson calls this a deterministic computer. He uses that name to describe computation that is consistent and predictable regardless of how the application gets to the data, but I'm not sure that's the core idea. Location independence feels better to me.
2. An interesting point is that training the AI that drives the car is still done in the cloud somewhere. But once the model is built, it operates close to the data. I think this will be true for a lot of AI models.
Photo Credit: Cloudless Sunset from Dorothy Finley (CC BY 2.0 DEED - cropped)