Picos at the Edge; Technometria - Issue #30
The future of computing is moving from the cloud to the edge. How can we create a decentralized, general-purpose computing mesh? Picos provide a useful model.
Rainbow’s End is one of my favorite books. A work of fiction, Rainbow’s End imagines life in a near future world where augmented reality and pervasive IoT technology are the fabric within which people live their lives. This book, from 2006, is perhaps where I first began to understand the nature and importance of computing at the edge. A world where computing is ambient and immersive can’t rely only on computers in the cloud.
We have significant edge computing now in the form of powerful mobile devices, but that computing is not shared without roundtrips to centralized cloud computing. One of the key components of 5G technology is compute and storage at the edge—on the cell towers themselves—which distributes computing and reduces latency. Akamai, CloudFront, and others have provided these services for years, but still in a data center somewhere. 5G moves it right to the pole in your backyard.
But the vision I’ve had since reading Rainbow’s End is not just distributed, but decentralized, edge computing. Imagine your persistent compute jobs in interoperable containers moving around a mesh of compute engines that live on phones, laptops, servers, or anywhere else where spare cycles exist.
IPFS does this for storage, decentralizing file storage by putting files in shared space at the edge. With IPFS, people act as user-operators to host and receive content in a peer-to-peer manner. As a file gets more popular, the nodes that fetch it cache copies, so it ends up stored in more places and closer to where it’s needed.
You can play with this firsthand at NoFilter.org, which brands itself as “the world’s first unstoppable, uncensorable, undeplatformable, decentralized freedom of speech app.” There’s no server storing files, just a set of Javascript files that run in your browser. Identity is provided via Metamask, which uses an Ethereum address as your identifier. I created some posts on NoFilter to explore how it works. If you look at the URL for that link, you’ll see this:
https://nofilter.org/#/0xdbca72ed00c24d50661641bf42ad4be003a30b84
The portion after the # is the Ethereum address I used at NoFilter. If we look at a single post, you’ll see a URL like this:
https://nofilter.org/#/0xdbca72ed00c24d50661641bf42ad4be003a30b84/QmTn2r2e4LQ5ffh86KDcexNrTBaByyTiNP3pQDbNWiNJyt
Note that there’s an additional identifier following the slash after my Ethereum address. This is the IPFS hash of the content of that post and is available on IPFS directly. What’s stored on IPFS is the JSON of the post that the Javascript renders in the browser.
{
  "author": "0xdbca72ed00c24d50661641bf42ad4be003a30b84",
  "title": "The IPFS Address",
  "timestamp": "2021-10-25T22:46:46-0-6:720",
  "body": "<p>If I go here:</p><p><a href=\"https://ipfs.io/ipfs/QmT57jkkR2sh2i4uLRAZuWu6TatEDQdKN8HnwaZGaXJTrr\"><span data-auto-link=\"true\" data-href=\"https://ipfs.io/ipfs/QmT57jkkR2sh2i4uLRAZuWu6TatEDQdKN8HnwaZGaXJTrr\">https://ipfs.io/ipfs/QmT57jkkR2sh2i4uLRAZuWu6TatEDQdKN8HnwaZGaXJTrr</span></a><br></p><p>I see this:…"
}
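To see how little NoFilter itself is involved, you can pull that post straight from IPFS. Here’s a minimal sketch in Javascript, assuming Node 18 or later for the built-in fetch and using the public ipfs.io gateway (any gateway works):

// Fetch the post shown above directly from IPFS, bypassing NoFilter entirely.
const postHash = "QmTn2r2e4LQ5ffh86KDcexNrTBaByyTiNP3pQDbNWiNJyt"; // from the URL above

async function fetchPost(hash) {
  const res = await fetch(`https://ipfs.io/ipfs/${hash}`);
  if (!res.ok) throw new Error(`gateway returned ${res.status}`);
  return res.json(); // the stored content is the post's JSON
}

fetchPost(postHash).then(post => console.log(post.title, post.author));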
As far as I can tell, this is completely decentralized. The identity is just an Ethereum address that anyone can create using Metamask, a Javascript application that runs in the browser. The files are stored on IPFS, decentralized across storage providers around the net. They are rendered using Javascript that runs in the browser. So long as you have access to the Javascript files from somewhere, you can write and read articles without reliance on any central server.
Decentralized Computing
My vision for picos is that they can operate on a decentralized mesh of pico engines in a similar decentralized fashion. Picos are already encapsulations of computation with isolated state and programs that control their operation. There are two primary problems with the current pico engine that have to be addressed to make picos independent of the underlying engine:
Picos are addressed by URL, so the pico engine’s host name or IP address becomes part of the pico’s address.
Picos have a persistence layer that is currently provided by the engine the pico is hosted on.
The first problem is solvable using DIDs and DIDComm, and we’ve made progress in this area: you can create and use DIDs in a pico. But they are not yet the primary means of addressing and communicating with the pico.
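To make the idea concrete, here’s a hypothetical sketch, in Javascript, of what addressing a pico by DID rather than by engine URL could look like. The DID, the resolver, and the endpoint are illustrative assumptions, not the current pico engine API:

// Stand-in resolver: a real one would look the DID up on a network or registry.
async function resolveDid(did) {
  return {
    id: did,
    service: [{ type: "DIDCommMessaging", serviceEndpoint: "https://engine.example.com/pico/inbox" }]
  };
}

// Send an event to the pico wherever it currently lives. The engine's host
// appears only in the resolved DID document, never in the pico's identifier,
// so the pico can change engines without changing its address.
async function sendToPico(picoDid, event) {
  const didDoc = await resolveDid(picoDid);
  const endpoint = didDoc.service[0].serviceEndpoint;
  return fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ to: picoDid, domain: event.domain, name: event.name, attrs: event.attrs })
  });
}

sendToPico("did:peer:2.Ez6LSexamplePico", { domain: "sensor", name: "reading", attrs: { temp: 21 } });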
The second problem could be addressed with IPFS. We’ve not done any work in this area yet, so I’m not aware of the pitfalls, but it looks doable.
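As a sketch of what an IPFS-backed persistence layer might look like, the code below assumes a local IPFS daemon and the ipfs-http-client package; this is not something the engine does today.

// Persist a pico's state to IPFS and get back a content address (CID).
// Any engine holding the CID can rehydrate the same state.
import { create } from "ipfs-http-client";

const ipfs = create(); // default local daemon API at http://127.0.0.1:5001

async function saveState(state) {
  const { cid } = await ipfs.add(JSON.stringify(state));
  return cid.toString();
}

async function loadState(cid) {
  const chunks = [];
  for await (const chunk of ipfs.cat(cid)) chunks.push(chunk);
  return JSON.parse(Buffer.concat(chunks).toString("utf8"));
}

One open question with this approach is mutability: every state change yields a new CID, so something like IPNS or the pico’s DID document would have to track the latest one.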
With these two architectural issues out of the way, implementing a way for picos to move easily between engines would be straightforward. We have import and export functionality already. I’m envisioning something that picos could control themselves, on demand, programmatically. Ultimately, I want the pico to choose where it’s hosted based on whatever factors the owner or programmer deems most important. That could be hosting cost, latency, availability, capacity, or other factors. We would have to build a decentralized directory to discover engines advertising certain features or factors, and a means to pay them. A smart contract might work for this.
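Here’s a rough sketch of how a pico-directed move might work. The directory entries, the scoring, and the endpoint paths are illustrative assumptions; only the export/import capability exists in the engine today:

// Candidate engines, as they might appear in a decentralized directory.
const candidates = [
  { url: "https://engine-a.example.net", costPerHour: 0.002, latencyMs: 120 },
  { url: "https://engine-b.example.net", costPerHour: 0.001, latencyMs: 45 }
];

// Score engines by whatever the owner cares about; here, cheap and close wins.
function chooseEngine(engines) {
  const score = e => 1 / (e.costPerHour * 1000 + e.latencyMs);
  return engines.reduce((best, e) => (score(e) > score(best) ? e : best));
}

// Move: export the pico from the current engine, import it into the chosen one.
async function migrate(currentEngineUrl, picoId) {
  const target = chooseEngine(candidates);
  const exported = await (await fetch(`${currentEngineUrl}/api/pico/${picoId}/export`)).json();
  await fetch(`${target.url}/api/pico/import`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(exported)
  });
  return target.url;
}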
A trickier issue is protecting picos from malevolent engines; as far as I can tell, this is the hardest problem. Initially, collections of trusted engines, possibly using staking, could be used.
There are plenty of fun, interesting problems if you’d like to help.
Use Picos
If you’re intrigued and want to get started with picos, there’s a Quickstart along with a series of lessons. If you need help, contact me and we’ll get you added to the Picolabs Slack. We’d love to help you use picos for your next distributed application.
If you’re interested in the engine itself, the pico engine is an open source project licensed under a liberal MIT license. You can see the current issues for the pico engine here. Details about contributing to the engine are in the repository’s README.
Bonus Material
The information revolution of the past thirty years blossoms into a web of conspiracies that could destroy Western civilisation. At the centre of the action is Robert Gu, a former Alzheimer’s victim who has regained his mental and physical health through radical new therapies, and his family. His son and daughter-in-law are both in the military - but not a military we would recognise - while his middle school-age granddaughter is involved in perhaps the most dangerous game of all, with people and forces more powerful than she or her parents can imagine.
“I’m going to take you out to the edge to show you what the future looks like.” So begins a16z partner Peter Levine as he takes us on a “crazy” tour of the history and future of cloud computing – from the constant turns between centralized to distributed computing, and even to his “Forrest Gump rule” of investing in these shifts.
End Notes
That’s all for this week. Thanks for reading.
Please follow me on Twitter.
If you enjoyed this, please consider sharing it with a friend or twenty. Just forward this email, or point them at my news page.
I’d love to hear what you enjoyed and what you’d like to see more (or less) of. And if you see something you think I’d enjoy, let me know. Just reply to this email.
P.S. You may be receiving this email because you signed up for my Substack. If you’re not interested, simply unsubscribe.
Photo Credit: SiO2 Fracture: Chemomechanics with a Machine Learning Hybrid QM/MM Scheme from Argonne National Laboratory (CC BY-NC-SA 2.0)
© 2021 Phillip J. Windley. Some rights reserved. Technometria is a trademark of PJW LC.
By Phil Windley
I build things; I write code; I void warranties