Perhaps the reason interoperability is so often a pipe dream (literally, as you pipe data manually from one API to another) rather than reality is that it’s hard to truly capture value from a standard or a protocol. It’s easier to extract value from a platform or an app.
We see this in crypto. Ethereum, as a platform, will always be worth more than Polygon, Polkadot, or Chainlink.
We also see this with the web, where standards & protocols are subsidized by platform owners (Google, Apple, Microsoft).
We also see this in the smart home / IoT space. Apple HomeKit, Google Assistant, and Amazon Alexa are competing standards subsidized by the owners of their respective platforms. The money is made from the hub devices each company sells.
I suppose we also see this with nations. A “world government” looks more like the G7 or UN than it does the world governments we see in sci-fi stories.
Is it possible at all for economically-sustainable standards to arise and be maintained by a neutral party, rather than a land grab by app/platform owners?
If this isn’t possible economically, then perhaps “human-centered design” is a theory that is incompatible with economic reality, for humans want to do jobs that necessarily require the composition of multiple tools.
The “fat protocol thesis” I think has taken advantage of a useful semantic drift in order to make its point. But it’s wrong.
Protocols that are really platforms, like Ethereum, are still apps. VMs, shells, kernels, etc, are apps too.
The difference between an app/platform and a protocol/standard is that an app/platform operates in hierarchical layers. An app runs on another app, which runs on a platform, which ultimately runs on a hardware platform.
A protocol/standard deals with networks, not hierarchies.
A protocol coordinates apps/platforms at the same level. They exist in the spaces between apps/platforms at the same level.
They are not “meta” or “infra”; they are “inter”. Nothing sits on a protocol, lives in it, or is stored on it.
Today, you can make money by pretending to have protocol aspirations by building new “infra” or “meta” layers.
In other words, platforms and aggregators (@stratechery).
You can’t claim to build a “payments protocol” when payments platforms already exist and your protocol does not neutrally network between them.
We have meta-payments like Apple Pay or PayPal, or infra-payments, like Adyen or Stripe.
What would inter-payments look like?
I have an AmEx, and I want to swipe it and have the payment routed to a merchant on Visa.
Or, I have Bitcoin to spend, and I want it exchanged to USD and then routed to a merchant on MasterCard.
Notice how much more user-friendly this is.
Inter-systems, rather than a new meta or infra system, actually take away cognitive load for the user!
They don’t need to hold multiple credit cards from different networks.
i.e. They don’t have to worry about which underlying system or network is doing the job; it just gets done.
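To make the inter-payments idea concrete, here is a hypothetical router sketched in Python. Every name here (the networks table, the exchange rate, `route_payment`) is invented for illustration; no such API exists, and the BTC/USD rate is a placeholder.

```python
# An inter-system routes between existing networks instead of
# replacing them with a new meta- or infra-layer.

NETWORKS = {"amex", "visa", "mastercard"}   # existing platforms (illustrative)
EXCHANGES = {("btc", "usd"): 65_000.0}      # placeholder rate, not real data

def route_payment(source, dest, amount, currency="usd"):
    """Route a payment from the user's instrument to the merchant's network."""
    if (currency, "usd") in EXCHANGES:       # e.g. spend BTC, settle in USD
        amount *= EXCHANGES[(currency, "usd")]
        currency, source = "usd", "exchange"  # settled onto fiat rails
    if dest not in NETWORKS:
        raise ValueError("unknown destination network")
    return f"{amount:.2f} {currency}: {source} -> {dest}"

# "I have an AmEx, and the merchant is on Visa."
print(route_payment("amex", "visa", 20.00))       # -> 20.00 usd: amex -> visa
# "I have Bitcoin to spend; the merchant is on MasterCard."
print(route_payment("btc", "mastercard", 0.001, currency="btc"))
```

The user never chooses a network; the routing is the inter-system’s job.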
Human-centered design, at least in theory, is about removing unnecessary cognitive load: asking no more of the user than the job itself requires.
Infra and meta systems tend to introduce more cognitive load, not reduce it.
All designers can do in practice is try to reduce cognitive load when it comes to the user interface. The visuals, the interactions, the copy.
I can’t help thinking that this is extremely low-leverage optimization work.
Meta-systems, I find, are the most deceptively alluring systems to work on. Because systems in a hierarchy are abstractions (a higher-level system abstracts a lower-level one), you might (falsely) think that you’re reducing cognitive complexity, since you can “abstract it out”.
But you can’t abstract everything and have it be as useful as the underlying systems you are abstracting.
At some point, in order to stay internally consistent, you’ll limit the functionality that exists in the lower-level system and prevent your users from reaching it.
Call it “incompleteness” if you want.
It’s frustrating because, as anyone who’s tried to do home DIY, cooking, or programming can attest, eventually you’ll have to learn the lower-level system anyway because the abstraction is leaky.
A meta-system X that claims to compose functionality from sub-systems A, B, and C will always have functionality less complete than A + B + C.
Expectation: X = A + B + C
Reality: X = 0.8A + 0.5B + 0.3C
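A toy sketch of that lossiness (all class and method names invented for illustration): the meta-system exposes only the operations it chose to model, so everything else in the subsystems becomes unreachable through it.

```python
# Hypothetical subsystems, each with its own full feature set.
class PhotoService:                                     # "A"
    def upload(self, img): return f"uploaded {img}"
    def apply_filter(self, img, name): return f"{name}({img})"

class StorageService:                                   # "B"
    def put(self, key, blob): return f"stored {key}"
    def set_lifecycle_rule(self, key, days): return f"expire {key} in {days}d"

# The meta-system exposes one unified verb, "save".
# It composes A.upload + B.put, but A.apply_filter and
# B.set_lifecycle_rule simply cannot be reached through it.
class MetaX:
    def __init__(self):
        self.a, self.b = PhotoService(), StorageService()
    def save(self, img):
        return self.b.put(img, self.a.upload(img))

x = MetaX()
print(x.save("cat.jpg"))    # -> stored cat.jpg  (the modeled 0.8A + 0.5B)
# x.apply_filter(...)       # AttributeError    (the lost functionality)
```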
Compare this with Unix’s pipe, an inter-system which connects Unix programs that don’t have to be aware of each other’s implementation.
Piping the output of one program to the input of another is lossless, not lossy, like a meta-system is.
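For instance, a word-frequency count composes three programs that share nothing but plain text on stdin/stdout:

```shell
# sort, uniq, and sort again know nothing about each other's
# implementation; they agree only on "lines of text".
printf 'apple\ncherry\napple\nbanana\n' | sort | uniq -c | sort -rn
```

Any program that reads and writes lines can join the pipeline, with no schema negotiated in advance.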
Compare Unix, also, with a system we use today to pipe data: the HTTP API (or perhaps gRPC), where the input of one program needs to be aware of the output of the preceding program, because payloads have a pre-defined structure that can also change unpredictably.
Watching this, I can’t help but think that the reason this is the case, is that we separated the jobs of software “design” and “engineering” at birth.
Listen to how articulate the designers of Unix are, when discussing how and why they implemented the pipeline model.
It always starts with people and the things they want to get done, stated in plain English. It’s easy to follow.
If people today designed inter-systems the way the Unix pipeline was designed, starting with the things people want to get done, what other easy-to-use and easy-to-follow systems would result?
I bet that it wouldn’t look like the mess that is web development today, for sure.
This is promising.
But ultimately reactive (these are the limits of our existing systems; let’s fix them, within existing limits).
Not proactive (these are the jobs people want to get done; how do we design a system that matches how they think?).
Tangentially, this is why Bitcoin as a store-of-value makes sense, rather than trying to be a medium-of-exchange or unit-of-account.
With the right inter-system(s), Bitcoin doesn’t need to be either. You can store value in Bitcoin, but transact and account in USD.
Cryptocurrencies trying to be payments systems are weirdly out of touch, because they’re trying to make a new meta-system or a new infra-system (or both) where ones already exist and work well.
I keep coming back to crypto and payments because they deal with a fairly standard concept (money), are distributed, and have robust (but not necessarily antifragile or efficient) inter-systems already in operation.
I wonder if we can learn from finance, and build (hopefully antifragile) inter-systems to disintermediate the cloud silos we see today in consumer Internet and B2B SaaS.
But also, finance is a complicated domain. Most domains SaaS deals with are complex in nature.
Perhaps the innovation needed here is not technological, but economic.
Money, as a standard, is after all an economic invention.
The legal concept of “intellectual property” might be a starting point for treating data as an economic primitive, like we treat money or land.
Data (the creation, collection, storage, transmission, access, ownership, and representation thereof) have real-world consequences that are perhaps larger than the domain of computing.
The way we treat data as the province of data scientists or software engineers is, well, provincial.
It’s as if oil and gold were only the province of chemists.
In the past 2 years, we’ve already seen how ill-suited lawmakers are to thinking about data in a legal & economic context.
But a list of directions isn’t a map. It’s meant to be overlaid atop a map to make any sense.
Where is the actual map? Most companies don’t have one, or even the tools to make one.
@johncutlefish @jimhead @intercom The insidious thing about calling that list of directions a “map” is that everyone assumes the actual territory is implicitly known and well understood, not to mention the suspension of disbelief that for the “next quarter” the territory doesn’t shift as you navigate.
People like to demo multiplayer work tech to show small groups of people doing synchronous work together.
But the value of multiplayer spaces is allowing big groups of people to collaborate asynchronously on an ever-evolving artifact. oculus.com/experiences/qu…
With the ability to jump in and out. Bursts of activity.
Occasionally, collaborators overlap resulting in momentary synchronous collaboration.
An async model with smart synchronous conflict resolution means you get Git without merge conflicts.
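One established way to get that convergence is a CRDT. A minimal grow-only set, where concurrent async edits merge by set union, so sync order never matters and there is no conflict to resolve:

```python
class GSet:
    """Grow-only set CRDT: merge is union, so replicas that sync
    in any order always converge to the same state."""
    def __init__(self):
        self.items = set()
    def add(self, x):
        self.items.add(x)
    def merge(self, other):
        self.items |= other.items

# Two collaborators work offline on the same artifact...
alice, bob = GSet(), GSet()
alice.add("intro section")
bob.add("diagram")
# ...then sync in either order and land in the same state.
alice.merge(bob)
bob.merge(alice)
print(alice.items == bob.items)   # -> True
```

Real multiplayer editors need richer CRDTs (ordered sequences, deletions), but the merge-without-conflict property is the same.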
The assumption that work happens only synchronously is dangerous.
Synchronous-only tools (like this Oculus app, video conferencing tools, or chat) encourage a high-presence, “interruptions and context switching are good”, low-time-preference culture.
💎Luxury apps are the favorite of the time-poor, cash-rich, attention-deficit multitaskers who need to get to Inbox Zero or they will drown in info overload and can’t “get shit done” anymore.
⚖️Egalitarian apps are like the libraries and parks of the Web, they are everywhere and you feel welcome. Inspired by “multiplayer” games, but they are deeply collaborative instead of competitive.
Anyone can benefit from using them. But because of that, there is no “edge”.