Imagine if local governments began looking at the histogram of net worth of their population every day, calculated in an opt-in/privacy-protecting way.
Not just the median, but the whole distribution.
Not just income, but savings minus debt.
1) Fintech apps already have much of this data. 2) States like Estonia and Singapore have national ID systems (e-identity, Singpass) that can serve as a primary key. 3) Histograms can be calculated in a privacy-preserving way, e.g. link.springer.com/article/10.114…
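To make (3) concrete, here is a minimal sketch of one standard privacy-preserving approach: a differentially private histogram via the Laplace mechanism. This is an illustration of the general idea, not necessarily the scheme used in the linked paper; the bin edges and epsilon are placeholder values.

```python
import random

def dp_histogram(values, bin_edges, epsilon=1.0):
    """Differentially private histogram via the Laplace mechanism.

    Each person contributes to exactly one bin, so the L1 sensitivity
    of the count vector is 1, and adding Laplace(1/epsilon) noise to
    each bin count gives epsilon-differential privacy.
    """
    counts = [0] * (len(bin_edges) - 1)
    for v in values:
        for i in range(len(bin_edges) - 1):
            if bin_edges[i] <= v < bin_edges[i + 1]:
                counts[i] += 1
                break
    scale = 1.0 / epsilon  # Laplace scale b = sensitivity / epsilon
    # Laplace(0, b) noise sampled as the difference of two Exp(1/b) draws
    noisy = [c + random.expovariate(1 / scale) - random.expovariate(1 / scale)
             for c in counts]
    # Clamp to zero: noisy counts can go slightly negative
    return [max(0.0, n) for n in noisy]
```

The point is that the city only ever sees noisy bin counts, never individual net worths; with many participants, the noise washes out and the shape of the distribution remains accurate.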
Our current metrics for society are bad because they are easily gamed and aren't granular enough.
Society doesn't necessarily prosper as a whole if the stock market goes up. But it would if the (inflation-adjusted) net worth histogram was right-shifted.
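"Right-shifted" can be made precise as first-order stochastic dominance: the new distribution's CDF lies at or below the old one everywhere, meaning every quantile weakly improved. A minimal sketch on empirical samples (assuming net worths are already inflation-adjusted):

```python
def right_shifted(old, new):
    """Check first-order stochastic dominance of `new` over `old`:
    the new sample's empirical CDF is at or below the old one at
    every point, i.e. every quantile weakly improved."""
    points = sorted(set(old) | set(new))

    def ecdf(sample, x):
        # Fraction of the sample at or below x
        return sum(1 for v in sample if v <= x) / len(sample)

    return all(ecdf(new, x) <= ecdf(old, x) for x in points)
```

Unlike a single summary statistic (mean, median, stock index), this test can't be satisfied by gains at the top masking losses at the bottom.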
What OpenGov is doing is also relevant here. Perhaps integrate citizen data, on an opt-in, aggregated, and anonymized basis, into the city dashboard via a city app.
The numbers are the raw data.
Dashboards are presentations thereof.
Subjective text accompanies dashboards.
Monetizable actions sit alongside that text.
Seems obvious, and already happening, yet also an important lens on what useful information looks like.
Newspapers usually leave their call to action implicit. But it's often "get angry at this guy", then "subscribe now".
The alternative concept of calls-to-action alongside dashboards is interesting. Every action recommended would be explicit, vetted, and possibly monetized.
Concept: what if your community newspaper was re-centered around a community dashboard?
It addresses the ADD aspect of news judgment. Rather than random stories every day, your community would track metrics over time, like $ saved or time spent working out, and work to improve them.
Any company beyond a certain scale has a set of dashboards that the CEO and all execs review each day. Examples below.
The point of tracking metrics over time, and centering the morning on them, is that it gives long-term memory and focus.
The day doesn't start with random stories from a newspaper. The day starts with visualizing shared long-term goals, and tracking actions against those goals.
The School of Fish Strategy: just repeat what everyone else is saying. If it's proven wrong, well, everyone was wrong together. The establishment's consensus algorithm. Works until falsified by the outside world.
When is the School of Fish Strategy less effective?
In engineering, business, and war. The ability to manufacture consensus *within* your social network only partially overlaps with the skills necessary to build products, sell products, and win wars.
Consensus is still fairly important in those areas. You do need to manage teams.
But it's related to the distinction between political truths and technical truths. Is it true if others think it’s true? Or is it true regardless of what people think?
The borders between nation states are visible, but the overlap between social networks is not.
We can see the physical border between France & Germany on a map. We can't visualize the border between Twitter & Facebook. Which people are on the border, with accounts on both sites?
It's not just digital borders that are invisible, it's digital citizenship.
States can list the dual citizens of the US and Germany, but no one has the list of all dual holders of BTC and ETH.
This is how the pseudonymous economy leads to an encrypted world.
Old maps had genuine terra incognita. Places outside the ken of the civilization mapping them, mysterious places supposedly marked by "here be dragons". The phrase is apocryphal [1], but the concept is not.
We might systematize this to measure the relative effects of luck-vs-skill.
Pick N random people each year and give them X. Select another N by a purportedly meritocratic mechanism, give them X too, and compare the two groups' outcomes.
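A toy Monte Carlo version of this experiment, with an invented outcome model (outcomes are a weighted mix of skill and luck, and the "meritocratic" selector sees skill only through noise; all parameters are illustrative assumptions):

```python
import random

def selection_gap(population_size=10000, n=100, skill_weight=0.5, seed=0):
    """Mean outcome of a 'meritocratically' selected cohort minus the
    mean outcome of a randomly selected cohort, in a toy model where
    outcome = skill_weight * skill + (1 - skill_weight) * luck."""
    rng = random.Random(seed)
    skills = [rng.random() for _ in range(population_size)]
    # The selection mechanism observes skill only through a noisy proxy
    proxy = [s + rng.gauss(0, 0.2) for s in skills]

    random_cohort = rng.sample(range(population_size), n)
    merit_cohort = sorted(range(population_size),
                          key=lambda i: proxy[i], reverse=True)[:n]

    def outcome(i):
        return skill_weight * skills[i] + (1 - skill_weight) * rng.random()

    return (sum(outcome(i) for i in merit_cohort) / n
            - sum(outcome(i) for i in random_cohort) / n)
```

A gap near zero would suggest the selection mechanism adds little beyond luck; a large gap suggests it is genuinely identifying skill. The real-world version of the experiment would replace this simulated outcome with measured ones.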
Sometimes people genuinely oppose a policy, but often they just distrust the people implementing it.
This gives at least four choices:
1) Abandonment: stop pursuing the policy. 2) Coercion: force distrusters to obey, boosting their distrust. 3) Subsidiarity: find someone the distrusters trust to implement the policy. 4) Cryptoification: reduce the need for trust in the first place.
Discourse today mostly focuses on (1) and (2). Should the policy be junked, or should it be forced through?
However, in theory you could use some combination of (3) web-of-trust and (4) trust-minimizing computation to attain consensus around at least a subset of policy problems.