After some thinking about the privacy/surveillance spectrum, here is my current perspective on design goals for self-sovereign identity (~weakly held).
👇
As with most complex spectrums, the only real "solutions" are the two endpoints, precisely because they ignore the complexity altogether: total privacy or full transparency. Neither is tenable.
"Privacy for the weak, transparency for the powerful."
This isn't necessarily bad; the _complexity_ is in quantifying skin in the game for qualitative actions.
There is an implicit or explicit "bill of rights" attached to any identity system (including the reality-based one we have now).
"Never once has a police state failed to use technology and surveillance to control a population." — The End of Trust
That said, the (expected) implementation of such systems begets Orwellian tragedy.
Encouraging adoption of identity protocols (which take power away from the powerful) is a hard problem.
1. no self-determination (forced participation via a monopoly on violence)
2. protocolization of a moral compass imbued with the values of its creator
3. an aggregate score that can't account for real human behavior (see the sketch below)
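A minimal sketch of that third point, in Python (all names hypothetical; this isn't any real protocol): once qualitative judgments are collapsed into a single aggregate number, the number can't recover _why_ any rating was given.

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    handle: str
    ratings: list[int] = field(default_factory=list)  # each entry is +1 or -1

    def rate(self, value: int) -> None:
        self.ratings.append(value)

    @property
    def score(self) -> float:
        # The aggregate can't distinguish why a rating was given: a -1 for
        # fraud and a -1 for an unpopular opinion are identical once summed.
        return sum(self.ratings) / max(len(self.ratings), 1)

alice = Identity("alice")
alice.rate(+1)  # "returned my lost wallet"
alice.rate(-1)  # "disagreed with me politically"
print(alice.score)  # 0.0 -- looks like neutrality, actually two erased stories
```

The 0.0 reads as "neutral," but it's really two incommensurable judgments cancelling out.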
Whether or not you think turning a minority group's town into an open-air prison is bad (it _is_, btw), the core problem is that the tech encodes values at all.
Humans constantly evaluate and judge others' reputations; this is normal. Does the act of protocolizing reputation ("I rate you +1") inherently conflict with human behavior? I internally rate peers all the time, but I don't want Yelp for People (sketched after the link below).
medium.com/cultivated-wit…
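To make the question concrete, here's a hypothetical sketch (Python; a stand-in hash in place of real cryptography) of what "I rate you +1" becomes once protocolized: a signed, timestamped, durable record of a judgment that used to be private and ephemeral.

```python
import hashlib
import json
import time

def signed_rating(rater: str, subject: str, value: int) -> dict:
    record = {
        "rater": rater,
        "subject": subject,
        "value": value,            # an entire human judgment, reduced to +/-1
        "timestamp": time.time(),  # and made permanent
    }
    # Stand-in for a real signature: a hash binding the rater to the record.
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode())
    record["sig"] = digest.hexdigest()
    return record

print(signed_rating("me", "peer", +1))
```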
Is the process of encoding a human moral compass doomed to fail by virtue of being attempted? Is an encoded value system inauthentic?