this thread is about digital data ethics, an emerging field of consulting practice, partly academic, partly journalistic, and in urgent need of a close critical reading.
first we have a division of labour: a) the IT sector, the developers, project managers, UI designers etc. vs. b) the ethics boards, the IT journalists, sociologists, experts claiming to represent the public interest. only a very small percentage has overlapping skillsets.
while developers and designers often lack the scope and skillset fuzzily described as "social awareness", ignoring entire areas of real-world problems and the full social impact and side effects of their products, their methods today are called "agile": incremental, step by step.
the idea of a prescriptive data ethics is still part of the age of the "mythical man-month", based on top-down specification and formal, generalized "principles". moreover, this reverse design approach operates only in the negative: inverse, and stochastically improbable.
instead of reading code and design specifications while they are in agile development, and discussing errors and gaps in the specification, which would require read and write skills of digital literacy in solidarity with the producers of software, another road is taken, called "the law".
the focus of civil society groups on juridical design prescriptions practically fills the gap of productive capacity in the area of research and development, hoping to enable new advanced markets but in fact increasing bureaucratic overhead and circumvention methods.
it's easy to find the underlying problems in the incongruence between the governance of corporate infrastructures and leftover democratic institutions. after decades of privatization, the incapacity for reverse and remote fixing has produced a byzantine scholastics of juridical symptomatology.
a reverse design approach risks inaccuracy, blocking both good and bad development, or worse, creating a culture of double standards and circumvention by ignoring the force of the factual. one reason is a philosophical ineptness based on transcendental model building.
another approach: instead of pretending to possess prognostic powers and to assess all branches of contingent futures, as the assessment method does, a pragmatic ethics of immanence can be injected into the agile process to improve the incremental descriptions of what is and what shall be.
this leads to the requirement of increased skillsets through cross-disciplinary education as well as team building, to enable critical close reading of documentation, specifications, code and constraints without the need to bounce back to the level of the general public, journalism, storytelling.
if the sources are published openly, as open source, open data and open research, the collective dialectical discourse capacity and intellectual resonance should, optimistically, find a "golden path". taking a shortcut by tweaking the rules of the game (law) should be avoided.
in the case of #DP3T, the fundamental flaws in the system analysis, which ignores the structure of the workflow on the institutional side of manual tracing, should be fixed by describing the specification of an interface to a federated, not centralized, backend architecture.
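to make that concrete, here is a minimal sketch of what such an interface specification could cover; all names, fields and methods are invented for illustration and are not taken from the DP3T documentation:

```python
# a minimal, hypothetical sketch of a federated backend interface;
# names and fields are illustrative, not part of the DP3T documentation.
from dataclasses import dataclass
from datetime import date
from typing import Protocol, Sequence


@dataclass
class ExposureKey:
    key: bytes     # rotating key material as published by the app
    day: date      # day the key was active
    region: str    # regional health authority that validated the case


class FederatedBackend(Protocol):
    """Interface a regional (not centralized) backend would implement."""

    def submit_validated_keys(self, keys: Sequence[ExposureKey],
                              lab_token: str) -> bool:
        """Accept keys only with a token issued by a validated test lab."""
        ...

    def fetch_keys(self, region: str, day: date) -> Sequence[ExposureKey]:
        """Serve the day's keys for one region; apps poll their own region."""
        ...

    def federate(self, peer_region: str, day: date) -> Sequence[ExposureKey]:
        """Exchange keys with a neighbouring region's backend, peer to peer."""
        ...
```

the point of pinning this down as an interface rather than a server is that each regional health authority can keep its own store and workflow while the app only needs one contract to talk to all of them.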
the goal of the app is to counter a pandemic, not feature creep on the level of law-making while redirecting attention away from important issues such as banning companies like palantir, nso and clearview, and preventing an institutional centralisation of private health data.
the cry-wolf community has not understood the layered architecture of the problem at hand. by supporting decentralisation on the data gathering layer, they ignored the centralisation of the backend where personal data is stored, practically proposing a privacy diffusion protocol.
the 'compliance with data security and data protection requirements' of the backend infrastructure, which is still under development, goes together with a centralized database architecture and will probably be based merely on access control lists.
lzg.nrw.de/_php/login/dl.…
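for illustration, this is roughly what protection "merely based on access control lists" amounts to; a deliberately naive, hypothetical sketch, not the actual backend code:

```python
# a deliberately simple sketch of an access-control-list check, the kind of
# protection the tweet suspects the backend will rely on; purely illustrative.
ACL = {
    "health_authority_nrw": {"read_cases", "write_cases"},
    "statistics_office":    {"read_cases"},
}

def is_allowed(principal: str, action: str) -> bool:
    # a single table lookup: no encryption at rest, no audit trail,
    # no data minimisation - everything an ACL by itself does not give you.
    return action in ACL.get(principal, set())

assert is_allowed("statistics_office", "read_cases")
assert not is_allowed("statistics_office", "write_cases")
```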
epidemiologic inference privacy paradox: the more unreliable the case evidence data, the more general the inductive model building, leading to a higher chance of lockdowns. preventing the incremental improvement of tools to gather case data can increase the risk of restrictions.
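a toy calculation (with invented numbers) of why sparse case data pushes towards blanket restrictions, assuming a policy that acts on the upper confidence bound of the estimated incidence:

```python
# a toy illustration of the inference privacy paradox: with fewer reliable
# case reports, the uncertainty around the regional incidence estimate grows,
# and a policy that acts on the upper confidence bound locks down earlier.
# all numbers are invented for illustration only.
from math import sqrt

def upper_bound(positives: int, tests: int, z: float = 1.96) -> float:
    p = positives / tests
    return p + z * sqrt(p * (1 - p) / tests)   # normal-approx. 95% CI

THRESHOLD = 0.05  # hypothetical incidence level that triggers restrictions

for tests in (50, 500, 5000):
    positives = round(0.03 * tests)            # true rate 3%, below threshold
    ub = upper_bound(positives, tests)
    print(tests, round(ub, 3), "lockdown" if ub > THRESHOLD else "no lockdown")
# with 50 tests the upper bound exceeds 5% and triggers a lockdown;
# with 500 or 5000 tests it does not - better case data, fewer blanket restrictions.
```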
#DP3T: you can see how many informal interactions based on goodwill are needed between app user and health authority. the token issued by the doctor needs a crypto-backend which is unspecified in the documentation. IMO it looks like an academic prototype.
when alice assumes she has symptoms, the 'real' flow chart has many more states, conditions and branches than described in the documentation. all private data is exchanged outside of the app; it's an air-gapped approach, but also one of high operational/contextual unreliability.
all doctors and test labs have to use a new kind of token system which needs an undocumented process to receive or generate such valid tokens. the token validation procedure needs a code review. it looks like "self-reporting" by default, with the risk of producing false positives.
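one hypothetical way such a lab-issued upload token could be specified so that uploads are not pure self-reporting; this is a sketch under assumptions, since the DP3T documentation leaves the mechanism open:

```python
# a hypothetical sketch of a lab-issued upload authorization token; the actual
# DP3T token mechanism is left unspecified in its documentation.
import hashlib
import hmac
import os
import time

LAB_SECRET = os.urandom(32)      # would be provisioned to validated labs only
ISSUED = set()                   # server-side record of issued, unused tokens

def issue_token(test_id: str) -> str:
    """Lab calls this after a positive, validated test result."""
    payload = f"{test_id}:{int(time.time())}"
    sig = hmac.new(LAB_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    ISSUED.add(payload)
    return f"{payload}:{sig}"

def validate_token(token: str, max_age: int = 86400) -> bool:
    """Backend checks signature, freshness and single use before an upload."""
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(LAB_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    if payload not in ISSUED:                      # single use only
        return False
    _, _, issued_at = payload.rpartition(":")
    if time.time() - int(issued_at) > max_age:     # freshness window
        return False
    ISSUED.discard(payload)
    return True
```

whatever the real construction ends up being, it is exactly this issuance and validation path that would need the code review the tweet asks for.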
back to the ethical point of view. transferring the reliability concerning privacy to the outside makes the app itself more "neutral" but also more vulnerable & unreliable, dependent on outside factors & data flows, following a libertarian design principle of self-accountability.
the contingency of social connectivity on the physical network layer (0) could be balanced if the signals on layer 1 were used not to prioritize the obfuscation of privacy but to improve the fault tolerance of the suggested protocol.
#DP3T still uses a central server for seeding keys and refreshing them frequently. the logs will be uploadable voluntarily. the frequently refreshed keys generate a traceable geoip. enough loopholes already, but no addressing of how to iteratively improve specificity and sensitivity.
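for readers who want the mechanics: a simplified sketch of the rotating-key and ephemeral-ID pattern described in the DP3T white paper, with the PRG and epoch handling reduced to plain hashes; not the reference implementation:

```python
# a simplified sketch of DP3T-style daily key rotation and ephemeral IDs,
# following the pattern in the white paper but with the PRG and epoch handling
# simplified; not the reference implementation.
import hashlib
import hmac
import os

EPOCHS_PER_DAY = 96            # e.g. one ephemeral ID per 15-minute epoch

def next_day_key(sk: bytes) -> bytes:
    """SK_t = H(SK_{t-1}): forward-only rotation, old keys can't be recovered."""
    return hashlib.sha256(sk).digest()

def ephemeral_ids(sk_day: bytes) -> list:
    """Derive the day's broadcast identifiers from the day key."""
    seed = hmac.new(sk_day, b"broadcast key", hashlib.sha256).digest()
    ids = []
    for i in range(EPOCHS_PER_DAY):   # simplified PRG: hash(seed || counter)
        ids.append(hashlib.sha256(seed + i.to_bytes(4, "big")).digest()[:16])
    return ids

sk = os.urandom(32)                   # SK_0, generated on the device
today_ids = ephemeral_ids(sk)
sk = next_day_key(sk)                 # rotate for tomorrow
```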
so we are debating the cultural tropes of network architectures, bounced back to the level of politics and law-making, deciding on the contingencies of software development paths based on an academic proof of concept, which is then implemented (and probably fixed) by apple/google.
Q4/2020: the way to fix the protocol will probably use caching, machine learning, triangulation and various backend procedures and edge ML to reduce the number of false positives and false negatives. a main requirement is full interoperability with a health authority backend.
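purely as a sketch of what such post-processing could look like (not the apple/google algorithm): weight each encounter by duration and attenuation, then notify above a tunable threshold that trades false positives against false negatives:

```python
# a hypothetical sketch of edge-side post-processing to trade off false
# positives and negatives: weight each recorded encounter by duration and
# signal attenuation before deciding to notify; not the Apple/Google algorithm.
from dataclasses import dataclass
from typing import List

@dataclass
class Encounter:
    minutes: float          # cumulative exposure time
    attenuation_db: float   # lower attenuation ~ closer contact

def risk_score(e: Encounter) -> float:
    proximity = max(0.0, 1.0 - e.attenuation_db / 80.0)   # crude distance proxy
    return e.minutes * proximity

def should_notify(encounters: List[Encounter], threshold: float = 15.0) -> bool:
    # raising the threshold cuts false positives, lowering it cuts false negatives
    return sum(risk_score(e) for e in encounters) >= threshold

print(should_notify([Encounter(30, 55), Encounter(5, 75)]))   # -> False
```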
interesting variations of the specification of requirements: cdc.gov/coronavirus/20… and a decentralized contact tracing protocol: pact.mit.edu/wp-content/upl…
the uk is a good example of how to mess things up badly. the government's consultants have mixed up communication with scientific analysis and combined various use cases of different apps. an unforgiving audit just documents that the result is no result. infolawcentre.blogs.sas.ac.uk/files/2020/05/…
it would be rather helpful to create an open forum for the corona-app debate and accumulate relevant texts, approaches, research papers etc. from there, an overview and a level of detail become possible that allow better top-down decisions. reuters.com/article/us-hea…
again, the main problem with #DP3T is the false dichotomy of decentralisation/centralisation while neglecting the regionalized infrastructure of the health authority backend, the root of trust of validated tests, and effective and scalable interoperability with manual tracing.
it would need at least a reference implementation of a middleware backend to become a complete meshwork-client-server architecture. then other APIs and microservices could be built on it. it's an unfinished specification, with a harmfully uninformed debate culture surrounding it.
in order to keep accountability in iterative design processes, defined interfaces for auditing side-channels, for authorized access by scientists and data ethics experts, are possible. dl.acm.org/doi/abs/10.114…
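a hypothetical sketch of what such a defined audit side-channel could look like: an append-only access log that only authorized reviewers can query, with the queries themselves logged; illustrative only, not taken from the cited paper:

```python
# a hypothetical sketch of a defined audit side-channel: every data access on
# the backend is appended to a log that authorized reviewers can query;
# purely illustrative, not taken from the cited paper.
import time
from dataclasses import dataclass, field

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)
    reviewers: set = field(default_factory=set)   # authorized auditors

    def record(self, actor: str, action: str, record_id: str) -> None:
        self.entries.append((time.time(), actor, action, record_id))

    def query(self, reviewer: str, since: float) -> list:
        if reviewer not in self.reviewers:
            raise PermissionError("not an authorized auditor")
        # auditing the auditors: the query itself is logged as well
        self.record(reviewer, "audit_query", f"since={since}")
        return [e for e in self.entries if e[0] >= since]
```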
the false dichotomy of centralized vs. decentralized is increasing, covering up a discussion about how the health system behind it is tested and reorganized during the crisis. bbc.com/news/technolog…
just like any middle-class narrative of neoliberal consumer protection, the tale of this "free choice" is based on a preexisting, smoothly operating backend with functional supply chains, state institutions and the "luxury" of a non-crisis situation, not to be taken for granted any more.
instead, the problem must be thought from the backend, which is easily overwhelmed by manual tracing of infection chains. only the minimum amount of private data should be used, for a personalized pandemic health interface with a feedback loop between one's own actions and regional stats.
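a sketch of what such a data-minimal interface with a feedback loop could hold, with all fields invented for illustration: aggregate regional stats published by the backend, personal state kept only on the device:

```python
# a hypothetical sketch of a data-minimal "personal pandemic health interface":
# the device keeps only what the feedback loop needs, the region publishes
# only aggregates; fields are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RegionalStats:               # published by the regional backend, no personal data
    region: str
    incidence_per_100k: float
    tracing_capacity_used: float   # 0.0 .. 1.0

@dataclass
class LocalState:                  # stays on the device
    region: str
    encounters_today: int
    last_test_result: Optional[str] = None

def feedback(local: LocalState, stats: RegionalStats) -> str:
    # the loop: own behaviour adjusts to regional load, nothing is uploaded
    if stats.tracing_capacity_used > 0.8 or stats.incidence_per_100k > 50:
        return "reduce contacts: regional tracing is near capacity"
    return f"{local.encounters_today} encounters today, regional load is ok"
```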