Honestly, I wish people would just realise that algorithmic bias and bureaucratic stupidity are *exactly the same thing* so we could start unpicking the rationalisations implicit in both, as they're synergetic: you have to get them both to tackle either successfully.
Putting aside whether this is even a good use of the term 'algorithm', which you can usually substitute for 'wizard' without any loss of meaning, the issue is that we keep pretending we can *trivially* solve certain sorts of problems with certain sorts of tools, when we can't.
It doesn't matter whether the implicitly specified knowledge representation generated through training is encoded in some distributed set of educated human neurons or some artificial kludge of ML systems: its implicitness is a logical feature of the problem it is targeting.
There are exceptions to this. If you want to calculate the prime factors of a number, or run a cryptographic algorithm based on it or similar mathematical problems, then there can simply be *best* ways to do it, and they can be automated. If we ran RSA on HR we'd be in trouble.
At the very least, we'd need some impressive error checking procedures. It's not *running* such programs that is computationally hard, it's *discovering* them. We can say some very precise things about what proof search is and is not. But simulation search? That's trickier.
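To make the contrast concrete: factoring a number is the sort of problem for which there really is a mechanical, guaranteed-correct procedure. A minimal sketch (plain trial division, not the fastest known method, chosen only for transparency):

```python
def prime_factors(n: int) -> list[int]:
    """Return the prime factors of n (with multiplicity), smallest first."""
    factors = []
    d = 2
    # Any composite n has a factor no greater than its square root,
    # so trial division up to sqrt(n) suffices.
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:  # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(84))  # → [2, 2, 3, 7]
```

*Running* this is trivial; the hard, unrepeatable work was the number theory that *discovered* and justified it. Nothing analogous exists for "simulate the relevant features of this messy empirical system".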
The only tacit model of searching for mathematical rules that lets us adequately simulate some system in our environment, in ways that might allow us to interact with it usefully within tolerances determined by that usefulness, is mathematised empirical science itself.
But there's an all too tempting impulse to take a shortcut and assume that, not only can empirical discovery be fudged till it looks like mathematical discovery, but this can be fudged till it looks like mathematical deduction. Throw enough probabilities in there and it'll work.
This is a deep and abiding form of logical false-consciousness I find everywhere, and it's generally a refined form of the engineer's intrinsic optimism: "give me a couple days and some materials and I'll hack something together that'll do the job well enough!"
The problem being that this kind of optimistic satisficing all too easily gets hyperbolically projected into the assumption that there are satisfying optima for every such problem. To quote Voltaire: 'the perfect is the enemy of the good.' Perfectionist fallacies get everywhere.
In the process, we forget that the problems in relation to which these hallucinatory optima make any sense are something *captured* by mathematics, rather than *defined* by it. The assumption that such apparent optimal solutions are stable under perturbation is *optimism*.
I intend every last connotation loaded into that word by Candide and fling it in the face of the information technicians of contemporary society, be they designing human behaviour, computational infrastructure, or, increasingly, both at the same time.
"Just one more form" they say, balancing a stack on the back of their charges, blissfully unaware of the strain or its potentially catastrophic consequences.

"Just one more variable, one more tranche of data" they insist, sure that this time, the bugs will magically disappear.
The dynamics of information imply a politics. The first rule of this politics is that nothing is free. One cannot simply acquire more data, let alone process it, in ways that guarantee efficiency will increase linearly. There are pervasive and pernicious non-linearities here.
I beg you. Look at the examinations system. Just look at it, and ask yourself these questions: "What is it supposed to do? What is it supposed to measure, and how is this supposed to modulate the process it is measuring in a way that improves this thing?" Education is fucked.
This is the closest I come to getting on the Adorno-Heidegger train regarding the evils of instrumental reason. The slightest nudge and the metrics we've balanced on the things we care about slide off, onto nearby goals it's easier to optimise for. Mandatory stupidity awaits.
The only thing that keeps cognitive processes (individual/collective, natural/artificial, multi-modal hybrid) locked on to the objects they intend are independent mutually correcting/reinforcing representations: i.e., more opportunities for error and its correction.
This is a sort of mutual responsibility: keeping each other in check. This 'personal' model is how we should think about the integration of administrative and computational infrastructure. More often we get an 'impersonal' one in which mutual blame diffuses such responsibility.
I give you the Trinity of automatic incompetence:

The Unholy Spirit: "Not my problem mate, I just collect the data."

The Idiot Son: "Not my problem mate, I just follow the rules."

The Absent Father: "Not my problem mate, I just use the data they give me to make the rules."
Perverse incentives will creep in wherever the checks aren't balanced. The more delicate the balance you've created, the easier it will slip when the problem is perturbed by any exceptional circumstance: the easier these exceptions become the rules, rather than changing them.
Optimism encourages us to ratchet this balance as far as it will go, stretching metrics to *do* more than they were intended to, to *include* more than they were designed for, to permit levels of *precision* and *comparability* that they cannot sustain while stably referring.
I've talked about technical debt a little recently. This is what we might call 'technical leveraging'. The irony is that those who do it think they're creating solutions to ranges of small problems rather than creating newer, bigger ones. Worse, their efforts are synergetic.
There's an analogy with anomalies in scientific research programs to be had here. The little changes and extensions are not unlike ad hoc explanations of exceptions to a model's predictions. The accumulated weight of minor explanatory sins that drags a paradigm down.
The difference is that in organisations with practical rather than strictly theoretical goals, these are often unforced errors: accumulated good intentions that pave its road to hell. The ad hoc fixes are how its members work around these decisions to pursue the original goal.
This is the epistemic resistance to rationalisation that James C. Scott is so concerned to advertise and analyse. People keep on doing their jobs even as their organisation's implicit representations of what these jobs are drift further away from its stated intentions.
Technical leverage and representational drift. These are our eternal enemies. They will not go away, no matter how much toxic optimism we flood our organisations with, be it managerial or computational, which is to say, no matter how cybernetic. True cybernetics is fallibilist.
Which is not to say 'conservative'. There's a wisdom to conservatism that radicals ignore at their peril, but it's a wisdom that can and should be turned against the object it fears: the mistakes that accompany every (necessary) change. The liberal mean is blissful ignorance.
The liberal mean is cruel optimism. The blank, indifferent face of the bureaucrat. The sleek, inflexible interface of the kiosk. The rank, degenerate incompetence of the managerial class refusing to hold each other responsible for any fucking thing, no matter how important.
The conservative bemoans (inevitable) change, or worse, capitalises on it for fun and profit. The radical confronts its inevitability, and the inevitability of the errors that come with it. They are motivated by one simple thing: that whatever they know, *this* doesn't work.
I like to think that I'm a progressive, but here's the tawdry truth of progress shorn of radicalism: every claim that things can be improved without anyone shouldering the risk of improving them is an *evil* fucking lie; a pleasant hope that starves children and breaks the frail.
Commitment requires constraint. Enthusiasm tempered by a friendly word from a cheerful cynic. The more the merrier. Not all risks are great. There are undoubtedly many low risk improvements to the present order. It's finding and convincing people to try them that's hard.
There are too many fucking forms to fill in, for one thing.
You've got to laugh, lest you cry. There's a strange joy to be found in absurdity, and I only hope for the wisdom to keep finding it there, amongst other exceptional things. Solidarity in failure, you hopeless radicals. 🖖
CODA: As ever, there's a few other bits of writing directly relevant to this.

1. 'Immanentizing the Eschaton' - on theological assumptions about perfectibility common to various strands of thought: deontologistics.co/2019/10/22/tfe…
2. 'Incompetence, Malice, and Evil' - a more considered discussion of the relationship between incompetence and evil and its contemporary manifestations: deontologistics.co/2019/10/22/tfe…
3. 'Universal Leverage' - a thread on the decline of universities that discusses leveraging and the plague of metrics:

— pete wolfendale (@deontologistics)