Here's another dose of philosophical-political soul-searching for the morning. People often tell me to apply for things: jobs, postdocs, competitions, blind submissions of various kinds, and my default answer these days is 'no' unless there's a very compelling case for it. Why?
It seems like a perfectly reasonable request. I also thoroughly believe in my mother's maxim that 'shy bairns get no sweets', i.e., that one has to go out and ask for things, because they won't just come to you. However, most bairns don't have to fill out sweet application forms.
I spent 6 years applying for everything in sight, both in the philosophy world, and in the regular world, just trying to find part-time work to get by on. I even tried setting myself up on sites like Upwork to get editing gigs, because this is one thing I have experience in.
I have never, in my entire life, succeeded at an interview. This is in part because I have had little practice at them, certainly not enough practice in any one area to know what implicit social norms the local grift imposes. I rarely get this far in any application process.
Yes, I am naive. My natural inclination is always to answer questions sincerely and to the best of my ability, and to admit when I don't know things. It took me a long time to realise just how strategically inept I am at basically every hiring process thus far designed.
I could talk about various examples here. The one interview I've had for an academic position, which in retrospect was an elaborate farce wasting 3 people's time to justify 1 local person getting hired. The application prefaced by an HR form asking me to describe my 'resilience'.
But the key point is this: every opportunity or gig that I've ever had has come from a personal connection. I landed one postdoc position purely because someone knew me, and took a risk on me, and I still feel bad for falling apart and blowing it (cf. deontologistics.co/2017/12/22/tra…).
But I spent a long time shopping around postdoc applications through multiple departments in multiple institutions in multiple disciplines (even a business school) trying to find a route into the system through the many tiered processes that such applications pass through.
I've done similar things with job applications, desperately searching for any inside information that will let me custom-tailor my research statement, CV, and every other piece of verbiage that will get thrown in the digital trash heap, in the hope it'll pierce the filter.
I also cannot tell you just how much time I've wasted sending out pieces of writing in the hope of getting feedback that would make it publishable. The only consistent thing I learned here is that everyone gives advice informed by their own personal trajectory (including *luck*).
This isn't to say this advice was all bad, just that if you give 10 people something that doesn't fit into the usual boxes you will get 10 different stories on why it doesn't fit and how you should revise it to do so. This made me so anxious about peer review I gave up on it.
I've spent most of my time in academia blowing people's socks off in person and then struggling to get authenticated by any system that channels information through the ever-expanding castle of HR, in which I include the REF, TT, and the many hells of other people (peer review).
This created an excruciating cognitive dissonance it took me a long time to identify, a heady mixture of self-loathing and learned helplessness I'm still trying to disarticulate. The psychological flip side of this is a sometimes poorly controlled anger at every facet of this system.
But one thing that emerged out of this was a hard rule, necessary for my often fragile mental health: applications are opt-in, not opt-out. There has to be a good reason to apply to anything, because in all probability it is a fucking crap shoot with unpredictable entry conditions.
The current system doesn't just produce a lot of survivor's guilt in those people who make it through the protracted distributed hazing rituals the previous generation of scholars administer, but also a lot of survivorship bias. At least I know how little I know about what works.
All that I know is that I know nothing about what 'good philosophy' is supposed to be, at least insofar as this concept is encoded in the tangle of social networks, professional institutions, and scholarship metrics out of which the discipline is spun.
I demonstrably know a lot about the philosophy the discipline engages with. I'm an unapologetic generalist. I also have a pretty good reading on what @peligrietzer would call the 'vibe' of every region of the discipline I'm familiar with (e.g., thephilosopher1923.org/interview-wolf…).
I see my understanding of philosophy in navigational terms. I try to build an overarching map of the whole dialectical space, such that I can then slowly fill in the details of any given region. This makes me very sensitive to people talking at cross purposes, or reinventing wheels.
This is one reason I not only ignore but actively denounce the Analytic/Continental distinction at every opportunity, because it is quite clear to me that there are lines of thought in either canon that are closer to lines in the other than they are to their purported fellows.
But none of this translates into useful understanding of the social terrain and the associated tacit norms that determine publication or hiring. Absolutely none of it. Moreover, every ounce of time I have wasted trying to understand the latter has actively inhibited the former.
Here's a further important fact. Bipolarity means that my time does not work like other people's time. The general fallacy that a certain amount of time put into work produces a certain amount of intellectual product fails catastrophically in my case. Academic time hurts me.
To be a little Marxist for a second, the systems of resource allocation that allot research funding are essentially about keeping time, and the last several decades have seen them ratchet the tolerances on the above-mentioned regulative assumption to truly *absurd* heights.
As far as I can tell (but then again, I don't *know*), the ideal 'smart, ambitious, workaholic' young academic is someone who very reliably turns time into words that they don't care about, because care potentially conflicts with most implicit standards.
And as I've been at pains to repeat, because venturing into these waters means saying very frank things that quasi-colleagues may find offensive, I have met many people who have moulded themselves thus who absolutely hate it, and wish they could write about what they want to.
This doesn't just lock people out of the discipline who can't bend themselves into the right shapes, it locks people in who have managed the contortion act, often at great personal and psychic expense. I have a lot of anecdotes here, even if that's not strictly data.
But this raises the question: what would data look like here? Does any of the vast quantities of information that this system encodes and traffics in tell us anything useful about these pathologies? What would it mean to extract a model of 'good philosophy' from its dynamics?
Here's something people systematically ignore, even when they know it, a little kernel of bureaucratic bad faith: these distributed systems for authenticating the status on the basis of which resources are allocated were already computational before the introduction of IT systems.
Bureaucracy functions as a vector of complicity by allowing us to suspend our personal responsibility in certain systematic ways. This is because it is, for better or worse, about the delegation of decisions and on that basis the responsibility for making them.
From the perspective of complicity, the perfect bureaucracy is one in which every good decision can be owned and every bad decision disowned. This is the motte and bailey of mutual recognition, which is the underlying substance from which every social institution is spun.
The decision processes that we create in order to assemble our individual efforts into ongoing collective projects are cybernetic structures routing and processing information and control signals, and executive function is exception handling (the use and abuse of error signals).
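To make that analogy concrete, here's a toy sketch in Python (purely illustrative; every role and issue name is hypothetical): a chain of delegated decisions in which anything a layer won't own is thrown upward as an exception.

```python
# Illustrative sketch only: bureaucratic delegation modelled as
# exception propagation. All roles and issues are hypothetical.

class Undecidable(Exception):
    """An issue nobody at this level is authorised to decide."""

def clerk(issue):
    # Routine cases are handled locally; anything else is thrown upward.
    if issue in {"room booking", "expense form"}:
        return f"clerk approves: {issue}"
    raise Undecidable(issue)

def head_of_department(issue):
    try:
        return clerk(issue)
    except Undecidable:
        if issue == "hiring committee":
            return f"head decides: {issue}"
        raise  # not my call either: defer upward

def executive(issue):
    # 'Executive function is exception handling': the top of the chain
    # exists to catch whatever the layers below refuse to own.
    try:
        return head_of_department(issue)
    except Undecidable as e:
        return f"executive owns: {e}"

print(executive("room booking"))      # handled at the bottom
print(executive("strategic review"))  # escalates all the way up
```

Ownership of a decision is just wherever the exception finally stops.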
If you want to look at the computational structure of a university, don't look at the IT infrastructure. That's important, but the real legacy systems are encoded in the skeleton of the phone switchboard, a structure onto which chains of forwarded emails have been slowly grafted.
This is something like the 'dark org-chart' of the institution. Not how lines of responsibility are supposed to work, but how they actually work by directing thrown exceptions to the systems 'designed' to catch them. We all know what it's like to be stuck in a cycle here.
No organisation consciously 'designs' a cycle in their decision tree. They're not supposed to be there. But the fact that they *are* there nonetheless tells you a lot about the divergence between the organisation's social self-image and its corresponding computational reality.
There are ways in which responsibility is systematically shirked that are built into the control structures that have evolved over an institution's history. These are sometimes ascribed to 'institutional culture', but that's a loaded term, itself often used to defer responsibility.
When an error signal is handled by routing it in an arcane cycle, then everyone in that cycle can defer responsibility, such that the cycle itself conveniently absorbs the blame. One can acknowledge the stupidity of the process one is part of while insisting one isn't stupid.
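Again, purely as a toy model (hypothetical names, no real data): represent the de facto routing as a directed graph and follow an error signal until it either finds an owner or re-enters a node it has already visited, i.e., the cycle that absorbs the blame.

```python
# Illustrative sketch, not real data: the 'dark org-chart' as a
# directed graph of where thrown exceptions actually get routed.
routing = {
    "lecturer": "department office",
    "department office": "faculty HR",
    "faculty HR": "central HR",
    "central HR": "department office",  # the arcane cycle
}

def trace_error(start):
    """Follow an error signal until someone owns it or a cycle appears."""
    seen, node = [], start
    while node in routing:
        if node in seen:
            cycle = seen[seen.index(node):] + [node]
            return "cycle absorbs the blame: " + " -> ".join(cycle)
        seen.append(node)
        node = routing[node]
    return f"{node} owns the decision"

print(trace_error("lecturer"))
# cycle absorbs the blame: department office -> faculty HR ->
# central HR -> department office
```

Every node in the loop can truthfully say 'I passed it on'; only the loop as a whole is stupid.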
This diffusing of individual responsibility into systemic structures and its moral hazards is what I discuss in 'Incompetence, Malice, and Failure' (deontologistics.co/2019/11/04/tfe…). But it is about more than cybernetic control structures.
It's not just about who is or isn't making decisions, but about the information that is extracted, accumulated, processed, and passed between different parts of the institution (and then between institutions). These systems encode implicit representations of the (social) world.
This is what they call 'knowledge representation' in computer science and artificial intelligence, and what hard-nosed Foucauldians mean when they talk about the interface between 'power' and 'knowledge' (distributed control and information dynamics).
The complex ways in which these designed/evolved mechanisms, assembled out of wide ranges of human competence, fail in big ways by failing to handle their inevitable smaller errors evidence the connection between power and stupidity that haunts every human institution.
In the age of big data and artificial intelligence, powerful stupidity is a growth industry. This is what every article pointing at implicit biases encoded in machine learning systems is complaining about, with more or less nuanced understanding of computation and politics.
However, this is simply an extension of the critique of ignorance embodied in legal, political, and economic institutions made by many figures, including James C. Scott, Charles Mills, and @davidgraeber. To borrow a phrase: the history of administrative bullshit.
What most people don't realise is that, insofar as these systems are computational all the way down (including ourselves qua computational components), one can wield precisely the same critiques aimed at the representations learned by deep learning systems against institutions themselves.
We can talk about the way the network of concurrent interacting processes that compose professional philosophy and its authentication mechanisms model 'good philosophy' in the same way poorly trained CNNs model 'faces'. This is what we looked like to machine vision circa 2016.
This is what happens when you use the implicit representations encoded in the weightings learned by a convolutional neural net, trained on a data set of facial images, to generate a face. These systems have come a long way since 2016: thispersondoesnotexist.com
But one of the reasons they've come such a long way is that the polarity reversal that let us *take a look* at how these systems see us, by extracting an explicit representation from the implicit one, gave us some impression of just how much information was being lost there.
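For the curious, here's a minimal sketch of that polarity reversal, assuming PyTorch and torchvision (this is generic activation maximisation, not the specific 2016 system, and the target class index is an arbitrary placeholder, since ImageNet classifiers have no 'face' class).

```python
# Minimal sketch of activation maximisation: extracting an explicit
# image from a classifier's implicit representation. Assumes PyTorch +
# torchvision; the target class index is an arbitrary placeholder.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)  # we optimise the image, not the weights

img = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.Adam([img], lr=0.05)
target_class = 0  # hypothetical target; swap in any class index

for step in range(200):
    opt.zero_grad()
    loss = -model(img)[0, target_class]  # maximise the target logit
    loss.backward()
    opt.step()

# 'img' is now the network's explicit guess at what the class looks
# like: typically a lossy, uncanny caricature, which is the point.
```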
We don't have a good way of extracting such explicit representations of what 'good philosophy' is beyond looking at the philosophy that gets generated by those who have been validated, which means literally naming and shaming terrible papers that by all rights shouldn't exist.
No one whose life is founded on the social networks they've been inducted into by these mechanisms really *wants* to do this, except in edge cases where the results are so egregious and so far away in the social graph that the limited blowback is acceptable.
In the absence of explicit identification of errors and correction of the mechanisms that produce them, we double down on distributed systems that implicitly encode models of 'good philosophy' that we might *all* find horrifying. Themselves staffed by horrors who pass the filter.
There are other ways to do things. We could admit that treating every salaried teaching role as if it's up for competition in a global marketplace whose invisible hand will impartially select candidates in a way that balances quality and diversity simply hasn't worked.
This is not me complaining about 'diversity' and what it has done to the quality of philosophy, because quite frankly there are plenty of moribund areas of philosophy whose stagnancy is in no small part tied to their incestuous intellectual germ-lines. Miscegenation ftw.
I make these judgments openly, cringing even while I do it: Oxford style analytic philosophy of logic, language, mind, and ep & met is a degenerate research program; so is French inflected Anglophone phenomenology in the SEP/SPEP vein. They waste resources better spent elsewhere.
Let no one say I do not wear my critical heart on my sleeve.
But I don't care if you agree with these judgments, or if you think that the thoroughly unrecognisable mishmash of my mongrel philosophy that I claim as my own is any good. I simply say this: in philosophy as elsewhere in liberal democracy, meritocracy is false consciousness.
Even when diversity is recognised as a valuable end, the means that have (supposedly) been created to encourage it have generated newer, more pernicious forms of grift while perpetuating older, all too familiar forms of power and its abuse (cf. John Searle and Avital Ronell).
When hiring processes are broken to the point at which it's a crap shoot whether you select some species of monster, let alone one who can teach or do 'good philosophy', and they aren't breaking up cartels operating in the marketplace of ideas, maybe local hiring is fine.
Maybe normalising selecting from a smaller pool that can be vetted using the sorts of judgment we think the system is supposed to be encoding, but is systematically failing to, is something we should seriously think about. Not every teacher needs a peacock's feather CV.
This is not an alternative to fixing the publication system, nor is it a panacea that will magically erase all forms of error; it's substituting one sort of error correction mechanism for another. Trade-offs are implied.
I also recognise that this is an implicit demand that I should be hired to do what I do best in the vicinity of where I live, and there is already a massive swell of similarly qualified people looking for jobs who will not be advantaged in the same way.
I can't do much about that except engage in elaborate rituals of self-abnegation about how my work isn't any better than anyone else's, and quite frankly, I've done enough of that sort of thing for a lifetime. See the beginning of the thread. That's my take. Take it or leave it.
But, as per yesterday's thread about love, this thing that I want for myself, which I obviously and openly want very badly, is less important than my want for it for others. If you all agreed to fix these processes but asked me to sign my rights away in the process I'd do it.
Because at the end of the day I love *wisdom* more than my right to earn a living from it, and thus what I hate is seeing a system progressively optimised to select something else entirely, whatever it is, in wisdom's name.
All the while the blank face of HR stares on, prosopagnosic in its indifference to mutual recognition. Here's to choosing wisdom. 🖖
If you're really serious about talking about the problem of 'cancel culture', rather than either spewing talking points or denying that the term refers to anything, then the first step is to acknowledge that the relevant social dynamics are hardly a new thing.
The piece that I always return to is Jo Freeman's essay 'Trashing: The Dark Side of Sisterhood' (jofreeman.com/joreen/trashin…), and the example that always saddens me the most is Shulamith Firestone (newyorker.com/magazine/2013/…).
The most extreme historical example that is often brought up by the opponents of 'cancel culture', which should always be borne in mind precisely because of its extremity, is the Red Guards and the Cultural Revolution in Mao's China (en.wikipedia.org/wiki/Cultural_…).
My morning thought. I think what's most incompatible about the way I think and the journal article format as a means of capturing and validating thought is that I have a completely different sense of the relation between tentativeness, rigor, and informatic compression.
The characteristic Pete thought is: wait a minute, this whole area is dominated by an assumption that no one seems to be questioning, and I've got two options to express that: i) outline the logic of the issue in a quick and compressed way, ii) write a small book with references.
The discipline seems to want something in between these poles every single time, and this makes me extremely anxious because I feel (with good reason) like any partially referential engagement with the issue will get instantly torpedoed by anyone outside its referential remit.
Here's a long interview with me covering a wide range of topics: from Hegel and Kant to philosophy of science, logic, and computer science, stopping to discuss libidinal evolution, the nature of selfhood, and the catastrophic wrong-headedness of most extant work on 'AI safety'.
If anyone wants an existence proof that systematic philosophy is indeed possible, this is about as good a one as I can give you in ~3 hours (cf. deontologistics.co/about/).
If you just want my thoughts on the stakes of contemporary philosophy, and its relation to culture and computation, it's been cut out and made available separately.
I really shouldn't let my adrenal glands do the talking. Let me try to be a bit more conciliatory. I love academia. If I didn't love it so much I wouldn't have so much intricately tangled anger and resentment about being kept on its edges for 10 years. But I do and I have.
The same goes for the Labour Party. I was raised by people who were raised by people who could remember the stab of hunger and the fear of being evicted and for whom the Party's entrance into power was the most emancipatory development in their and their family's lives.
There's a link between these two things. My grandfather was what you'd call an organic intellectual. He passed his 11 plus and then couldn't go to grammar school because his father died and he had to start shovelling coal on trains. He eventually became a train driver.
There are times I wish we could have something like a 'symbolic amnesty' where we just wipe a particular terminological slate clean of connotations so that we can have certain conversations without constantly blundering into excuses to derail them.
Like, it'd be really nice if we could talk openly about the *incredibly tight* ties between governance and finance in countries like the UK without having to be on the defensive about accidental associations with accusations of blood libel. It's a discursive minefield.
There's a perennial 'man covered in shit' problem here, where no matter how economically reasoned or anti-racistly seasoned your critiques are there *will* be people who turn up to agree with you dragging flecks of anti-semitic faeces on their shoes, if nothing else.
I think it's worth recognising that death will always divide us. There are deaths that are intensely positive/negative for me that you don't and can't feel in the same way I do. This is a source as much as a symptom of enmity. Yet the only universal enemy is death itself.
When one dances on another's grave, be it literally or performatively, one is inviting those who feel strongly for the dead to hate you. There's no getting around that. It's the price of doing business in the market of mortality, sorrow, and grief.
But all the same, violating a heuristic taboo (e.g., 'don't speak ill of the dead') is a legitimate way of signalling value (e.g., '...unless it's important'). It's a way of saying: 'Look what this fucker made me do! I only stoop this low as a monument to their awfulness.'