I'm strongly committed to the virtue of sincerity, but we are all put in positions in which we bend the truth to fit the shape of our discursive context, in ways that produce misunderstandings we can't anticipate. Sometimes (good) rules of thumb get read as (bad) iron laws.
Here's the most common white lie I tell students, friends, and strangers alike: there are no bad questions. I say this to disinhibit people, so they begin asking questions, and so the process of asking them will refine them and take us in an interesting dialectical direction.
This solicitation of thought in process, in which imperfections are encouraged as a way to draw out and develop ideas, is a crucial feature of the generosity required to perform Socratic midwifery properly, rather than 'own the [libs/trads/etc.]'. It's about sincerity, not irony.
Yet I think the existence of bad questions is an undeniable logical truth. Not just bad questions, but terrible ones. Harmful, hurtful, hideous ones that retard dialectical development, tying it in knots nigh impossible to disentangle sincerely (cf. urbanomic.com/book/object-or…).
If we switch volitional metaphors, instead of knots we might talk about traps that faithless interlocutors have set for their unsuspecting victims, discursive circuits that cannot be exited once entered, dynamic fallacies that keep us moving, but in no particular direction.
Any frustration expressed by the trapped will no doubt be met with the line: 'But I'm only asking questions! What could possibly be wrong with that?' This is the quintessential statement of bad faith in the age of online interaction. The age of the bad question, and ironic reply.
Faithless fools wait around every corner on this platform, rubber Socrates mask in hand, ready to stretch it over their ill-fitting features and hold up unsuspecting discursive bystanders. Sometimes they even plan heists, bad questions poised to blow a hole in the local logos.
There's something I like to call 'a Socrates impersonation'. This is familiar not just to philosophers, but anyone who has been in a context where the use of leading questions to steer a discussion is an important pedagogical tool for teachers and students alike.
If you spend a lot of time in such contexts, and you're interested in the intersection of logic and computation, you'll see that 'leading questions' quite literally form the basis of bottom-up concurrent control over the direction in which the discussion is travelling.
If you're in a seminar, there may be some form of top-down control that can interrupt conversational threads that are spiralling in unpredictable, uncontrollable, or otherwise unproductive directions, i.e., a seminar leader who steps in and takes control from the students.
But this is an institutional concession to the pragmatic realities of discourse, rather than the practical ideal to which discourse aspires (the Logos). Multi-party bottom-up concurrent control is the lifeblood of dialogical dialectic.
Faithless interlocutors are bad actors who deliberately try to steer a conversation in the directions they want, without responding to requests to steer in turn. They impose an illegitimate asymmetry on discursive interaction, in which authority is severed from responsibility.
Their aim is to wrest unilateral control, rather than to exercise cooperative control. This is why arguing with a bad faith actor feels like you've been drawn into combat, because you're now competing with rhetorical power, rather than logical prowess, with control as the prize.
The strangest thing about this is that whether you're a bad actor or not is often impossible to tell without looking at the overall pattern of interaction. It's not a property of local tactical engagements but of overall argumentative strategy. This has surprising consequences.
1. The incentive for a self-conscious bad faith actor is to put as little effort into keeping track of the overall dialectical trajectory as possible, because they're usually only interested in getting you to some convenient local fail state, rather than a specific conclusion.
This opportunism is essentially a strategy for conserving computational resources (e.g., memory and attention), which aims to react to local/tactical opportunities rather than to create them. More skilled bad actors commit resources to aggressively creating such opportunities.
But this is still something done conservatively. The aim is to conserve resources in order to launch prolonged attacks when some minimal strategic awareness indicates they are possible. This encourages maximal compression of strategic heuristics, to optimise resource use.
Completely self-conscious bad faith aims to wrest control of the argument while spending as little attention on its substantive content as possible. However, better bad faith actors are more superficially similar to good faith ones, precisely because they actively attend more.
The simplest bad faith actor is an automaton that asks 'why?' whenever a reason is given, and responds 'because!' whenever a reason is demanded. Worse bad actors come closer and closer to resembling this limit-case over time, as their overall lack of effort becomes obvious.
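This limit-case is simple enough to write down. Here is a minimal sketch (purely illustrative — the function name and the question-detection heuristic are my own assumptions, not anything specified in the thread):

```python
def bad_faith_automaton(utterance: str) -> str:
    """The limit-case bad actor: a two-rule stimulus-response machine.

    It tracks nothing about the argument's content or history; it only
    pattern-matches the *form* of the last move made against it.
    """
    if utterance.rstrip().endswith("?"):
        # A reason was demanded of it: assert without justification.
        return "Because!"
    # A reason was given to it: demand another one, indefinitely.
    return "Why?"
```

Note that the automaton needs no memory at all, which is exactly the point: it spends zero resources on the overall dialectical trajectory.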
However, there are much more complex bad actor systems, whose interactive output is progressively more complex, and whose merely formal character, indifferent as it is to the content of the argument, can only be discovered through progressively more complex interactions.
To put this precisely, the aim of a sophisticated bad actor is to simulate good behaviour as cheaply as possible, but, where possible, to improve this simulation by spending attention customising its tactics to the local debate, emulating some understanding of the relevant content.
Think of this as something like an adversarial Turing test, where the aim of the bad actor is to mimic the behaviour of a good actor in the relevant context as much as possible, in order to pass long enough to wrest control and push its interlocutor into a terminal fail state.
This explains the ecological dynamics of discourse, in which parasitism and predation evolve as viable strategies for accessing some extra-game resource (e.g., money, status, lulz) merely through 'winning' dialogical interaction. This is the logical confidence trick.
Of course, the notion of 'winning' here is totally alien to the goal of discourse itself, which is Truth, whose properties are irreducible to the parameters of individual reward. In terms of cognitive resources, when one position definitively wins, everyone gets the same reward.
All else being equal, we each benefit from truths that we all have access to. This is why it is hard to index truth to some fixed *formal* game, because for the most part these benefits concern the *content* of the truths at issue. This is a key insight of Girard's ludics.
To summarise, bad actors give up the secular benefits of Truth for the selfish rewards to be won by wresting control of the appropriate discursive interactions. What's specific about these interactions is not *content*, but *leverage*. The general strategies are purely formal.
This enables us to give a more complex taxonomy of logical fallacies than is usually presented. There is some sense in which these fallacies are always local/tactical, because there is always a way of extending the interaction by asking for substantive answers they can't mimic.
Of course, there are substantive fallacies tailored to particular discursive contexts, but these are materially invalid forms of inference, rather than merely formally invalid ones. Discursive anti-patterns designed to fool the process of inferential pattern recognition.
The true formal distinction between types of fallacies is between those that are static and those that are dynamic. The former are familiar (e.g., affirming the consequent) while the latter are usually labelled 'informal' (e.g., slippery slopes), when they're really interactive.
For an example of a dynamic fallacy whose explanation is itself a strategic response to a real-world case, check out this linked thread:
This example is pretty simple, but it shows how 'trolling' really refers to a very low effort bad actor system not much more complicated than the limit case, whose aim is to (synergetically) overwhelm the target with discursive demands that outstrip their cognitive resources.
It's very easy to send such trolls into the very fail state they're trying to push you into, once you realise that they're not paying enough attention to the overall structure of the interaction to see where you're leading them. It's easy to trap trolls for rhetorical effect.
2. The substantive insight of the Turing test is that the only available standard for determining whether or not a system understands the meaning of what it's saying is given by some sort of bisimulation. Meaning is use, but in the precise sense of interactive behaviour.
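The appeal to bisimulation here can be made concrete with the standard greatest-fixpoint check from process theory: two systems count as behaving alike iff no sequence of interactions can tell them apart. A toy sketch (the states, labels, and transition tables are hypothetical examples of mine; this is just the textbook refinement algorithm, not anything specific to the thread):

```python
def bisimilar(states_a, states_b, trans, start_a, start_b):
    """Check bisimilarity of two systems sharing one transition table.

    trans: dict mapping (state, label) -> set of successor states.
    State names of the two systems are assumed disjoint.
    """
    labels = {lbl for (_, lbl) in trans}
    # Start by relating every pair of states, then repeatedly remove
    # pairs that some single interaction step can tell apart.
    rel = {(a, b) for a in states_a for b in states_b}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(rel):
            for lbl in labels:
                succ_a = trans.get((a, lbl), set())
                succ_b = trans.get((b, lbl), set())
                # Every move by one side must be matched by the other,
                # landing in states that are still related.
                ok = (all(any((x, y) in rel for y in succ_b) for x in succ_a)
                      and all(any((x, y) in rel for x in succ_a) for y in succ_b))
                if not ok:
                    rel.discard((a, b))
                    changed = True
                    break
    return (start_a, start_b) in rel
```

The relevant philosophical point is that the check is entirely behavioural: nothing about the states' 'inner' constitution figures in the verdict, only what interactions they support.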
Given that semantic understanding is interactively trained into us, as the social calibration of biological systems with varying underlying capacities, assessment and genesis of the relevant norms are completely intertwined. This is Wittgenstein's rule following problem.
This creates a significant problem, as it means that for most substantive concepts, especially ones with empirical content, there is no principled upper threshold for when one's usage patterns are perfectly calibrated, or when one definitely 'possesses' the relevant concepts.
Instead, there are socially defined thresholds of understanding calibrated by mutual recognition, arranged in ascending institutional hierarchies of increasing capacity. Cf. Pratchett, Stewart, and Cohen's account of education as 'lies to children' (en.wikipedia.org/wiki/The_Scien…)
However, for most concepts the idea of a terminal threshold makes no sense in principle, even if it is absolutely necessary in practice. We can easily imagine an *indefinite* sequence of higher order doctoral dissertations required to validate understanding.
This is a transcendental version of the maxim 'fake it till you make it', in which there is never a definitive moment in which you have made it, only an ongoing development of competence that at some point passes beyond the institutional thresholds we've established in the field.
The surprising consequence of this is that there's also a transcendental indiscernibility between good actors and bad actors, only systems of authentication that can always be gamed by progressively more capable bad actors. We're all just (inter)actors in-the-last-instance.
This is essentially what Quine is getting at in his arguments for the indeterminacy of meaning and inscrutability of reference. The islanders could always be trolling the linguist, just pointing to increasingly weirder things while saying 'gavagai' and trying not to giggle.
The key thing is that Quine didn't appreciate the dynamics of the relevant interactions and the ways in which they calibrate word usage, because it turns out to be much harder to keep up this sort of ruse than his account would suggest. Brandom is good on this point.
If you want a more formal way of thinking about such 'radical interaction', as opposed to Quine's 'radical translation' and Davidson's 'radical interpretation', Girard's ludics framework remains a useful source of insight:
In ludics, meaning only emerges out of well-behaved interactions between abstract sequences of syntactic moves, where 'well-behaved' means that they terminate in one side running the other off the discursive road, i.e., forcing them to concede.
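For flavour, here is a drastically simplified sketch of that termination condition. Real ludics designs are trees of polarised moves on addresses; I flatten them to plain sequences of opaque moves, so only the core idea survives: interaction is well-behaved (the designs are 'orthogonal') iff it converges with one side playing the daimon, the give-up move.

```python
DAIMON = "†"  # the daimon: one side concedes, and interaction converges

def interact(design_p, design_o):
    """Run two 'designs' (here: flat move sequences) against each other.

    Returns True on convergence (someone played the daimon),
    False on divergence (a side ran out of moves without conceding).
    """
    players = [list(design_p), list(design_o)]
    i = 0  # proponent moves first
    while players[i]:
        move = players[i].pop(0)
        if move == DAIMON:
            return True   # convergence: this side conceded
        i = 1 - i         # the other side answers
    return False          # no concession: the interaction goes nowhere
```

Even in this caricature, note that 'winning' is defined purely by the shape of the interaction, with no reference to what the moves are about.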
Ludics erases any hint of the semantic content implicit in these interactions, and this is why it has no specified notion of 'winning' determined by access to rewards outside of the dialogical game. However, I shouldn't get distracted by the details of this beautiful formalism.
I support Brandom's claim that logic is the organ of semantic *self-consciousness*, but he says little about semantic *false-consciousness*. Our inability to formalise dynamic fallacies keeps us trapped in such false-consciousness, and this blurs the line between good/bad actors.
The difficult truth to digest here is that it's entirely possible to be insincere (a bad actor) even when you think you are being sincere (a good actor). It's not a property of one's intentions, but of one's actions. This is unpalatable to many who pride themselves on honesty.
The closer you get to mimicking a bad actor system, generating easy formal responses that ignore the substantive questions at issue, the more legitimate it is to say that you're being a bad interlocutor, even if you don't understand how it is you're doing it.
I want to return to Socrates impersonation. The biggest problem with this technique is that only one person gets to be Socrates, and so when two impersonators encounter one another on an even plane, the result is quite often a logical duel to the death, with weaponised questions.
There are a lot of good Socrates impersonators out there, and not all of them are in the academy. They can be great teachers and even great dialogical partners in some contexts, but in others they shift from sincerity to irony without even realising it, by shirking reciprocity.
This is to say that they want to stipulate that they're the Socrates in this situation, and you're the poor fellow who Socrates is giving a subtle dressing down by asking clever questions. Yet this stipulation violates the spirit of the Socratic approach by licensing laziness.
I've had a lot of arguments with people over the years who at some point betray that they haven't been keeping track of what I've been saying, even while they want to use every ounce of rhetorical force to make me listen to them. This asymmetry is bad even when unintentional.
Some such people are so used to being the Socrates in their own neck of the woods that they refuse to see that a stranger might have worthwhile arguments they have to understand in order to reject. This ratchets up tensions until they burst into dialogical turf wars.
The hard truth is that we never know who the Socrates in the conversation is until it's over, and we can look back at the whole thing, to see who has effectively steered its course, and who has been steered in turn. This is a persistent source of discursive conflict.
Taking on the Socratic role is a gambit that can always fail. At the end of the discussion one can find oneself having learned some hard truths that one was trying to avoid endorsing. That's the responsibility at the heart of sincerity.
So, here is a disclaimer: I'm a *professional* Socrates impersonator, which means I'll do my best to divest myself of those of my beliefs that are successfully challenged in every context, but it also means that I won't let you steer the conversation in whichever direction you want.
This refusal, or contestation of the topic, catches a lot of Socratic amateurs off guard, and often leads to the sort of foot-stamping anger of a child whose favourite toy has been taken away. I won't hesitate to point out the leading questions you ask if I think they are ill-formed.
This means that I'll also point out dynamic fallacies in the global patterns of reasoning you're articulating (strategy), rather than sticking to the local level of the static ones (tactics). This can come off as rudeness, but it's really a blunt refusal to allow asymmetry.
I won't let you claim the mantle of Socrates until you've earned it by besting me logically, rather than merely rhetorically. So be prepared to find out you're the one with the rubber mask, not me. Don't hate the player, hate the game.
• • •
This is what happens when you train neural networks largely on tone and its stylistic relics. They pick up formal features of arguments (not so much fallacies as tics) that have almost nothing to do with semantic content (focus on connotation over implication).
This is a secular problem in the discipline. It's got nothing to do with the Analytic/Continental split in the anglophone world. They've both got the same ramifying signal/noise problem, it's just that the styles (tics and connotations) are different in each pedagogical context.
And this is before we start talking about tone policing and topic policing, which are both rife and essentially make the peer review journal system completely unfit for purpose, populated as it is by a random sampling of pedants selecting for syntactic noise over semantic signal.
It's hard to believe it's been four years since Mark left. What a day to talk about the meaninglessness of death. If there's one thing Meillassoux is right about, it's that nothing less than the complete and total resurrection of the restless dead could make death meaningful.
Who wouldn't want to hear what he had to say about the absolute fucking state of this place (Earth)? That excuse to hear his insights might be a reason to hate this state just a little less. But we can't, and so it doesn't. How I wish it were otherwise.
Mark's death wasn't uniquely his own. There was nothing authentic about it. It was the same desperately sad story that you will hear over and over again throughout your life as unquenchable misery pulls meaningful people into an indifferent void.
I know I'm being pretty harsh on Agamben, but I actually agree with him that we need a critique of healthcare provision (both physical and mental), because the systems established to gate access to diagnosis/treatment often diminish autonomy as much as they enable it.
But we need to be able to look at the concrete details of these institutions without giving ourselves a free pass to ignore the discourses of medicine, psychology, and psychiatry whenever we want. Bad critique is epistemically capricious where good critique is responsible.
This is as good a time as any to repost some unrolled threads from 2019 in which I talk about expanding Mark Fisher's work on the politics of mental health to healthcare more generally (threadreaderapp.com/thread/1181998…) and discuss bipolar disorder specifically (threadreaderapp.com/thread/1173211…).
Here's a meta-thread organising the Laruelle thread ('Non-Laruelle') into chapters, which will be expanded as the thread continues to grow. Chapters will be subdivided into parts, and chapter/part links will go to the first tweet in each section. There may be a few accidental forks here and there, but the thread is linear for the most part.
Time to post a few more pieces of inspirational art in a final fit of procrastination.
I get pretty critical of certain strands of Marxism, and prefer to present myself as a left-accelerationist (in contexts where that's understood) or as what @michaeljswalker calls a 'class war social democrat' (in those where it isn't), but I try never to dismiss communism outright.
I may see myself as more an Owenite than a Marxist in some respects, but I cannot listen to this song without something stirring within me, and I recommend it to anyone quick to dismiss communists because of the historical arc of state communism in C20th:
I think we tend to overplay the weirdness of the way internet meme culture intersects with post-neoliberal politics, because we see history from the inside, which produces extreme dissonance between our familiarity with a meme in one context and its appropriation by another.
This produced a bunch of hysterical overreactions to the appropriation of Pepe by the alt-right, and the memetic war machine of the Boogaloo Bois, when historically they’re pretty normal. Grass roots movements use any symbolic resources to hand when building social networks.
This is one way in which a counter-culture bootstraps itself, by creating systems for authenticating in-group speech, for passing information and organising. This is what makes it cohere as a platform for action. Divergence from the mainstream culture is a feature, not a bug.