So, this morning I'm thinking about Stan Lee's maxim ('with great power comes great responsibility') and the discursive responsibility that comes with the discursive power of having a personal communication platform (everything from a syndicated column to a Twitter account).
To recap my basic stance on moral logic: 1) ought-implies-can (Kant), which entails that decreased capacity implies decreased responsibility; 2) with great power comes great responsibility (Lee), which entails that increased capacity implies increased responsibility.
I think that these principles permit us to deploy claims about what *is* the case as reasons in discourse about what we *ought* to do without falling into the naturalistic fallacy (cf. Hume on is/ought) and deriving how things ought to be in any given instance from the way things already are.
This is important for two reasons: 1) This validates commitments to making the world better than it has ever in fact been, beyond any putative cycle of the ages in which a prelapsarian state sings out as the eternal home we have been exiled from, and to which we aim to return.
2) This invalidates commitments that are tied too tightly to the causal topography of the socio-political terrain we find ourselves within, allowing shifts in our environment, its affordances, and the balance between our capacities for action to perturb our norms. Powers change.
For those who haven't been paying attention, we're living through a phase transition in the constitution of the social contract in the post-industrial West (e.g., US, UK, EU) that is bound up in an even more nebulous transition in geopolitics. Future shocks abound.
This transition is manifold, but one significant aspect is the slow decay of the systems of mass media through which the ideological facade of the current social contract is maintained, as the underlying political economy it disguises drifts out of phase with mass recognition.
Put simply: enough people now see that the lives they were tacitly promised by the culture they were raised within cannot exist within actually existing liberal democracy that they're generating new cultural configurations that don't fit into its ('post-ideological') politics.
There have *always* been such people, but there's demographic momentum now, catalysed by technological changes in the way we consume media, or the ways in which our understanding of the world and its political realities are mediated for us. The beginnings of counter-culture.
We're forced to make predictions and form hypotheses about the tendencies at play here, but this is a genuine site of political contestation, rather than simple economic fate. Anyone who tells you they know exactly how these tendencies are going to play out is probably selling you something.
Why this long detour through moral logic and political analysis? In essence, though I think there is now something resembling independent communicative norms governing action/interaction on new media platforms, these norms are still evolving. Pretending otherwise is false consciousness.
Moreover, the socio-technical structure of these platforms enables generation, proliferation, co-existence, competition, and evolution of such norms to unprecedented degrees of scale, speed, and complexity. This is often less extreme than many people suggest, but it's happening.
Look at Twitter and you see a bunch of overlapping communicative worlds whose boundaries are still largely defined by instinct, i.e., by convergent evolution of affective heuristics for curating our informatic environment, channelling our feelings, and managing peer recognition.
Explicit signifiers evolve, signalling you're leaving one world and entering another, from pronouns and alts to 'RT is not endorsement' and blueticks. Without entirely realising how, we've developed some fairly complex expectations about online behaviour based on bios alone.
One has to respect these sorts of tacit knowledge, and the more reflective practices that develop out of them. There's an important sense in which they're non-optional, if for no other reason than this platform limits *data sovereignty* in subtle and unpredictable ways.
The thing that I dislike most about Facebook and Twitter is that they built prosthetic streams of consciousness (feeds) that are easy to integrate into your cognitive economy, and then, once we were hooked, removed our ability to curate them, putatively for our own good.
My Twitter feed is no longer an aggregate of tweets produced by those I've chosen to follow, but an algorithmically curated selection of these tweets and others that fall in the statistical region they represent, as determined by the way likes flow across the social graph.
This enables Twitter to inject 'sponsored' content into my stream of consciousness, but that's not exactly insidious; it's my inability to directly tune the algorithms that curate my feed which I resent. This is a concrete loss of online personal autonomy as far as I'm concerned.
Almost everyone who spends time online exists in a metastable state on the edge of information overload, and so we've no choice but to strategically filter the information injected into our consciousness. In the absence of explicit control, this means implicit heuristics.
But respect for this sort of tacit knowledge is a double-edged sword: to treat it as knowledge is to admit that we should be able to articulate, assess, and challenge it explicitly, and the limits placed on such articulation are thereby also limits on its normative significance.
Limits on capacity produce limits on responsibility. What responsibilities are implied by the expertise involved in shaping one's personal feed/profile, keeping close communicative worlds separate (e.g., shitposting/professional comms) while trying to affect 'the discourse'?
Well, to answer this question we have to bring in another principle I like to call 'normative parity': no responsibility without authority, and no authority without responsibility. What authority is premised on the responsibility this expertise underpins (ought-implies-can)?
At its most primitive, this is the authority to speak for oneself: to articulate what one believes in one's own terms, to express one's feelings, desires, commitments, and the wider range of speech acts whose contents are premised on the above. No one else gets to speak for you.
But as simple and as obvious as such authority appears, things are inevitably more complex and contentious once one digs into the details...
To dig into these details then, start with the observation that we do not give Twitter accounts to infants, and indeed, we are generally circumspect about allowing children unfiltered access to social media not simply for what they might see, but for what they might say.
As I have said many times before, we must understand the genesis of autonomy as a gradual process rather than the sudden appearance of a fully formed agent. This process bootstraps a child through levels of authority/responsibility by training the relevant capacities.
Training generally involves fucking up, quite a lot. As such, the sandbox environments we use to bootstrap the higher levels of capacity that any authority/responsibility is predicated upon are permissive. They encourage quick, safe, and teachable failures.
The low stakes character of these failures is codified by limited responsibility: if a student makes a mistake, they are responsible for nothing more than correcting it when they try again (though there are obviously more perverse and highly unethical forms of teaching).
Yet this limited responsibility comes with limited authority: when we teach students to articulate their beliefs and commitments, we don't treat the things they say within the teaching context as if they are transmissible outside that context. The sandbox is communicatively safe.
When a first year student in a seminar defends moral nihilism, I am happy to entertain the belief, and even to help them defend it against other students attacking it with lazy criticisms. When a tenured professor does the same, I will rip them a (dialectical) new one.
Prima facie, this is for two reasons: 1) the student is still developing the capacity to understand the consequences of the commitment they're undertaking in endorsing this view; and 2) the student isn't able to 'lend their weight' to others who would assert this view elsewhere.
Here we have the two dimensions of authority involved in assertion and the speech acts related to it (e.g., questions, challenges, etc.): articulating one's own beliefs, and licensing others to believe what you believe. The relation between these dimensions varies to some extent.
To analyse this relation is to address the difference between the capacities underpinning epistemic authority (knowing *what* one is talking about) and communicative authority (knowing *how* to talk about it). The line between these capacities is blurry, but they are distinct.
It's entirely possible to be idiomatic and/or idiosyncratic: to have a well articulated and thoroughly justifiable system of beliefs that's nevertheless difficult to communicate to anyone else, for whatever reason. One can be both a (technical) expert and a (communicative) idiot.
There's such a thing as communicative *work* involved in translating/transmitting ideas, especially between different communicative contexts. This work and its associated skills are the province of journalists, pedagogues, and demagogues alike. Here rhetoric cleaves from logic.
I don't really want to get sidetracked by the question of the link between persuasion and deception. This thread has enough tangents as it is. All I'm after is the distinction between the sorts of failure involved in either case: technical errors versus communicative mistakes.
It's very easy to confuse these when assessing (or more often, predicting) the consequences of speech. Indeed, when the consequences are bad enough such conflation seems warranted. It doesn't matter *how* you fucked up, just *that* you fucked up. There's a case to be made there.
But licensing this conflation in every case is equivalent to collapsing the distinction between logic and rhetoric, and even if you don't do it in every case, the less selective you are in observing the distinction, the more precarious it becomes in practice.
Note the common paradoxical spectacle of people claiming moral authority while effectively insisting that, discursively speaking, even if might doesn't make right, weak makes wrong. It's a short path from here to using bad arguments that nevertheless communicate 'correct' views.
To make a point I've made before: many of the battles at the theoretical end of the online culture war are fought by left and right Nietzscheans vociferously complaining about one another's discursive vices. It's always the other guy's will to power that is suspicious.
To connect this point with the above considerations: a factor involved in these battles, characteristic of every flame war, is the elevation of affective heuristics to the status of prescriptive principles, or the effective elision of the difference between such things.
This elision does a good job of cleaving communicative contexts along lines of feeling, but it makes translating or otherwise mediating between those contexts incredibly difficult, because it's all too easy to be painted in the libidinal colours of the opposing side.
Even and especially when the boundaries of these communicative contexts are hard to determine, let alone shape, there's always someone waiting to shoot the messenger. Communicative form gets treated as a proxy for epistemic content.
To get to the point of this thread: why am I interested in these distinctions beyond exculpating myself for my communicative, epistemic, and/or moral failures? Because when social terrain shifts, and the balance of powers with it, we must learn to cope, which means failing a lot.
From a purely epistemic perspective, the phase transition we're going through demands new ideas, which demands a sort of rapid prototyping and testing of abstract frameworks and concrete hypotheses. Communication can facilitate this, done properly.
But from a more generally communicative perspective, the phase transition requires new ways of communicating these ideas, both in the sense of rhetorical strategies and institutional norms. These can be theorised, but there's no substitute for prototyping and testing.
The kicker is that these two types of experimentation are quite hard to separate in practice, even when it is essential to separate the two types of failure they involve in principle. What is an important learning experience from the inside looks like idiocy from the outside.
The historical tragedy of Lee's maxim is that the growth of our capacities for conscious action tends to precede that of our capacities for self-conscious control. Our misguided attempts beget the very guidance upon which our responsibility for failure depends.
On the cutting edge of history, there's no way to know just how badly we're able to fuck shit up until we do.
But here's my belated point. It's really easy to see a failure as a failure without understanding what it was attempting or why, i.e., without appreciating what might be learned from it, or even that there is something to learn here. Experiments become mere fuck-ups.
Honestly, I think that most of us set the bar for what counts as a serious epistemic/communicative mistake in the current context way too low, by pretending that the affective heuristics we need to filter what we hear are already established norms governing speech.
This is a double mistake: first assuming that what we don't want to hear determines what shouldn't be said in semi-public contexts, and second assuming that what shouldn't be said in such contexts is something false in every context. We assume that we *just know* what's right.
Environments like Twitter, whether by accident or design, encourage decontextualisation and rapid response, while disabling our ability to turn implicit heuristics into more explicit and well defined norms. It's thus not entirely our fault we do this, but we must learn from it.
Before anyone gets the jump on me: yes, I am saying that facts don't care about your feelings. But more importantly, I'm saying that the facts you care about are often poorly served by your feelings about the way they're communicated. These passions should be tuned separately.
To quote a passage from k-punk that hit me very hard when I returned to it several years ago: k-punk.org/abandon-hope-s…
I am here, in some small, long-winded way, attempting to prove that Twitter can be used actively, rather than reactively, in order to test the substantive ideas, communicative forms, and maybe even build the personal platforms that the current juncture demands of us.
But what such a platform is and how it should be used is a question that does not yet have a clear answer, a question to which my attempts hope to bring some clarity, if only by making mistakes interesting enough that my responsibility for them might be made plain in retrospect.
I was recently accused of abusing my platform. Specifically, the fact that I have ~5000 followers was invoked to underline the seriousness of my mistake. Thinking about it, I realised that I simply don't know what authority this gives me or what responsibility it entails.
Not because I think Twitter can't be misused, or even because I think a platform as modest as mine can't be abused, but because the capacities it grants me and the responsibilities consequent on them are fairly mysterious to me. I do not know what this etheric body can do.
But I'm determined to find out, one attempt at a time. Hopefully, there'll be enough successes to see what difference ~6K followers makes, and to further determine which potential failures are too risky for an account of such grand station. Til then, I labour in ignorance.🖖
CODA

One final word from that great master of the attempted thought, Montaigne:

"There is a sort of strong and generous ignorance which is as honourable and courageous as science."

Here's to being generous in our ignorance, guys.🖖

pete wolfendale (@deontologistics)