Here's a heuristic that I think people rely upon too much in everyday moral reasoning: "People I find moralistic and annoying must be wrong." The background assumption here is that good moral arguments for social change should go down smoothly. However, (1/8)
I think we should expect most genuine moral progress to be experienced as irritating. Widespread moral progress inevitably requires some changes to social norms. The problem is that most people are habituated to the existing norms, to the point that they're barely noticeable (2/8)
Disruptions to these norms are generally experienced as disfluent and aversive (think about how disorienting it can be on your first day in an unfamiliar country). So norm change is rarely pleasant (3/8)
Then there’s the fact that new norms need to be enforced via social sanctions that interact with nasty emotions like shame, embarrassment, and a desire to belong. This means being a recalcitrant norm adopter is likely to result in some very unpleasant interactions. (4/8)
From that person's perspective, the new norm will inevitably be experienced as something foisted upon them by annoying outsiders. This should give us some sympathy for normative laggards, but (5/8)
it should also make us think twice before jumping from "those people and the changes they're trying to enforce are annoying" to "I don't need to pay attention to them." Lots of on-net good norm change is likely to be experienced by many as irritating or threatening. (6/8)
The exception is when you belong to the social group driving the change and you identify with the new norm. Then norm change feels like a win. But from within such a group, for echo-chambery reasons, it's often hard to tell whether you're on the right side of history. (7/8)
So we should actually be *more* skeptical about norm changes when they feel good, because we might not be in an epistemic position to recognize their harms.
Bottom line: expect moral progress to be a pain in the ass. (8/8)
🧵I've got a new paper forthcoming in Philosophical Perspectives called "Symbolic belief in social cognition." I argue that we've got two folk psychological concepts of belief: one that's mostly for mindreading, and another that's mostly for mindshaping. philpapers.org/rec/WESSBI-2
The basic idea is that the mental state concept we express when we say "I think/believe it's raining" is very different from the mental state concept we express in signs like this:
The former is a concept of an epistemic, evidence-sensitive notion of belief that is familiar to philosophers and measured by stuff like the False Belief Task. Ordinary people mostly express this concept with the verb "think," but philosophers (weird nerds) mostly say "believe"