A short thread on the relationship between social media, hypocrisy and political extremism. It’s been bugging me for years. 1/9
Everyone is a hypocrite at some level, since it’s impossible to act the same in private and public. However, digital technology has (re)created the widespread belief that hypocrisy is now a defining feature of people in power. 2/9
Why? Because everyone shares everything they do & it’s all saved. It can be dragged out to reveal the gulf between your acts & words. A secret recording or a statement from 10 years ago - which you’ve forgotten or outgrown - can be re-posted and used as proof of bad faith 3/9
Anyone who says anything remotely right-on is liable to have something contradictory they said 7 yrs ago in a private FB group thrown in their face. Anecdotally, almost every conversation I have with apolitical friends ends up in the same place: ‘they’re all hypocrites’ 4/9
Why is this a problem? Sorry to bring Arendt into this (again). In The Origins of Totalitarianism she explicitly wrote about how dangerous hypocrisy (and the belief it is ubiquitous) can be. In the 1920s & 30s there was a belief that the bourgeoisie paraded virtues which it didn’t follow 5/9
This meant it was easier to accept “patently absurd propositions than the old truths which had become pious banalities”. She wrote that the mob believed the truth was simply whatever respectable society had ‘hypocritically passed over or covered up’. 6/9
And ppl didn’t mind being deceived because everyone was a hypocrite anyway. "Instead of deserting the leaders who had lied...they would protest that they had known all along that the statement was a lie & would admire the leaders for their superior tactical cleverness" 7/9
In other words, if you believe everyone is a hypocrite you'll believe the worst about everything and succumb to any 'absurd proposition' that isn't the 'old truth'. 8/9
Anyway, I guess I’m saying that it’s very easy to think everyone is a hypocrite these days, and it's partly a function of how the internet never forgets. But it’s a risky and often lazy default. Don’t let it make you think that absurdities are real! /End

Thread by Jamie Bartlett (@JamieJBartlett).

