I was thinking about the small movement that believes there was a conspiracy to keep the Battle Angel Alita movie from being recognized as a True Masterpiece. I'd been treating it as just kind of funny, but it struck me - I wonder if that is actually someone training a model.
I am not a machine learning guy, but a Wikipedia-level understanding includes the idea of throwing a crapload of data at a model in a way that lets it improve. This usually means better analysis, but it can serve other ends.
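To make that hand-wave a bit more concrete, here's a minimal sketch of what "throw a crapload of data at a model and let it improve" looks like mechanically - a toy logistic regression trained by gradient descent. Everything in it (the synthetic posts, the "got traction" labels, the numbers) is my own hypothetical illustration, not anything specific to the movement in question.

```python
# A minimal sketch of "throw data at a model and let it improve."
# All of this is made-up illustration: synthetic "posts" and a toy
# logistic regression that learns to predict which ones get traction.
import numpy as np

rng = np.random.default_rng(0)

# Pretend dataset: each row is a post (5 features), each label is
# "did it get traction or not".
X = rng.normal(size=(1000, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 3.0])
y = (X @ true_w > 0).astype(float)

# Plain gradient descent on the log loss: more (and better) data in,
# better predictions out.
w = np.zeros(5)
for step in range(500):
    p = 1 / (1 + np.exp(-X @ w))      # current predictions
    grad = X.T @ (p - y) / len(y)     # gradient of the log loss
    w -= 0.5 * grad                   # small improvement each pass

print("learned weights:", np.round(w, 2))
```

The point isn't the math; it's that the loop is dumb and generic. Swap the labels from "got traction" to any other measurable outcome and the same machinery applies.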
But if one looks at the iterative nature of modern propaganda machines (such as viewing GG as the trial run for the MAGA push) then if you were someone who built these systems, you'd WANT learning models, but how would you get them? Dumb micro-movements like this seem ideal.
I mean, it might not be. Dumb things happen organically too. Still, I find the prospect kind of fascinating, though also dangerous, since it only takes a small nudge to go full conspiracy theorist.
But that question: how do you train a social model? I wager that there are some smart answers to that which you’re not going to find in any O’Reilly book.
Unsurprisingly, this thread caught a troll. It's chock-a-block with keywords, so I kind of expected that, but it makes it all the more fascinating to see it play out.
BTW, if the specific movement bothers you, feel free to swap in almost anything which has trended weirdly. Social Media is *full* of examples of this, which is rather the point. The fact that they happen organically means there is every reason for someone to try to model it.
Topic-wise, I have no more reason to think this movement is real or fake than the entire sidebar of Twitter (some % of which is *certainly* fake). The more interesting question is how one spots a fake, especially given the *point* of modeling would be to defeat scrutiny.
Further complication: Even if something starts as a fake, real humans get sucked into it and buy into it sincerely. At that point, is it still fake?
One decent indicator is how it responds to the IDEA of being fake. Real things spend very little effort defending their realness. This is classic con stuff - no one will provide you as much documentation as a good liar.
But I don't know how much I trust that as an indicator. It's fairly easy to train out of a model, so even if it is an indicator today, it will probably get smoothed out.
Might be some similarity to spotting counteragents in a movement. That is - if you are in a march and someone hands you a brick and encourages you to smash a window, that person is probably a counteragent.

It's an imprecise measure, of course. They could just be a jerk.
Of course, you're still better off assuming they're a counteragent because either way their agenda is probably pretty far removed from your own. Which points to the indicator: what is the agenda?
If a movement or person is running Google alerts or triggering bots, then their agenda is to MAKE NOISE. It may be under the pretense of discussion, but that is not how human discussion works. They are merely maximizing their signal.
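For illustration of why that isn't discussion: the trigger-and-reply pattern is structurally trivial. This is a toy sketch I made up (no real platform, made-up keywords and canned lines) - the only thing it shows is that the "conversation" fires on keyword matches, not on anything resembling reading.

```python
# Toy sketch of keyword-triggered noise. Nothing here talks to a real
# platform; the keywords and canned replies are hypothetical.
import random

KEYWORDS = {"alita", "masterpiece", "conspiracy"}
CANNED_REPLIES = [
    "Actually, the critics were wrong about this.",
    "Funny how nobody wants to talk about this.",
]

def maybe_reply(post_text):
    # Fire on any keyword hit; there is no "reading", only matching.
    words = {w.strip(".,!?").lower() for w in post_text.split()}
    if words & KEYWORDS:
        return random.choice(CANNED_REPLIES)
    return None

for post in ["Saw Alita last night, what a masterpiece", "Nice weather today"]:
    print(post, "->", maybe_reply(post))
```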
This is not a terrible agenda. Lots of us like to make noise and get noticed. It's just a question of pretense.

Now, what's interesting is that these patterns are *also* the patterns of serial harassment. There's a lot of interplay between those ideas, enough to consider cause vs. effect.
Answering that almost certainly requires someone smarter than me. But it's very hard to think about bot farm behavior and not see the overlap with other behaviors we've seen, like medium.com/@Ettin/did-zak… and wonder if it's inevitable, or just unfortunate overlap.
ANYWAY.

Given that any systematized iteration (like machine learning) is driven by the quality and volume of available metrics, and given how metric-driven social media is, applying ML to it is entirely natural. The question is what it LOOKS like.
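Structurally, "metric-driven iteration" is just: propose a variant, let the platform score it, keep whatever scores better, repeat. Here's a back-of-the-envelope sketch of that loop - the "engagement" function is a stand-in I invented; in the scenario above it would be real likes/RTs/replies coming back off the platform.

```python
# Structural sketch of metric-driven iteration: propose variants, score
# them against whatever metric comes back, keep the winner, repeat.
# The engagement() function is a hypothetical stand-in for a platform metric.
import random

def engagement(knobs):
    # Made-up score: higher means more noise made. Real systems would
    # read this off the platform instead of computing it locally.
    return -sum((k - 0.7) ** 2 for k in knobs) + random.gauss(0, 0.01)

current = [random.random() for _ in range(3)]  # e.g. tone, keyword density, posting rate
best_score = engagement(current)

for round_num in range(200):
    candidate = [max(0.0, min(1.0, k + random.gauss(0, 0.05))) for k in current]
    score = engagement(candidate)
    if score > best_score:                     # keep whatever the metric rewards
        current, best_score = candidate, score

print("settled on knobs:", [round(k, 2) for k in current])
```

Note that nothing in the loop cares what the knobs mean. It optimizes whatever the metric rewards, which is rather the worry.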
The answer is almost certainly something we can point to, because of course it's happening, and it will look like...well...the internet. And it benefits from making itself very hard to talk about.

Not sure how we beat this one, honestly. But we kind of need to.
Well, that was a downer. :)