Mor Naaman (@mor@hci.social)
Information Science Faculty at @cornell_tech focusing on our information ecosystem’s trustworthiness, and its impact on democracy. Taller in person.
Mar 10, 2023 6 tweets 2 min read
In our new paper, we did not quite run the interactive Turing Test but wrote that our findings suggest a post-Turing world where the test is no longer of *machine intelligence* but of *human vulnerability*. Let me explain🧵

pnas.org/doi/10.1073/pn…

We have reached a point where machine intelligence (i.e. the ability to generate human-like text) is at the level of the Turing Test. What would tip the scale is not better AI, but the ability of AI to learn and exploit human assumptions/weaknesses. 2/
Mar 7, 2023 12 tweets 5 min read
🚨New Pub at PNAS🚨! We know people cannot detect language written by AI. But what makes them THINK text was AI-generated? We show that people have consistent heuristics... that are flawed & can be exploited by AI to create text "more human than human" 🧵

pnas.org/doi/10.1073/pn…

In the first part of this work (lead author: @maurice_jks) we collected 1000s of human-written self-presentations in important contexts (dating, freelance, hospitality); created 1000s of AI-generated (#GPT) profiles; and asked 1000s of people to distinguish between them. 2/
Jan 21, 2021 13 tweets 5 min read
NEW from my group: Voterfraud2020, a public Twitter dataset with 7.6M tweets and 25.6M retweets related to voter fraud claims, including aggregate data of every link and YouTube video, and account suspension status. First the link, then some insights: 1/

voterfraud2020.io

Our data, tracking hashtags and phrases related to voter fraud, spans 2.6M users. Community detection on the retweet graph shows that only 55% of the users are promoters of voter fraud claims (on the right). Our analysis is sensitive to this distinction. 2/
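The general technique mentioned above can be illustrated in a few lines. This is a minimal sketch using NetworkX's modularity-based community detection on a toy retweet graph; the toy edges and node names are invented for illustration and are not the paper's actual data or pipeline (the real dataset and its community labels live at voterfraud2020.io).

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy retweet graph: nodes are accounts; an edge (u, v) means u retweeted v.
# These edges are made up for illustration only.
edges = [
    ("a", "b"), ("b", "c"), ("a", "c"),  # one densely-retweeting cluster
    ("x", "y"), ("y", "z"), ("x", "z"),  # a second cluster
    ("c", "x"),                          # a single sparse link between them
]
G = nx.Graph(edges)  # undirected view, as modularity methods expect

# Greedy modularity maximization splits the graph into communities;
# on real retweet data these often align with political camps.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(i, sorted(community))
```

On the toy graph this recovers the two triangles as separate communities; on a 2.6M-user retweet graph one would use a scalable method (e.g. Louvain) rather than this greedy routine.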
Jan 20, 2021 4 tweets 2 min read
And here is @EmpireStateBldg synched* to Alicia Keys on @Z100NewYork to honor COVID-19 victims at 9pm tonight.

Powerful. Thank you.

* Looks like we are a few seconds off.

Of course the moon had to show up.
Jun 8, 2020 18 tweets 9 min read
I used my 30 mins to make a pitch for thinking about (and shifting) the unit of analysis or unit of observation used in our research, with a quick survey of common UoAs and sample papers. A thread: #icwsm2020

When talking about online safety/abuse/misinformation, one can imagine content- and people-driven units of investigation, and then their mix. In each, we can start from the smaller units and work our way up. 2/
Mar 10, 2020 20 tweets 8 min read
NEW: our analysis, showing why Bernie fans are seen as more toxic on Twitter. We show there are *many* more active Bernie fans on Twitter, they reply to other candidates more frequently than other supporters do, and their replies are (slightly) more likely to be toxic. Thread. 1/

Do Bernie supporters reply to rivals on Twitter in a more toxic fashion than supporters of other candidates do? With 2 new papers on how political candidates are attacked online, we realized we have data/methods to shed some light on this question. 2/
Feb 3, 2020 10 tweets 3 min read
*NEW PAPER* in which @jeffhancock @karen_ec_levy & I introduce AI-MC: AI's increasingly central role in human communication (from smart replies to deep fakes) and how this trend may impact our language *and* our interpersonal relations, including how we trust each other. 1/

More in this thread, but here's an open access link for the deep readers: 2/
academic.oup.com/jcmc/advance-a…