Below are my group's latest #artificialintelligence research advancements from 2022 that promote integrity, equity, & well-being.
Mis/disinformation, hate, and propaganda present some of the biggest threats to public health, democracy, science, & society.
My group continues to push for text- and graph-based solutions that create a safer online ecosystem for all:
1) We created the first-ever ban evasion detection model, now being deployed to aid Wikipedia moderators: lnkd.in/gk7mK87f (a toy sketch of the classification framing appears after this list).
2) We showed how misinformation exacerbates anxiety: lnkd.in/g_YXubR3
3) We showed that non-English models do not perform as well as English models on equivalent tasks (e.g., detecting misinformation). This highlights the need for equitable multilingual models, as 74% of internet data is _not_ in English: lnkd.in/gnzjDxSt
4) We investigated the robustness of recommender systems. Alarmingly, we found that modern recommender systems are surprisingly sensitive to even minor changes in their training data: lnkd.in/gPAMVtbC (see the stability-check sketch after this list).
5) We showed how misinformation spreads across social media platforms. No single platform can solve the harmful/dangerous content problem on its own, given how quickly content moves between platforms: lnkd.in/gZBGW3YB
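
For context on (1): the deployed model's actual features and architecture are described in the linked paper. Purely as an illustration, here is a minimal sketch of framing ban evasion detection as binary classification over account-behavior features. The feature names, data, and labels below are hypothetical placeholders, not the features used by the real Wikipedia model.

```python
# Toy sketch: ban evasion detection framed as binary classification.
# All features and labels are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Hypothetical per-account features: edit rate, fraction of reverted edits,
# account age in days, and page-overlap similarity to a banned account.
X = np.column_stack([
    rng.poisson(5, n),          # edits per day
    rng.beta(2, 8, n),          # fraction of edits reverted
    rng.exponential(90, n),     # account age (days)
    rng.beta(2, 5, n),          # page overlap with a banned account
])
# Synthetic labels: 1 = suspected ban-evading account, 0 = ordinary account.
y = (0.6 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(0, 0.1, n) > 0.45).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("toy AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```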
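For (4), a minimal sketch of the kind of stability check the finding implies: train the same simple recommender twice, once on the full interaction data and once with a small random fraction of interactions removed, then compare each user's top-k recommendations. The truncated-SVD scorer and synthetic interaction matrix are placeholder assumptions, not the models or datasets evaluated in the paper.

```python
# Toy sketch: how much do top-k recommendations change after removing
# 1% of the training interactions? (synthetic data, placeholder model)
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, k = 300, 500, 10

# Synthetic binary interaction matrix (1 = user consumed item).
R = (rng.random((n_users, n_items)) < 0.05).astype(float)

def top_k_recs(R, k, rank=20):
    """Score items with a rank-`rank` truncated SVD and return each
    user's top-k unseen items."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    scores = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    scores[R > 0] = -np.inf              # never re-recommend seen items
    return np.argsort(-scores, axis=1)[:, :k]

# Perturbation: drop 1% of observed interactions at random.
R_pert = R.copy()
obs = np.argwhere(R > 0)
drop = obs[rng.choice(len(obs), size=len(obs) // 100, replace=False)]
R_pert[drop[:, 0], drop[:, 1]] = 0.0

recs_full = top_k_recs(R, k)
recs_pert = top_k_recs(R_pert, k)

# Average Jaccard overlap of top-k lists: 1.0 means identical recommendations.
overlap = np.mean([
    len(set(a) & set(b)) / len(set(a) | set(b))
    for a, b in zip(recs_full, recs_pert)
])
print(f"mean top-{k} Jaccard overlap after 1% data perturbation: {overlap:.3f}")
```

A low overlap under such a tiny perturbation is what "fickle" means here: recommendations that users see can shift substantially even when almost nothing about the data has changed.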