Sometimes, as a security researcher, I'm shocked that the assumptions we make (that every system can be broken, that no system is truly private) are rarely held by the broader CS community.
A short thread of recent examples.
In preparing a COVID-19 paper, I'm reminded of our efforts to submit our initial work on people's willingness to adopt based on privacy vs. accuracy to @NeurIPSConf.
Reviewers claimed it was unnecessary to consider privacy in a decentralized system because it was perfectly private.
As security ppl, we took it as given that breaches should be discussed in terms of likelihood, not impossibility.
Example 2: I spoke with a large industry AI lab last week.
One of their research heads said of our work on safety in digitally-mediated online-offline interaction like gig work & online dating:
"I don't see how that's about AI or how it's even research. That's product stuff"
A few years ago, comments like that derailed me.
I still sometimes say to mentors "but I'm not REALLY a CS researcher".
But most of the time, now, I shake my head because I know that they are wrong:
Security-thinking matters. Safety matters. And if we don't start considering security not just for data but for people before and while we build AI systems, we're doomed.
</end rant>
But, if you're seeing this, here's a short summary:
These changes in rules can mean a huge & unexpected loss of income.
As far as we can tell from our still-in-progress research, many OF creators are new to S*x W*rk. While experienced S*x W*rkers were anticipating the realities of deplatforming, less experienced folks may not have been.
While you may think of OF as a side gig, for many people, it's not.
Anti-adult-work policies, often driven by American morality (for OF: the morality of its payment processors & the people lobbying them), put marginalized people - often women, people of color, and LGBTQ folks - at serious risk.
@airbnb uses AI to detect whether a user is a sex worker, mentally ill, or otherwise "un-desirable".
Based on this algorithm, #airbnb bans these folks, even if they were *not* looking to use the property for, e.g., #sexwork.
Multiple sex workers we interviewed in Europe (Germany/Switzerland) reported this problem extensively, even though sex work is not illegal there.
This is a classic case, as many workers discussed, of Americanized tech damagingly applying our "ethics" to the rest of the world.
Our interview data further confirms the prevalence of this practice internationally. It bars legal sex workers, among other groups deemed "undesirable" by AI, from participating in a large and fast-growing part of the gig economy, even though they are gig workers themselves!