Very interesting keynote panel at #FAccT22 now about management tools ('bossware') that reduce workers' autonomy and give power to employers.
Thinking about products that are often marketed for productivity but used in unintended ways.
Dr. Negron mentions that this tech can be weaponized by companies; for example, it's used by places like Amazon and Whole Foods to spot connections between workers that could lead to unionizing (!!). #FAccT22
Some really interesting / scary(?) things that these workforce management tools can do: rank workers based on productivity, assign influence scores, track physical movements, and even run sentiment analysis in meetings (!!); all of which can feed into more malpractice
Dr. Min Kyung Lee talks about how her work has looked at automatic scheduling programs, which they've found cause physical, financial, and psychological harms b/c people have a hard time predicting how much they'll work, which can prevent them from paying rent #FAccT22
Dr. Rida Qadri, on the Global South: techniques of surveillance have been perfected on low-power populations and then move toward the Global North; the narrative is that this is for productivity and to make sure workers don't engage in fraud. #FAccT22
Dr. Qadri also talks about how workers in Jakarta, Indonesia have used technology itself to subvert algorithmic management (like exploiting glitches in the security architecture & selling their own apps). #FAccT22
Makes me think of this MIT Tech Review article: technologyreview.com/2022/04/21/105…
Dr. Frederik Borgesius talks about how GDPR has actually helped with this; it's not just about privacy but about protecting human rights in personal data use more generally; it lets workers simply send an email asking a company to show the data it holds on them, no lawyer needed #FAccT22
Dr. Seth Lazar points us towards Dr. Wilneida Negron's paper with coworker.org called "Little Tech is Coming for Workers" where they establish a framework for reclaiming and building worker power #FAccT22
Thinking now about how we can do work that benefits the workers. Dr. Lee talks about how even if workers get their raw data, it's hard for them to get actual benefits from it; we should figure out how workers themselves can benefit from their data. #FAccT22
Dr. Qadri: "we should build relationships of trust with workers and have relationships of care and think about what these would look like"; avoid research that's acontextual and US-centric #FAccT22
Dr. Borgesius mentions that "the rights of workers are being sabotaged even in countries with good welfare states" (like the Netherlands). Gig work is prevalent there & it's like we're at the beginning of the Industrial Revolution, without statutes/regulations on work. #FAccT22
Dr. Borgesius mentions that we also need competition laws to break up the big companies; Dr. Lazar notes that, as the coworker.org paper goes into, this is a problem of Little Tech as well, not just Big Tech. #FAccT22
"When one sees a racist tweet receive hundreds of thousands of interactions, is the platform the antagonist?" - @DocDre#FAccT22
"Any technical endeavor should properly begin by reflecting upon sociological and cultural understandings of that technology’s use and consequences" - @DocDre#FAccT22
"Online racial microaggressions have been elevated from individual experiences to widely broadcast, reverberating moments simultaneously experienced by many Black folks" - @DocDre#FAccT22
Something that (initially) quietly happened this week and might need more attention: OpenAI's DALL-E (an AI for image generation), which originally had a "no realistic faces allowed" policy, has been updated to allow the generation & sharing of faces.
Here's their official email about the policy change and why they put it in place (citing new "safety measures"). I don't know if this is enough, tbh.
You can see some of the results of the new face generation by DALL-E in this thread. The faces are extremely realistic, and while some might find it really cool, I'm *really* worried about the harms that this can lead to.
Really excellent talk right now on "Automating Care: Online Food Delivery Work During the CoVID-19 Crisis in India" by Anubha Singh and Tina Park that looks at structural inequalities and power asymmetries in the notions of "care" that delivery apps employed during COVID #FAccT22
Gotta read this paper and revisit this talk when I have more time, because this critical analysis applied to the measures adopted by delivery platforms in India is just *chef's kiss*
"Much of the responsibility of safety fell on the shoulders of the food delivery worker", while customers were not required to follow such safety protocols. Workers would be penalized for violating measures, but customers were not, revealing an asymmetry of care. #FAccT22
Really powerful video at #FAccT22 right now about gig workers who had to deal with pay-change algorithms at the start of the pandemic & how a community-led audit helped them bargain with this "black box".
They'll be releasing a public version of the video soon (keep your eyes peeled)!
This work was led by folks at Coworker.org with gig workers at Shipt.
From the panel, it seems that they went to Shipt with a systematic review of data showing how the algorithm reduced workers' pay, but Shipt has denied the findings & things have not changed. #FAccT22
Some emotional words from Willy Solis, a Shipt Shopper who began organizing his fellow gig workers at Shipt when they implemented the pay cut. From this, he became a lead organizer for Shipt workers nationally with @GigWC. Do check out and support their work! #FAccT22
A 'kill list' with names of Black students on it surfaced at my high school (after I graduated). NY Mag would probably write a spin article about how we should suspend our judgments about the model minority Asian kids who made that list.
Yes, this really happened, and there was NOT enough backlash. It was made by certain students at the school, and in the years that followed, other students organized a racial justice coalition at the school to address it. But I'm not sure what ended up happening.
This shit was so vile. Here's the article about it that eventually got buried because the school protects its reputation so much:
Very cool paper from Terrence Neumann, Maria De-Arteaga, and Sina Fazelpour thinking about justice in misinformation detection systems that asks what "informational justice" looks like, esp. for different stakeholders who interact with information claims. #FAccT22
Huge example here. The NYTimes conducted a data labelling experiment where Native American students labelled this image of Residential Schools with tags such as "genocide" and "cultural elimination". A leading algorithm labelled it "crowd, audience, and smile". Holy crap, #FAccT22
(ignore that this tweet got sent out like 10 hours late; Twitter for some reason refused to send it at first 😬)