Being shown now at #FAccT22! The production quality on this paper video is so good 🥺 A must-watch for those interested in this growing area of AI auditing
The video is also moving at light speed 😅 so I haven't been able to take notes, watch it, and tweet about it all at once.

Some highlights:
* Client confidentiality prevents second-party auditors from sharing data about the audits
* Auditing regulation: ...there is very little
#FAccT22
Here is the full video from @AJLUnited on "Who Audits the Auditors?", a must-watch from #FAccT22. (There is an abridged 4-minute version you can view as well!)

From @schock in #FAccT22 chat:

- Majority of auditors want regulation that requires disclosure of results
- Majority did *not* think it was critical to check whether there was harm reporting
- Only two respondents provided evidence that impacted communities are involved in audits!
Another take-away: first-party auditors (those internal to the companies) *want* to share summaries of audit results and their methods, but they're not allowed to (due to NDA or other company restrictions). #FAccT22
Slide on AI auditing. AI auditing checks whether systems meet expectations in these various areas, including regulatory compliance, consent, labor practices, bias, transparency, effectiveness, energy use, security, and vulnerable community impacts. #FAccT22
I've got the feeling that we'll be referring to this terminology for years to come! Some defining work here.

First-party auditors: internal to the company
Second-party auditors: contractors/consultants
Third-party auditors: independent researchers, journalists

#FAccT22
First-party auditors: have high levels of access, but low levels of transparency

Second-party auditors: open questions about whether they can disclose results, and about their accountability incentives

Third-party auditors: limited/no access to target systems

#FAccT22
One of the *big* topics of discussion around auditing in past years: Cathy O'Neil (second-party) mentions that she can't demand disclosure, or else she won't be hired.

82% of auditors think that public disclosure of audit results *should* be mandated by regulation. #FAccT22
One of my biggest takeaways from this #FAccT22 session:

Respondents agreed that "Companies do not take action on ethical AI issues unless they face public pressure."

"The best way to make sure that an issue is attended to is to have the press report on it."


More from @WellsLucasSanto

Jun 23
"When one sees a racist tweet receive hundreds of thousands of interactions, is the platform the antagonist?" - @DocDre #FAccT22
"Any technical endeavor should properly begin by reflecting upon sociological and cultural understandings of that technology’s use and consequences" - @DocDre #FAccT22
"Online racial microaggressions have been elevated from individual experiences to widely broadcast, reverberating moments simultaneously experienced by many Black folks" - @DocDre #FAccT22
Jun 23
Something that (initially) quietly happened this week and might need more attention: OpenAI's DALL-E (AI for image generation), which originally had a "no realistic faces allowed" policy, has been changed to allow for the generation & sharing of faces.

Here's their official email about the policy change, and why they put the policy change in place (citing new "safety measures"). I don't know if this is enough, tbh.

You can see some of the results of the new face generation by DALL-E in this thread. The faces are extremely realistic, and while some might find it really cool, I'm *really* worried about the harms that this can lead to.

Jun 23
Really excellent talk right now on "Automating Care: Online Food Delivery Work During the COVID-19 Crisis in India" by Anubha Singh and Tina Park that looks at structural inequalities and power asymmetries in the notions of "care" that delivery apps employed during COVID #FAccT22
Gotta read this paper and revisit this talk when I have more time, because this critical analysis applied to the measures adopted by delivery platforms in India is just *chef's kiss*
"Much of the responsibility of safety fell on the shoulders of the food delivery worker", while customers were not required to follow such safety protocols. Workers would be penalized for violating measures, but customers were not, revealing an asymmetry of care. #FAccT22
Jun 22
Really powerful video at #FAccT22 right now about gig workers who had to deal with pay-change algorithms at the start of the pandemic & how a community-led audit helped them bargain over this "black box".

They'll be releasing a public version of the video soon (keep your eyes peeled)!
This work was led by folks at Coworker.org with gig workers at Shipt.

From the panel, it seems that they went to Shipt with a systematic review of the data around how their algorithms reduced workers' pay, but Shipt has denied this & things have not changed. #FAccT22
Some emotional words from Willy Solis, a Shipt Shopper who began organizing his fellow gig workers at Shipt when they implemented the pay cut. From this, he became a lead organizer for Shipt workers nationally with @GigWC. Do check out and support their work! #FAccT22
Jun 22
My high school (after I graduated) published a 'kill list' with names of Black students on it. NY Mag would probably write a spin article about how we should suspend our judgements about the model minority Asian kids who made that list.
Yes, this really happened, and there was NOT enough backlash. It was made by certain students at the school, and in the years that followed, other students organized a racial justice coalition at the school to address it. But not sure what ended up happening.
This shit was so vile. Here's the article about it that eventually got buried because the school protects its reputation so much:

mercurynews.com/2017/06/13/kil…
Jun 22
Very cool paper from Terrence Neumann, Maria De-Arteaga, and Sina Fazelpour thinking about justice in misinformation detection systems that asks what "informational justice" looks like, esp. for different stakeholders who interact with information claims. #FAccT22
Huge example here. NYTimes conducted a data-labelling experiment, where Native American students labelled this image of residential schools with tags such as genocide and cultural elimination. A leading algorithm labelled it "crowd, audience, and smile". Holy crap, #FAccT22
(ignore that this tweet got sent out like 10 hours late; Twitter for some reason refused to send it at first 😬)
