The firings of @timnitGebru & @mmitchell_ai from Google raised a red flag the #AIEthics community can't ignore: corporate interests clashing with intellectual freedom. I'm proud to co-author this opinion paper on where to go from here: link.springer.com/article/10.100…
The mistreatment these two experts experienced didn't happen in a vacuum: A small number of corporations employ a large share of AI researchers. These same corporations also submit *a lot* of papers to academic conferences, and they sponsor those conferences too.
It's not a question of *if* AI research results will clash with corporate interests, but *when*, and *how* the companies holding the purse strings will handle those clashes. Something that stuck with me from @timnitGebru's story is the opaque internal review process for her paper.
Internal reviews aren't inherently bad, but the lack of transparency around them is, and this wasn't a first for Google. They have a history of submitting papers to journals/conferences without disclosing basic details, which impedes reproducibility: technologyreview.com/2020/11/12/101…
As a founding editor of a peer-reviewed journal, I distrust papers from corporations that don't disclose how their models were trained and/or what internal reviews the work went through before it reached my desk. It's my responsibility to disentangle corporate interests from AI research.