For those interested in the political economy of AI, this report offers a lot of teasers. Many of them align with recent papers on the concentration of research in the hands of a few (corporations and their research collaborators).
The report claims that OpenAI and DeepMind, along with other big players in the industry, are important contributors to research but do not/cannot publish their code (I hope all our colleagues who now do ethics at these companies consider these structural issues!)
Tools are an important bit of the expanding infrastructural power of these companies within research institutions, and the report claims Facebook is outpacing Google.
The brain drain from universities is happening at the same time as, for example, European public education institutions are pumping money into universities for AI research. This has implications for what happens with public funds and for the quality of education.
The report suggests the endowment model does not make up for the loss. Notice that aside from the tiny sums involved, especially when companies finance the social sciences, humanities (ethics), and law, it also does not solve the lack of public-interest technology research.
#fundingmatters
The brain drain can also be seen from a global perspective: a process further concentrating research power in the hands of tech-dominant countries while draining students from the global south. Perhaps it is better called brain extractivism?
This impacts immigration policy, as companies (and universities) fight to keep the "talent" flowing in. This can be contrasted with the silence of tech companies vis-à-vis the continued depression of the number of refugees accepted into the very countries that benefit from these inflows.
And, not that NeurIPS as a conference can be taken as a social indicator, but still, this graph raises questions about who will benefit from the current political economy of AI, if "AI" turns out to be as successful as promised. Or, who will have wasted the most money?
This slide is especially for @mikarv and @carmelatroncoso, and points to how the path paved by differential privacy to unblock the privacy obstacle to current computational infrastructures is being further mainstreamed in federated ML solutions.
I haven't finished reading the report (well, really a slide deck), but I would be very curious to hear if people catch other interesting threads: ones that could lead to research questions, but also to questions about education and research policy.