When Alex Irpan of Google writes about compute as the way forward for AI, you wonder how much of this is AI pulling on compute vs. compute (and the investment in chips) pulling on AI.
The whole article is as much about economics as about AI; in fact, it conflates the two 1/x
It starts with artificial general intelligence being equated with "economically valuable work":
"artificial general intelligence (AGI) [is] an AI system that matches or exceeds humans at almost all (95%+) economically valuable work"
2/x
According to Irpan, economics also determines how AI will spread:
"We also don’t have to build AI systems that learn like humans do. If they’re capable of doing most human-level tasks, economics is going to do the rest, whether or not those systems are made in our own image."
3/x
Here he exposes how the promise of AI begets more funds for compute, coming full circle:
"If AGI is possible soon, how might that happen? [...] It would likely be based on scaling existing models. And, because it needs to be based on scaling, and scaling needs funding."
4/x
Here, the business model for tools: it is about capturing business budgets (by optimizing organizational workflows & productivity) while also selling expensive compute (hence the idea that more data could improve models is a great business proposition for cloud providers) 5/x
Not all of this is new: the business model for tools is how software companies have profited for years. Irpan mentions Lotus Notes and MS Excel. @xiaochang does wonderful historical work on how IBM pushed for data-intensive solutions to sell its hardware. 6/x
While computer scientists may throw around big words like intelligence, my favorite part of this article is how it shows that industry is ready to equate the pursuit of some definition of intelligence with profit. 7/x
• • •
One year into the pandemic, in Germany we are encountering debate around apps (see #LucaApp or #ImmunityPassports) that promise to solve the Corona crisis. The current debate forgets important questions for public interest 1/x
Currently the debate is all about "data privacy". But, is that all that is at stake?
Are there other questions that journalists, politicians and civil society could be asking that could ensure these apps serve the public interest? 2/x
Serving the public interest does not begin and end with good data security and privacy but needs to start with whether the apps serve the purpose they are designed for and do so effectively. 3/x
One year after the start of the pandemic, we in Germany are seeing an extremely flattened debate about apps (e.g. #LucaApp or #ImmunityPassports) that are supposed to help fight the Corona crisis. The problem: this form of debate is not helpful.
Currently the debate is focused on "data security".
But is that the only relevant question?
What are the questions that journalists, politicians, and civil society should be asking if the public interest is not to be lost from view?
Whether something serves the public interest does not depend on good data security alone; it must first address the question of whether these apps actually serve the purpose for which they are deployed.
"part of what we in an American university have to consider now, what it is for us to have been made custodians of those principles [of free speech] even as we are made to watch when they are dissolved in an infernal public private partnership." Fred Moten
"it is that this ought not to drive us to defend an abstract principle of free speech, which is only ever concretized, and usually at the same moment dishonestly and disgustingly sacralized, in exclusion."
"instead and in refusal of that, lets claim and eruptively assert our fugitive speech, which is fueled by the realization of the conditions we live in"
For those interested in the political economy of AI this report has a lot of teasers. Many of them are aligned with some recent papers that talked about the concentration of research in the hands of a few (corporations and their research collaborators).
The report claims that OpenAI and DeepMind, along with other big players in the industry, are important players in research but do not/cannot publish their code (I hope all our colleagues who now do ethics at these companies consider these structural issues!)
Tools are an important part of the expanding infrastructural power of these companies into research institutions, and the report claims Facebook is outpacing Google.
This WP article highlights important problems with Gapple's contact tracing efforts & quotes some of my mentors in tech policy. But, from my vantage point, it (unintentionally) creates a false dichotomy between national sovereignty and Gapple (1/x) washingtonpost.com/technology/202…
My gut response to the first Gapple announcement concerned not only the lack of sovereignty but also the lack of democratic process. Sovereignty is not sufficient given the complexity of the relationship between governments & Gapple (2/x)
To this day, most app initiatives are techno-centric and top-down, and have side-stepped health authorities as well as civil society. Governments across Europe got pushback for this, and some changed course, e.g., Germany, Belgium and the Netherlands. (3/x)
"Watched and still dying"
"I never imagined I would experience more loved ones dying in similar isolation and uncertainty [as AIDS] in my lifetime."
From the wise @Combsthepoet odbproject.org/2020/04/26/wat…
"I have spoken with several social justice organizers and survivors who confirmed the stigma that accompanied not only the patients, but family members, friends and anyone who came into contact with someone suspected of having HIV or AIDS."
"We have since discovered much more about HIV and AIDS, but not without failed experiments, constant tracking of people living with the virus, and lots of government mistakes, most prominently from the White House."