"Superhuman science"
This paper explores the potential impact of AI on scientific fields where innovation involves searching large spaces of possibilities (e.g. new drugs, materials, or architectures for better AI systems). brookings.edu/wp-content/upl… #econai ht @mattsclancy
1. AI systems trained on historical data can help predict which opportunities have more potential, improving prioritisation and expected returns from innovation.
But downstream blockers (e.g. around testing of the ideas that are developed) could constrain these benefits.
2. The paper gets into the weeds of this by modelling innovation as a multi-stage process of idea generation and testing, where test results at different stages are not wholly reliable. It would be cool to map and operationalise this idea in specific fields.
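To make this concrete, here is a minimal, purely illustrative simulation of that multi-stage funnel (mine, not from the paper): candidates have a latent quality, each testing stage observes quality plus noise, and an AI predictor adds an extra, imperfect prior signal. All parameters (pool size, noise levels, pass rate) are assumptions of the sketch.

```python
# Minimal sketch of a multi-stage innovation funnel with noisy tests.
# All parameters are illustrative assumptions, not values from the paper.
import random

random.seed(0)

N_CANDIDATES = 10_000
STAGE_NOISE = [2.0, 1.0, 0.5]  # tests get more reliable at later stages
PASS_RATE = 0.1                # testing capacity: fraction surviving each stage

# Each candidate idea has a latent quality we never observe directly.
candidates = [random.gauss(0, 1) for _ in range(N_CANDIDATES)]

def run_funnel(pool, ai_weight=0.0):
    """Filter the pool stage by stage on noisy scores; ai_weight > 0 adds
    an informative but imperfect AI prior on top of the test signal."""
    for noise in STAGE_NOISE:
        scored = [(q + random.gauss(0, noise)
                   + ai_weight * (q + random.gauss(0, 1)), q) for q in pool]
        scored.sort(reverse=True)
        keep = max(1, int(len(scored) * PASS_RATE))
        pool = [q for _, q in scored[:keep]]
    return pool

tests_only = run_funnel(candidates)
tests_plus_ai = run_funnel(candidates, ai_weight=1.0)
print("mean quality, tests only:      ", sum(tests_only) / len(tests_only))
print("mean quality, tests + AI prior:", sum(tests_plus_ai) / len(tests_plus_ai))
```

Under these assumptions the AI prior raises the average quality of survivors, but the fixed pass rate at each stage is exactly the kind of downstream testing bottleneck that caps the gains.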
3. They also identify key resources needed for this to work: first of all, we need data on innovation successes and *failures* to train the AI system. Publication bias could mess this up. We also need interdisciplinary combinations of subject-matter + ML / AI skills.
4. They also highlight a market failure: AI systems might be less useful for out-of-sample (radical) innovation in unexplored areas of science where there is less data.
This could create a bias towards incrementalism (consistent with results in Bianchini et al., 2022).
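A toy illustration of that mechanism (mine, not from the cited papers): a model fit where historical data is dense gives unreliable predictions in unexplored regions, so its guidance is only trustworthy near past practice. The payoff function and all parameters are invented for the example.

```python
# Toy illustration: in-sample fits extrapolate badly out of distribution.
import random

random.seed(1)

def true_payoff(x):
    return x - 0.4 * x * x  # unknown to the model; returns eventually diminish

# Dense historical data on the explored region x in [0, 1]...
xs = [random.uniform(0, 1) for _ in range(200)]
train = [(x, true_payoff(x) + random.gauss(0, 0.05)) for x in xs]

# ...and the best simple in-sample fit: payoff ~ a + b * x
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
b = sum((x - mx) * (y - my) for x, y in train) / sum((x - mx) ** 2 for x, _ in train)
a = my - b * mx

for x in (0.5, 2.0, 4.0):  # an in-sample idea vs increasingly 'radical' ideas
    print(f"x={x}: predicted {a + b * x:+.2f}, actual {true_payoff(x):+.2f}")
```

The fit is accurate at x=0.5 (inside the training range) and increasingly wrong for the 'radical' points, which is why AI-guided search tends to stay close to explored territory.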
5. Government can help address this market failure by funding risky research that explores new areas of science and creates data to train improved AI systems that can be used in follow-on work.
/ends
Today we are hosting @smpfotenhauer & @Jackstilgoe to discuss their politics of scaling paper & its implications for @nesta_uk. Here is an older thread with some reflections on it:
Some implications:
1. Be inclusive when exploring alternatives.
2. Avoid a bias towards scalable solutions.
3. Secure consent for experiments & ensure that they are safe & beneficial in the 'test site'.
4. Carefully assess the generalisability of interventions on an epistemic / democratic basis.
5. Carefully monitor systemic & indirect impacts of piecemeal interventions.
6. Accept that a single experiment doesn't provide a strong evidence base for scaling.
"Any site where scaling is made to look easy should raise red flags about a likely lack of comprehension or inclusiveness of perspectives"
The Politics of Scaling paper highlights the risks of scaling efforts to tackle big societal challenges. journals.sagepub.com/doi/full/10.11…
1. Three shared features of scaling efforts give rise to potential dysfunction:
a. Solutionism: societal challenges are fixable
b. Experiments: we can deploy local solutions globally
c. Future-oriented value: scaling focuses on future benefits over present ones.
2. I was especially struck by the notion that scalable solutions, like technological innovations, seek in some way to transform society into the lab where they were initially tested in 'perfect conditions': a living lab, a training set, a behavioural experiment.
Some observations about the UK AI strategy: 1. I like that it presents public funding for R&D as a mechanism to help steer AI in a societally beneficial direction. This sets it in contrast with other national strategies that put R&D and public value / ethics in silos.
2. I like its emphasis on the importance of workforce diversity for building better and more inclusive AI systems. However, it neglects other (correlated) types of diversity e.g. disciplinary, technological, institutional that are also important (cf. arxiv.org/abs/2009.10385)
3. Like @irinimalliaraki, I missed a deeper, more evidence-based analysis of the AI landscape / ecosystem, and of the opportunities and challenges for the UK, to help motivate and prioritise its policy agenda. Perhaps this is yet to be developed, but I would have appreciated more detail.
Jer Thorp’s Living in Data starts strong: “our projects became less about finding answers in data and more about finding agency, less about exploration and more about empowerment.”
Looking forward to reading where it goes with this idea.
“The lesson I learned was to treat data and the system it lived in not as an abstraction but as a real thing with particular properties, and to work to understand those unique conditions as deeply as I can” ~ use data not to scale up away from the thing, but back into it.
The library of missing datasets: A list of blank spots that exist in spaces that are otherwise data-saturated. [sadly not updated for quite some time]
1. We argue that innovation missions need new indicators able to capture emergence, diffusion, crossover and diversity. Otherwise, how will we know if they work?
2. We argue that open data and data science methods (machine learning, NLP, network science) can help us create those new indicators, generating useful information across the mission policy cycle. This is what we do here.
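As a hedged example of what such an indicator could look like (illustrative only, not the paper's actual pipeline), here is a diversity index for a mission portfolio computed as Shannon entropy over the disciplinary mix of its projects; the topic labels and counts are made up.

```python
# Illustrative diversity indicator: Shannon entropy over the
# disciplinary mix of projects in a mission portfolio.
from collections import Counter
from math import log

project_topics = ["energy", "energy", "ml", "materials", "ml", "policy", "energy"]

counts = Counter(project_topics)
total = sum(counts.values())
shares = [c / total for c in counts.values()]
diversity = -sum(p * log(p) for p in shares)  # higher = more diverse portfolio
print(f"Shannon diversity: {diversity:.3f}")
```

The same pattern extends to the other indicator families: emergence and diffusion can be tracked as changes in these distributions over time, and crossover as co-occurrence of topics within projects.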
[Highlights of day 2 of the Economics of AI @nberpubs workshop] Sonny Tambe presented a cool paper using LinkedIn data to recover firm investments in intangible AI-related assets, finding them concentrated in 'superstar' firms (FAO @stianwestlake) papers.ssrn.com/sol3/papers.cf…
2. @danielrock presented his work on the value of engineering / AI. I discussed it here: