As the UK is currently beholden to moonshot thinking, worth noting that Project Loon - spun out from X, Google’s “moonshot factory” and referred to here - is about 10 years old and has only recently been deployed at nation scale (afaict).
I think I'd go a bit further than Simon's post though, because it seems to me that using our human instincts for what may or may not be trustworthy is an essential line of defence. If the link looks bad, don't click it; if the alleged news story looks like BS, check the source.
I don't think that trustworthiness can necessarily be improved by transparency alone though - I'll defer to Onora O'Neill who says that we need "actual communication" rather than mere transparency and "honesty, competence, and reliability" thebritishacademy.ac.uk/documents/2563…
Quick thread on the state of digital policy in the UK. Interested to know if this is reflected in other areas.
In what is, presumably, the last year of a Conservative govt we find ourselves in an odd place that I think is almost peak Theatre of Consultation.
Unless I was asleep under a giant rock and missed it, there was no consultation about the formation of the AI Safety Institute, or about the methods for assessing societal impacts that have been selected, which make no reference to human rights and which appear technocratic at best.
Instead, we had AI-pa-looza at Bletchley Park. While reams have been written about this, there has been no consultation and the PM appears to be making off-the-cuff policy decisions. ft.com/content/ecef26…
I'm doing a panel this morning on digital inclusion and AI. This is what I'm going to say:
- the paradigm for AI governance the UK govt is working towards deepens social exclusion
- so we need to do two things: challenge the paradigm while also mitigating it
Mitigations for structural power imbalances can have the unfortunate outcome of entrenching them, so it's important to do both. Being included in an oppressive system can still be oppressive. I wrote about that here medium.com/careful-indust…
Meanwhile, technologists are always trying to write new social contracts based on what AI can do. But no amount of polling or public deliberation will displace the Declaration of Human Rights and the SDGs in the short term. They may not be perfect but they are what we have.
Quick thread on Responsible Capability Scaling, one of the safety measures outlined in a @SciTechgovuk paper published yesterday - and why it is both welcome and insufficient. assets.publishing.service.gov.uk/media/653aabbd…
Parts of Responsible Capability Scaling have a lot in common with Consequence Scanning, a tool we developed at @doteveryone in 2018/9, in collaboration with many SMEs, which is freely available and widely used by businesses and research teams doteveryone.org.uk/project/conseq…
@doteveryone What Consequence Scanning tries to do is help teams start to apply an external lens on what they are doing, beyond internal OKRs/KPIs, and help teams envisage how their product will grow and change in the world, beyond their business goals
Well I guess this is my daily "read the news and complain about today's ridiculous AI story" tweet. Buckle up, I have a thread theguardian.com/technology/202…
Firstly, let's look at the headline. Sure, Stuart Russell is an expert, but he's not an expert in either education or child welfare, he's an expert in AI. You know the saying, "When you have a hammer, everything looks like a nail" - well, that applies here.
The idea that it might be desirable for teaching to become redundant assumes, I think, that children need to learn in the same way as neural nets. But, vitally, school also teaches kids about relationships and people and communication.
Classic example of this today with the Ask First NHS app, which puts the user (in this case, me) through several minutes of data entry (possibly some sort of theatre of triage) only to return the result that, of course, there are no GP appointments to be found.
The end point of glitch capitalism is likely to be everyone being so busy signing consent forms for unnecessary data processing that no one notices the world is on fire
Anyway, some likely mundane outcomes from Terrible Forms everywhere include:
- rich and powerful people investing in biometric tech that allows them to delegate all the tasks created by mundane automation to badly paid, highly surveilled PAs who do all this sh*t for them