Oh @megpickard you are my heroine for collecting these in one place. Also: wondering, idly, if anyone would be interested in co-organising an afternoon/evening/internet moment of short talks on the approximate theme of “What on Earth is going on with UK govt comms?”
And - to be clear - I am calling shotgun on at least one of the #teamrishi talks.
Anyway, to be clear, this is not at all a frippery. It might seem like a trivial adornment of democracy, but this is surely where we’re heading, and *it matters*.
I think this ad has been served to me as a special treat because I hate this kind of Big Number economic analysis so much. In fact, it seems to me to be fundamentally dishonest, because it is a clear political play that hides so many assumptions behind the Big Number Excitement.
Now, firstly, the bottlenecks are, I think, largely uncontested in UK innovation policy. Technology adoption and roll-out is hampered by poor infrastructure, lack of awareness in business, and lack of the right skills. I also said so in this paper promisingtrouble.net/blog/2024-2-te…
But the fact of the matter is that the "half a trillion opportunity" pointed out in that ad is finger-in-the-air stuff, based on economic speculation, which is itself based on a single uniform vision of technological roll-out and progress.
So, Kids and Mobile Phones: The Moral Panic seems to be building to an exciting fever pitch with the publication of Haidt's book.
I have some pragmatic, middle-of-the-road opinions about this, which can be roughly summed up as "Just enough Smartphone".
My position is roughly: some things about technology are great, but excessive datafication and corporate capture mean we've ended up in an extractive and exploitative place, in which most of us are making a small number of businesses a great deal of money.
In almost 30 years of working on the Internet (including a stint running an online community for teens and many years in online safety) it's repeatedly struck me that the personal nature of our digital experiences can be hard to communicate.
I think I'd go a bit further than Simon's post though, because it seems to me that using our human instincts for what may or may not be trustworthy is an essential line of defence. If the link looks bad, don't click it; if the alleged news story looks like BS, check the source
I don't think that trustworthiness can necessarily be improved by transparency alone though - I'll defer to Onora O'Neill, who says that we need "actual communication" rather than mere transparency, and "honesty, competence, and reliability" thebritishacademy.ac.uk/documents/2563…
Quick thread on the state of digital policy in the UK. Interested to know if this is reflected in other areas.
In what is, presumably, the last year of a Conservative govt we find ourselves in an odd place that I think is almost peak Theatre of Consultation.
Unless I was asleep under a giant rock and missed it, there was no consultation about the formation of the AI Safety Institute, or about the methods for assessing societal impacts that have been selected, which make no reference to human rights and which appear technocratic at best.
Instead, we had AI-pa-looza at Bletchley Park. While reams and reams have been written about this, there has been no consultation and the PM appears to be making off-the-cuff policy decisions. ft.com/content/ecef26…
I'm doing a panel this morning on digital inclusion and AI. This is what I'm going to say:
- the paradigm for AI governance the UK govt is working towards deepens social exclusion
- so we need to do two things: challenge the paradigm while also mitigating it
Mitigations for structural power imbalances can have the unfortunate outcome of entrenching the very imbalances they aim to address, so it's important to do both. Being included in an oppressive system can still be oppressive. I wrote about that here medium.com/careful-indust…
Meanwhile, technologists are always trying to write new social contracts based on what AI can do. But no amount of polling or public deliberation will displace the Declaration of Human Rights and the SDGs in the short term. They may not be perfect but they are what we have.
Quick thread on Responsible Capability Scaling, one of the safety measures outlined in a @SciTechgovuk paper published yesterday - and why it is both welcome and insufficient. assets.publishing.service.gov.uk/media/653aabbd…
Parts of Responsible Capability Scaling have a lot in common with Consequence Scanning, a tool we developed at @doteveryone in 2018/9, in collaboration with many SMEs, which is freely available and widely used by businesses and research teams doteveryone.org.uk/project/conseq…
@doteveryone What Consequence Scanning tries to do is help teams apply an external lens to what they are doing, beyond internal OKRs/KPIs, and envisage how their product will grow and change in the world, beyond their business goals.