Rachel Coldicutt
Sep 24, 2020
It is, indeed, here. After much anticipation, the NHS COVID App. I’m not a privacy expert, but having been talked through the way the data is handled in a preview the other day - decentralised, anonymised - I’m happy to download it for exposure notification.
My feeling is we need to give it a go and see if it works.

My underlying concern is that the app is extremely reliant on (1) joined-up, consistent comms about risk, (2) tests being available, and (3) results being processed quickly.
I know some privacy campaigners are unhappy about the QR codes, but I gather the model for this was New Zealand, which seems to be working well. I’m not sure if that function necessarily belongs in this app, but 🤷🏻‍♀️. It’s there and there’s no compulsion to use it.
The question remains as to whether anyone will self-isolate based on a push notification if they can’t quickly get a test. But till there is a vaccine we’re going to have to get used to changing our behaviour based on best guesses anyway.
This is absolutely true 👇. The effectiveness of the app depends on a holistic sheltering and protection strategy across government. Tech, as they say, won’t save us on its own.

More from @rachelcoldicutt

Apr 2
So, Kids and Mobile Phones: The Moral Panic seems to be building to an exciting fever pitch with the publication of Haidt's book.

I have some pragmatic, middle-of-the-road opinions about this, which can be roughly summed up as "Just enough Smartphone".
My position is roughly: some things about technology are great, but excessive datafication and corporate capture mean we've ended up in an extractive and exploitative place, in which most of us are making a small number of businesses a great deal of money.
In almost 30 years of working on the Internet (including a stint running an online community for teens and many years in online safety) it's repeatedly struck me that the personal nature of our digital experiences can be hard to communicate.
Read 11 tweets
Dec 15, 2023
What extraordinary serendipity that in the same week as @simonw writes this eminently sensible post (simonwillison.net/2023/Dec/14/ai…), 404 Media splash this: 404media.co/cmg-cox-media-…
I think I'd go a bit further than Simon's post though, because it seems to me that using our human instincts for what may or may not be trustworthy is an essential line of defence. If the link looks bad, don't click it; if the alleged news story looks like BS, check the source.
I don't think that trustworthiness can necessarily be improved by transparency alone though - I'll defer to Onora O'Neill who says that we need "actual communication" rather than mere transparency and "honesty, competence, and reliability" thebritishacademy.ac.uk/documents/2563…
To judge trustworthiness, we need to judge honesty, competence, and reliability. Honesty in claims and commitments made; Competence at relevant tasks; and Reliability in honesty and competence.  These are not the only standards that matter—but they are indispensable, not only in personal life but in complex institutional and social contexts. Meeting honesty, competence and reliability standards cannot be achieved merely by relying on individual choice combined with legal and regulatory constraints, nor is transparency enough for judging trustworthiness: transparency is only a matter of putt...
Read 4 tweets
Nov 17, 2023
Quick thread on the state of digital policy in the UK. Interested to know if this is reflected in other areas.

In what is, presumably, the last year of a Conservative govt we find ourselves in an odd place that I think is almost peak Theatre of Consultation.
Unless I was asleep under a giant rock and missed it, there was no consultation about the formation of the AI Safety Institute, or about the methods for assessing societal impacts that have been selected, which make no reference to human rights and which appear technocratic at best.
Instead, we had AI-pa-looza at Bletchley Park. While reams and reams have been written about this, there has been no consultation and the PM appears to be making off-the-cuff policy decisions. ft.com/content/ecef26…
Read 10 tweets
Nov 3, 2023
I'm doing a panel this morning on digital inclusion and AI. This is what I'm going to say:
- the paradigm for AI governance the UK govt is working towards deepens social exclusion
- so we need to do two things: challenge the paradigm while also mitigating it
Mitigations for structural power imbalances can have the unfortunate outcome of entrenching existing power imbalances so it's important to do both. Being included in an oppressive system can still be oppressive. I wrote about that here medium.com/careful-indust…
Meanwhile, technologists are always trying to write new social contracts based on what AI can do. But no amount of polling or public deliberation will displace the Declaration of Human Rights and the SDGs in the short term. They may not be perfect but they are what we have.
Read 6 tweets
Oct 28, 2023
Quick thread on Responsible Capability Scaling, one of the safety measures outlined in a @SciTechgovuk paper published yesterday - and why it is both welcome and insufficient. assets.publishing.service.gov.uk/media/653aabbd…
Parts of Responsible Capability Scaling have a lot in common with Consequence Scanning, a tool we developed at @doteveryone in 2018/9, in collaboration with many SMEs, which is freely available and widely used by businesses and research teams doteveryone.org.uk/project/conseq…
What Consequence Scanning tries to do is help teams start to apply an external lens on what they are doing, beyond internal OKRs/KPIs, and help teams envisage how their product will grow and change in the world, beyond their business goals.
Read 16 tweets
Jul 7, 2023
Well I guess this is my daily "read the news and complain about today's ridiculous AI story" tweet. Buckle up, I have a thread theguardian.com/technology/202…
Firstly, let's look at the headline. Sure, Stuart Russell is an expert, but he's not an expert in either education or child welfare, he's an expert in AI. You know the saying, "When you have a hammer, everything looks like a nail" - well, that applies here. The idea that teaching… twitter.com/i/web/status/1…
The idea that it might be desirable for teaching to become redundant assumes, I think, that children need to learn in the same way as neural nets. But, vitally, school also teaches kids about relationships and people and communication.
Read 13 tweets
