Expect me to be corrected quite a lot here on day 3 of #CCIOHomeSchool as we dip into the differences between algorithms and AI.
Because AI will fix everything, won't it... #disrupt :-)
The key difference, as I understand it, is that algorithms are fixed (following set rules and processes) whereas AI can adapt / evolve based on learned inputs
Clinical examples of algorithms in routine practice are ten a penny... Wells score, qSOFA, CHA2DS2-VASc and so on. Lots of excitement and noise about AI in clinical imaging (paging @rijan44), but far fewer established use cases
Another important distinction is that an algorithm produces an output, whereas an AI is more likely to produce a decision based on discerned patterns.
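To make that concrete, a toy sketch (my own illustration, not from any product, and definitely not for clinical use): a simplified CHA2DS2-VASc calculator as the fixed algorithm, next to a classifier trained on made-up data standing in for the "AI". The scoring rules are the published ones; the features, data and model are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# The fixed algorithm: hand-written rules, same inputs -> same output, always.
# (Simplified CHA2DS2-VASc; illustrative only, not for clinical use.)
def cha2ds2_vasc(age, female, chf, hypertension, stroke_tia,
                 vascular_disease, diabetes):
    score = 2 if age >= 75 else (1 if age >= 65 else 0)
    score += sum([female, chf, hypertension, vascular_disease, diabetes])
    score += 2 if stroke_tia else 0
    return score

print(cha2ds2_vasc(age=70, female=True, chf=False, hypertension=True,
                   stroke_tia=False, vascular_disease=False, diabetes=True))  # 4

# The "AI": behaviour is learned from data, so retraining on different data
# changes the decisions it makes. Features and outcomes here are invented.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # made-up patient features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # made-up outcome labels
model = LogisticRegression().fit(X, y)
print(model.predict(X[:3]))                    # decisions from learned patterns
```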
This can sound scary but is closer to the human experience where (with the exception of lawyers) we recognise patterns based on experience rather than exclusively reasoning through rules :-)
The classic thought experiment here is to try and explain to someone why a picture of a cat shows a cat and nothing else. In real life this invariably ends up with someone shouting "IT'S JUST A CAT!!!"
In AI this uncertainty is often referred to as the black box problem. We can't dismiss it by pointing out that we are black boxes ourselves, because human decision-making is balanced by culture, societal norms and other unwritten rules
One proposed mitigation is "Explainable AI". This is often framed as a description of model components and weighting. Personally I think this misses the point of what AI does: examples (model outputs) are themselves a form of explanation. Is deeper understanding needed to build trust?
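To show what those two framings of "explanation" can look like, a toy sketch on an entirely made-up model (feature names and data are invented): the same classifier explained once via its components and weights, and once via concrete examples of its behaviour.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))                  # made-up features
y = (2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

# Framing 1: explanation as components and weighting.
print(dict(zip(["feat_a", "feat_b", "feat_c"], model.coef_[0].round(2))))

# Framing 2: explanation as examples -- what the model actually does on cases.
for case in ([1.5, 0.0, 0.0], [-1.5, 0.0, 0.0]):
    print(case, "->", int(model.predict([case])[0]))
```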
Think about 2 spaceships. One has flown & landed safely 200 times but there is no documentation as to how it was built. The other has never flown, but has comprehensive documentation "proving" it is safe. Which do you want to get on?
Do we want explainability or testability?
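The "testability" half of that question, sketched on the same kind of invented data: like the spaceship with 200 safe flights, trust here comes from watching the black box perform on cases it has never seen, not from reading its internals.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 3))                  # invented data again
y = (2 * X[:, 0] - X[:, 1] > 0).astype(int)

# Ten held-out "flights": each fold is scored on data the model never saw.
scores = cross_val_score(LogisticRegression(), X, y, cv=10)
print(scores.round(2), "mean accuracy:", scores.mean().round(3))
```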
Cleverer people than me have written more on this topic:
Huge amounts have been written on the potential hidden biases within AI (Invisible Women and Weapons of Math Destruction are worth the read).
Clearly this is true: AI is built on data, and that data encodes our existing biases
We can't avoid this, but can begin to address the issue by focusing on diversity and ensuring technology represents the world it needs to serve. @NetworkShuri are a powerful voice but this has to be everyone's responsibility
Hidden in all of these questions is the issue of liability. If clinical staff act based on an AI recommendation and there is an adverse outcome, who is liable? The physician, or the developer who made the AI "for entertainment purposes only" in the T&Cs?
A great @JAMA_current article explored this, breaking the question down by whether the AI's recommendation matched the standard of care and whether the physician followed it
Judgement and ethics are going to have to evolve to fill this space. In epidemiological research we use the Bradford Hill criteria to weigh the likelihood of a causal relationship. I'm sure this style of thinking can be adapted to building trust in clinical AI
But it will be for us to help wrap guiding principles around this change. I hope we can start with requirements for AI to be:
- fair and free of bias
- reliable and safe
- private and secure
- inclusive and of benefit to all
- transparent (data, results and more)
- accountable
Last thread of #CCIOHomeSchool before the weekend, and let's try to tackle the existential question of what a CCIO is and does
This is important not just for our own sanity (!) but because, without a clear understanding of the role inside and outside the #DigitalHealth community, calls for these roles to be recognised as important in organisations will fall flat
Staff at all levels will broadly be able to describe what the Medical and Nursing Directors do. I don't think the same is true for CCIO-type roles
#CCIOHomeSchool day 2... Let's have a think about #digitalconsent. Why would we do this?
- legibility
- reducing the variation in information provided and recorded
- avoiding delays in care due to loss of documents
Anything else?
The key policy document here is probably everybody's favourite bedtime read, the Health and Social Care Act, which recognised expressed, verbal and written forms of consent
Interestingly, there are only limited examples of where written consent is legally required, e.g. fertility treatment. However, it is seen as best practice where interventions are complex, carry significant risk / consequence, or include aspects not related to direct care