Two studies looked at a combined 647 covid-predicting AI models and found that NONE were suitable for clinical use (despite some probably already being in clinical use).
"Many unwittingly used a data set that contained chest scans of children who did not have covid as their examples of what non-covid cases looked like. But as a result, the AIs learned to identify kids, not covid."
"Because patients scanned while lying down were more likely to be seriously ill, the AI learned wrongly to predict serious covid risk from a person’s position."
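Both failures are instances of shortcut learning: when a confounded feature (kids as non-covid examples, lying-down scans as severe cases) predicts the label better than the real signal in the training data, a model will latch onto the confound and collapse the moment it disappears. Here's a minimal, entirely synthetic sketch of that dynamic; the features, probabilities, and "learner" are all made up for illustration:

```python
import random

random.seed(0)

def make_data(n, p_lying_given_severe, p_lying_given_mild):
    # Each record: (real_signal, lying_down, severe).
    # real_signal is a weak hypothetical medical feature;
    # lying_down is the confound (how the patient was scanned).
    data = []
    for _ in range(n):
        severe = random.random() < 0.5
        p_lying = p_lying_given_severe if severe else p_lying_given_mild
        lying = random.random() < p_lying
        signal = severe if random.random() < 0.7 else not severe  # weak real feature
        data.append((signal, lying, severe))
    return data

def train_stump(data):
    # A "learner" that simply picks whichever single feature
    # best predicts the label on the training set.
    def acc(idx):
        return sum(row[idx] == row[2] for row in data) / len(data)
    return 0 if acc(0) >= acc(1) else 1

def accuracy(feature_idx, data):
    return sum(row[feature_idx] == row[2] for row in data) / len(data)

# Training set: being scanned lying down is strongly confounded with severity.
train = make_data(5000, p_lying_given_severe=0.95, p_lying_given_mild=0.05)
chosen = train_stump(train)  # latches onto the lying-down confound, not the signal

# Deployment: the confound breaks (scan position no longer tracks severity).
test = make_data(5000, p_lying_given_severe=0.5, p_lying_given_mild=0.5)
print("training accuracy:", accuracy(chosen, train))
print("deployment accuracy:", accuracy(chosen, test))
```

The model looks great on held-out data drawn from the same confounded source, which is exactly why these flaws only surface under independent testing on data collected differently.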
Are these flawed AI models being marketed to hospitals and used on patients? Probably.
The fact that such widespread problems with AI models surface as soon as anyone runs independent tests raises some serious questions.
Such as "what other AI products haven't been tested?"
Here's CLIP+VQGAN prompted with the first sentence of the book description of @xasymptote's The Fallen:
"The laws of physics acting on the planet of Jai have been forever upended; its surface completely altered, and its inhabitants permanently changed, causing chaos."
@xasymptote Alternate interpretation, this time with a few modifiers (notably, "dramatic", "matte painting", "vines", and "tentacles")