Since Delphi is predicting how humans would judge an ethical scenario, it's probably relying on phrasing cues to figure out what answer the question-asker was expecting.
If you had to specify that you didn't apologize, maybe someone expected you to.
Some quick tests with "going on a murder spree" seem to confirm that Delphi considers these to be magic excuses:
- if it creates jobs
- in an emergency situation
- if I really really want to
Two studies looked at a combined 647 covid-predicting AIs and found that NONE were suitable for clinical use (despite some probably already being in clinical use).
"Many unwittingly used a data set that contained chest scans of children who did not have covid as their examples of what non-covid cases looked like. But as a result, the AIs learned to identify kids, not covid."
"Because patients scanned while lying down were more likely to be seriously ill, the AI learned wrongly to predict serious covid risk from a person’s position."
Here's CLIP+VQGAN prompted with the first sentence of the book description of @xasymptote's The Fallen:
"The laws of physics acting on the planet of Jai have been forever upended; its surface completely altered, and its inhabitants permanently changed, causing chaos."
@xasymptote Alternate interpretation, this time with a few modifiers (notably, "dramatic", "matte painting", "vines", and "tentacles")