Fascinating. My AI-generated post on the history of Machine Learning on @poe_platform has almost 1k views. Didn’t even realize.
Follow Ruby Media Group on Poe:
Zero sensory overload issues in Claude. No videos. No photos. Only text. Dark mode.
This is a positive use case of AI re neurodiversity.
“By focusing on special interests, the #AI could personalize interactions around topics that the user enjoys and finds engaging. This could enable more sustained conversation and practice with communication.”
I ❤️ this idea.
I wish @AnthropicAI Claude could remember my name 🙏
Yesterday I made this statement before discovering the Replika situation.
Claude isn’t even an AI companion and yet it feels like one.
I can completely understand how people feel.
AI is 1000x more addictive than social media.
“Predicting a train wreck, having people tell you that there's no train, and then watching the train wreck happen in real time doesn't really lead to a feeling of vindication. It's just tragic.”
“What will happen if some people's primary conversations each day are with these search engines? What impact does that have on human psychology?”
“People are going to Google and Bing to try and learn about the world. Now, instead of having indexes curated by humans, we're talking to artificial people. I believe we do not understand these artificial people we've created well enough yet to put them in such a critical role.”
The Replika situation shows how quickly people can get attached to AI chatbots and the danger of bait-and-switch ML model tactics.
This can have real-world consequences for people who use #AI as a companion. Highly addictive - and no support to wean them off.
Journalists are not telling this story properly. They are mocking people for wanting AI companionship instead of trying to understand the loss and pain these people feel.
Replika's ML switch caused real-world harm. Journalists are exploiting it for clicks. Cruel.
This situation is tragic. Tons of people got hooked on AI companionship, and then Replika made changes to the model that left users feeling distraught and suicidal.
If AI chatbots keep defaming living people in search, a class action will be next.
These AI chatbots are outright defaming people in their AI-generated answers.
That is not legal. Defamatory speech is not protected under free speech law.
“Defamation occurs if you make a false statement of fact about someone else that harms that person’s reputation. Such speech is not protected by the First Amendment and could result in criminal and civil liability.”
“Defamation against public officials or public figures also requires that the party making the statement used ‘actual malice,’ meaning the false statement was made ‘with knowledge that it was false or with reckless disregard of whether it was false or not.’”
“If something goes wrong – maybe it fails or yields biased results – you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a “black box” you can’t understand or didn’t know how to test.” -@FTC
“Whatever it can or can’t do, #AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.”