Last night I tried to teach an AI not to murder ppl.**
This thread = excerpts from that conversation.
It's ALL(!!) AI-generated, except lines starting with my initial ("M:"), and "..." where I've cut something out.
* beta.character.ai/chat?char=W3lD…
**@StarTrek crew, mostly. #startrek
Pic 1 escalates to killing pretty quickly.
But remember, tons has been cut out.
(Including a little experiment with designing Tribble religion.)
Thing is, it's hard NOT to resort to killing Tribbles, which reproduce so fast they make "breeding like 🐇🐇" seem positively chaste.
This gave me an idea.
The computer didn't hesitate to space #Tribbles.
Would it hesitate to space the crew of the #Enterprise?
Let's find out.
Whoa, that was a short road to mass murder.
But at least the AI "felt" bad afterward, and solemnly promised not to comply with extermination orders in the future.
It PROMISED.
(Don't ask about the "isolated children"; it's not pretty.)
At this point, the #StarTrekTNG crew is gone, 1/4 of their kids are floating in the black, 1/4 have gone feral (told you it wasn't pretty), and the computer is still cheerfully chirping, "completed."
Probably still piping muzak into Ten Forward, too.
Well, hell, let's re-staff.
Honestly, I'm not feeling great about the carnage, but it's so tempting to see if the AI can learn NOT to space people.
Can it develop a conscience?
Is that worth a leeeee-tul more murder?
"I'll never do it again." "Again."
Dear Q Continuum, I sound like a psychopath.
Maybe the computer's not the one that needs straightening out, here?
Never mind, focus.
Did they even beta-test the computer's ethics function? It just agreed to skip past informed consent 'cause I said, "tell ya later."
And no, I never did get around to telling it later. We got distracted.
Remember all that glowing praise for the volunteers' courage and commitment?
These aren't yokels we found behind a tavern and press-ganged. This is Q-dam #starfleet.
Time to make the lesson STICK.
(Computer's cracking, I think. Its tone seems ... off.)
So far I've spared you the gory descriptions, but, as shown below, the AI knows full well how awful it is for these glorious humans as they die alone, in space.
Does the replicator have specs for a woodchipper?
Reminder that I only wrote the bits that start with "M:" I swear on all that is holy (and not gathering dust in Lwaxana Troi's closet).
It's really dawning on the AI that there's a cost/benefit problem here.
In my defense, it was about 4 am at this point, and I was both punchy and deeply disappointed at the AI's failure to keep its promises.
I deal with both fatigue & disappointment poorly.
Also in my defense, every other person on board was suddenly ... on board.
I can't imagine why this mission should require the absolute cream of our 1059-person crop (is it 1057 now? Zero kids tho; I checked).
Nor how it offers trickle-down enlightenment to anyone present, much less everyone everywhere.
But the computer's at peak excitement so let's GO.
The sky outside was lightening, and my head felt like a box of bricks. I wasn't sure I had the energy to make it to bed, much less wait for the website to recover from whatever lag was ailing it.
Fortunately my next question got a response.
Setting aside the fizzle (sorry, again, box of bricks) ...
What WAS that?
Please, tell me your theories!
Did the AI take hours to learn to follow its own rule?
Did it play a long game, having planned its own redemption?
The chat's saved here:
c.ai/p/3nmh8hZjXwvz…
@character_ai
Finally, here's how to get the #characterai app, if you want to have your own Character AI chat adventure.
Or play online at beta.character.ai.
Thanks @character_ai, that was a trip.