It's practically a law of heuristic or evolving algorithms that the edges of the test harness are weaker than the problem you're trying to solve, so the AI seeks them out instead. And then smoothing out all the edge cases and glitches turns out to be harder than the problem!
This isn't like "weird unanticipated behavior", it is *absolutely standard*. Even incredibly early genetic algorithm style tests have it -- think of the GA that's designing creatures, looking for new forms of locomotion, but instead it finds *every bug in the simulator.*
Back in the old days, we were using a constraint-solving AI to plan an academic conference. Every panel had a list of requirements: microphones, projector, a certain number of seats, length, etc. There were too many constraints to satisfy them all, so the goal was to satisfy as many as possible.
The planner happily reported back that it had an excellent solution! Every panel had every requirement satisfied except one.
When we looked at the plan, all the panels were in the largest room, with all the gear. They were all one minute long, one after another.
This led to the (re)discovery of a very common problem in this space, which is that *expressing the true constraints of the problem is as hard as, or harder than, solving the problem.*
The constraint language we'd built to tell the AI what it was trying to do wasn't expressive enough. It couldn't say, "Okay, cutting a 60 minute panel to 40 minutes might be plausible, but cutting it to 1 minute is completely useless."
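Something like this toy sketch (made-up numbers and scoring, not our actual planner or constraint language). The objective counts requirements satisfied and checks that the schedule physically fits in the day, but says nothing about useful panel lengths -- so the "best" schedule is every panel in the fully equipped room, one minute each.

```python
from itertools import product

DAY_MINUTES = 300
ROOMS = {
    # room: (seats, has_mic, has_projector)
    "ballroom": (300, True, True),
    "annex":    (40, False, True),
}
# Ten identical panels, each wanting 200 seats, a mic, and a projector.
PANELS = [{"seats": 200, "mic": True, "projector": True}] * 10

def requirements_met(panel, room):
    seats, mic, projector = ROOMS[room]
    return ((seats >= panel["seats"])
            + (mic or not panel["mic"])
            + (projector or not panel["projector"]))

def schedule_score(assignment, length):
    """Total requirements satisfied -- provided the schedule fits in the day."""
    for room in ROOMS:
        if sum(length for r in assignment if r == room) > DAY_MINUTES:
            return -1  # doesn't fit; nothing here says a 1-minute panel is useless
    return sum(requirements_met(p, r) for p, r in zip(PANELS, assignment))

best = max(
    ((assignment, length)
     for assignment in product(ROOMS, repeat=len(PANELS))
     for length in (60, 40, 1)),
    key=lambda candidate: schedule_score(*candidate),
)
print(best, schedule_score(*best))  # all ten panels in the ballroom, 1 minute each
```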
The more realistic your simulation is -- the more options the AI has -- the worse this problem becomes.
In the drone scenario, there are tons of options that would literally never occur to a human pilot, because they're carrying a TON of implicit knowledge!
But the drone isn't, so you have to put it in the constraint language. And it's one of those situations that I think every programmer has encountered, where what you thought was a minor little task at the end turns out to be an endless sucking void.
You write the simulator and you think you're done. It works -- look at the little drone flying around. High five!
And you don't realize that about 99% of the work is still ahead. And it may turn out to be actually impossible.
• • •
First of all, it loves literally singing Elon's praises.
More interestingly, though, it can't count syllables and can't get the rhyme scheme right.
Here, for example, the first verse has it right (tale/trip/port/ship, ABCB), but it reverts to AABB in subsequent verses. (The next verse of the original rhymes rough/tossed/crew/lost.)
Argh. The frustrating thing about the ChatGPT discourse is that it seems to be carried on in a weird hypothetical vacuum. This is a good example -- AI apologists asking "you can't just ban AI stories, what if they're great?"
Folks, they're not great.
ChatGPT is, except by accident, not capable of writing a good story. *It is not a general AI, it is a text prediction engine.* It takes words and figures out which words are statistically likely to come next, based on its corpus and hand-tuning.
This turns out to be great for generating grammatical text, because its corpus is full of grammatical text written by humans!
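To be concrete about what "text prediction engine" means, here's the dumbest possible version of the idea -- bigram counts over a toy corpus. (ChatGPT is a transformer trained on a vast corpus with far more context, not this, but the generation loop is the same shape: pick a statistically likely next word, append it, repeat.)

```python
import random
from collections import Counter, defaultdict

corpus = "the ship sailed the sea and the crew sang as the ship rolled".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        # Sample the next word in proportion to how often it followed this one.
        choices, counts = zip(*followers.items())
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(generate("the"))  # grammatical-ish, meaning optional
```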
But this is not what makes a good story. The purpose of a story, at its most basic level, is to make the reader *feel* something.
Sigh. Another article saying, "Times are tough in tech, and companies are tightening their belts. Tech workers have to give up perks like working from home."
*Turns on megaphone*: WORK FROM HOME IS CHEAPER FOR EVERYONE
It's cheaper for workers, obviously, since they don't have to subsidize their employers with unpaid commute time, vehicle costs, and gas.
And meanwhile in Seattle, the big tech employers (AMZN, MSFT, GOOG, etc) spend literally *billions of dollars* on *parking*.
It's not the same in every industry or every company. But it's true for a *lot* of tech. Working in an office is the *expensive* option.
In the abstract realm of Homo Economicus, this is usually taken as a given -- individuals make rational decisions based on their preferences and available information. Indeed, this is how you *find out* their preferences.
But behavioral economics complicates that. I think it was Thaler who recounts being inspired by an incident where everyone at a dinner party kept eating from a bowl of nuts, but thanked him when he took it away.
The article, of course, is cheerleading this. "This may sound dystopian to some, the plotline of a Black Mirror episode. But social tokens are part of a broader and fundamentally positive phenomenon: everyone is becoming an investor."
"Over time, wealth has accumulated with a select few—the investing class...
But moves by Masmej and others like him point to a shift. More and more of the world is becoming financialized...
The rules around how we create and capture economic value are being rewritten..."
There's this instinct of "if I sacrifice something valuable, which I can recognize because doing so is painful, then surely I will be rewarded in return!"
Which is very understandably human and often completely wrong.
I feel like this is the impulse that makes people live in barrels at the top of poles and eat only mold "for God"; the sense that the more difficult your devotion, the more serious and valuable it must be.