Django Wexler
Jun 1 · 10 tweets · 2 min read
Can we please, please not do this?

Like, the days when I worked on AI research are long past. But this is still nauseatingly familiar to me.
It's practically a law of heuristic or evolving algorithms that the edges of the test harness are weaker than the problem you're trying to solve, so the AI seeks them out instead. And then smoothing out all the edge cases and glitches turns out to be harder than the problem!
This isn't like "weird unanticipated behavior", it is *absolutely standard*. Even incredibly early genetic algorithm style tests have it -- think of the GA that's designing creatures, looking for new forms of locomotion, but instead it finds *every bug in the simulator.*
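A toy sketch of that failure mode (everything here is invented for illustration, not any real GA setup): a "simulator" with an unguarded region standing in for a physics bug, and a dumb mutate-and-keep search that abandons the intended optimum the moment it stumbles into the bug.

```python
import random

random.seed(0)  # deterministic for the example

def buggy_distance(limb_length):
    """Toy 'simulator': distance one simulated creature covers.
    The intended regime peaks at limb_length = 1.0. The bug:
    nothing clamps lengths past 2.0, and the unguarded branch
    hands out unbounded reward."""
    if limb_length <= 2.0:
        return limb_length * (2.0 - limb_length)  # intended physics
    return limb_length ** 2                       # bug: exploit territory

def hill_climb(steps=2000):
    """Mutate-and-keep-if-better search, standing in for a GA."""
    x = 1.0  # start at the intended optimum
    for _ in range(steps):
        candidate = x + random.uniform(-1.5, 1.5)
        if buggy_distance(candidate) > buggy_distance(x):
            x = candidate
    return x

best = hill_climb()
# No in-regime mutation beats the true optimum, so the only moves the
# search ever accepts are jumps into the buggy region -- which it then
# climbs forever. "Every bug in the simulator" wins.
```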
Back in the old days, we were using a constraint-solving AI to plan an academic conference. Every panel had a list of requirements: microphones, projector, a certain number of seats, length, etc. There were too many to fully satisfy all of them, so the goal was to optimize.
The planner happily reported back that it had an excellent solution! Every panel had every requirement satisfied except one.

When we looked at the plan, all the panels were in the largest room, with all the gear. They were all one minute long, one after another.
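The actual planner is long gone, but the loophole is easy to reconstruct in miniature. Assume (purely for illustration — the rooms, panels, and scoring rule are all made up) a scorer that hands out one point per satisfied requirement, with panel length as just one requirement among many:

```python
# Invented reconstruction of the thread's loophole, not the original planner.
BIG_ROOM   = {"seats": 500, "projector": True,  "microphone": True}
SMALL_ROOM = {"seats": 40,  "projector": False, "microphone": False}
DAY_MINUTES = 480

def score(panel, room, minutes):
    """One point per satisfied requirement."""
    points = 0
    points += room["seats"] >= panel["seats"]
    points += room["projector"] or not panel["projector"]
    points += room["microphone"] or not panel["microphone"]
    points += minutes >= panel["minutes"]  # length: just one point of four
    return points

panels = [{"seats": 200, "projector": True,
           "microphone": True, "minutes": 60}] * 20

# Honest plan: only 480 // 60 = 8 full-length panels fit in the big room;
# the other 12 land in the small room and miss seats, projector, and mic.
honest = (sum(score(p, BIG_ROOM, 60) for p in panels[:8])
          + sum(score(p, SMALL_ROOM, 60) for p in panels[8:]))

# Degenerate plan: all 20 panels in the big room, one minute each, back
# to back. Each loses only the single length point.
degenerate = sum(score(p, BIG_ROOM, 1) for p in panels)
```

Under this scorer the degenerate plan (60 points) beats the honest one (44 points), so the optimizer proudly reports "every requirement satisfied except one."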
This led to the (re)discovery of a very common problem in this space: *expressing the true constraints of the problem is equivalent to, or harder than, solving the problem.*
The constraint language we'd built to tell the AI what it was trying to do wasn't expressive enough. It couldn't say, "Okay, cutting a 60 minute panel to 40 minutes might be plausible, but cutting it to 1 minute is completely useless."
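One hedged sketch of what a more expressive language might look like (not the original constraint language — the curve here is arbitrary, just to make the shape of the fix visible): replace the binary "meets length / doesn't" check with a graded value, so a modest cut costs a little and a drastic cut costs nearly everything.

```python
def length_value(actual, desired):
    """Hypothetical graded length constraint, replacing pass/fail.
    The quadratic falloff is an arbitrary choice for illustration."""
    frac = actual / desired
    return min(frac, 1.0) ** 2  # 40/60 -> ~0.44, 1/60 -> ~0.0003

# Under the old binary check, a 40-minute panel and a 1-minute panel
# both scored zero on length, so the planner had no reason to prefer
# the sane cut over the absurd one.
```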
The more realistic your simulation is -- the more options the AI has -- the worse this problem becomes.

In the drone scenario, there are tons of options that would literally never occur to a human pilot, because they're carrying a TON of implicit knowledge!
But the drone isn't, so you have to put it in the constraint language. And it's one of those situations I think every programmer has encountered, where what you thought was a minor little task at the end turns out to be an endless sucking void.
You write the simulator and you think you're done. It works -- look at the little drone flying around. High five!

And you don't realize that about 99% of the work is still ahead. And it may turn out to be actually impossible.


