The year is 2035. You're sitting comfortably in your L5 self-driving car, zooming along the highway.
Suddenly, the truck in front drops a big boulder. In a split second, the car's AI has to make a choice: brake or dodge... 🧵👇
⚠️ Even with a full brake, you're not guaranteed to survive the crash.
🚐 To your left there's a van driven by a human.
🚴 To your right there's an unprotected biker.
❓ What should your AI do?
⚠️ If the AI chooses to brake, the boulder will still hit you, with potentially disastrous consequences.
And it's very hard to sell you an AI that will pick the option least likely to save your life.
🚐 If it chooses to hit the van, you'll most likely survive, and the other driver has a pretty decent chance too.
🚴 If it chooses to hit the biker, he or she will almost certainly die, but you'll maximize your chances of not getting hurt.
If the AI is trying to save you at all costs, it will kill the biker. That's the optimal solution for you.
But we probably don't want that, so let's program it to hit the van, the choice that minimizes the odds of someone getting killed, right?
🚴 Now, let's suppose there are two bikers, one on each side, but here's the catch: one of them is wearing a helmet.
What's the option that minimizes the odds of someone dying?
Even if only by a small margin, hitting the biker with the helmet now seems like the best option.
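To see why, here's a tiny sketch with made-up numbers (the fatality probabilities below are pure assumptions, not crash statistics): a policy that only minimizes the expected number of deaths will always steer toward the helmeted biker.

```python
# Hypothetical fatality probabilities if hit at this speed (illustrative only).
p_death_no_helmet = 0.80
p_death_helmet = 0.65  # the helmet helps a little

options = {
    "hit the biker without a helmet": p_death_no_helmet,
    "hit the biker with a helmet": p_death_helmet,
}

# A policy that only minimizes the chance of someone dying
# picks the helmeted biker every time.
best = min(options, key=options.get)
print(best)  # hit the biker with a helmet
```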
๐ค Suddenly, doing the right thing as a biker (wearing a helmet) makes you *more* likely to be hit by a car.
We probably don't want that either, or everyone will be biking with the least possible protection.
But wait, it gets worse... 👇
🚴 What if there are two identical bikers, but one is carrying a baby, the other a pregnant woman, and you're carrying two small children?
What do you value more, a potential life lost or a certain life lost? Does the expected lifetime left matter?
If you're the one driving, you'll do whatever your instincts tell you.
And whatever that decision is, any court or judge will have to consider the fact that you didn't have enough time to think.
But an AI had enough time to think. It made a conscious decision.
That decision was either preprogrammed, or computed from a preprogrammed formula, or learnt from data, or reinforced, ...
In any case, there are humans behind it who had plenty of time to think carefully about it.
And they made a conscious choice, or did they?
A possible solution is to refuse to make a choice at all.
🎲 In an impossible situation, let the AI flip a coin, so that whatever happens is due to luck.
But is it morally correct to refuse to solve a literal life-or-death problem if we have even a slight chance to solve it?
❓ Why is this question so hard? Well, morality is tricky. It feels immoral to even consider that there might be a predefined answer for whose life is worth more.
But an AI needs a formula in this case, even if it's "choose at random", and we'd better come up with one we can live with.
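As a purely hypothetical sketch (every option name, probability, and rule below is made up, which is exactly the point: someone has to make them up), the "formula" could look as blunt as this:

```python
import random

# Hypothetical outcome probabilities for each option (illustrative only).
options = {
    "brake":     {"p_passenger_dies": 0.40, "p_other_dies": 0.00},
    "hit_van":   {"p_passenger_dies": 0.10, "p_other_dies": 0.10},
    "hit_biker": {"p_passenger_dies": 0.05, "p_other_dies": 0.90},
}

def expected_deaths(outcome):
    return outcome["p_passenger_dies"] + outcome["p_other_dies"]

# Policy A: minimize expected deaths (a blunt utilitarian formula).
choice_a = min(options, key=lambda o: expected_deaths(options[o]))

# Policy B: refuse to decide and pick uniformly at random ("flip a coin").
choice_b = random.choice(list(options))

print(choice_a, choice_b)  # e.g. hit_van brake
```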
As usual, if you like this topic, reply in this thread or @ me at any time. Feel free to ❤️ like and 🔁 retweet if you think someone else could benefit from knowing this stuff.
There seems to be a recent surge in the "HTML is/isn't a programming language" discussion.
While there are a lot of honest misconceptions and also outright bullshit, I still think that, if we allow for some nuance, there is a meaningful discussion to be had about it.
My two cents 👇
First, to be absolutely clear: if a person is using this argument to make a judgment of character, to imply that someone is lesser because of their knowledge (or lack thereof) of HTML or any other skill of any nature, then that person is an asshole.
With that out of the way...
Why is this discussion meaningful at all?
If you are a newcomer to the dev world and you have some misconceptions about it, you can find yourself getting into compromises you're not yet ready for, or letting go of options you could take.
One of the very interesting questions that really got me thinking yesterday (they all did to an important degree) was from @Jeande_d regarding how to balance between learning foundational/transferable skills vs focusing on specific tools.
@Jeande_d My reasoning was that one should try hard not to learn too much of a tool, because any tool will eventually disappear. But tools are crucial to be productive, so one should still learn enough to really take advantage of the unique features of that tool.
@Jeande_d One way I think you can try to hit that sweet spot is to practice some sort of dropout regularization on your common tool set.
In every new project, substitute one of your usual tools for some convenient alternative. It will make you a bit less productive, to be sure...
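If you want to picture the analogy, here's a toy sketch (the tool names, slots, and the 30% rate are arbitrary examples, not recommendations):

```python
import random

# "Dropout" applied to your toolset instead of a neural network:
# for each slot, occasionally swap the default tool for an alternative.
usual_tools = {
    "editor": ["VS Code", "Vim"],
    "language": ["Python", "Go"],
    "vcs_host": ["GitHub", "GitLab"],
}

def pick_project_stack(tools, dropout_rate=0.3):
    """Build the stack for a new project, randomly replacing some defaults."""
    stack = {}
    for slot, (default, alternative) in tools.items():
        stack[slot] = alternative if random.random() < dropout_rate else default
    return stack

print(pick_project_stack(usual_tools))
```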
Today, I want to start discussing the different flavors of Machine Learning we can find.
This is a very high-level overview. In later threads, we'll dive deeper into each paradigm... 👇🧵
Last time we talked about how Machine Learning works.
Basically, it's about having some source of experience E for solving a given task T, that allows us to find a program P which is (hopefully) optimal w.r.t. some metric M.
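A toy sketch of that framing (the data and the brute-force "learning" below are only for illustration):

```python
# Task T: predict y from x.      Experience E: a few observed (x, y) pairs.
E = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1)]

# Metric M: mean squared error of a candidate program on the experience.
def M(program, data):
    return sum((program(x) - y) ** 2 for x, y in data) / len(data)

# "Learning": search for the program P (here, just a slope) that is best w.r.t. M.
candidates = [lambda x, w=w / 10: w * x for w in range(0, 41)]
P = min(candidates, key=lambda prog: M(prog, E))

print(P(5), M(P, E))  # the learned program's prediction for x=5, and its error
```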
According to the nature of that experience, we can define different formulations, or flavors, of the learning process.
A useful distinction is whether we have an explicit goal or desired output, which gives rise to the definitions of 1️⃣ Supervised and 2️⃣ Unsupervised Learning 👇
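A minimal sketch of the contrast, using toy data and scikit-learn as just one possible library (the specific models are arbitrary choices):

```python
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

X = [[1], [2], [3], [4], [5], [6]]

# 1️⃣ Supervised: the experience includes the desired output y for each input.
y = [2, 4, 6, 8, 10, 12]
model = LinearRegression().fit(X, y)
print(model.predict([[7]]))   # ~[14]

# 2️⃣ Unsupervised: only the inputs; the algorithm finds structure (here, 2 clusters).
clusters = KMeans(n_clusters=2, n_init=10).fit(X)
print(clusters.labels_)       # e.g. [0 0 0 1 1 1]
```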
A big problem with social and political sciences is that they *look* so intuitive and approachable that literally everyone has an opinion.
If I say "this is how quantum entanglement works" almost no one will dare to reply.
But if I say "this is how content moderation works"...
And the thing is, there is a huge amount of actual, solid science on almost any socially relevant topic, and most of us are as uninformed about it as we are about any dark corner of particle physics.
We just believe we can have an opinion, because the topic seems less objective.
So we are paying a huge disrespect to social scientists, who have to deal every day with the false notion that what they have been researching for years is something anyone can weigh in on after thinking about it for maybe five minutes. This is, of course, nonsense.