We have different ideas about how to “solve” for L5, and various teams are all taking shots at it. In recent years, two schools of thought have emerged about how to approach it.
🧵👇
It's framed as either:
1) a fundamental AI problem, which needs a new approach, or
2) a data problem, which can be solved by more data & more simulation
Some see the greatest challenge as developing the right AI approach.
Others believe that they already have the right approach, and therefore the challenge is acquiring more (and the right) data and doing more training.
Imo, there is some truth in both schools of thought.
All of this is further complicated by a lack of oversight and by 3rd parties' limited ability to robustly test these systems. In this application, standardized tests are easily gamed. We do not have the deep technical knowledge at the governing/policy level to support this technology 🐔🥚
Self-driving is "the mother of AI problems" bc we're not only trying to solve judgment under uncertainty (~AGI), but we're trying to do so in a safety-critical environment that requires real-time perception, sensing, & acting.
GPT-3 is a good point of reference. It's impressive, especially at first sight. Still, GPT-3 is nothing without human input: it's not going to write anything up on its own, and it needs clear instruction in its required input syntax. As a result, its applications are safe & contained.
GPT-3 can improve with a larger dataset, dramatically so. But no matter how much data it is fed, it's still just spitting that data back: the current AI approach does not allow it to solve new problems without human input & under uncertainty.
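To make that dependence on human input concrete, here is a minimal sketch of prompting a GPT-3-class model, assuming the era-appropriate OpenAI Python SDK (v0.x) with an API key in the environment; the engine name, prompt, and parameters are illustrative, not a claim about any particular deployment:

    # Minimal sketch: GPT-3-style completion via the v0.x OpenAI SDK (assumed).
    # Requires: `pip install openai==0.x` and OPENAI_API_KEY set in the environment.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # The model produces nothing on its own: a human must author the prompt,
    # and the output quality hinges on how the instruction is phrased.
    prompt = (
        "Summarize the following in one sentence:\n"
        "Level 5 autonomy requires judgment under uncertainty in a "
        "safety-critical, real-time environment.\n\nSummary:"
    )

    response = openai.Completion.create(
        engine="davinci",   # illustrative GPT-3-era engine name
        prompt=prompt,
        max_tokens=60,
        temperature=0.2,    # low temperature: stay close to the input
    )

    print(response.choices[0].text.strip())

Run without a human-written prompt, the script does nothing useful, which is exactly the point above: the initiative and the framing of the problem come from the person, not the model.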
How good is "good enough"?
And who gets to decide?
These are the questions we will have to answer in the future. Our safety & wellbeing are on the table, and we are dependent on the tech literacy of people in power to protect us from those seeking profits before all else.
The reason that this is so cringey (besides the obvious) is that due to #autonowashing and other factors, many people wrongly believe that L2 Advanced Driver Assistance Systems are capable of reliably detecting stationary objects. They are not.
Our mental models (the knowledge we build up about the systems we use before, during, and after using them, which aids our understanding & use) are often based on a number of assumptions, some true and some false, and with time & experience they come to better align with the system's intended model.
Autonowashing is a concern for us all, because its consequences have the potential to affect us all.
Those who want to see driving automation advance & succeed have an interest in speaking out against this issue, no matter which companies they root for.
Autonowashing is *not* limited to any one entity. This problem is rampant across the industry.
Tesla is discussed in relation to autonowashing in proportion to their behavior: they continue to do the most obvious autonowashing of any OEM.
Plastics are a problem. Dead animals are often found with plastic waste in their stomachs. Plastics also break down into invisible micro-particles which we then might consume, and emerging research on the effects of microplastics on our health doesn't look good.
.@EuroNCAP has announced its new Assisted Driving Grading system, which takes a holistic approach to system evaluation by including "Driver Engagement" in its rating, to "help consumers" & to "compare assistance performance @ the highest level."
This is a win for human-automation interaction/HMI researchers who have been working for decades to explain how important teaming is and the consequences of broken control loops.
This is a win against #autonowashing, and ultimately a big win for consumer transparency & safety!
Further, @EuroNCAP also released the results of their 2020 Assisted Driving Tests with the new grading system, giving ten different ADAS systems a rating.
As someone living through this pandemic who happens to study trust, human vigilance & behavior, I find it entirely unsurprising that we're unable to maintain adequate COVID-19 prevention measures. It's Psych 101, and it is why it's so important to have policies to help keep us in line.
It's the same with other safety-critical things (ex. vehicle automation); we need bounding boxes to keep us safe.
Human vigilance is like a muscle, except that with use (and as long as nothing bad happens), it weakens over time instead of getting stronger.
Therefore, what has been most disturbing is watching trust get knocked out of the very policies we need to protect us, via their politicization and constant reversal & reimplementation.