I've been witnessing the "battles" both around advancements in #AI and between the #AI silos since I entered the field more than 30 years ago.
The discourse has never been as sick and flawed as in the #DeepLearning and GPT-3 era.
1/
The intensity of the #AI battles varies depending on many factors.
They can boil over around concrete examples of algorithms.
They can even explode when specific people either endorse or criticize them. Everything is black or white; it's difficult to find gray tones in between.
2/
In what follows, I'll be adding some examples of what I've been observing in #AI discussions in recent years 👇
3/
Some of the arguments are highly biased, extremely controversial, and wildly speculative about what #AI "could be" or "will be doing" vs. what it "can actually do."
Other arguments are well-intentioned in principle, but ill-formulated or poorly justified.
Others are very constructive, but are not taken as such by their supposed opponents.
Some are full of hope that we will construct better #AI systems in the future.
5/
See, for example:
When GPT-3 fails or doesn't deliver the correct answer: "Oh, well, that one was not correct."
When it does: "OMG, brilliant! Look at this, it's incredible! Just amazingly awesome! That's #AGI or on the path to it! Long live GPT-3!"
6/
There is a hidden fear of criticizing anything related to #AI in general and GPT-3 in particular: "What would 'the cool guys' think of me if I did? Better not to say anything about an arguably evident truth, because I could risk no longer being considered 'cool.'"
7/
Or the sick propensity for rejecting/downplaying anything that resembles symbolic #AI.
The level of arrogance with which some folks express themselves is repulsive, elevating sub-symbolic #AI to the highest realms, or to the "only" path toward "truly" intelligent artefacts.
8/
I've seen huge inconsistencies between how people analyze what #AI, say GPT-3, actually cannot do and how they imagine what it "may be doing behind the scenes," its "attributed behaviours," or what "could supposedly be happening."
Magic-bullet powers.
Anthropomorphism, too.
9/
Saying that #DL or GPT-3 cannot reason or understand, or questioning whether it has any common sense or is the only "god," unleashes a witch hunt.
It reaches its lowest low when some folks start to question how many papers or lines of code the "intruder" has produced in that area or in #AI.
10/
Speaking of God and all related wording, if you like, here's a sincere request:
Please stop, *stop*, STOP, *STOP!* using narratives with the expression "Godfathers of #AI."
Thank you.
11/
Coming back to the inconsistencies between analyzing and imagining the cans and can'ts of #AI, see this, for instance:
"Of course it cannot do that! You are asking it to do something it was not trained for!"
However...
12/
... there is such a rush to claim that the #AI algorithm is "guessing/inventing" new paths (instead of travelling already existing ones based on statistics or random tries), or even to attribute some level of consciousness to hardware-powered low-level computations! 🤦🏻♀️
13/
Even worse: stating, e.g., that GPT-3 is doing "part of the brain's work," as if we were already on the verge of the greatest invention yet, a new creature!
(That's when I wish there were an icon for rolling eyes to the power of a gazillion, but there is none, so let me use 🤦🏻♀️ again.)
14/
Also disturbing and unacceptable: the uncontrollable instinct of some people to debate a comment and even, to some extent, troll the messenger (mansplaining included) when that person doesn't agree with their claims.
Those 👆 behaviors become even funnier when some folks who don't regularly engage in the conversation (or were put in their place before) now show up to upvote a comment against the messenger of the "uncomfortable" opinion.
(I imagine them rubbing their hands and smiling cynically.)
16/
Must scientific discourse go that way?
In my experience, social networks amplify attitudes that are normally rejected in real life and offer a stage for virtual battles that would find no place, or be repudiated, in a face-to-face setting.
17/
Some arguments keep polluting the discourse for some reason. That is unacceptable and unnecessary.
And it is visibly damaging the field, not to mention the dangers that the #hype brings to the table.
18/
The #AI community (everyone included, not only those who have access to the latest technology or are more versed in some programming language) must rethink the way forward.
That is an added #responsibility *we all* have. Let's not make the lows lower than they already are.
19/19
"18 [#deeplearning] algorithms ... presented at top-level research conferences ... Only 7 of them could be reproduced w/ reasonable effort ... 6 of them can often be outperformed w/ comparably simple heuristic methods."
In my opinion, the "I" in #AI has been vulgarly kidnapped and abused on many levels (including 'serious' research, sadly) for the sake of anticipated glory.
1/n
This has resulted in a tremendous disparity between how #intelligence in #machines is perceived and understood by the public, and what it truly means (or could mean) in current human-made 'intelligent' artefacts. #AI
2/
In a similar vein:
"By jumping over the long, slow process of cognitive development and instead focusing on solving specific tasks with high commercial or marketing value, we have robbed #AI of any ability to process information in an intelligent manner." 3/ blogs.scientificamerican.com/observations/a…