I've been witnessing the "battles," both around advancements in #AI and between the #AI silos, since I entered the field more than 30 years ago.

It has never been as sick and flawed as in the #DeepLearning and GPT-3 era.
1/
The intensity of the #AI battles varies depending on many factors.

They can boil up around concrete examples of algorithms.

They can even explode when particular people either second or criticize them. It's either white or black; it's difficult to find gray tones in between.
2/
In what follows, I'll add some examples of what I've been observing in #AI discussions in recent years 👇
3/
Some of the arguments are highly biased, extremely controversial, and tremendously speculative about what #AI "could be" or "will be doing" vs. what it "can actually do."

That ignites the #hype.
4/
Other arguments are well-intentioned in principle, but they are ill-formulated or poorly justified.

Others are very constructive, but not taken as such by the supposed opponents.

Some are full of hope that we will construct better #AI systems in the future.
5/
See, for example:

When GPT-3 fails or doesn't deliver the correct answer: "Oh, well, that one was not correct."

When it does: "OMG, brilliant! Look at this, it's incredible! Just amazingly awesome! That's #AGI or on the path to it! Long live GPT-3!"
6/
There is a hidden fear of criticizing anything related to #AI in general and GPT-3 in particular, because "what would 'the cooler guys' think of me if I did? Better not to say anything about an arguably evident truth, because I could risk not being considered 'cool' anymore."
7/
Or the sick propensity for rejecting/downplaying anything that resembles symbolic #AI.

The level of arrogance with which some folks express themselves is repulsive, elevating sub-symbolic #AI to the highest realms or to the "only" path toward "truly" intelligent artefacts.
8/
I've seen huge inconsistencies between how people analyze what #AI, say GPT-3, actually cannot do and how they imagine what it "may probably be doing behind the scenes," its "attributed behaviours," what "could supposedly be happening."

Magic bullet powers.

Anthropomorphism, too.
9/
Saying that #DL or GPT-3 cannot reason/understand, or questioning whether it has any common sense or is the only "god," unleashes a witch hunt.

It reaches its lowest point when some folks start to question how many papers or lines of code the "intruder" has produced in that area or in #AI.
10/
Speaking of God and all related wording, here is a sincere request:

Please stop, *stop*, STOP, *STOP!* using narratives with the expression "Godfathers of #AI."

Thank you.
11/
Coming back to the inconsistencies between analyzing vs. imagining the cans and can'ts of #AI, see this, for instance:

"Of course it cannot do that! You are asking it to do something it was not trained for!"

However...
12/
... there is such a rush to state that the #AI algorithm is "guessing/inventing" new paths (instead of travelling already existing ones based on statistics or random tries), or even to attribute some level of consciousness to the hardware-powered low-level computations! 🤦🏻‍♀️
13/
Even worse, stating, e.g., that GPT-3 is doing "part of the brain's work," as if we were already witnessing the imminent, greatest invention of a new creature!

(That's when I wish for a rolling-eyes emoji to the power of a gazillion, but there is none! Let me use 🤦🏻‍♀️ again.)
14/
Also disturbing and unacceptable: the uncontrollable instinct of some people to debate a comment and even troll the messenger to some extent (mansplaining included) when that person doesn't agree with their claims.

Also fatal: assuming one is anti-#AI.
15/
Those 👆 behaviours become even funnier when some folks who don't regularly engage in the conversation (or were put in their place before) show up just to upvote a comment against the messenger of the "uncomfortable" opinion.

(I imagine them rubbing their hands and smiling cynically.)
16/
Must scientific discourse go that way?

In my experience, social networks boost attitudes that are normally rejected in real life and offer a stage for virtual battles that would find no place, or be repudiated, in a face-to-face setting.
17/
The #AI discourse is no exception.

Some arguments keep polluting the discourse for some reason. That is unacceptable and unnecessary.

And it is visibly damaging the field, not to mention the dangers that the #hype brings to the table.
18/
The #AI community (all included, not only those who have access to the latest technology or are more versed in some programming language) must rethink the way forward.

That is an added #responsibility *we all* have. Let's not make the lows lower than they already are.
19/19

