Shlomo Engelson Argamon
Associate Provost for AI, @WeAreTouro AI, computational stylistics, forensic linguistics, digital humanities, data science
Feb 6 10 tweets 2 min read
A brief excursion into reading the Bible in Hebrew, from Ecclesiastes 1:2.

הֲבֵל הֲבָלִים אָמַר קֹהֶלֶת הֲבֵל הֲבָלִים הַכֹּל הָבֶל.

KJV: Vanity of vanities, saith the Preacher, vanity of vanities; all is vanity.

Let's focus on the last bit, "all is vanity" = הַכֹּל הָבֶל.

1/ First, the word (הבל) translated by KJV as "vanity" literally means "a puff of breath". Alter excellently translates the verse as:

Merest breath, said Qohelet, merest breath. All is breath.

That is, the implication is ephemerality, not necessarily meaninglessness.

But wait.
2/
Apr 27, 2022 8 tweets 3 min read
#PSA #APOLOGY
Recently, a Princeton postdoc posted a thread about a paper he had published with his PI and group in PNAS, which raised serious methodological and ethical concerns. With many others, I tweeted my views of these problems, and did so with strong language.
1/

#ML In doing so, I contributed to a massive Twitter pile-on against this work, which this *junior* researcher could only have felt as directed against himself. He has now deleted his Twitter account. I cannot believe that this is a coincidence.
2/
Dec 16, 2019 51 tweets 12 min read
Taking @vgr’s challenge:

1 like = 1 opinion (actually, fact :) on #MachineLearning and the nature of knowledge. @vgr (not to mention the hype machine...)
May 30, 2019 10 tweets 3 min read
This paper, entitled "On Classifying Facial Races with Partial Occlusions and Pose Variations" appeared in the proceedings of the 2017 @IEEEorg ICMLA conference, in Cancun.
researchgate.net/publication/32…

As stated in the abstract, the goal of the work is to apply a face classification model "trained on four major human races, Caucasian, Indian, Mongolian, and Negroid." Needless to say, these categories have no empirical or scientific basis.
Dec 17, 2018 12 tweets 3 min read
Regulations, arguably, should not be based on detailed understanding of how AI systems work (which the regulators can't have in any depth). However, AI systems need to be able to explain decisions in terms that humans can understand, if we are to consider them trustworthy. 1/

Not explanations involving specifics of the algorithms, weights in a neural network, etc., but explanations that engage people's theories of mind, explanations at the level of Dennett's intentional stance - in terms of values, goals, plans, and intentions. 2/