#PSA #APOLOGY
Recently, a Princeton postdoc posted a thread about a paper he had published in PNAS with his PI and group, a paper that raised serious methodological and ethical concerns. Along with many others, I tweeted my views of these problems, and I did so in strong language. 1/
In doing so, I contributed to a massive Twitter pile-on against this work, one which this *junior* researcher could only have felt was directed at him personally. He has now deleted his Twitter account. I cannot believe that this is a coincidence. 2/
I deeply regret my part in the pile-on. Even when the criticism is not aimed at the researchers personally (and much of it was), having large numbers of often senior researchers direct harsh words at one's research, calling it unethical and comparing it to blatant racism, can only be a traumatic experience. 3/
This postdoc should not have been subjected to this, nor should anyone in such circumstances. We as a community must do better and must stop this behavior. I am very sorry for my part in it, and I pledge to stand up against such behavior in the future. 4/
I call on all senior researchers and scholars, and others with standing to do so, to do likewise. The only way this sort of mob pile-on behavior will stop is for enough people of good will to stand up and say, clearly, "STOP". Let us do so. 5/
Importantly, none of this vitiates the content or importance of the critique. But we must criticize in a manner that sheds light, not mere heat, and not in a way that increases combativeness and decreases comity in the research community. Absent clear proof, assume naiveté rather than intentional flouting of ethical norms, and seek to educate rather than ridicule or punish. 6/
Yes, it can be exhausting when it happens again and again, and the PIs certainly should do their due diligence. But perhaps there are better ways to spread the word. 7/
I certainly hope so, for all of our sakes. FIN/
This paper, entitled "On Classifying Facial Races with Partial Occlusions and Pose Variations," appeared in the proceedings of the 2017 @IEEEorg ICMLA conference in Cancun. researchgate.net/publication/32…
As stated in the abstract, the goal of the work is to apply a face classification model "trained on four major human races, Caucasian, Indian, Mongolian, and Negroid." Needless to say, these categories have no empirical or scientific basis.
In the body of the paper, we see a table characterizing the supposed "four major human races" in terms redolent of the height of 19th-century racist phrenology:
Regulation, arguably, should not be based on a detailed understanding of how AI systems work (an understanding regulators cannot have in any depth). However, AI systems need to be able to explain their decisions in terms that humans can understand if we are to consider them trustworthy. 1/
Not explanations involving specifics of the algorithms, weights in a neural network, etc., but explanations that engage people's theories of mind, explanations at the level of Dennett's intentional stance: in terms of values, goals, plans, and intentions. 2/
Previous computer systems, to be comprehensible and, yes, trustworthy, needed to present behavior that consistently fit the physical models people naturally infer (e.g., the "desktop" metaphor). Anyone old enough to remember programming VCRs? Nerdview is a failure of explanation. 3/