In science and tech, "people believe that men are being fired for subtle comments or minor jokes, or just plain kindness or cordial behavior. This perception makes people very nervous. What I want to say today to all of the men in the room is that you have been misled."
"The truth is this: it takes an incredible--truly immense--amount of mistreatment before women complain. No woman in tech wants to file a complaint because they know the consequences of doing so. The most likely outcome---by far---is that they will be retaliated against."
"Complainants face retaliation in 75% of cases across all sectors, according to the Equal Employment Opportunity Commission."
"What this means is that when you hear about a case---like something in the news or at your institution--your priors should tell you that it’s very likely that some unusually bad behavior happened."
"Offenders almost universally apologize for a minor infraction... They lie by omission. These apologies for minor things mislead people into believing that the accused person is being unfairly persecuted for the minor misstep, and makes those reporting seem unreasonable ... "
"Minor apologies lead people to falsely believe that their own careers could be ended for a similarly minor infraction. This is not true. It is unfair to other men."
"It harms the climate for everyone. For men, who are afraid to interact with women colleagues and train women students. And for women, who miss out on those professional interactions."
"You don’t need to fear being attacked for minor comments and misunderstandings, because that’s not what’s happening. That is a myth that those who have genuinely abused people ... would like you to believe."
"You deserve to benefit from the innovations and ideas of the women at your institutions. We deserve to be interacted with as equal colleagues. We need each other to innovate and thrive."
It is an amazing time to work in the cognitive science of language. Here are a few remarkable recent results, many of which highlight ways in which the critiques of LLMs (especially from generative linguistics!) have totally fallen to pieces.
One claim was that LLMs can't be right because they learn "impossible languages." This was never really justified, and now @JulieKallini and collaborators show it's probably not true:
One claim was that LLMs can't be on the right track because they "require" large data sets. Progress has been remarkable on learning with developmentally plausible data sets. Amazing comparisons spearheaded by @a_stadt and colleagues:
Yes, ChatGPT is amazing and impressive. No, @OpenAI has not come close to addressing the problem of bias. Its filters can be bypassed with simple tricks, and the underlying bias is only superficially masked.
Yeah, yeah, quantum mechanics and relativity are counterintuitive because we didn’t evolve to deal with stuff on those scales.
But more ordinary things like numbers, geometry, and procedures are also baffling. Here’s a little 🧵 on weird truths in math.
My favorite example, the Banach-Tarski paradox, shows how you can cut a sphere into a few pieces (well, sets) and then reassemble the pieces into TWO IDENTICAL copies of the sphere you started with.
It sounds so implausible that people often think they've misunderstood. But it's true: chop the sphere into a few "pieces" and reassemble them into two spheres *identical* (equal size, equal shape) to the one you started with.
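For anyone who wants the precise claim, here is a minimal LaTeX sketch of the usual statement of the theorem (the wording and notation are mine, not from the thread):

\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem*{theorem}{Theorem}
\begin{document}
% A sketch of the standard statement. The pieces A_i are non-measurable
% sets, and the proof relies on the axiom of choice.
\begin{theorem}[Banach--Tarski, 1924]
Let $B \subseteq \mathbb{R}^3$ be a closed ball. Then $B$ can be partitioned
into finitely many disjoint sets, $B = A_1 \sqcup \dots \sqcup A_n$ (five
pieces suffice), for which there are isometries $g_1, \dots, g_n$ of
$\mathbb{R}^3$ such that, for some $1 \le k < n$, each of
$g_1 A_1 \sqcup \dots \sqcup g_k A_k$ and
$g_{k+1} A_{k+1} \sqcup \dots \sqcup g_n A_n$
is a ball congruent to $B$.
\end{theorem}
\end{document}

The resolution of the "paradox" is that the pieces have no well-defined volume, so no law of conservation of volume is actually violated.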
Everyone seems to think it's absurd that large language models (or something similar) could show anything like human intelligence and meaning. But it doesn’t seem so crazy to me. Here's a dissenting 🧵 from cognitive science.
The news, to start, is that this week software engineer @cajundiscordian was placed on leave for violating Google's confidentiality policies, after publicly claiming that a language model was "sentient" nytimes.com/2022/06/12/tec…
Lemoine has clarified that his claim about the model’s sentience was based on “religious beliefs.” Still, his conversation with the model is really worth reading: cajundiscordian.medium.com/is-lamda-senti…