Gather round, everyone--with Epstein's arrest, we all get to take @LKrauss1's master class on misusing "scientific thinking" to cover up exploitation of women. #MeTooSTEM (Thread) nytimes.com/2019/07/08/nyr…
Scientific thinking is when you look at all the available evidence, including court rulings, testimony, and base rates (about 5% of sexual assault reports are false), and recognize the limitations of your own perceptions.
Are we supposed to believe that Krauss only believes in things he has directly seen? That empirical evidence doesn't include what witnesses say? That Krauss doesn't understand his own conflict of interest in assessing Epstein?
"As a scientist," Krauss is skeptical of everything except his own social acumen.
Emailing with @rebeccawatson, he doubled down: "Based on my direct experience with [Epstein], which is all I can base my assessment on [WHY???!], he is a thoughtful, kind, considerate man ... I honestly don’t know who was the victim in this case. probably everyone was a victim"
When the FBI raided Epstein's mansion in 2019, they found a "trove" of nude photographs of young girls and other evidence, and charged Epstein with sex trafficking, alleging he created "a vast network of underage victims" nymag.com/intelligencer/…
“'Epstein had sex with underage girls on a daily basis' and that his interest in minor girls was 'obvious' to those in his orbit. His code word for this abuse was 'massage,'” thedailybeast.com/jeffrey-epstei…
"jeffrey apparently paid for massages with sex… I believe him when he told me he had no idea the girls were underage" - professional skeptic @LKrauss1 in 2011
Biologist @TriversRobert, also funded by Epstein, was quoted by @Reuters in 2011: “By the time they're 14 or 15, they’re like grown women were 60 years ago, so I don’t see these acts as so heinous” (he has since apologized) reuters.com/article/us-eps…
Parroting the language of skepticism and reason appears measured, but it is often entirely self-serving.
Krauss responded with a tired Christopher Hitchens quote, but if he thought that quote was a convincing argument, he should have deployed it to stop @ASU from finding him guilty.
So @LKrauss1 is "scientific" in the same way flat earthers are, using skeptical posturing to sow doubt about the truth when it serves his interests.
It is an amazing time to work in the cognitive science of language. Here are a few remarkable recent results, many of which highlight ways in which the critiques of LLMs (especially from generative linguistics!) have totally fallen to pieces.
One claim was that LLMs can't be right because they learn "impossible languages." This was never really justified, and now @JulieKallini and collaborators show it's probably not true:
One claim was that LLMs can't be on the right track because they "require" large data sets. Progress has been remarkable on learning with developmentally-plausible data sets. Amazing comparisons spearheaded by @a_stadt and colleagues:
Yes, ChatGPT is amazing and impressive. No, @OpenAI has not come close to addressing the problem of bias. The filters appear superficial: they mask outputs rather than fix them, and can be bypassed with simple tricks.
Yeah, yeah, quantum mechanics and relativity are counterintuitive because we didn’t evolve to deal with stuff on those scales.
But more ordinary things like numbers, geometry, and procedures are also baffling. Here’s a little 🧵 on weird truths in math.
My favorite example – the Banach-Tarski paradox – shows how you can cut a sphere into a few pieces (well, sets) and then re-assemble the pieces into TWO IDENTICAL copies of the sphere you started with.
It sounds so implausible, people often think they've misunderstood. But it's true -- chop the sphere into a few "pieces" and reassemble them into two spheres *identical* (equal size, equal shape) to the one you started with.
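For the curious, here is a standard precise statement of the theorem (my formulation, not from the thread; the usual proof needs the axiom of choice, and the pieces are non-measurable, which is why no volume is "created"):

```latex
\textbf{Theorem (Banach--Tarski, 1924).}
Let $B$ be the closed unit ball in $\mathbb{R}^3$. Then $B$ can be
partitioned into finitely many disjoint sets
\[
  B = A_1 \cup A_2 \cup \cdots \cup A_n
\]
(five pieces suffice) together with isometries
$g_1, \dots, g_n$ of $\mathbb{R}^3$ such that
\[
  g_1 A_1 \cup g_2 A_2 = B
  \qquad \text{and} \qquad
  g_3 A_3 \cup \cdots \cup g_n A_n = B,
\]
i.e.\ the rearranged pieces form \emph{two} balls, each congruent
to the original.
```

There is no contradiction with conservation of volume because the $A_i$ are not Lebesgue-measurable, so "volume" is simply undefined for them.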
Everyone seems to think it's absurd that large language models (or something similar) could show anything like human intelligence and meaning. But it doesn’t seem so crazy to me. Here's a dissenting 🧵 from cognitive science.
The news, to start, is that this week software engineer @cajundiscordian was placed on leave for violating Google's confidentiality policies, after publicly claiming that a language model was "sentient" nytimes.com/2022/06/12/tec…
Lemoine has clarified that his claim about the model’s sentience was based on “religious beliefs.” Still, his conversation with the model is really worth reading: cajundiscordian.medium.com/is-lamda-senti…