Then, @dartmouth argues that the "personal and sensitive" nature of sexual assault is not enough to warrant anonymity. But this contradicts many court precedents.
You can see that @dartmouth doesn't care about its own students.
A lot of @dartmouth's filing is about the status of anonymous students in a class action lawsuit. Turns out, the issue of class representatives is not normally argued at this stage, making it clear that @dartmouth has other motives.
And then the plaintiffs' lawyers point out a bunch of case law establishing that members of class action lawsuits can proceed under pseudonyms.
And what of the case law that @dartmouth cites? Well, it turns out they were relying on a case where people were *already* deanonymized (accidentally!), and a court held that allowing pseudonyms didn't matter because their names were already out!
But the biggest news of the filing is this: @dartmouth itself already knows the names of the anonymous plaintiffs. Its court filing is just about making these women's names public.
Turns out the plaintiffs had already worked out a way for @dartmouth to conduct discovery/investigation without publicly outing anyone. @dartmouth couldn't even be bothered to respond (instead, they argued in court it wasn't feasible).
What about Dartmouth's worst argument--that it was too confusing to say "Jane Doe 1", "Jane Doe 2", etc.?
It is an amazing time to work in the cognitive science of language. Here are a few remarkable recent results, many of which highlight ways in which the critiques of LLMs (especially from generative linguistics!) have totally fallen to pieces.
One claim was that LLMs can't be right because they learn "impossible languages." This was never really justified, and now @JulieKallini and collaborators show it's probably not true:
Another claim was that LLMs can't be on the right track because they "require" large data sets. Progress has been remarkable on learning with developmentally-plausible data sets. Amazing comparisons spearheaded by @a_stadt and colleagues:
Yes, ChatGPT is amazing and impressive. No, @OpenAI has not come close to addressing the problem of bias. Filters can be bypassed with simple tricks, and they only mask the underlying bias superficially.
Yeah, yeah, quantum mechanics and relativity are counterintuitive because we didn’t evolve to deal with stuff on those scales.
But more ordinary things like numbers, geometry, and procedures are also baffling. Here’s a little 🧵 on weird truths in math.
My favorite example – the Banach-Tarski paradox – shows how you can cut a sphere into a few pieces (well, sets) and then re-assemble the pieces into TWO IDENTICAL copies of the sphere you started with.
It sounds so implausible, people often think they've misunderstood. But it's true -- chop into a few "pieces" and reassemble into two spheres *identical* (equal size, equal shape) to the one you started with.
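For anyone who wants the precise statement, here's a sketch in standard notation (my paraphrase, not the formal proof; the usual formulation is for the solid unit ball, the five-piece count is Robinson's 1947 refinement, and the whole construction needs the axiom of choice):

% Banach-Tarski, formal sketch. The pieces A_i are non-measurable,
% which is why no notion of volume is actually contradicted.
Let $B = \{\, x \in \mathbb{R}^3 : \lVert x \rVert \le 1 \,\}$ be the closed unit ball.
Then there exist a partition $B = A_1 \sqcup A_2 \sqcup \cdots \sqcup A_5$ and
isometries $g_1, \dots, g_5$ of $\mathbb{R}^3$ such that
\[
  \bigsqcup_{i=1}^{5} g_i(A_i) \;=\; B \sqcup B',
\]
where $B'$ is a translate of $B$ disjoint from $B$. In other words, the five
rearranged pieces exactly tile two full-size copies of the original ball.

The resolution is that the $A_i$ are so wild they have no well-defined volume at all, so rearranging them isn't constrained by any conservation of volume.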
Everyone seems to think it's absurd that large language models (or something similar) could show anything like human intelligence and meaning. But it doesn’t seem so crazy to me. Here's a dissenting 🧵 from cognitive science.
The news, to start, is that this week software engineer @cajundiscordian was placed on leave for violating Google's confidentiality policies, after publicly claiming that a language model was "sentient": nytimes.com/2022/06/12/tec…
Lemoine has clarified that his claim about the model’s sentience was based on “religious beliefs.” Still, his conversation with the model is really worth reading: cajundiscordian.medium.com/is-lamda-senti…