I often find myself re-reading this short piece about what peer review was like in the 1860s. A reviewer was someone who helped improve a paper through a collegial, interactive process rather than rejecting it with a withering, anonymous comment. physicstoday.scitation.org/do/10.1063/PT.…
The great benefit of today's more formalized system is that it is more impartial and has helped make science less of an old boys' network. But it is also clear that something has been lost.
The problem with reducing bias by formalizing the review process is that it pushes the bias to other parts of the publication pipeline where it is less observable and harder to mitigate.
Better-connected authors can:
–Avail themselves of informal, constructive peer review *before* formal review
–Better jump through the hoops required by the ritualized (sometimes years-long) review process
–Better publicize the paper while under review, increasing its impact once published
I don't think we should go back to the 1860s model. But we should make peer review more constructive. We could also build an explicit discussion of the many types of reviewer bias into the review process itself instead of assuming that anonymity solves the problem.
When a machine learning system uses argmax to select outputs from a probability distribution — and most of them do — it's a clue that it might be biased. That's because argmax selects the "most probable" output, which may amplify tiny data biases into perfectly biased outputs.
Here's an exercise (with solution) I developed for my Fairness in ML course with @ang3linawang's help. It uses a toy model to show how bias amplification like that in the "Men also like shopping" paper can arise through the use of argmax alone! drive.google.com/file/d/1baK_c4…
This graph is the punchline. α and β are parameters that describe correlations in the input, and the graph shows correlations in the (multilabel) output. It should be terrifying from a scientific and engineering perspective even if there are no relevant fairness considerations!
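To make the mechanism concrete, here is a minimal sketch in Python. The 55/45 numbers and the two-label setup are my own illustration, not the model from the exercise: a perfectly calibrated model that reproduces a mild 55/45 association in the data will, once its outputs are selected with argmax, predict the associated label every time for one group and never for the other.

```python
# A toy sketch of bias amplification through argmax alone. The 55/45
# numbers are made up for illustration; this is not the exercise's model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical training distribution: the "cooking" label co-occurs
# with images of women 55% of the time and with men 45% of the time.
p_cooking = {"woman": 0.55, "man": 0.45}

for group, p in p_cooking.items():
    # What the data actually says: estimate the label rate from samples.
    data_rate = rng.binomial(1, p, size=n).mean()

    # A perfectly calibrated model outputs exactly [p, 1 - p].
    probs = np.array([p, 1 - p])  # [cooking, not cooking]

    # Selecting the single most probable label with argmax collapses the
    # distribution: 0.55 becomes "always cooking", 0.45 becomes "never".
    argmax_rate = float(np.argmax(probs) == 0)

    print(f"{group}: data rate ~ {data_rate:.2f}, argmax rate = {argmax_rate:.2f}")

# A 55/45 association in the data becomes a 100/0 association in the
# predictions: a tiny bias amplified into a perfect one.
```

Note that sampling from the predicted distribution instead of taking the argmax would preserve the 55/45 rates, which is one way to see that the amplification comes from the selection rule, not from the learned probabilities.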
A remarkable thread about messed-up corporate power hierarchies. It's worth highlighting something else the story illustrates: the standard way to "solve" online abuse and harassment is to experiment on the victims of abuse and harassment with no consent or transparency.
No surprise here, of course. We all know this is how tech platforms work. But should we take it for granted? Is there no alternative? No way to push back?
It's not A/B testing itself that's the problem. Indeed, in this instance, A/B testing *worked*. It allowed @mathcolorstrees to resist a terrible idea by someone vastly more powerful; something that would probably have made Twitter's abuse problem much worse.
This brilliant, far-too-polite article should be the go-to reference for why "follow the science" is utterly vacuous. The science of aerosol transmission was there all along. It could have stopped covid. But CDC/WHO didn't follow the science. Nor did scientists for the most part.
The party line among scientists and science communicators is that science "self corrects". Indeed it does, but on a glacial timescale with often disastrous policy consequences. Our refusal to admit this further undermines public trust in science.
See also @Zeynep's excoriation of public health agencies, including the comparison of their covid responses with the way 19th-century Londoners afraid of "miasma" redirected sewers into the Thames, spreading cholera even more nytimes.com/2021/05/07/opi…
The "tech" part of tech companies has gotten easier while understanding its social impacts has gotten much harder. This trend will only accelerate. Yet most tech co's have resisted viewing ethics as a core competency. Major changes are needed, whether from the inside or outside.
I love pithy analogies but this one breaks down quickly. The world will be better off without fossil fuels. But a world without computing technology is outside the Overton window. Like it or not, we must work to reform the tech industry.
35 million U.S. phone numbers are disconnected each year. Most get reassigned to new owners. In a new study, @kvn_l33 and I found 66% of recycled numbers we sampled were still tied to previous owners’ online accounts, possibly allowing account hijacking. recyclednumbers.cs.princeton.edu
It’s well known that number recycling is a nuisance, but we studied whether an adversary—even a relatively unskilled one—can exploit it to invade privacy and security. We present 8 attacks affecting both new and previous owners. We estimate that millions of people are affected.
Unfortunately, carriers impose few restrictions on the adversary's ability to browse available numbers and acquire vulnerable ones. After we disclosed the issue to them a few months ago, Verizon and T-Mobile improved their documentation but have not made the attack harder.
At Princeton's Center for Information Technology Policy (citp.princeton.edu) we're hiring our first-ever communications manager. Public engagement is a core goal for us, so we are looking for someone to work with us to maximize the public impact of our scholarship.
To explain how CITP differs from most academic groups, I'm happy to share a new case study of our (ongoing) research on dark patterns. It includes many lessons learned about conducting and communicating tech policy research effectively, and how CITP helps. cs.princeton.edu/~arvindn/publi…
The communications manager is a hybrid role. It includes familiar tasks such as managing a website and social media, but also close collaboration with researchers on tasks such as co-authoring an op-ed or figuring out the right analogy to explain a tricky concept.