In my dream version of the scientific enterprise, everyone who works on X would be required to spend some percentage of their time learning and contributing to the philosophy of X. There is too much focus on the "how" and too little focus on the "why" and the "what are we even".
Junior scholars entering a field naturally tend to ask critical questions because they haven't yet been inculcated into the field's dogmas. But the academic treadmill leaves them little time to voice concerns, and their lack of status means that even when they do, they aren't taken seriously.
One possible intervention is for journals and conferences to devote some fraction of their pages / slots to self-critical inquiry, and for dissertation committees to make clear that they will value this type of scholarship just as much as "normal" science.
In other words, we don't even need to create new incentives to encourage scholars to grapple with meta-questions. We just need to remove the disincentives that are currently in place.

• • •

More from @random_walker

1 Jul
We shouldn't shrug off dark patterns as simply sleazy online sales tactics, or unethical nudges, or business-as-usual growth hacking. Dark patterns are distinct and powerful because they combine all three in an effort to extract your money, attention, and data. queue.acm.org/detail.cfm?id=…
That's from a 2020 paper by @aruneshmathur, @ineffablicious, Mihir Kshirsagar, and me.

PDF version: dl.acm.org/ft_gateway.cfm…
At first growth hacking was about… growth, which was merely annoying for the rest of us. But once a platform has a few billion users, it must "monetize those eyeballs". So growth hackers turned to dark patterns, weaponizing nudge research and A/B testing. queue.acm.org/detail.cfm?id=…
30 Jun
I study the risks of digital tech, especially privacy. So people are surprised to hear that I'm optimistic about tech's long-term societal impact. But without optimism and the belief that you can create change with research and advocacy, you burn out too soon in this line of work.
9 years ago I was on the academic job market. The majority of professors I met asked why I chose to work on privacy since—as we all know—privacy is dead because of the Internet and it's pointless to fight it. (Computer scientists tend to be technological determinists, who knew?!)
At first I didn't expect that "why does your research field exist?" would be a serious, recurring question. Gradually I came up with a pitch that at least got interviewers to briefly suspend privacy skepticism and hear about my research. (That pitch is a story for another day.)
22 Jun
The news headlines *undersold* this paper. Widely-used machine learning tool for sepsis prediction found to have an AUC of 0.63 (!), adds little to existing clinical practice. Misses two thirds of sepsis cases, overwhelms physicians with false alerts. jamanetwork.com/journals/jamai…
This adds to the growing body of evidence that machine learning isn't good at true prediction tasks, as opposed to "prediction" tasks like image classification that are actually perception tasks.
Worse, in prediction tasks it's extremely easy to be overoptimistic about accuracy through careless problem framing. The sepsis paper found that the measured AUC is highly sensitive to how early the prediction is made: the model can be accurate or clinically useful, but not both.
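To make the alert-fatigue arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Only the one-in-three sensitivity comes from the tweet ("misses two thirds of sepsis cases"); the prevalence and specificity below are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope: why modest sensitivity plus even a small
# false-positive rate buries clinicians in false alerts.
prevalence = 0.07    # assumed share of admissions that develop sepsis
sensitivity = 0.33   # from the tweet: two thirds of cases are missed
specificity = 0.85   # assumed; even a flattering value hurts

true_pos = prevalence * sensitivity               # real cases flagged
false_pos = (1 - prevalence) * (1 - specificity)  # healthy patients flagged
ppv = true_pos / (true_pos + false_pos)           # precision of an alert

print(f"Alerts per 1000 admissions:  {1000 * (true_pos + false_pos):.0f}")
print(f"  of which true sepsis:      {1000 * true_pos:.0f}")
print(f"Precision of an alert (PPV): {ppv:.0%}")
# ~163 alerts per 1000 admissions, only ~23 of them real (PPV ~14%):
# roughly six false alarms per case caught, while 2 in 3 cases slip by.
```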
22 Jun
Academia rewards clever papers over real world impact. That makes it less useful. But it also perpetuates privilege—those with less experience of injustice find it easier to play the game, i.e. work on abstruse problems while ignoring topics that address pressing needs.
I have no beef with fundamental research (which isn't motivated by applications). But most scholarship that *claims* to be motivated by societal needs happens with little awareness of what those needs actually are, and no attempt to step outside academia to actually make change.
Like many of academia's problems, this one is structural. Telling individual scholars to do better is unlikely to work when the incentives are all messed up. Here are some thoughts on what might work. I'd love to hear more.
21 Jun
A student who's starting grad school asked me which topics in my field are under-explored. An important question! But not all researchers in a community will agree on the answers. If they did, those topics wouldn't stay under-explored for long. So how to pick problems? [Thread]
It's helpful for researchers to develop a "taste" for problems based on their specific skills, preferences, and hypotheses about systemic biases in the research community that create blind spots. I shared two of my hypotheses with the student, but we must each develop our own.
Hypothesis 1: interdisciplinary topics are under-explored because they require researchers to leave their comfort zones. But collaboration is a learnable skill, so if one can get better at it and find suitable collaborators, rich and important research directions await.
20 Jun
I often find myself re-reading this short piece about what peer review was like in the 1860s. A reviewer was someone who helped improve a paper through a collegial, interactive process rather than rejecting it with a withering, anonymous comment. physicstoday.scitation.org/do/10.1063/PT.…
The great benefit of the more formalized system we have today is that it is more impartial, and has helped turn science into less of an old boys' network. But it is also clear that something has been lost.
The problem with reducing bias by formalizing the review process is that it pushes the bias to other parts of the publication pipeline where it is less observable and harder to mitigate.
