One of the most ironic predictions about research comes from mathematician G.H. Hardy’s famous "A Mathematician’s Apology," written in 1940. He defends pure mathematics (which he called "real" mathematics) on the grounds that even if it can’t be used for good, at least it can’t be used for harm.
Hardy cited number theory and relativity as examples of such useless, and therefore harmless, knowledge. Number theory later turned out to be a key ingredient of modern cryptography, and relativity is necessary for GPS to work properly. Cryptography and GPS both have commercial applications, not just military ones, which I suspect Hardy would have found even more detestable.
Hardy’s examples weren’t merely unfortunate in retrospect. I think they undercut the core of his argument, which is a call to retreat to the realm of the mind, concerned only with the beauty of knowledge, freed from having to think about the real-world implications of one’s work.
I don’t think Hardy’s value-free realm exists. Of course, he couldn’t have anticipated the specific applications of number theory that would later be discovered, but the comfort he took in the assumption that any such development was unlikely was a false one.
I often see techies drawn to tinkering and creating new technology, seeing it as a comfortable, value-neutral intellectual activity just because the applications aren’t immediately obvious. Again, this is false comfort. If pure mathematicians don’t have this luxury, no one does.
It’s easy to reject Hardy’s extreme as well as its opposite (that we should stop basic research because of the potential for harm). But what is the middle ground? What are the ethical responsibilities of people doing basic research (whether in math or any other field)?
I personally work on applied research, for which ethical guidelines tend to be relatively well established. But I know many researchers working on pure/basic/fundamental questions who are deeply struggling with these questions. Any pointers and recommendations are welcome.
When I was a student I thought professors were people who know lots of stuff. Then they went and made me a professor. After getting over my terror of not knowing stuff, I realized I had it all wrong. Here are a bunch of things that are far more important than how much you know.
- Knowing what you know and what you don’t know.
- Being good at teaching what you know.
- Being comfortable with saying you don’t know.
- Admitting when you realize you got something wrong.
- Effectively communicating uncertainty when necessary.
- Spotting BS.
- Recognizing others with expertise.
- Recognizing that there are different domains of expertise.
- Recognizing that there are different kinds of expertise including lived experience.
- Drawing from others’ expertise without deferring to authority.
Many face recognition datasets have been taken down due to ethical concerns. In ongoing research, we found that this doesn't achieve much. For example, the DukeMTMC dataset of videos was used in 135 papers published *after* it was taken down in June 2019. freedom-to-tinker.com/2020/10/21/fac…
A major challenge comes from derived datasets. In particular, the DukeMTMC-ReID dataset is a popular dataset used for person re-identification and continues to be free for anyone to download. 116 of 135 papers that use DukeMTMC after its takedown actually use a derived dataset.
This is a widespread problem. MS-Celeb was removed due to criticism but lives on through MS1M-IBUG, MS1M-ArcFace, MS1M-RetinaFace… all still public. The original dataset is also available via Academic Torrents. One popular dataset, LFW, has spawned at least 14 derivatives.
At Princeton CITP, we were concerned by media reports that political candidates use psychological tricks in their emails to get supporters to donate. So we collected 250,000 emails from 3,000 senders from the 2020 U.S. election cycle. Here’s what we found. electionemails2020.org
Let me back up: this is a study by @aruneshmathur, Angelina Wang, @c_schwemmer, Maia Hamin, @b_m_stewart, and me. We started last year by buying a list of all candidates running for federal and state elections in the U.S. We also acquired lists of PACs and other orgs.
Next, the key bit for data collection: we created a bot that was able to find these candidates’ websites through search engines, look for email sign up forms, fill them in, and collect the emails in a giant inbox. We verified manually that each step works pretty accurately.
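The form-detection step above can be illustrated with a minimal sketch. This is not the study’s actual crawler; the function and variable names (`find_signup_forms`, `SignupFormFinder`) are hypothetical, and the heuristic (a `<form>` containing an email-typed or email-named input) is an assumption about how such a bot might work.

```python
# Hypothetical sketch of detecting email sign-up forms in a page's HTML.
# Heuristic (assumed, not from the study): a <form> counts as a sign-up
# form if it contains an <input> of type="email" or with "email" in its name.
from html.parser import HTMLParser


class SignupFormFinder(HTMLParser):
    """Scan HTML and record the action URLs of forms with an email field."""

    def __init__(self):
        super().__init__()
        self.in_form = False
        self.current_action = ""
        self.current_has_email = False
        self.signup_actions = []  # action attrs of candidate sign-up forms

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.in_form = True
            self.current_action = attrs.get("action", "")
            self.current_has_email = False
        elif tag == "input" and self.in_form:
            name = (attrs.get("name") or "").lower()
            if attrs.get("type") == "email" or "email" in name:
                self.current_has_email = True

    def handle_endtag(self, tag):
        if tag == "form" and self.in_form:
            if self.current_has_email:
                self.signup_actions.append(self.current_action)
            self.in_form = False


def find_signup_forms(html: str) -> list:
    """Return the action URLs of forms that look like email sign-ups."""
    finder = SignupFormFinder()
    finder.feed(html)
    return finder.signup_actions


page = """
<form action="/search"><input type="text" name="q"></form>
<form action="/subscribe"><input type="email" name="email"></form>
"""
print(find_signup_forms(page))  # -> ['/subscribe']
```

A real pipeline would add a search-engine lookup to find each candidate’s site, URL resolution for relative form actions, and an automated submission step; the sketch only shows the detection heuristic that the manual verification step would then check.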
The intellectual superiority of depth over breadth is a pervasive fiction in academia that sustains the culture of fetishizing specialization. I tried to fight this culture early in my career, but realized it was like punching a bag of sand.
An amazing benefit of my privilege is being able to say "I didn't understand that. Could you explain it again?" as many times as necessary without having to worry that people will think I'm stupid.
If you didn't understand something I said, please ask me as many times as necessary. In fact, I'm delighted when this happens. As a professor, knowing when something I explained didn't make sense is extremely valuable feedback that helps me do better.
I'm a tenured computer science professor who looks like what many people expect a tenured computer science professor to look like. The follow up I get after someone asks "So what do you do?" is nearly always "Oh, you must be really smart."
By the same token, it should be a sobering moment for computer science academia. With few exceptions, work that tries to bring accountability to big tech companies is relegated to the fringes of our discipline. CS these days cozies up to power far more often than it speaks truth to it.
There's a lot of concern today about industry funding of specific researchers. That's important, but a 100x deeper problem is that the tech industry warps CS academia's concept of what is even considered a legitimate research topic. This influence is both pervasive and invisible.
Most of the industry influence happens without any money changing hands. Academia's dependence on industry data is one way. Another is that most grad students go on to industry jobs and naturally prefer to work on topics that increase their employability.