Venture capitalists benefit from giving toxic and dangerous advice to startups. That's because the risk to the VC is bounded — the amount invested — whereas the costs to founders' and workers' health, to society, to democracy, and to the environment are unbounded.
Early-stage and seed VCs externalize more of these costs and hence have an even greater incentive to give harmful advice.
Of course, advice from venture capitalists isn't just advice. I can't think of another group with a bigger gap between power and accountability.
This is why the VC-as-thinkfluencer genre bothers me so deeply. They're not like the rest of the chattering class. Because of their messed up incentives, we should view their punditry with a few extra doses of skepticism.
Yes. But that's irrelevant.

What's interesting about this rhetorical question is the implication that outsiders' views are irrelevant. I found this attitude pervasive in Silicon Valley. It's an additional barrier to meaningful reform.

• • •

Thread by Arvind Narayanan (@random_walker)


More from @random_walker

16 Dec 20
Many online education platforms track and profit from student data, but universities can use their bargaining power to negotiate contracts with vendors and get much better privacy terms. That’s one of the findings in our new paper “Virtual Classrooms and Real Harms” arxiv.org/abs/2012.05867
We analyzed 23 popular tools used for online learning—their code, their privacy policies, and 50 “Data Protection Addenda” that they negotiated with universities. We studied 129 (!) U.S. state privacy laws that impact ed tech. We also surveyed 105 educators and 10 administrators.
A major reason for poor privacy by default is that the regulations around traditional educational records aren’t well suited to the ‘data exhaust’ of online communication, echoing arguments by @elanazeide & @HNissenbaum here: papers.ssrn.com/sol3/papers.cf…
15 Dec 20
Matt Salganik (@msalganik) and I are looking for a joint postdoc at Princeton to explore the fundamental limits of machine learning for prediction. We welcome quantitatively minded candidates from many fields including computer science and social science. [Thread]
This is an unusual position. Here's how it came to be. Last year I gave a talk on AI snake oil. Meanwhile Matt led a mass collaboration that showed the limits of machine learning for predicting kids’ life outcomes. Paper in PNAS: pnas.org/content/117/15…
We realized we were coming at the same fundamental question from different angles: given enough data and powerful algorithms, is everything predictable? So we teamed up and taught a course on limits to prediction. We're excited to share the course pre-read cs.princeton.edu/~arvindn/teach…
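The question at the heart of that course — is everything predictable given enough data? — can be made concrete with a toy simulation. This is my own illustration, not code from the paper or the course: when an outcome contains genuine noise, even a predictor that knows the true data-generating function cannot beat the noise floor.

```python
import numpy as np

# Illustrative sketch: y depends on x plus noise with standard deviation 1.
# No predictor, however powerful, can achieve mean-squared error below the
# variance of that irreducible noise (here ~1.0).
rng = np.random.default_rng(0)

n = 100_000
x = rng.normal(size=n)
y = 2 * x + rng.normal(size=n)  # true relationship plus irreducible noise

# The best possible predictor knows the true function E[y|x] = 2x exactly.
perfect_prediction = 2 * x
mse = np.mean((y - perfect_prediction) ** 2)
print(round(mse, 2))  # close to 1.0, the variance of the noise
```

Real-world outcomes like those in the Fragile Families study are of course far messier, but the same ceiling applies: some error is a property of the world, not of the model.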
1 Dec 20
Job alert: At Princeton we’re hiring emerging scholars who have Bachelor’s degrees for 2-year positions in tech policy. The program combines classes, 1-on-1 mentoring, and work experience with real-world impact. Apply by Jan 10. More details: citp.princeton.edu/programs/citp-… [Thread]
This is a brand new program. Emerging scholars are recruited as research specialists: staff, not students. This comes with a salary and full benefits. We see it as a stepping stone to different career paths: a PhD, government, nonprofits, or the private sector.
Who are we? At Princeton’s Center for Information Technology Policy (@PrincetonCITP), our goal is to understand and improve the relationship between technology and society. Our work combines expertise in technology, law, social sciences, and humanities. citp.princeton.edu
27 Nov 20
One of the most ironic predictions made about research is from mathematician G.H. Hardy’s famous "Apology", written in 1940. He defends pure mathematics (which he called real mathematics) on the grounds that even if it can't be used for good, at least it can't be used for harm.
Hardy offered number theory and relativity as his prime examples of useless — and therefore harmless — knowledge. Number theory later turned out to be a key ingredient of modern cryptography, and relativistic corrections are necessary for GPS to work properly. Cryptography and GPS both have commercial applications and not just military ones, which I suspect Hardy would have found even more detestable.
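To make the irony concrete, here is a toy illustration (my own, with tiny primes and no security value): RSA-style public-key encryption rests on exactly the elementary number theory — primes, modular exponentiation, Euler's totient — that Hardy considered remote from practical affairs.

```python
# Toy RSA sketch for illustration only; real keys use primes of ~1024+ bits.
p, q = 61, 53                 # two small primes
n = p * q                     # modulus, part of the public key
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, chosen coprime to phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

message = 42
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
decrypted = pow(ciphertext, d, n)  # decrypt: c^d mod n
print(decrypted)                   # recovers the original 42
```

The three-argument `pow` (modular exponentiation, and modular inverse with exponent -1 in Python 3.8+) is doing all the number-theoretic work here.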
Hardy’s examples weren’t merely unfortunate in retrospect. I think they undercut the core of his argument, which is a call to retreat to the realm of the mind, concerned only with the beauty of knowledge, freed from having to think about the real-world implications of one’s work.
25 Nov 20
When I was a student I thought professors were people who know lots of stuff. Then they went and made me a professor. After getting over my terror of not knowing stuff, I realized I had it all wrong. Here are a bunch of things that are far more important than how much you know.
- Knowing what you know and what you don’t know.
- Being good at teaching what you know.
- Being comfortable with saying you don’t know.
- Admitting when you realize you got something wrong.
- Effectively communicating uncertainty when necessary.
- Spotting BS.
- Recognizing others with expertise.
- Recognizing that there are different domains of expertise.
- Recognizing that there are different kinds of expertise including lived experience.
- Drawing from others’ expertise without deferring to authority.
21 Oct 20
Many face recognition datasets have been taken down due to ethical concerns. In ongoing research, we found that this doesn't achieve much. For example, the DukeMTMC dataset of videos was used in 135 papers published *after* it was taken down in June 2019. freedom-to-tinker.com/2020/10/21/fac…
A major challenge comes from derived datasets. In particular, the DukeMTMC-ReID dataset is a popular dataset used for person re-identification and continues to be free for anyone to download. 116 of 135 papers that use DukeMTMC after its takedown actually use a derived dataset.
This is a widespread problem. MS-Celeb was removed due to criticism but lives on through MS1M-IBUG, MS1M-ArcFace, MS1M-RetinaFace… all still public. The original dataset is also available via Academic Torrents. One popular dataset, LFW, has spawned at least 14 derivatives.