There are many answers to the so-called privacy paradox, but the simplest is the analogy to the environment. Why do people say they care but then not do much? Because they understand what many experts don't: individual action feels overwhelming and collective action is needed.
When given the opportunity for collective action on privacy, people are eager to take it. The latest example is the California Privacy Rights Act. brookings.edu/blog/techtank/…
How long would it take to make informed privacy decisions about all the apps we use? More time than we spend using those apps. This has been well known for a long time, yet some scholars insist that people's refusal to waste their time is a "paradox". npr.org/sections/allte…
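As a rough sanity check on that claim, here is a back-of-envelope calculation. Every input is an assumption chosen for illustration, not data from the thread; the point is the order of magnitude, which lands near McDonald and Cranor's well-known estimate of roughly 200 hours per year for reading privacy policies alone.

```python
# Back-of-envelope arithmetic for the claim above. Every input is an
# assumption chosen for illustration, not data from the thread.

apps = 80                 # assumed: apps/services a person must decide about
words_per_policy = 4_000  # assumed: length of a typical privacy policy
wpm = 250                 # assumed: adult reading speed, words per minute
third_parties = 5         # assumed: partner/SDK policies that also apply per app
revisions_per_year = 1    # assumed: policies change, so decisions get redone

minutes_per_app = words_per_policy / wpm * (1 + third_parties)
hours_per_year = apps * (1 + revisions_per_year) * minutes_per_app / 60

print(f"{minutes_per_app:.0f} minutes of reading per app")   # ~96 minutes
print(f"{hours_per_year:.0f} hours per year overall")        # ~256 hours
# For an app used a few minutes a month, ~96 minutes of diligence already
# exceeds total usage time -- before evaluating trackers or alternatives.
```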
For a detailed model of privacy regulation inspired by environmental regulation, see "Regulating Mass Surveillance as Privacy Pollution: Learning from Environmental Impact Statements" by @mfroomkin. repository.law.miami.edu/cgi/viewconten…
I call for a study of the privacy paradox paradox, which is the tendency of researchers to assert that there is a privacy paradox in spite of all the evidence to the contrary.
Sorry for the broken link to the privacy regulation paper. Here's the correct one: repository.law.miami.edu/cgi/viewconten…

(Link was broken because I've gotten into the habit of reflexively stripping URL parameters before sharing links to prevent tracking. Usually a good thing to do!)
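For the curious, a minimal sketch of that habit in Python, using only the standard library. The blocklist in the second variant is an assumption (a few common tracker parameter names; real lists are much longer), and the comments note the failure mode from this thread: some sites use query parameters to locate the content itself, so blanket stripping breaks the link.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_all_params(url: str) -> str:
    """Aggressive version of the habit above: drop the whole query string
    and fragment. This is what broke the repository link in this thread --
    some sites use query parameters to locate the content itself."""
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path, "", ""))

# Assumed blocklist for the gentler variant; real tracker lists are longer.
TRACKING_PREFIXES = ("utm_", "fbclid", "gclid")

def strip_known_trackers(url: str) -> str:
    """Safer version: drop only parameters that look like trackers,
    keeping any parameters the site may need."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if not k.startswith(TRACKING_PREFIXES)]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

print(strip_all_params("https://example.com/a?utm_source=tw&id=42"))
# -> https://example.com/a            (oops: 'id' may have been load-bearing)
print(strip_known_trackers("https://example.com/a?utm_source=tw&id=42"))
# -> https://example.com/a?id=42      (tracker gone, content parameter kept)
```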

More from @random_walker

2 Jan
Venture capitalists benefit from giving toxic and dangerous advice to startups. That's because the risk to the VC is bounded (the amount invested), whereas the costs to founders' and workers' health, to society, to democracy, and to the environment are unbounded.
Early-stage and seed VCs externalize more of these costs and hence have an even greater incentive to give harmful advice.
Of course, advice from venture capitalists isn't just advice. I can't think of another group with a bigger gap between power and accountability.
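To make the bounded-versus-unbounded point concrete, here is a toy expected-value simulation. All parameters are invented for illustration, not drawn from the thread or from real venture data: the VC's loss per bet is capped at the stake, the spillover cost is borne entirely by others (and capped here only to keep the sketch simple), and the strategy comes out profitable for the VC while destroying value overall.

```python
# Toy illustration of the asymmetry described above. All parameters are
# invented; this is a sketch of the argument, not a model of real VC returns.
import random

random.seed(0)
STAKE = 1.0        # the most the VC can lose on one startup
WIN_PROB = 0.05    # assumed chance the high-risk strategy pays off
WIN_MULT = 50      # assumed payoff multiple when it does
SPILLOVER = 10     # assumed cost dumped on others when it fails
N = 100_000

vc_profit = 0.0
external_cost = 0.0
for _ in range(N):
    if random.random() < WIN_PROB:
        vc_profit += WIN_MULT * STAKE - STAKE   # big win; no spillover, for simplicity
    else:
        vc_profit -= STAKE                      # loss capped at the stake
        external_cost += SPILLOVER * STAKE      # borne by workers, society, etc.

print(f"VC profit per bet:     {vc_profit / N:+.2f}")      # ~ +1.50
print(f"External cost per bet: {external_cost / N:+.2f}")  # ~ +9.50
# The advice is lucrative for the VC in expectation even though each bet
# destroys far more value than it creates, because the costs are external.
```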
16 Dec 20
Many online education platforms track and profit from student data, but universities are able to use their power to negotiate contracts with vendors to get much better privacy. That’s one of the findings in our new paper “Virtual Classrooms and Real Harms” arxiv.org/abs/2012.05867
We analyzed 23 popular tools used for online learning—their code, their privacy policies, and 50 “Data Protection Addenda” that they negotiated with universities. We studied 129 (!) U.S. state privacy laws that impact ed tech. We also surveyed 105 educators and 10 administrators.
A major reason for poor privacy by default is that the regulations around traditional educational records aren’t well suited to the ‘data exhaust’ of online communication, echoing arguments by @elanazeide & @HNissenbaum here: papers.ssrn.com/sol3/papers.cf…
15 Dec 20
Matt Salganik (@msalganik) and I are looking for a joint postdoc at Princeton to explore the fundamental limits of machine learning for prediction. We welcome quantitatively minded candidates from many fields including computer science and social science. [Thread]
This is an unusual position. Here's how it came to be. Last year I gave a talk on AI snake oil. Meanwhile Matt led a mass collaboration that showed the limits of machine learning for predicting kids’ life outcomes. Paper in PNAS: pnas.org/content/117/15…
We realized we were coming at the same fundamental question from different angles: given enough data and powerful algorithms, is everything predictable? So we teamed up and taught a course on limits to prediction. We're excited to share the course pre-read cs.princeton.edu/~arvindn/teach…
1 Dec 20
Job alert: At Princeton we’re hiring emerging scholars who have Bachelor’s degrees for 2-year positions in tech policy. The program combines classes, 1-on-1 mentoring, and work experience with real-world impact. Apply by Jan 10. More details: citp.princeton.edu/programs/citp-… [Thread]
This is a brand new program. Emerging scholars are recruited as research specialists: staff, not students. This comes with a salary and full benefits. We see it as a stepping stone to different career paths: a PhD, government, nonprofits, or the private sector.
Who are we? At Princeton’s Center for Information Technology Policy (@PrincetonCITP), our goal is to understand and improve the relationship between technology and society. Our work combines expertise in technology, law, social sciences, and humanities. citp.princeton.edu
27 Nov 20
One of the most ironic predictions made about research is from mathematician G.H. Hardy’s famous "Apology", written in 1940. He defends pure mathematics (which he called real mathematics) on the grounds that even if it can't be used for good, at least it can't be used for harm.
Number theory later turned out to be a key ingredient of modern cryptography, and relativity is necessary for GPS to work properly. Cryptography and GPS both have commercial applications and not just military ones, which I suspect Hardy would have found even more detestable.
Hardy’s examples weren’t merely unfortunate in retrospect. I think they undercut the core of his argument, which is a call to retreat to the realm of the mind, concerned only with the beauty of knowledge, freed from having to think about the real-world implications of one’s work.
25 Nov 20
When I was a student I thought professors are people who know lots of stuff. Then they went and made me a professor. After getting over my terror of not knowing stuff, I realized I had it all wrong. Here are a bunch of things that are far more important than how much you know.
- Knowing what you know and what you don’t know.
- Being good at teaching what you know.
- Being comfortable with saying you don’t know.
- Admitting when you realize you got something wrong.
- Effectively communicating uncertainty when necessary.
- Spotting BS.
- Recognizing others with expertise.
- Recognizing that there are different domains of expertise.
- Recognizing that there are different kinds of expertise including lived experience.
- Drawing from others’ expertise without deferring to authority.
