There are no unmoderated speech platforms. Email may be the closest, but even email at scale is complex enough that you have to rely on intermediaries, and those intermediaries have their own standards. (Case in point: the Princeton Election Emails Corpus shows no Trump campaign emails since Jan 6: electionemails2020.org/entity/59a162d…)
Of course, relatively unmoderated platforms like Parler are themselves subject to the standards of other intermediaries. It's platforms all the way down.
Some people want social media platforms to be neutral and apolitical. But a major value proposition of speech platforms is their recommender algos and moderation policies that amplify some voices and suppress others. They can’t make those decisions in a politically neutral way.
To state the obvious, I think it's a good thing that there are no unmoderated speech platforms. What's not good is that moderation and deplatforming decisions are in the hands of private companies with little transparency or accountability.
Professors at top universities are lottery winners, but rarely acknowledge the role of luck in their success. Be skeptical when they give you advice suggesting that the path they took is a repeatable one. If you aspire to an academic research career, have a backup plan.
Deciding to go into academia because all your professors said it worked out pretty well for them is also known as the statistical fallacy of sampling on the dependent variable.
I'm reviewing pre-doctoral, doctoral, post-doctoral, and faculty applications. I'm amazed by how much more these candidates have accomplished than I had at the corresponding stage of my career, and how many more qualified candidates there are compared to available positions.
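A minimal simulation makes the sampling bias concrete. The numbers are invented for illustration (no claim that 10% is the real rate of landing a faculty job):

```python
import random

random.seed(0)

# Illustrative assumption: each of 1,000 aspiring academics has a 10%
# chance of landing a faculty job. The rate is made up for this sketch.
outcomes = [random.random() < 0.10 for _ in range(1000)]

base_rate = sum(outcomes) / len(outcomes)
print(f"Success rate across everyone who tried: {base_rate:.0%}")  # ~10%

# Sampling on the dependent variable: if you only ask the professors you
# meet (i.e., the successes), everyone in your sample "made it", so the
# path looks far more repeatable than it is.
survivors = [o for o in outcomes if o]
print(f"Success rate among professors you can ask: {sum(survivors) / len(survivors):.0%}")  # 100%
```

The second print is tautologically 100%, which is exactly the problem: conditioning on success erases the information you actually need, the base rate.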
There are many answers to the so-called privacy paradox, but the simplest is the analogy to the environment. Why do people say they care but then not do much? Because they understand what many experts don't: individual action feels overwhelming, and collective action is needed.
When given the opportunity for collective action on privacy, people are eager to take it. The latest example is the California Privacy Rights Act. brookings.edu/blog/techtank/…
How long would it take to make informed privacy decisions about all the apps we use? More time than we spend using those apps. This has been well known for a long time, yet some scholars insist that people's refusal to waste their time is a "paradox". npr.org/sections/allte…
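A rough back-of-envelope sketch (in Python, though it's just arithmetic) shows why. Every input is an assumption chosen for illustration, not a measured figure:

```python
# Hypothetical inputs for estimating the time cost of "informed" privacy
# decisions. None of these are measurements; adjust them as you see fit.
apps = 60                # apps and services a person might use
policy_words = 4_000     # average privacy policy length, in words
reading_wpm = 250        # typical adult reading speed
rereads_per_year = 2     # policies change, so assume two readings a year

minutes_per_year = apps * policy_words * rereads_per_year / reading_wpm
print(f"Hours per year spent just reading policies: {minutes_per_year / 60:.0f}")
# ~32 hours/year under these assumptions, before comparing alternatives,
# auditing permission settings, or tracking policy updates.
```

Scale the inputs up to cover ordinary web browsing and the estimate grows by an order of magnitude, which is the point: reading everything is not a realistic baseline to measure people against.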
Venture capitalists benefit from giving toxic and dangerous advice to startups. That's because the risk to the VC is bounded (at most the amount invested), whereas the costs to founders' and workers' health, to society, to democracy, and to the environment are unbounded.
Early-stage and seed VCs externalize more of these costs and hence have an even greater incentive to give harmful advice.
Of course, advice from venture capitalists isn't just advice. I can't think of another group with a bigger gap between power and accountability.
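A toy expected-value sketch of the bounded-versus-unbounded asymmetry. All figures here are invented; the sketch only illustrates the shape of the incentive, not any real portfolio:

```python
# Hypothetical payoff structure for a reckless "go big or blow up" strategy.
invested = 1.0           # the VC's stake, normalized; their loss is capped here
p_blowup = 0.9           # assumed probability the gamble fails
outlier_return = 50.0    # assumed multiple if it succeeds

# The VC's downside is capped at the check size, so the bet looks great.
vc_ev = (1 - p_blowup) * outlier_return - p_blowup * invested
print(f"VC expected value per bet: {vc_ev:+.1f}x")           # +4.1x

# Founders, workers, and society absorb uncapped costs when it fails.
external_cost = 10.0     # assumed downside borne by everyone else (uncapped)
others_ev = (1 - p_blowup) * outlier_return - p_blowup * external_cost
print(f"Everyone else's expected value: {others_ev:+.1f}x")  # -4.0x
```

The same advice can be expected-value positive for the advisor and expected-value negative for the people who take it.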
Many online education platforms track and profit from student data, but universities can use their bargaining power to negotiate vendor contracts with much stronger privacy protections. That's one of the findings in our new paper, “Virtual Classrooms and Real Harms”: arxiv.org/abs/2012.05867
We analyzed 23 popular tools used for online learning—their code, their privacy policies, and 50 “Data Protection Addenda” that they negotiated with universities. We studied 129 (!) U.S. state privacy laws that impact ed tech. We also surveyed 105 educators and 10 administrators.
A major reason for poor privacy by default is that the regulations around traditional educational records aren’t well suited to the ‘data exhaust’ of online communication, echoing arguments by @elanazeide & @HNissenbaum here: papers.ssrn.com/sol3/papers.cf…
Matt Salganik (@msalganik) and I are looking for a joint postdoc at Princeton to explore the fundamental limits of machine learning for prediction. We welcome quantitatively minded candidates from many fields including computer science and social science. [Thread]
This is an unusual position. Here's how it came to be. Last year I gave a talk on AI snake oil. Meanwhile Matt led a mass collaboration that showed the limits of machine learning for predicting kids’ life outcomes. Paper in PNAS: pnas.org/content/117/15…
We realized we were coming at the same fundamental question from different angles: given enough data and powerful algorithms, is everything predictable? So we teamed up and taught a course on limits to prediction. We're excited to share the course pre-read cs.princeton.edu/~arvindn/teach…
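As a toy illustration of what a fundamental limit can look like (my sketch here, not the course material): if outcomes contain irreducible noise, then even a model with access to everything knowable hits a hard ceiling on predictive accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed data-generating process: outcome = knowable signal + irreducible noise.
signal = rng.normal(size=n)        # everything any model could possibly learn
noise = 2.0 * rng.normal(size=n)   # randomness no amount of data removes
outcome = signal + noise

# An oracle that predicts the signal exactly still can't beat the noise floor.
residual = outcome - signal
r2_ceiling = 1 - residual.var() / outcome.var()
print(f"Best achievable R^2: {r2_ceiling:.2f}")  # ~0.20, set by the noise, not the model
```

Whether real-world outcomes like life trajectories actually behave this way, and why, is exactly the kind of question the postdoc would study.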
Job alert: At Princeton we’re hiring emerging scholars who have Bachelor’s degrees for 2-year positions in tech policy. The program combines classes, 1-on-1 mentoring, and work experience with real-world impact. Apply by Jan 10. More details: citp.princeton.edu/programs/citp-… [Thread]
This is a brand new program. Emerging scholars are recruited as research specialists: staff, not students. This comes with a salary and full benefits. We see it as a stepping stone to different career paths: a PhD, government, nonprofits, or the private sector.
Who are we? At Princeton’s Center for Information Technology Policy (@PrincetonCITP), our goal is to understand and improve the relationship between technology and society. Our work combines expertise in technology, law, social sciences, and humanities. citp.princeton.edu