In a new paper, @jonathanmayer, Mihir Kshirsagar, and I investigate a question that has been challenging researchers and policymakers: what makes a dark pattern, well, "dark"? arxiv.org/pdf/2101.04843… [thread]
There's a growing academic literature on dark patterns that defines and describes types of dark patterns. There have also been govt. reports and legislation on dark patterns. We compiled and compared these, and found that dark patterns reflect many related but distinct concerns!
We argue that there is no single definition that can capture all of the dark patterns discourse. Instead, dark patterns are a family of related problems, much like (as @DanielSolove famously observed) privacy is a family of related problems.
We propose two themes that, while lacking the precision of a definition, broadly capture the field: modifying a user's decision space and manipulating information flow to a user.
We then connect the dark patterns literature to related scholarship in other academic disciplines, including nudges and behavioral economics, theories of manipulation in philosophy, and market manipulation in law.
Based on this related work, we suggest 4 normative lenses for examining dark patterns: individual welfare, collective welfare, regulatory compliance, and autonomy. To our fellow researchers: let's go beyond listing dark patterns, and ground our work in these normative concerns.
Doing so will help us build a case for legislators and regulators to step in. It will also help us respond to the frequent counterargument that dark patterns are nothing more than aggressive marketing.
We show how future dark patterns research could directly address these normative concerns by applying a range of well-established HCI measurement methods. We encourage the HCI community to take the lead in using these methods to examine why, exactly, dark patterns are dark.
Mike Bloomberg's presidential campaign website (mikebloomberg.com) periodically displays a popup claiming that people in various states are signing up as volunteers. How terrific!
But here’s a secret: that message is fabricated on the fly with some good ol’ JavaScript.
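Here is a minimal sketch of how such a popup can be fabricated entirely client-side. This is not the campaign's actual code; the state list, identifiers, and timings are all illustrative assumptions.

```js
// Sketch: a fake social-proof toast generated from a hard-coded list.
const STATES = ["Virginia", "Ohio", "Texas", "Colorado"];

function showFakeSignupToast() {
  // Pick a state at random; the "someone" never existed.
  const state = STATES[Math.floor(Math.random() * STATES.length)];
  const toast = document.createElement("div");
  toast.className = "signup-toast";
  toast.textContent = `Someone in ${state} signed up to volunteer`;
  document.body.appendChild(toast);
  // Hide the toast after a few seconds...
  setTimeout(() => toast.remove(), 4000);
}

// ...and repeat periodically, regardless of whether anyone signed up.
setInterval(showFakeSignupToast, 15000);
```

The point: the popup fires on a timer from a hard-coded list, with no connection to real signup data.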
The message ("Someone in Virginia signed up to volunteer") is a social proof nudge, an idea that can be traced back to the famous towel reuse experiment: researchers showed that appeals employing descriptive social norms got more hotel guests to reuse their towels. assets.csom.umn.edu/assets/118359.…
While such nudges were envisioned for socially beneficial causes such as environmental conservation, they are now used to push us into impulse purchases or guilt us into staying on Facebook, often with fabricated messages.
This design is a common dark pattern, so it is surprising that the CCPA modifications would recommend it. Unfortunately, opt-out dark patterns have become all too common, so here's a thread about them. 👇
Trick Questions is a classic opt-out dark pattern: two statements seek consent, but with opposite polarity. The first checkbox must be unticked to opt out, while the second must remain ticked. This pattern has several variants.
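A minimal sketch of the inverted logic, with hypothetical checkbox ids and wording (real forms vary):

```js
// Hypothetical Trick Questions opt-out form:
//   Checkbox #1: "Send me marketing emails"            -> must be UNTICKED to opt out
//   Checkbox #2: "Do not share my data with partners"  -> must STAY TICKED to opt out
function isFullyOptedOut(form) {
  const marketing = form.querySelector("#marketing-emails").checked;
  const noSharing = form.querySelector("#do-not-share").checked;
  // Opting out requires answering the two statements in opposite
  // directions; a user who skims and treats them identically fails one.
  return !marketing && noSharing;
}
```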
Another classic: the opt-out is out of sight, usually tucked away inside a fold or on some other page. How many users even expand the fold?
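One common way to implement this: render the opt-out inside a collapsed disclosure element, so it stays invisible until the user expands the fold. A sketch, assuming a hypothetical /privacy/opt-out path:

```js
// Sketch: hiding the opt-out link behind a collapsed fold.
// A <details> element renders collapsed unless its `open` attribute is
// set, so the link below is invisible until "More options" is expanded.
const fold = document.createElement("details");
const summary = document.createElement("summary");
summary.textContent = "More options";
const optOut = document.createElement("a");
optOut.href = "/privacy/opt-out"; // hypothetical path
optOut.textContent = "Do not sell my personal information";
fold.append(summary, optOut);
document.body.appendChild(fold);
```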
Sometimes the opt-out is in the last place you'd look.