Now that you’ve read the announcement of our amicus brief to the U.S. Supreme Court – if you haven’t, it’s right here ⬇️ – let’s break down the brief’s major arguments in a thread:
We filed this brief because most discussions about Gonzalez v. Google so far talk about “algorithms” in general terms, but there are many different kinds. We think it’s important that the Court understands these differences so that it can decide with specificity.
In support of neither party, we underscore that algorithms are used in all computer software and undergird all online experiences. There is no single “algorithm” for any platform, so any decision by the Court should not apply to algorithms in general.
Algorithms used for recommendations are especially pertinent to this case. We point out that they are core to every platform, and that well-designed algorithms are essential to a functional and enjoyable experience for users.
We explain how recommendation algorithms tend to work across different platforms: they are a solution for sorting through and ranking large volumes of third-party content. Platforms need them to stay usable, but they can design these algorithms better.
To get specific, we explain three main types of algorithms used for recommendation – content recommendation, content moderation & safety, and advertising & commerce – and the respective harms and benefits each can produce.
First, content recommendation is probably the most well-known use for algorithms on platforms. These recommendation systems direct platform users to third-party content.
Algorithms for content recommendation are usually designed to maximize value to the platform companies, which tends to mean maximizing engagement. But there are alternatives, such as optimizing for content quality.
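To make that concrete, here’s a minimal sketch (not any platform’s real code – the Post fields, scores, and the rank_feed function are made-up illustrations) of how a feed might be ordered by predicted engagement versus predicted quality:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # e.g., estimated chance of a click or like
    predicted_quality: float     # e.g., estimated informativeness

def rank_feed(posts: list[Post], objective: str = "engagement") -> list[Post]:
    """Return posts sorted by the chosen objective, highest first."""
    if objective == "engagement":
        key = lambda p: p.predicted_engagement
    elif objective == "quality":
        key = lambda p: p.predicted_quality
    else:
        raise ValueError(f"unknown objective: {objective}")
    return sorted(posts, key=key, reverse=True)

posts = [
    Post("a", predicted_engagement=0.9, predicted_quality=0.2),
    Post("b", predicted_engagement=0.4, predicted_quality=0.8),
]
print([p.post_id for p in rank_feed(posts)])             # ['a', 'b']
print([p.post_id for p in rank_feed(posts, "quality")])  # ['b', 'a']
```

The point of the sketch: the ranking machinery is the same either way; what changes the outcome is which objective the platform chooses to optimize.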
We know content recommendation creates a feedback loop when the algorithms maximize personalized engagement, which can be harmful when users engage with harmful content. Which is bad!
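Here’s a toy illustration of that loop (topic names and numbers are purely hypothetical): whichever topic the model currently favors gets recommended, the user engages with what was recommended, and the model favors that topic even more.

```python
def simulate_feedback_loop(rounds: int = 5) -> dict[str, float]:
    # Nearly identical starting affinities; the small edge will compound.
    interest = {"topic_a": 1.00, "topic_b": 1.01}
    for _ in range(rounds):
        top = max(interest, key=interest.get)  # recommend the current favorite
        interest[top] *= 1.5                   # engagement reinforces that topic
    return interest

print(simulate_feedback_loop())
# {'topic_a': 1.0, 'topic_b': 7.6696875} -- a tiny initial edge snowballs
```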
Platforms like Facebook have also acknowledged that this could be harmful. Mark Zuckerberg said so himself.
On the other hand, algorithms for content recommendation can benefit users too. Optimizing matches on a dating app or autofilling URLs on browsers are some examples that users might want.
Second, recommendation algorithms are used for content moderation & safety. Platforms use these algorithms to flag, remove, and re-rank third-party content likely to violate policies or laws.
Because platforms use algorithms to moderate content at scale, the process is imperfect and generates both false positives and false negatives. Platforms continuously tweak their algorithms to balance precision and recall.
When platforms prioritize recall over precision in the algorithms used for content moderation, they get more false positives (removing content that doesn’t actually violate policies or laws).
Most platforms strike the opposite balance, prioritizing precision over recall. This keeps more content visible to users, but it also means more false negatives stay up on the platforms and can cause harm.
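To illustrate the trade-off, here’s a toy sketch (the scores, labels, and thresholds are invented; in practice the scores would come from a trained classifier) of how moving a single moderation threshold trades false positives for false negatives:

```python
def moderate(scored_items: list[tuple[float, bool]], threshold: float) -> tuple[int, int]:
    """Flag items whose violation score meets the threshold, then count
    false positives (benign content removed) and false negatives
    (violating content left up)."""
    false_positives = false_negatives = 0
    for score, violates in scored_items:
        flagged = score >= threshold
        if flagged and not violates:
            false_positives += 1
        if not flagged and violates:
            false_negatives += 1
    return false_positives, false_negatives

items = [(0.95, True), (0.80, False), (0.60, True), (0.30, False), (0.20, True)]

# Low threshold ~ prioritizing recall: more benign content gets removed.
print(moderate(items, threshold=0.5))  # (1, 1)
# High threshold ~ prioritizing precision: more violating content stays up.
print(moderate(items, threshold=0.9))  # (0, 2)
```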
Third, platforms also use recommendation algorithms for advertising & commerce. Algorithms are used in “retargeting,” which serves personalized ads to users. These primarily benefit the companies by increasing revenue.
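As a rough illustration (the advertiser and product names are made up, and real retargeting systems are far more elaborate), retargeting can be as simple as matching ad inventory to products a user has already viewed:

```python
def retarget(browsing_history: list[str], ad_inventory: list[dict]) -> list[dict]:
    """Pick ads whose product matches something the user recently viewed."""
    viewed = set(browsing_history)
    return [ad for ad in ad_inventory if ad["product"] in viewed]

history = ["running shoes", "camping tent"]
ads = [
    {"advertiser": "ShoeCo", "product": "running shoes"},
    {"advertiser": "BookCo", "product": "novel"},
]
print(retarget(history, ads))
# [{'advertiser': 'ShoeCo', 'product': 'running shoes'}]
```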
Because platforms use different recommendation algorithms depending on the context, our brief explains these three types of recommendation algorithms in detail to illustrate to the Court the importance of deciding narrowly and specifically.
If this thread is helpful, please share it widely! Also check out a summary page if tweet threads are not your thing. integrityinstitute.org/amicus-brief-s…
@allriselaw authored our amicus brief, and our friends @aiTransparency co-signed it. Thanks also to @resetdottech for financially supporting this work.
Finally, help @Integrity_Inst get our message out! We cultivate a thriving community of integrity professionals who helped build the architecture of the social internet. You can support us so that we can do more expertise outreach like this brief. integrityinstitute.networkforgood.com/projects/15753…