Algorithmic mechanism design (e.g. ad-tech auctions such as Google's, multi-sided platforms such as Uber) contains an inherent contradiction:
- theory of preserving autonomy & rationality of participants
- reality of opaquely generating & exploiting information asymmetries @jakusg 1/
The point of an auction is to incentivize truthful disclosure of preferences & valuations (a toy sketch of this follows at the end of this thread); by contrast, Google, acting as an auction platform & multi-sided mechanism designer, is effectively profiling market actors so that it can extract more surplus for itself. 2/
In privately controlled & highly automated digital platforms, algorithmic market-like mechanisms simulate how a market might behave without necessarily including any of the features necessary to constitute a market, such as freedom to deal or knowable information rules. 3/
Instead of a commitment to individual autonomy, human rationality becomes a variable to be tuned as algorithmic mechanism designers search for desirable patterns of actions & rewards. 4/
Multi-sided platforms are not neutral intermediaries. Arguably, they are not even really markets. Instead, they are tools of world building: their designs & optimization strategies channel power, structure forms of decision making, and encode economic & legal relationships. 5/
Google's ad-tech auction mechanism ultimately bids up its own share
Uber uses the theory of driver autonomy to push workers further away from capital
Ultimate contradiction: superlative information gathering of the market ends up looking more like a control structure @jakusg 6/
The above notes & quotes are from @jakusg's #NeurIPS2021 talk at the Political Economy of Reinforcement Learning workshop and from his paper with @salome_viljoen_ & @LJamesMcGuigan, "Design choices: Mechanism design and platform capitalism" 7/
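Aside (mine, not from the talk or paper): a minimal sketch of the truthfulness point in 2/ above, using a toy second-price (Vickrey) auction in Python. The function names and numbers here are made up for illustration, and this says nothing about how Google's production auctions actually work; it only shows why, in the textbook mechanism, bidding your true valuation is weakly dominant.

```python
# Toy second-price (Vickrey) auction: the winner pays the second-highest bid,
# so with competing bids held fixed, bidding your true value never hurts you.

def second_price_auction(bids):
    """Return (index of winning bidder, price paid = second-highest bid)."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]

def my_utility(value, my_bid, other_bids):
    """Bidder 0's utility: value minus price if they win, otherwise 0."""
    winner, price = second_price_auction([my_bid] + other_bids)
    return value - price if winner == 0 else 0.0

value = 10.0           # bidder 0's true valuation (illustrative number)
others = [6.0, 8.5]    # competing bids, held fixed

truthful = my_utility(value, value, others)        # bid exactly the valuation
for shaded in [4.0, 7.0, 9.0, 12.0, 20.0]:         # under- and over-bidding
    assert my_utility(value, shaded, others) <= truthful

print("utility from truthful bidding:", truthful)  # 1.5 = 10.0 - 8.5
```

This guarantee holds against fixed, knowable rules; the critique in 2/ and 5/ is that a platform that also profiles bidders and tunes the mechanism erodes exactly that premise.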
How can we think about building systems not for individual end users, but for groups and communities? So much of computer science is limited by the assumption that there is an individual end user who should be the beneficiary of what we build.
- @marylgray keynote #NeurIPS2021 1/
ML gravitates to large-scale data, even though it has rarely had a robust account of where that data comes from & under what conditions, and is almost always deeply disconnected from the social relationships that produced it. 2/
Data is power. Because data has become so powerful, we must transfer the tools of data collection, aggregation, & sharing from engineers to the communities in society that carry the risks. 3/
22 months in, and I can still find more accurate info about Covid on Twitter than by relying on NYTimes articles by Monica Gandhi, New Yorker articles by Dhruv Khullar, etc., or on official signs from my govt (all emphasize hand-washing) 1/
I went from giving keynote talks warning about disinformation (2019) to helping @jeremyphoward make multiple YouTube videos contradicting the CDC & WHO (#masks4all, starting March 2020), even though I believe trustworthy institutions are essential for combating disinfo. 2/
Our institutions failed: saying masks don’t work; denying covid is airborne; failing to address #LongCovid; treating the disabled & elderly as expendable; claiming the pandemic is over; wishful thinking as policy; overpromising that vaccines would definitely end the pandemic; ignoring pre-symptomatic spread 3/
All the public health messaging I see is incorrectly based on droplet transmission (wash hands, keep 1.5m distance, clean surfaces, cover cough) with NO mention of masks, ventilation, air filters, #COVIDisAirborne 1/
My state is reopening borders tomorrow with NO masks. There are vaccine passports & a QR check-in app, but these are insufficient.
Vaccine passports don't account for breakthroughs, waning immunity, kids unvaxxed, & now Omicron. 2/
Check-in app is retrospective (e.g. after exposure), whereas ventilation & masks would be proactive (keep covid from spreading). Also, I suspect the check-in app is more effective for small outbreaks / when maintaining covid zero, and will be less useful once covid is widespread. 3/
There is widespread, well-documented ableism, racism, & unnecessary gatekeeping in STEM & medicine, and this is damaging our pandemic response in the West.
Pointing this out does not make you anti-science (I love science, but this is a huge problem). 1/
Disabled & chronically ill people have crucial expertise, and this expertise is being ignored 2/
We cannot fix public health until we reckon w/ how institutions have failed the public’s trust (harmful advice, contradictory rules, overconfidence, disbelief of suffering patients). Patronizing "shut up & trust the experts" is not going to address this 1/
Some folks tell me experts gave the best advice known at the time, that nobody knew, that the evidence changed.
I need to share a few receipts. In March 2020, I publicly advocated for ordinary people to wear masks, at a time when CDC & WHO said not to 2/
In March 2020, I said young & healthy people should NOT assume they were safe from potential long-term impacts of covid. I shared a historical review of flu pandemics leading to neurological problems. 3/
A problem w/ telling people "just trust the medical experts" is that they still need enough time & scientific literacy to discern whether to trust "experts" promoting mass infection of kids, droplet transmission, & claims that LongCovid is psychogenic, OR the experts who say the opposite 1/
(to be clear, do NOT trust the 1st group)
There seem to be ZERO professional consequences for repeatedly being wrong over the last 22 months. Some folks in the 1st group have prestigious credentials & platforms in major media outlets. The general public may not know their track records 2/
So the general public needs to invest a fair amount of time (which many do not have) just to know who to trust, what is true, & how to stay safe. At the same time, they will be condescended to & criticized for disagreeing w/ "experts" 3/