1st paper with @shin10173 published!
‘AI & Antitrust: Reconciling Tensions between Competition Law & Cooperative AI Development’ (Yale Journal of Law & Tech).
We examine 14 cooperation strategies, identify tensions & suggest mitigation steps yjolt.org/ai-antitrust-r… 🧵
Cooperation between AI companies can help them create systems that are safe, secure & broadly beneficial. However, the goal of competition law is to protect competition between rival companies. Potential conflicts are significant but currently underexplored. The 14 forms of cooperation:
1 Assist Clause: ‘stop competing & start assisting any “value-aligned” company that gets close to AGI’. Agreement & implementation could restrict competition. Don't make agreements with rivals & implement ‘assist’ in ways that don't restrict output/innovation
2 Windfall Clause: redistribute profits above a high threshold. Could have disincentivising effect that restricts sales/output. Avoid entering into an ‘agreement to agree’, structure to minimise disincentives
3 Secure enclaves agreements: standardised isolated execution environments could improve security. Agreement could exclude competitors. Process and standard should be ‘fair, reasonable and non-discriminatory’ (FRAND)
4 Mutual monitoring between companies. Risk is could exchange commercially sensitive info. Use a 3rd party to collate & sanitise info before sharing; minimise info to what is strictly necessary. Mitigation for 5-9 & 11 all similar to 4
5 Red-teaming: uncover vulnerabilities in systems & orgs
6 Incident-sharing: share info on accidents & attacks.
7 Compute accounting: account for computing power used for a major project/product to share lessons learnt.
8 Communication: updates, point-of-contact, joint events, ‘heads up’ to build trust & point out problems
9 Seconding staff: from one lab to another for joint projects, trust-building
10 3rd party auditing: use trade association or safety standards body to carry out independent audits. Don’t use the auditor as an indirect info intermediary – it should be independent & could aggregate/anonymise info
11 Bias & safety bounties: recognition & compensation for reporting accidents etc could spot problems and fix them. Problem is: bounty hunter could disclose confidential info on *its* tech. Mitigation similar to 4
12 Standardised benchmarks: for fairness, safety, explainability, robustness, etc. Could exclude rivals from process or standard. Process & access to the final standard should be fair, reasonable and non-discriminatory (FRAND)
13 Standardised audit trails: traceable log of steps in system design, testing & operation. Can verify claims about system properties. Be FRAND
14 Publication & release norms: ensure ‘dual-use’ tech made public in responsible way. Be voluntary & not too restrictive
Our paper seeks to reconcile cooperative AI development with competition law. Our aim is to ensure the long-term sustainability of these important safeguards for the responsible and beneficial development of AI.
Read full paper (~26,000 words, ~140 pages) for more! #AI #antitrust