The EU just published an extensive proposal to regulate AI (100+ pages). What does it say? What does it mean? Here, a short explainer on some key aspects of this proposal to regulate AI. /1 🧵 #AIandEdu #AI digital-strategy.ec.europa.eu/en/library/pro…
First, what is considered AI in this law proposal? Is linear regression AI? AI is defined in Title I & Annex I. My understanding here is that even a simple linear regression model (technically, a "statistical approach" to "supervised learning") would be considered AI. /2
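To make the point concrete, here is a minimal sketch of why even ordinary least squares counts as "supervised learning" in the technical sense: it fits parameters to labeled examples. The data and numbers below are invented for illustration, not taken from the proposal.

```python
# Ordinary least squares as supervised learning: fit parameters
# (slope, intercept) to labeled training examples (X, y).
import numpy as np

# Toy labeled data: targets generated by y = 2x + 1 (no noise)
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

# Add an intercept column and solve the least-squares problem
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
slope, intercept = coef
print(slope, intercept)  # recovers roughly 2.0 and 1.0
```

Under a definition broad enough to cover "statistical approaches" to supervised learning, this ten-line model would arguably fall within scope.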
The proposal makes a strong distinction among AI systems based on their application. In fact, it focuses particularly on high-risk systems. These systems would have the highest requirements for transparency, human oversight, data quality, etc.
But what are high-risk systems? /3
Here we find many examples. First we have remote biometric systems (e.g. public security cameras), whether they operate in real time or perform biometric identification on previously collected images or video. /4
High-risk systems also include infrastructure management systems. That is, AI systems used in traffic control or the supply of utilities (water, electricity, gas). /5
Systems used in education, for admissions or testing, are also considered high risk, since they may determine the educational and professional course of a person's life. /6
Similarly, AI systems used to recruit or manage workers are considered high risk. The list goes on.
Credit scoring systems, systems used to assign government benefits, systems used in law enforcement, & systems used to deal with migrants & border control, are all high risk. /7
Finally, high-risk systems include systems intended for the administration of justice and democratic processes.
But how would this regulation affect the development of high-risk AI systems? /8
A lot of it involves regulating inputs (e.g. data) and providing documentation & disclosures. E.g. systems should be developed and designed in a way that natural persons can oversee their functioning. Systems should provide documentation and disclaimers. Etc. /9
The law also proposes that high-risk systems should be trained on high quality data. High-risk systems also would be required to meet a certain level of accuracy, perform consistently throughout their life, and be robust to attacks. /10
These obligations also are expected to percolate up and down the AI value chain. Providers of software, data, or models, should cooperate--as appropriate--with AI providers and users. /11
High-risk systems will also need to include a CE label, to indicate their conformity with the regulation. Yet, in exceptional cases, systems could be deployed without a conformity assessment. /12
But the law also provides the opportunity to test and develop systems within regulatory sandboxes. Think of a "clinical trial" phase for the development of AI. Small-scale providers would have priority access to these sandboxes. /13
So what do I see as the immediate impact of such legislation? The first impact I see would be the creation of a large AI certification industry. The legislation requires AI systems to be deployed together with documentation, disclaimers, and assessments. /14
High-risk systems will need to be registered in a centralized EU database, and include post-market monitoring systems. This is a lot of work. So a likely outcome is for large companies to hire specialized firms, or develop in-house teams, to produce such documentation. /15
This will make future AI development more similar to pharma. A consequence may be that smaller players will need to sell to, or partner with, larger players to exit the sandbox and enter the market (similar to what happens in pharma today). /16
Overall, this is a clearly written & thoughtful document. At the moment, I am still gathering my thoughts on what it means. If you are interested in my views on AI regulation, you can read Chapter 7 of my latest book: How Humans Judge Machines /END judgingmachines.com
How big are the digital exports of the U.S. compared to Europe?
Today, trade flows not only through container ports, but through routers. In this new @NatureComms paper, we introduce a method to estimate trade in digital products by combining machine learning methods with corporate revenue data.
Digital product exports, such as purchasing a video streaming subscription from a foreign website, are notoriously hard to estimate because tech firms often serve foreign markets through local subsidiaries, so revenue is booked abroad. Moreover, existing service trade statistics lack the fine granularity needed to track digital products. /2
Two years ago, with @ViktorStojkoski, @philippmkoch, and @EvaColl8, we started an ambitious project with the goal of estimating digital product exports from corporate revenue data. /3
**New Paper**
Economic complexity methods are popular tools in industrial policy. Yet, despite their widespread adoption, these methods are sometimes misunderstood. In this new paper, I explain & explore the policy implications of economic complexity. /1
https://t.co/BQ6xLOA8cM buff.ly/3YjBFbh
First, why are economic complexity methods misunderstood?
A key part of the confusion comes from the predictive nature of these methods. The concept of relatedness, for instance, anticipates the probability that a country or region will succeed at an activity. /2
So the knee-jerk reaction that people get when they encounter these methods is to develop a strategy that recommends what the method predicts. These are the products/industries that would be easiest to achieve. But this line of thought is both wrong & incomplete. /3
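For readers unfamiliar with how relatedness is computed, here is a hedged sketch of one common formulation, relatedness "density": for each country and product, the share of the product's related products that the country already exports competitively. The matrices below are toy values I invented for illustration; they are not from the paper.

```python
# Relatedness density for country c and product p:
#   omega[c, p] = sum_p' M[c, p'] * phi[p, p'] / sum_p' phi[p, p']
# where M is the binary country-product export matrix and
# phi is a product-product proximity matrix.
import numpy as np

# Toy data: 2 countries x 3 products (illustrative only)
M = np.array([[1, 1, 0],    # country A exports products 0 and 1
              [0, 0, 1]])   # country B exports product 2
# Symmetric proximity matrix: products 0 and 1 are closely related
phi = np.array([[1.0, 0.6, 0.1],
                [0.6, 1.0, 0.2],
                [0.1, 0.2, 1.0]])

# Weighted share of related products each country already makes
density = (M @ phi) / phi.sum(axis=0)
print(np.round(density, 3))
```

In this toy example, country A gets a high density for product 1 (it already exports the closely related product 0) and a low density for product 2, which is exactly the kind of prediction the knee-jerk strategy would naively follow.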
What is intelligence?
And how is it different from problem solving?
These questions are central in our current discussion on AI & were debated passionately this week at the Santa Fe Institute’s conference on collective intelligence.
But what did we learn?
🧵 .. 1/N
First, a disclaimer. In this thread I will focus on one idea, not all the ideas discussed in the conference, and will set aside other aspects of intelligence (e.g. multidimensional intelligence), not because these are not important, but because I want to communicate one point.
My focus will be on a distinction between intelligence & problem solving, because in my experience, when people are pushed to define intelligence on the fly they often gravitate toward a problem-solving definition of intelligence.
AI hype is in full swing, to a large extent, because of language models.
But as a writer, I am not totally convinced about the “productivity boosts.”
You see, writing fulfills a dual purpose. On the one hand, we write to communicate. But on the other hand…. /1
we write to clarify our own ideas.
We write to learn in ways that cannot be accomplished by reading.
A big part of what motivates me to work on a book is knowing that at the end of the journey I'll be a different person.
You write not because you are an expert, but to become one.
Writing is sincere. It pushes you to encounter your own incompetence, repeatedly. And when your own words look stupid & your ideas malformed, you must either abandon them or refine them. In that process you learn.
One frequent criticism of economic complexity metrics is that some countries, such as Mexico or Slovakia, rank too high while others, like Australia and New Zealand, rank too low. But is this a problem with trade data or with the method?
Trade data does not perfectly reflect a country's capabilities because distance plays a role. Despite advances in communication & transportation technologies, it is still more convenient for a car manufacturer in Germany to work with suppliers in Czechia or Slovakia.