The @EU_Commission proposal for #AI Regulation is disappointing from a consumer's perspective.
Here are SOME of the reasons why:
1) The scope is so narrow that effective obligations cover only a limited number of high-risk AI applications that affect consumers. All other systems that could inflict serious economic harm are treated negligently. Strikingly, the proposal ignores most economic harms.
E.g. when AI-scoring systematically denies people access to services or excludes them from entire markets based on opaque personality analysis: standard.co.uk/tech/airbnb-so…
Instead of relying on independent auditors, companies are entrusted to check if their own systems comply with the regulation.
➡️ What could possibly go wrong if you entrust @Facebook et al. with checking their own opaque AI systems?🤷‍♂️
To be fair: there are some meagre labelling obligations that will have little practical effect: for emotion recognition systems, for AI interacting with people (so they know they are talking to a machine) and, last but not least, for labelling deep fakes (good luck with those).
In addition to labelling: in order for consumers to be able to exercise their rights, they need more information, e.g. on a system's risks, accuracy, robustness & the data on which "their" decision is based. The proposal envisages nothing of this sort.
Oh yes: the ban on #DarkPatterns that was still in last week's leak has also largely disappeared.
see here:👇
In contrast to what @ThierryBreton emphasised during the press conference:
This regulation will certainly not inspire consumer trust in AI, as the rules are patchy.
The @Europarl_EN and the @EUCouncil must now improve this proposal. END