The Luca QR code Covid app (a for-profit system flogged to 🇩🇪 Länder) has been compromised (in a way the official Corona-Warn-App's QR system can't be): a website lets you check in any phone number to any venue you want, even regional prime ministers! 🧵 on the saga:
Hard to believe, but the Länder adopted Luca after huge lobbying from the hospitality sector, which convinced them that a hasty app with a 6-month free trial for venues & a big cost for health authorities would i) allow reopening, and ii) help the Länder win upcoming 🗳 by making the national gov look slow.
Luca's slick PR campaign, which made it known to health authorities mostly through aggressive celebrity-fronted marketing, meant that no-one discussed or scrutinised the technical details. Politicians have even admitted this, and DPAs accepted bare statements of 'encryption' as proof of security.
Despite having no control or say over the infrastructure, local governments were already shelling out money which, in practice, just subsidised Luca's attempt to build a platform. It had even partnered with ticketing companies! netzpolitik.org/2021/digitale-…
Luca was also careful never to mention the pandemic, coronavirus or COVID in its app-store descriptions, so as not to trigger extra checks (and in line with its platform dreams). Not that app stores should govern the world, but it betrays the firm's mentality.
@thsStadler led an analysis of how misleading the app's security claims were, even when its under-specified schematic (belatedly published) was given every benefit of the doubt: arxiv.org/abs/2103.11958. A letter from hundreds of IT experts was published here: digikoletter.github.io
If you speak 🇩🇪 or don't mind translating, there's an extensive and surreal thread on the Luca saga here, from MP @anked
There's a legal angle too. Unlike England, where the law states that those scanning the official QR code (which enables notification) need not manually provide their details, regional laws in Germany (e.g. Bavaria GVB 562 (23 June 2020) s 3, verkuendung-bayern.de/baymbl/2020-56…) do not permit this.
Selected thoughts follow on the leaked draft EU AI regulation. Not a summary, but hopefully useful. 🧵
The Art 4 blacklist of AI practices carries exemptions (except for general scoring) for state use for public security, including by contractors. Tech designed to 'manipulate' people 'to their detriment', to 'target their vulnerabilities', or to profile comms metadata indiscriminately remains very possible for states.
This is clearly designed in part to avoid, e.g., further upsetting France after the La Quadrature du Net case, where black-box algorithmic systems inside telcos were limited. Art 4(c) uses the same language as the CJEU. Clear exemptions for orgs acting 'on behalf of' the state head off CJEU scope creep.
Updates announced to the England and Wales presence-tracing (QR check-in) app functionality. (A short thread translating what these changes mean.)
The 1st point isn't actually a change to the app or the regulations. The regulations have always required everyone to scan in individually if they used the app, but allowed a 'lead member' to represent a group of up to six; this change would abolish the latter. More: michae.lv/law-of-qr/
Venue-history upload is controversial. The DPIA has not yet been updated to reflect the privacy-preserving method the press release claims is used. It may also make people wary of uploading venue data. Impossible to analyse without further information.
We outline current approaches to accessing enclosed data, and argue that GDPR transparency, access and portability rights can be a powerful bottom-up, adversarial data-access tool, if used well.
We outline the nature of those transparency provisions for those unfamiliar, and show how they can be used, elaborating on legal, ethical and methodological challenges — a bit like a mini-manual. A lot more could be said — but we hope this helps researchers make a good start.
Core to the DMA is the idea of "core platform services" and providers thereof, listed here and defined either within the reg or in previous regs. Big and powerful providers of these are in scope, basically.
The juicy parts of the DMA are Articles 5 and 6. These contain obligations for gatekeepers in relation to core services. Art 6 obligations can be further specified by the EC through implementing acts.
Today's Online Harms consultation response is perhaps the UK's first major divergence from a big principle of EU law not directly tied to Brexit: it explicitly proposes a measure that ignores the prohibition on requiring intermediaries like platforms to generally monitor content.
The e-Commerce Directive, art 15, prohibits member states from requiring internet intermediaries to actively look for illegal content; this is because the resulting awareness of that content would cost them their liability shield.
The Online Harms White Paper roughly kept to this, indicating that automatic detection systems were an approach platforms could use, but would not be required to. Consultation responses (unsurprisingly) agreed.