Facebook has announced they will be deleting over a billion face recognition templates as they shut down their entire face recognition system. This is great news for Facebook users, and for the global movement pushing back on this technology. nytimes.com/2021/11/02/tec…
“Facebook getting out of the face recognition business is a pivotal moment in the growing national discomfort with this technology,” said EFF’s @Adam_D_Schwartz. “Corporate use of face surveillance is very dangerous to people’s privacy.” nytimes.com/2021/11/02/tec…
EFF has serious concerns about facial recognition technology. Here's how we can keep fighting back against the dangers it poses: eff.org/deeplinks/2021…
The Facebook (Meta) statement is here in full: “We’re shutting down the Face Recognition system on Facebook.” about.fb.com/news/2021/11/u…
While XR has the potential to create new forms of entertainment and enhance our lives, it also risks eroding our civil liberties in novel ways. We address some of these emerging issues on our XR issue page: eff.org/issues/xr
Last year we called for more privacy in VR while the technology is still emerging. We need to push back on development which is locked-down and designed to surveil its users. eff.org/deeplinks/2020…
The standards set by the EU’s Digital Services Act (DSA) will influence platforms’ operations far beyond the EU. @Europarl_EN must prioritize the risks to marginalized communities — both within its borders and beyond. eff.org/deeplinks/2021…
Some of our recommendations to the EU include:
-- Avoid disproportionate demands on smaller providers that would put people’s access to information in serious jeopardy. @EFF eff.org/document/dsa-h…
-- Don’t impose legally-mandated automated content moderation tools on online platforms, as this will lead to over-removals of legitimate speech.
-- Consider mandatory human rights impact assessments as the primary mechanism for assessing and mitigating systemic risks. @EFF
With changes in Brazil’s Fake News Bill in the pipeline, lawmakers must firmly reject the traceability mandate. It undermines users’ rights and key principles of privacy and security on end-to-end encrypted apps. @orlandosilva @DepBrunaFurlan 1/6 eff.org/deeplinks/2020…
@EFF As we pointed out: the traceability mandate moves companies away from the privacy-focused engineering and data minimization principles that should characterize secure messaging apps. 2/6
@EFF A far more proportionate alternative for investigations has already been presented: targeted, substantiated preservation warrants for metadata, provided it is not associated with the content of communications. 3/6 jota.info/opiniao-e-anal…
With changes to the Fake News Bill (PL de Fake News) under consideration, lawmakers must firmly reject traceability. It compromises rights and key principles of privacy and security in apps with end-to-end encryption. @orlandosilva @DepBrunaFurlan 1/ eff.org/pt-br/deeplink…
@EFF As we have already pointed out: the traceability mandate moves companies away from the privacy-focused engineering and data-minimization principles that should characterize secure messaging apps. 2/6
@EFF A far more proportionate alternative for investigations has already been presented: substantiated court orders, targeted at specific people, for the preservation of metadata, provided it is dissociated from the content of communications. 3/6 jota.info/opiniao-e-anal…
EFF joined a letter from civil society organizations urging the UN Human Rights Council @UN_HRC to denounce the human rights violations facilitated by NSO Group’s spyware, as highlighted by the #PegasusProject. accessnow.org/letter-un-hrc-…
The letter also urged the UN Human Rights Council @UN_HRC to act within its power “to investigate and prevent further violations linked to the sale, export, and use of Pegasus spyware and cases of targeted surveillance.”
EFF has raised the alarm for years about tech companies selling their surveillance and censorship products and services to repressive regimes. eff.org/deeplinks/2019…
Students are some of the most surveilled people in the U.S. Now artificial intelligence is being used to determine whether their behavior online indicates a threat to themselves or others, and to report them to their schools.
Yet AI-driven student surveillance software flags a high percentage of false positives, raising the question of whether the benefits outweigh the violation of students’ privacy.
As Hina Talib, associate professor at Children’s Hospital at Montefiore in New York, said, “Privacy is a developmental milestone for teens.”