Don't do this - human rights defenders already have their images and videos undermined - don't contribute to it by using AI-generated imagery in this way. Yes, there will be ways to use AI-generated imagery for advocacy - this is not it.
Activists we work with at @witnessorg constantly defend the credibility of pics/videos.
In our Prepare, Don't Panic work we talked to them (+ journalists) about AI-generated media.
A key fear? Their evidence being undermined by claims that anything can be falsified.
What did they want? - links pinned
It's not that there aren't ways to use AI-generated images/videos - we explored some in a recent workshop in Nairobi - but there's also a fundamental reality: #humanrights defenders can't be naive or compete with abusers in using AI imagery. We have more to lose.
A few things human rights defenders DID ask for: 1. Center people globally facing risk to decide how to handle harms/solutions in AI-generated media
2. Responsibility on models, tools and distributors, not just hoping individuals spot fakes 3. Need access to detection wired.com/story/opinion-…
.@witnessorg Oversight Board submission on Trump suspension notes 1) All comments are made in light of the fact that the power to push #Facebook on policy change, product/technical infrastructure change, global resourcing, and response to extralegal political pressure globally is not granted to the Board 👇
.@witnessorg Oversight Board submission: 2) Public figures need greater scrutiny on incitement to violence and hate (and misinfo/disinfo), not less. Account suspension was correct. 👇 witness.org/witness-facebo…
2. Quick intro for those who don't know @witnessorg: we work on helping anyone, anywhere use video and tech for human rights. We're focused on making you more effective, ethical and safe if you do. And that means also keeping an eye out for emerging tech threats witness.org
3. The big emerging tech threats we're concerned about now are: #AI at the intersection with #disinformation, #media manipulation and rising #authoritarianism. We know this is where the rubber hits the road for activists and civic witnesses on the ground.
1/ THREAD. What are possible #solutions to the threats #deepfakes and synthetic media could pose to evidence, truth and freedom of expression? Our survey from a recent @witnessorg @firstdraftnews expert convening.
2/ Invest in MEDIA LITERACY + RESILIENCE/discernment for news consumers - how to spot individual items of synthetic media (e.g. via visible anomalies such as the mouth distortion often present in current deepfakes), as well as how to develop approaches to assessing image credibility.
2/ We've had fake images and CGI for years, so what's different now (beyond enabling us to transplant Nicolas Cage's face onto other people)? The ease of making #deepfakes is captured by the vlogger Tom Scott.
3/ Barriers to entry for manipulating audio and video are falling - it costs less, needs less tech expertise, and uses open-source code + cloud computing power. Plus, the sophisticated manipulation of social media spaces by bad actors makes it easier to weaponize and to micro-target.