David Wood (@dw2)
14 Jun 2021, 130 tweets, 97 min read
Whilst waiting for the #CogX2021 sessions to start streaming, why not quickly check out the open preview of my forthcoming new book, which treats the future of AI as central to the future of humanity: "Vital Foresight" dw2blog.com/2021/05/26/a-p…
CEO of @RollsRoyce, Warren East, looks forward to the company "smashing" the air speed record for an electrically powered craft, as part of exploring possibilities for greener air travel #CogX2021
... though it looks like this record-breaking @RollsRoyce effort has been delayed from the schedule originally announced (back in December 2019). Given the uncertainties involved in such innovative engineering, that's not too unexpected #CogX2021 rolls-royce.com/media/press-re…
My takeaway from all the outages at today's multi-stream hybrid #CogX2021 festival: when technology is pushed to its limits, in new ways, things go wrong. Let's bear that squarely in mind as we create and deploy AI that's ever more powerful Image
Now streaming at #CogX2021: "Existential threats: The unpredictable horizon" Image
Clarissa Rios Rojas, Research Associate of @CSERCambridge, sets the landscape of existential risks #CogX2021 Image
Eight tools to anticipate and control frontier risks, from the #CogX2021 presentation by @Clarissajaz Image
Robert Hercock, @BTGroup Chief Research Scientist, makes the excellent point that methods to anticipate and control existential risks depend on sufficient trust within society - e.g. trust in what scientists say. The spread of viral misinformation complicates matters #CogX2021
One example of the dangers of viral misinformation, reported by Robert Hercock: The damaging attacks on wireless towers, on the assumption they were spreading 5G and therefore increasing the prevalence of Covid #CogX2021 Image
"The danger isn't from intelligent AI, it's from humans using AI stupidly" - I strongly disagree. There are *multiple* types of danger. Consider AI that is given a task by well-motivated, clever people, but that task turns out to be subtly but fatally mis-specified #CogX2021
For a different, but fascinating and erudite, view of the existential risks facing humanity, and of our flawed attempts to anticipate and control them, I recommend "Doom: The Politics of Catastrophe" by Niall Ferguson #CogX2021 goodreads.com/book/show/5475…
Up next on the #CogX2021 Cyber & Defence stage, at 12 noon UK time: "Superpowers and peace: Making a multipolar world work" with @FunmiOlonisakin, Christopher Coker from @lseideas, and @AlexBur0765 of Rebellion Defence Image
"We're living in a time of un-peace", says Christopher Coker at #CogX2021. Cyber attacks are taking place the whole time. And, quoting Eric Schmidt, "cyber attacks don't leave vapour trails". Unlike the case with nuclear war, there seems to be little prospect of direct deterrence Image
The rising generation Z can change the nature of global superpower competition, says @FunmiOlonisakin at #CogX2021, because they are prepared to think differently. Key to winning the support of new leaders around the world will be *encouragement* rather than simply deterrence Image
Two main blockages to positive progress in global relationships, such as reform of the UN: "strategic narcissism" by the US, and "wolf diplomacy" by China (not interested in finding true friends) - Christopher Coker at #CogX2021 Image
On the "Cutting Edge" stage at #CogX2021 from 1pm UK time today: "Mixed realities for all industries: Let the spatial computing revolution begin" with @iamtomcarter, @CathyHackl, and Urho @konttori Image
"What if technology had the right to our thoughts in the future? What if our own minds could be hacked?" - Next up on the #CogX2021 Cutting Edge stage: "Stealing thoughts from our head - exploring neurorights" with @Lone_Frank and @yusterafa Image
Some history with which I'm not familiar: "The electrifying, forgotten history of Robert Heath’s brain pacemaker, investigating the origins and ethics of one of today’s most promising medical breakthroughs: deep brain stimulation" by @Lone_Frank #CogX2021 penguinrandomhouse.com/books/553721/t…
"Welcome to the NeuroRights Initiative: Protecting human rights and promoting ethical innovation in the fields of Neurotechnology and AI" - the initiative described by @yusterafa at #CogX2021 nri.ntc.columbia.edu
The Five NeuroRights:
1. The Right to Personal Identity
2. The Right to Free-Will
3. The Right to Mental Privacy
4. The Right to Equal Access to Mental Augmentation
5. The Right to Protection from Algorithmic Bias
- as reported by @yusterafa at #CogX2021 Image
Society failed to anticipate and manage the Covid pandemic. And it failed in the same way with the pandemic of unregulated social media. We need to do better with the forthcoming even larger disruption of technology that can read and write the human mind - @yusterafa at #CogX2021 Image
"We potentially have a mental atom bomb here" - the final comment at #CogX2021 by @Lone_Frank Image
"The solarwinds attack in 2020 remains an unprecedented moment in cyber history, with 1,000 Russian engineers all dedicated to infiltrating the US administration and intelligence agencies. Is this the start of a brazen Cold War?" #CogX2021, 3pm Image
One great aspect of the #CogX2021 scheduling is the regular rhythm of sessions lasting 40 minutes followed by 20 minute breaks. That leaves plenty of time for physical and mental recharging before the next session starts. Well done to all the moderators!
It's just a coincidence that an electrical fuse tripped in my home, knocking out my wifi, just as I was starting to watch the "SolarWinds" session in the "Cyber & Defence" track at #CogX2021, right? Now that it has rebooted, I'm watching at 1.5x speed to catch up with the livestream Image
"Walk softly, but carry a big cyberstick" - @Avast CISO @jayabaloo at #CogX2021 gives a new spin to the Teddy Roosevelt quote Image
Hmm, my #CogX2021 question about the dangers of hoarding zero-day exploits, intending aggressive usage but risking boomerang damage if that info leaks, went over the heads of the panellists. On this topic I recommend the recent @nicoleperlroth book goodreads.com/book/show/4924…
Next up on the #CogX2021 Cutting Edge stage: "Understanding AI systems for a better AI policy" with @jackclarkSF (co-chair, OECD working group on AI) and @jesswhittles (Senior Research Fellow at Leverhulme Centre for the Future of Intelligence) Image
Ah, @jackclarkSF changed the title of this #CogX2021 session. To something even more interesting... Image
The reason why governments generally operate on a different time scale than the tech industry. @jackclarkSF at #CogX2021 Image
The case for investing in technology and institutions to monitor AI systems more systematically and continuously. @jackclarkSF at #CogX2021 Image
Any measurements change the system being measured - in part because companies will be incentivised to make their AI systems "look good" - @jackclarkSF says we need to learn from quantum mechanics (!) as we design tools to measure AI #CogX2021 Image
Something that definitely needs to be measured, in the capability of AI systems: the ability to generate convincing synthetic media (whether text, audio, or video) - @jackclarkSF at #CogX2021 Image
More systematic, accurate measurement of AI capabilities will help reduce, not only cases when AI features are being over-hyped, but also cases when features are under-hyped (when society presently isn't paying enough attention to possible breakthroughs) - @jackclarkSF #CogX2021 Image
Next on #CogX2021 Cyber & Defence stage: "Warfare at the speed of AI: With the advent of AI-powered information warfare, everything you know about the battlefield is about to change" with General @TonyT2Thomas, @sgourley (Mathematics of war) & @willknight Image
General @TonyT2Thomas points out past examples, involving both US and Iranian military, in which humans made bad judgments under pressure for quick decisions, leading to civilian loss of life. It's possible that an AI in these loops could have taken better decisions #CogX2021 Image
A major complication with insisting all military decisions must be approved by humans is the deployment of hypersonic nuclear missiles, which could be launched from submarines just off the US shore. Any human response will inevitably come too late - @sgourley at #CogX2021 Image
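To make that timing point concrete, here's a rough back-of-envelope sketch; all the numbers are my own illustrative assumptions, not figures from the panel:

```python
# Rough sketch with assumed numbers: flight time of a hypersonic missile
# launched just offshore, versus the time any human decision chain would need.

SPEED_OF_SOUND_KM_S = 0.343   # approximate, at sea level
MACH = 5                      # lower bound of "hypersonic"
DISTANCE_KM = 200             # assumed launch distance off the coast

speed_km_s = MACH * SPEED_OF_SOUND_KM_S
flight_time_s = DISTANCE_KM / speed_km_s

print(f"Flight time: {flight_time_s:.0f} s (~{flight_time_s / 60:.1f} minutes)")
# ~117 seconds -- far less than the time needed to brief and consult human decision-makers.
```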
Any analysis of military threats in the 2020s needs to avoid any hard separation between different types of warfare - e.g. nuclear, biological, conventional, disinfo, cyber - since they all coexist and interact - General @TonyT2Thomas at #CogX2021 Image
At 6pm on the #CogX2021 Research stage: Jeff Hawkins, Palm computing pioneer and co-Founder & Chief Scientist at Numenta, in discussion with @azeem Azhar. "How do simple cells in the brain create intelligence? And what does this mean for the future of AI?" Image
Ahead of this #CogX2021 session on "A Thousand Brains", here's my mini-review of the book of that name - my assessment is that the book has lots of great content, but it gets one big idea badly wrong (and another idea wrong too)
This session was delayed by "technical difficulties" (apparently), but is now underway, hurrah :-) Image
Day Two of #CogX2021 is kicking off now with "The future of flying". "Discuss the new technologies being used to ensure the aviation industry is at the forefront of the green revolution" Image
Just starting on the Cutting Edge stage at #CogX2021: Why the future of AI is in Asia, with @paragkhanna and @SerenaChaudhry - "Why AI’s future goes beyond China to other parts of Asia" Image
There are multiple trends in increased development and deployment of robots in Asian countries - in factories, healthcare, and homes - trends accelerated by the pandemic. This increased usage will likely spread from Asia to the rest of the world - @paragkhanna at #CogX2021 Image
"If you can do your job from anywhere, someone anywhere can do your job" - @paragkhanna at #CogX2021 quotes @KuperSimon of the FT ft.com/content/9414f4… Image
Next on the Research stage at #CogX2021: "The opportunities of open science: accelerating research, solving real world problems, and improving trust" with @TheaSherer of Springer Nature and @RituDhand of Nature Journals
Open Science is "the practice of science in such a way that others can collaborate and contribute, where research data, lab notes, and other research processes are freely available, under terms that enable reuse..." - @RituDhand shares the FOSTER definition at #CogX2021 Image
Some issues in the governance of new technologies can be addressed locally, but these local approaches are unlikely to scale to handle the global nature of the technology platforms - @katecrawford in discussion with @azeem Azhar at #CogX2021 Image
The way that public functions are now becoming privatised, under systems provided by big tech corporations, should be compared to the enclosures of the commons a few centuries past - @katecrawford replies to @azeem at #CogX2021 Image
The new company-appointed Oversight Boards at corporations like Facebook are constrained in the set of questions they can tackle, points out @katecrawford at #CogX2021. They have no say on the most important issues. That highlights the limits of self-regulation Image
Another insightful historical comparison from @katecrawford at #CogX2021: Consider the various actions that the US government could have taken from 1906 onward after "The Jungle" investigation of the food trade by "muckraker" Upton Sinclair blog.smartsense.co/upton-sinclair…
Technology is NOT neutral, insists @katecrawford at #CogX2021, quoting Kranzberg’s First Law technologystories.org/first-and-seco… Image
"Down The Rabbit Hole: QAnon, conspiracists and the new threat" with @zoetabary, @aoifegall, @AmarAmarasingam, and @mwendling. How do these networks grow, and how can we mitigate the damage - next on the Cyber & Defence stage at #CogX2021 Image
In a conference with many highlights, this discussion seems particularly relevant: "Gone are the days when conspiracy theories existed only in the fringes of society. From capitol riots to vaccine boycotting, they’ve become a mainstream threat with real consequences" #CogX2021 Image
Most people weren't particularly seeking out QAnon content; it arrived in their social media inbox without them asking for it - @aoifegall emphasises the consequences of tech algorithms in drawing more people into a conspiracy #CogX2021
Banning a community like QAnon from mainstream social media is a two-edged sword. It pushes people into unmoderated social media, where there is greater extremism. A better solution is improved media literacy, taught to people of all ages - @aoifegall at #CogX2021 Image
We should have resources in place ready to assist people in conspiracy networks when they start to question things, but at any given time we shouldn't expect most people in these networks to want to engage with these resources - @AmarAmarasingam at #CogX2021 Image
People are quick learners. They soon work out ways to evade social media checks on what they are publishing, for example by subtly changing the wording they use - @mwendling at #CogX2021 Image
Conspiracy networks are by no means all the same. Some (e.g. anti-vax) claim to look at evidence in favour of their claims. But others thrive on apparent failures of their predictions! QAnon started with a failed prediction about Hillary Clinton - @AmarAmarasingam at #CogX2021 Image
To talk to our "crazy uncles" etc who have fallen into conspiracy networks, a vital skill is empathy. Brow-beating people with facts is unlikely to be effective - @mwendling at #CogX2021 Image
That takes us nicely to the "Empathy and ethics in the workplace" session, up next on the Future of Work & HR stage at #CogX with Sherry Turkle (@STurkle) and Tim Leberecht (@timleberecht) exploring "the subjective side of people’s relationships with tech" Image
"Empathy doesn't mean you like somebody. It means you are ready to listen to them and understand them, and understand not just the place they are coming from but their problem... Empathy is the fuel for democracy" - @STurkle at #CogX2021 (final remark is from Joe Biden) Image
My view, contra what @STurkle is saying at #CogX2021. Psychological advice can be really valuable, even when delivered by a professional counsellor (a human) who only "simulates" genuine concern for clients. It's the same with simulated genuine concern from AIs Image
Following up: therapists (human) can provide advice to people who are experiencing problems they themselves can never directly experience. E.g. they might be a male, with a female client. The ability to have the exact same experience isn't needed to offer useful advice #CogX2021 Image
Big Tech should be kept out of childcare and eldercare; such care is far better provided by humans, says @STurkle at #CogX2021. But AI systems can make a big difference in child education, and Grace (a robot) from @singularity_net is poised to make a big difference in eldercare Image
The discussion reminds me of people who used to scoff at the idea of finding genuine romance on an Internet dating site. Or that consumers would never want to shop somewhere that lacked real human salespeople #CogX2021
"A look at the burgeoning counter-radicalization industry, how we can prevent conspiracy-laced hate, and protect vulnerable individuals" - the next Cyber & Defence session at #CogX2021 is "The dark web", with @vidhya_ra and @ClaudineTinsman Image
The work of @vidhya_ra at @MoonshotTeam started by interviewing people drawn to white supremacist groups: what led them to join, and what in due course led them to leave. It became clear that similar approaches could be useful for other sorts of extremist groups too #CogX2021 Image
An example of how social media can respond to various searches by providing links that can help people avoid extremist routes - shared by @vidhya_ra at #CogX2021 Image
This is an example of an "empathy anger ad" - an ad which asks people if they're feeling angry, and if they would prefer not to feel in that way. It seems that people in various extremist groups sometimes do click on these ads, with good results - @vidhya_ra at #CogX2021
Linkages between different extremist or conspiracist networks are stronger nowadays than before. It appears these links are being deliberately cultivated, in order to grow these groups - @vidhya_ra at #CogX2021 Image
What might @MoonshotTeam do differently in the future? @vidhya_ra answers that it would be great to have more evidence about which specific kinds of online interventions have the best overall effects #CogX2021 Image
Regardless of the type of extremist movement, there's usually a common factor leading people to join them, namely a desire for belonging and meaning, to address their personal feelings of vulnerability. That applies across the ideological spectrum - @vidhya_ra at #CogX2021 Image
"The next wave: the game-changing potential of probabilistic programming, what makes it so different from deep learning, and why it could help propel programming culture forward" - Next at #CogX2021, Prof Stuart Russell in discussion with @gemmamilne Image
Stuart Russell starts by sharing the view of Francois Chollet about serious limitations of "current deep learning techniques" in which many "applications are completely out of reach" #CogX2021 Image
One goal of probabilistic programming, says Stuart Russell at #CogX2021, is to enable knowledge generated by earlier learning exercises to be fed back into the learning system, along with new data, to generate richer knowledge in subsequent iterations Image
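That iterative idea can be illustrated with a tiny sketch of my own (nothing to do with Russell's actual examples): a discrete Bayesian update in plain Python, where the posterior from one batch of data becomes the prior for the next. The coin-bias setting and all numbers are illustrative assumptions.

```python
from math import comb

def update(prior, heads, flips, grid):
    """Posterior over candidate coin biases after observing `heads` in `flips`."""
    likelihood = [comb(flips, heads) * p**heads * (1 - p)**(flips - heads) for p in grid]
    unnorm = [pr * lk for pr, lk in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

grid = [i / 100 for i in range(1, 100)]      # candidate bias values
belief = [1 / len(grid)] * len(grid)         # flat prior before any data

for heads, flips in [(7, 10), (12, 20), (30, 50)]:   # successive data batches
    belief = update(belief, heads, flips, grid)      # earlier knowledge feeds the next round

print("Most probable bias:", grid[belief.index(max(belief))])
```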
Summary slide. The real progress will come from combination approaches. One advantage of probabilistic programming that has already been obtained is a 1000-fold speedup in various compilers #CogX2021 Image
This fascinating talk by Stuart Russell is the first one I've attended at #CogX2021 that gives me a strong reason to change a section in my forthcoming new book (to include a mention of probabilistic programming), before it gets published dw2blog.com/2021/05/26/a-p…
Next up at #CogX2021: "Ethics and Bias in AI" with @carolinefdaniel (Brunswick Group) and Rob Glaser (RealNetworks): "How can we prevent the individual from paying the price for a loss of privacy and still ensure safety for the greater community?" Image
With facial recognition technology, we're at a stage comparable to motor cars before rules on road safety were established, says Rob Glaser at #CogX2021. We need appropriate regulation before we can gain the net benefits from the software Image
The SAFR technology from RealNetworks that Rob Glaser is describing at #CogX2021: "Offers accurate, fast, unbiased face recognition and additional computer vision features. Optimized to run on virtually any camera or camera-enabled device" safr.com
Rob Glaser gives the example of food labelling: regulations prescribe even the minimum size of font for labels giving certain information about the ingredients. Similar disclosure rules are needed for algorithms of all sorts, to convey information transparently #CogX2021 Image
There's no need for all countries to adopt the very same regulations about facial recognition technology. However, a transparent discussion is needed in every country - Rob Glaser at #CogX2021 Image
Recognition technology can be exculpatory as well as suggesting the identification of guilty parties. That is, it can raise the probability that someone is innocent. That applies for DNA evidence and it should apply for facial recognition tech too - Rob Glaser at #CogX2021 Image
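Here's a simple Bayes update, with numbers I've invented purely for illustration (not from the talk, and not real error rates for any product), showing how a reported non-match substantially lowers the probability of guilt:

```python
prior_guilt = 0.30            # assumed prior belief that the suspect is guilty
p_match_given_guilty = 0.95   # assumed sensitivity of the recognition system
p_match_given_innocent = 0.02 # assumed false-match rate

# The system reports NO match:
p_nomatch_given_guilty = 1 - p_match_given_guilty
p_nomatch_given_innocent = 1 - p_match_given_innocent

posterior_guilt = (p_nomatch_given_guilty * prior_guilt) / (
    p_nomatch_given_guilty * prior_guilt
    + p_nomatch_given_innocent * (1 - prior_guilt)
)
print(f"Probability of guilt after a non-match: {posterior_guilt:.3f}")
# About 0.021, down from the prior of 0.30 -- the non-match strongly favours innocence.
```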
Rob Glaser at #CogX2021 gives a shout out to the book "The Alignment Problem: Machine Learning and Human Values" by @brianchristian. I echo that endorsement: it provides a rich perspective on questions of AI fairness and bias goodreads.com/book/show/5048…
Rob Glaser highlights the way in which intuitively compelling notions of fairness actually end up in contradiction with each other (using an example from the Brian Christian book). This shows there's no escape from humans having to review and take hard ethical decisions #CogX2021 Image
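A toy numeric sketch of that tension, with counts I've made up for illustration (it is not the example from the book): two groups with different base rates, a classifier with identical precision in both, and yet unequal false-positive rates.

```python
def rates(tp, fp, fn, tn):
    precision = tp / (tp + fp)   # P(actually positive | flagged)
    fpr = fp / (fp + tn)         # P(flagged | actually negative)
    return precision, fpr

# Group A: 50 of 100 people are true positives (high base rate)
prec_a, fpr_a = rates(tp=40, fp=20, fn=10, tn=30)
# Group B: 20 of 100 people are true positives (low base rate)
prec_b, fpr_b = rates(tp=10, fp=5, fn=10, tn=75)

print(f"Group A: precision={prec_a:.2f}, false-positive rate={fpr_a:.2f}")
print(f"Group B: precision={prec_b:.2f}, false-positive rate={fpr_b:.2f}")
# Same precision (0.67) in both groups, but false-positive rates of 0.40 vs 0.06:
# with unequal base rates, these two intuitive notions of fairness pull apart.
```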
The thing that causes Rob Glaser the biggest worry for the next ten years is the potential increase of tribalism and inequality in the world. AI can help us deal with these issues - but only in a secondary way. (That is, political actions are needed too!) #CogX2021 Image
Probably one of the highlights of #CogX2021 next: mega-thinker "Mad Max" @tegmark will be interviewed by @BVLSingler on "How we can use machine learning to detect bias in the media, and how we could wield this power to strengthen our democracy" Image
Another title for this talk: "AI for helping rather than hacking democracy" #CogX2021 Image
The threats from AI to democracy aren't just a few decades away. They're strongly in the present too - Max Tegmark at #CogX2021 Image
We need to move away from trying to treat symptoms. We need to develop a more accurate diagnosis of the underlying problem - Max Tegmark at #CogX2021 Image
Science avoids giving powerful entities (governments, big companies, religions, etc) a special influence over fact-checking - Max Tegmark at #CogX2021 Image
What the data shows is that news sources vary in predictable ways, not just along a left-right axis, but also along a big media / small media axis. It would be far better to receive news in less biased ways - Max Tegmark at #CogX2021 Image
Max Tegmark accepts that scientists are often biased and deserve criticism. Examples include scientists who were motivated to deny the links between smoking and cancer. But science as a whole is arguably the least corrupt institution in society - Max Tegmark at #CogX2021 Image
The biggest problem with news sources isn't just that they often distort what they report about. It's the news stories that are omitted altogether - Max Tegmark at #CogX2021 Image
Next up on the Ethics & Society stage at #CogX2021: "The risks of adopting AI at scale, how we can reduce these risks without restricting progress, and the critical role the public plays" with @drkatedevlin, @gramchurn, and @Jackstilgoe Image
So-called "autonomous systems" incorporate lots of human design and are subject to ongoing human support and influence, points out @Jackstilgoe at #CogX2021 - let's not be misled by the words that are in wide use Image
Whether systems can be trusted depends on three Ps, suggests @Jackstilgoe at #CogX2021: Performance, Process, and Purpose. (Then he mentions Privacy in his next sentence. A fourth P?) Image
We also need to look at Ownership (as well as the three Ps), suggests @gramchurn at #CogX2021. For me, I would highlight Accountability. Providers of technology shouldn't be able to evade responsibility via one-sided license agreements, etc Image
We need to beware "the human crumple zone" says @Jackstilgoe at #CogX2021 - referring to the notion introduced by @m_c_elish - in which blame for tech failures is assigned too easily to "human error" instead of flawed system design epicpeople.org/moral-crumple-… Image
Users will reject smart meters if there's any possibility that they're tracking (or could infer) which films they're watching on Netflix, suggests @gramchurn at #CogX2021. Without trust, technology won't deliver its potential benefits Image
The final #CogX2021 session of the day: "Why innovation is the only way out of the climate crisis" with @tfadell, @RobertDowneyJr, and @ES_Entrepreneur Image
Issue on the top of his mind, needing innovation: plastics, says @tfadell - we have far too much of it (e.g. in our supermarkets), and it all lasts far too long. #CogX2021 Image
He's been trying to do green investments for ten years, says @tfadell, but there's much greater momentum now than before. Publicity around e.g. Greta Thunberg is helping to change the public mood #CogX2021 Image
We need to beware "imposter innovations" in what is becoming an increasingly crowded space, warns #RobertDowneyJr at #CogX2021 - people who are able to say things that *sound* good regarding green innovation Image
"AI Safety: Today and Tomorrow" with Jaan Tallinn (seed funder of @CSERCambridge), @MarcWarner10, and @Liv_Boeree, starting at 10am on the #CogX2021 Ethics & Society Stage Image
"AI adoption is delegation to automated decision makers... If you are able to delegate well, you are able to increase your overall competence... but if you delegate badly, you might get something completely different than you originally had in mind" - Jaan Tallinn #CogX2021 Image
All three speakers in this #CogX2021 "AI Safety" panel, Jaan Tallinn, @MarcWarner10, and @Liv_Boeree, acknowledge having their life trajectories changed by encountering the writing of @ESYudkowsky Image
The AI industry must avoid repeating the mistake of the nuclear energy industry, which has fallen far short of its potential on account of not putting enough priority on questions of safety - Jaan Tallinn, #CogX2021 Image
Instead of talking about our AI systems as "black boxes", a more accurate description is "dark grey boxes", since it *is* possible to obtain data about what's happening inside these systems (though that's no reason for any complacency) - @MarcWarner10 at #CogX2021 Image
A significant concrete result from recent extra funding of AI safety? The insight that, with AI, there's both an inner alignment problem and an outer alignment problem - Jaan Tallinn at #CogX2021. "It's a bit technical" - but see lesswrong.com/posts/pL56xPon… Image
One unanswered question from this #CogX2021 panel: @MarcWarner10 highlighted five core ways to evaluate AI algorithms: doing what we want them to do, fairness, privacy, robustness, and explainability. Do other AI safety researchers agree these are the right dimensions to monitor?
"What the UK needs from a National AI Strategy... a look at the AI Council's 16 recommendations" with Sir Adrian Smith of @turinginst, @rachelcoldicutt, @carolinefdaniel, and @andy_at_public - next at #CogX2021 Image
The 16 recommendations from the AI Council "to help the government develop a UK National AI Strategy" #CogX2021 from assets.publishing.service.gov.uk/government/upl… Image
The four panellists are all seated together in the same room. But most of the audience (like me) are remote #CogX2021 Image
The most important pieces of advice for shaping the national AI strategy? "National positioning within an international context", "A strategy, not a wishlist: it needs to be actionable: what to do, and what to stop doing" #CogX2021
I agree with @andy_at_public at #CogX2021 - our strategy for AI needs to be more than a wishlist. A strategy needs an understanding of existing strengths and weaknesses, and decisions on which strengths to build on, and how to make weaknesses irrelevant
I would add: strategy needs to look at OT as well as SW - that is, at forthcoming Opportunities and Threats, as well as current Strengths and Weaknesses. The AI Council recommendations seem to offer little foresight on OT #CogX2021
Mentioned just now at #CogX2021: agreement “to establish an EU-US Joint Technology Competition Policy Dialogue that would focus on approaches to competition policy and enforcement, and increased co-operation in the tech sector” ft.com/content/2600c3…
The full text of the EU-US Summit Statement "Towards a renewed Transatlantic partnership" #CogX2021 consilium.europa.eu/media/50443/eu…
I like this question tabled at #CogX2021 by @rhys_the_davies: "Could the UK national AI strategy not be focused on enabling and facilitating *international* AI strategy and collaboration?"
#CogX2021 Ethics stage at 12: "There’s plenty of evidence of bad practice and filter bubbles that disempower or frustrate users online. Can better design lead users to better decision-making?" - @PhilBeaudoin, @laura_ellis, @CDEIUK’s Nathan Bookbinder-Ryan Image
A "dark pattern" is when a feature of the UI is designed to promote the choices desired by the platform provider, rather than ones that might be better for the user. For example, making the unsubscribe link less visible - Nathan Bookbinder-Ryan at #CogX2021 Image
#CogX2021 @laura_ellis highlights the importance of software being able to explain to users (when asked) why various choices have been made regarding the content displayed. This explainability will increase user trust Image
Current software is good at recognising when a user wants X (e.g. a chocolate bar) but not when the user wants to not want X - @CDEIUK’s Nathan Bookbinder-Ryan envisions future software that is better able to serve users' underlying desires #CogX2021 Image
Software that is better at detecting and respecting user interests sounds attractive, but has a risk of increasing existing problems from filter bubbles, points out @PhilBeaudoin at #CogX2021 Image
Nathan Bookbinder-Ryan points out that @CDEIUK have published this morning a set of examples of "light patterns" in their report "Active choices: Interim findings" #CogX2021 gov.uk/government/pub… Image
"AI Governance: the role of the nation in a transnational world" - next on the #CogX2021 Ethics & Society stage, featuring @j2bryson, @sanakb, @GavinFreeguard, and @simcd Image
Starting at the beginning: three reasons the world still needs nations, suggests @j2bryson at #CogX2021 - areas of the world have mutual interest in good neighbourly behaviour; a local monopoly of force; units responsible for the human rights of everyone in a given area Image
Nations traditionally were large enough to counter the power of large corporate monopolies. But with corporations being even larger these days, transnational coordination (like the EU) is needed to counter their power - @j2bryson at #CogX2021 Image
