1/ Interesting (long) post on AI consciousness, which makes some good points. But there are (it seems to me) many misconceptions too. What AI folk think about consciousness is important, so let's have a look 👀
2/ Consciousness (raw subjective experience, C) is not the same thing as self-awareness (which entails more than this) or sentience (which requires less). There is also no necessary connection between consciousness and free will or (arguably) agency (depends on your theory)
3/ Correct to distinguish consciousness from intelligence, but AGI is different & it's unclear whether consciousness is needed or not. Non-general superhuman AI is different again: non-conscious AI already outperforms humans in many domains.
4/ While there is no consensus definition of consciousness, the broadest is simply the presence of phenomenal experience. No necessary connection to decision making. Yes, C likely evolved, but in many other animals as well as humans. Good take here: amazon.co.uk/Ancient-Origin…
5/ There are many theories of consciousness, and they all set different conditions for artificial consciousness, as this excellent report from @rgblong & @patrickbutlin sets out nature.com/articles/s4158… arxiv.org/abs/2308.08708
6/ & while correct to note that increased performance doesn't imply consciousness, it's also unclear whether GPT4 *understands* anything (I am doubtful). @mpshanahan has a good take on how to talk about LLMs arxiv.org/abs/2212.03551
7/ Correct that simulating aspects of consciousness is on the cards, but again consciousness doesn't require/mean 'knowing it knows' (metacognition) or self-awareness. Very true that more research into consciousness is needed :-) amcs-community.org/open-letters/
8/ While I agree that C is unlikely to come along for the ride as AI progresses, this does *not* mean we should just plough ahead regardless - there is too much uncertainty. And there are many more risks to untrammeled innovation in this area than 'being completely replaced'
9/ For more on the risks associated with conscious AI, and conscious-seeming AI, see nautil.us/why-conscious-…
10/ In summary, it is right to distinguish C from I and from simulations of C, but by misdefining C the post also misidentifies the likely scenarios and risks. AI innovation should proceed in close tandem with consciousness science, and with due caution. /end
1/ "Why finding the neural correlates of consciousness is still a good bet" - my @NautilusMag take on the wager between @davidchalmers42 & Christof Koch, and (more so) on the first @ArcCogitate results, announced at @theASSC NYC 👇🏽 https://t.co/bfyixag9mpnautil.us/finding-the-ne…
2/ A few things to emphasise. First, adversarial collaborations are really hard - in design, implementation, and marshalling of the strong minds involved. Huge credit to Lucia Melloni @ncc_lab, @Liad_Mudrik & Michael Pitts for managing it so successfully - a real achievement👏🏽🙏🏽
3/ The students and young researchers who actually did the experiments deserve great credit too - these projects take time & teamwork, and they've invested a great deal. The outcome is definitely worth it. Hats off again.
2/ Why use a different term ‘perceptual diversity’? The original definition of neurodiversity from @singer_judy rightly emphasised that *everyone* is different and that differences are not deficits. So there is no single ‘neurotypical’ ‘textbook’ brain (as @mocost said). But:
3/ The term neurodiversity has tended to become associated with neurodivergent conditions, such as autism and ADHD – as evidenced by the focus on these conditions in @adamfleming’s programme.
2/ It's a terrific distillation of a very sensible view on these thorny issues. So much to agree with, e.g.: "There is no such world in which “everything is the same except my decision”. The decision is not somehow superimposed on the rest of the world, but emerges from it."
3/ Also: "our volitional neural circuits are genuine causes of things that happen. We don’t change the future (a meaningless concept), but rather we are a part of what creates it." - to which new work in causal emergence may be relevant arxiv.org/abs/2111.06518
1/25 Since it’s that time of the year, here are some (maybe all) of the books I’ve read or listened to in 2022, roughly in chronological order 📚👇🏽
2/ Exact Thinking in Demented Times, by Karl Sigmund. A fascinating, lyrical, vivid, and deeply researched history of the Vienna Circle. Takes a while, but repays handsomely. uk.bookshop.org/books/exact-th…
3/ Piranesi, by Susanna Clarke. How she conjures such a vivid and magical world with just words I have no idea. It is a world I did not want to leave. Read it, and then listen to Chiwetel Ejiofor read it to you. Wonderful. uk.bookshop.org/books/piranesi…
2/ I greatly admire the work, but I am concerned about the phrase "exhibit sentience" in the title of the paper. True, sentience can be formally defined merely as 'responsiveness to sensory impressions' - but many people interpret it as a minimal form of consciousness or awareness
3/ There is *no reason* to suppose that @CorticalLabs #DishBrain experiences anything at all, and confusion over this issue is dangerous because the prospect of synthetic awareness in cultures/organoids is ethically highly problematic
Really enjoyed this opening panel on "AI, sentience, and hype" at #WSAI22 @WorldSummitAI - many 🙏🏽 to my fellow panelists (& brilliant host @Kantrowitz), and I'm so sad I can't be there IRL ...
I talk more about the prospects and pitfalls of 'machine consciousness' in my book Being You - A New Science of Consciousness, elaborating on the distinction between consciousness and intelligence & much more anilseth.com/being-you/