1/ Interesting (long) post on AI consciousness, which makes some good points. But there are (it seems to me) many misconceptions too. What AI folk think about consciousness is important, so let's have a look 👀
2/ Consciousness (raw subjective experience, C) is not the same thing as self-awareness (which entails more than this) or sentience (which requires less). There is also no necessary connection between consciousness and free will or (arguably) agency (depends on your theory)
3/ Correct to distinguish consciousness from intelligence, but AGI is different & it's unclear whether consciousness is needed or not. Non-general superhuman AI is different again: non-conscious AI already outperforms humans in many domains.
4/ While there is no consensus definition of consciousness, the broadest simply means the presence of phenomenal experience. No necessary connection to decision making. Yes, C likely evolved, but in many other animals as well as humans. Good take here: amazon.co.uk/Ancient-Origin…
5/ There are many theories of consciousness, and they all set different conditions for artificial consciousness, as this excellent report from @rgblong & @patrickbutlin sets out nature.com/articles/s4158… arxiv.org/abs/2308.08708
6/ & while correct to note that increased performance doesn't imply consciousness, it's also unclear whether GPT4 *understands* anything (I am doubtful). @mpshanahan has a good take on how to talk about LLMs arxiv.org/abs/2212.03551
7/ Correct that simulating aspects of consciousness is on the cards, but again consciousness doesn't require/mean 'knowing it knows' (metacognition) or self-awareness. Very true that more research into consciousness is needed :-) amcs-community.org/open-letters/
8/ While I agree that C is unlikely to come along for the ride as AI progresses, this does *not* mean we should just plough ahead regardless - there is too much uncertainty. And there are many more risks to untrammeled innovation in this area than 'being completely replaced'
9/ For more on the risks associated with conscious AI, and conscious-seeming AI, see nautil.us/why-conscious-…
10/ In summary, it is right to distinguish C from I and from simulations of C, but by misdefining C the likely scenario and risks are also misidentified. AI innovation should proceed in close tandem with consciousness science, and with due caution. /end
1/ Interesting & measured piece in @nytimes by @kevinroose on AI Welfare, and the possibility that AI systems are (or might soon be) conscious. Prompted by @AnthropicAI hiring an AI Welfare researcher, Kyle Fish @fish_kyle3. Via @azeem 🙏🏽. A few thoughts ... 🧵
2/ I agree with Roose & Fish that we ought to take seriously the possibility of conscious AI, but I think there are many reasons to be skeptical about its near-term, or even long-term, plausibility. Fish's 15% credence that #Claude is already conscious seems outlandishly high.
3/ We are predisposed to overestimate the likelihood of conscious AI because of our inbuilt anthropic (ha!) biases, especially when it comes to language. While some folks think LLMs might be conscious, nobody thinks @GoogleDeepMind's AlphaFold experiences anything.
1/ What are the prospects for AI that is, or irresistibly appears to be, conscious? Here’s a substantially revised version of my paper “Conscious artificial intelligence and biological naturalism” (link at end of 🧵, because @X)
2/ As AI continues to develop, it is natural to ask whether AI systems can be not only intelligent, but also conscious. But is this likely? And what would the consequences be?
3/ There’s much confusion in this space, and definitive answers are not (yet) possible. But seeing the landscape more clearly will help us make better decisions when interacting with, developing, and regulating these new technologies. With clarity comes agency (h/t @aza)
1/🧵 New (long) preprint: Biological naturalism and conscious artificial intelligence. In which I explore the prospects for, and pitfalls of, AI that is, or irresistibly appears to be, conscious. @sussexcentre .@CIFARnews #ExMachina osf.io/preprints/psya…
2/ As AI continues to develop, it is natural to ask whether AI systems can be not only intelligent, but also conscious. But is this likely? How could we know? And what would the consequences be?
3/ There’s been a lot of confusion in this space, and definitive answers are not (yet) possible. But seeing the landscape more clearly will help us make better decisions when interacting with, developing, and regulating these powerful new technologies.
1/🧵 Lots of fuss and bother about the announcement from @neuralink today about having implanted their tech in a human brain for the first time. bbc.co.uk/news/technolog…
2/ This technology is definitely advancing, and there are many exciting implications, especially in medicine: restoring function in people with paralysis, or after loss of vision, or some other sense, &c. And it's great to have more players in this field to drive progress.
3/ But it's important to stress that there's nothing new here, at least not yet. Other groups have been developing brain implants for decades, and have demonstrated many more impressive results than today's highly preliminary announcement.
1/🧵 Here's some books I read & (increasingly) listened to during 2023 - and before we get going, do consider getting #BeingYou if you haven't already - or leaving an @amazon review if you have 🙏🏽 It's now translated into 8 languages w/ 6 more coming anilseth.com/being-you/
2/ A great companion is #TheExperienceMachine, by Andy Clark @CogsAndy - his latest on predictive processing, and excellent on how ideas like active inference and the extended mind can work together. goodreads.com/en/book/show/6…
3/ I loved #CloudCuckooLand, by Anthony Doerr @DoerrTorresal. A story about stories and a book about books. Unfolding across expanses of space and time, it reminded me of #CloudAtlas in sweep and narrative. One twist left me reeling with pleasure. uk.bookshop.org/p/books/cloud-…
1/🧵 As the UK #AISummit gets going, a reminder that AI is not about to become conscious, but that even AI that merely *seems* conscious will still pose grave ethical/social concerns. As @demishassabis said, this is not the time to move fast & break things nautil.us/why-conscious-…
2/ Intelligence and consciousness are very different things. Intelligence is about doing the right thing at the right time, while consciousness is about having subjective experience. You don't have to be (species-level) smart in order to suffer.
3/ We humans tend to associate intelligence and consciousness together, thanks to strong psychological biases to see things through a human lens (anthropocentrism) and to project human qualities onto other things on the basis of superficial similarities (anthropomorphism)