Ben Goertzel
Benevolent #AGI, #transhumanism & eurycosmos. CEO @singularity_net, Chair @opencog @HumanityPlus @iCog_Labs, https://t.co/LZLU8HQJdF
Mar 27, 2023 10 tweets 2 min read
1) Compassion of (maybe near) future AGIs toward humans: This is certainly an area our science does not yet cover effectively, so we are all going in some measure on intuition. So let me share some of mine...
2) It's a mistake to over-anthropomorphize or over-biomorphize AGIs, and also a mistake to think about AGIs too closely by reference to current deployed commercial AI systems... Self-organized minds seeded by engineered AGI systems will be quite different.
Mar 19, 2023 25 tweets 4 min read
1) Caricature of an argument for GPT fans, explaining why we need a neural-symbolic-evolutionary cognitive architecture to get to human-level AGI: ...
2) It would seem: LLMs are good at recognizing surface-level patterns in large datasets, and at synthesizing new patterns from the distribution implicit in the surface-level patterns of a large dataset.
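As a rough illustration of what "synthesizing patterns from the distribution implicit in surface-level patterns" means in the most stripped-down case, here is a toy sketch (mine, not from the thread): a character-level bigram model that learns co-occurrence statistics from a tiny corpus and samples new text from them. Everything here is illustrative and bears no relation to any real LLM's internals or API.

```python
# Toy sketch: learn surface-level co-occurrence statistics and sample from them.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat. the dog sat on the log."

# Count which character follows which -- the "distribution implicit in
# the surface-level patterns" of this (tiny, illustrative) dataset.
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def sample(seed: str, length: int = 40) -> str:
    """Generate text by repeatedly sampling from the learned next-char distribution."""
    out = seed
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(sample("th"))
# The output is locally plausible but carries no grounding or world model --
# the gap the thread argues a richer cognitive architecture must fill.
```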
Mar 19, 2023 9 tweets 3 min read
1) Read Nicholas Weaver's paper on "The Death of Cryptocurrency" law.yale.edu/sites/default/… ... basically he argues that crypto may as well be regulated to death because it's a technical and conceptual failure. A well-thought-out but wrongheaded argument.
2) I was pleased to note that every one of the big problems he sees w/ crypto is going to be addressed by various of our SingularityNET ecosystem projects ;) -- woo hoo! @SingularityNET
Jan 17, 2023 9 tweets 2 min read
1) Periodically folks ask me if I think some gov't is secretly fostering a Manhattan Project for AGI... I quite doubt it and will summarize here the reasons...
2) The only way I can see this happening is if, say, AGI requires large-scale quantum computing and gov't labs are the only ones to possess this for a few years... (Which to be clear I really doubt... I think QC will be super helpful but is probably not needed for HLAGI)
Dec 29, 2022 12 tweets 3 min read
1) ChatGPT is super cool and fun but it's important to recall OpenAI made basically zero fundamental innovations. Actually the basic innovation behind the GPT software was made at Google Brain in Mountain View.
2) But Google has (so far) chosen not to roll out such language models publicly due to their propensity for BS generation, and their inability to tell BS from reality.
Dec 15, 2022 30 tweets 5 min read
1) I never met SBF nor had any dealings with his businesses, but being around the crypto world and the rationalist/effective-altruist world a bunch, I almost feel like I know the dude.
2) @drvolts' thread on SBF is excellent and my analysis bears a lot of overlap with his.
Dec 11, 2022 11 tweets 7 min read
@GaryMarcus @sama 1) LLMs have a problem with truthfulness due to lack of grounding. Another important question though is: Suppose coupling an LLM with some sort of fact-checker produced a non-full-of-shit ChatGPT-ish system ... would this be a human-level AGI? Of course not.
@GaryMarcus @sama 2) Such a "non-full-of-shit ChatGPT-ish thingie" would still be repermuting and serving up chunks of human knowledge, rather than forming new chunks of knowledge based on pattern-crystallization seeded by experience.
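For concreteness, here is a hypothetical sketch of the "LLM coupled with a fact-checker" pipeline the tweet imagines. The functions generate(), extract_claims() and verify_claim() are placeholders I'm assuming, not real APIs; the point is that even a verified pipeline only filters and re-serves existing knowledge rather than forming new knowledge from experience.

```python
from typing import Callable, List

def truthful_answer(
    prompt: str,
    generate: Callable[[str], str],              # any LLM text generator (placeholder)
    extract_claims: Callable[[str], List[str]],  # split a draft into checkable claims (placeholder)
    verify_claim: Callable[[str], bool],         # external fact-checker / KB lookup (placeholder)
    max_retries: int = 3,
) -> str:
    """Regenerate until every extracted claim passes the external checker, or give up."""
    for _ in range(max_retries):
        draft = generate(prompt)
        if all(verify_claim(claim) for claim in extract_claims(draft)):
            return draft
    return "No verified answer available."
```

Even if such a loop eliminated the BS, it would still be bounded by the human knowledge the generator was trained on and the checker can confirm -- which is the thread's point about why this isn't human-level AGI.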
Apr 26, 2022 32 tweets 9 min read
1) What will a Musk acquisition of Twitter mean? Will this bring about radical decentralization and democratization of social media? More intelligent and productive and consciousness-expanding speech and interaction? @elonmusk @jack @IOHK_Charles
2) Tl;dr -- no, a Musk acquisition will more likely bring only moderate improvements, due to the limitations intrinsic to Twitter's centralized corporate biz model.
Apr 22, 2021 5 tweets 2 min read
@IOHK_Charles 1) Calling Cardano a scam makes massive negative sense given the beautiful working code rolled out, the publication track record and the code in repos and running on testnet. WTF is wrong with the crypto media-verse?
@IOHK_Charles 2) Remember Frank Zappa's analysis of rock journalism: "People who can't write, writing for people who can't read, about people who can't play music."
Jan 23, 2021 5 tweets 1 min read
1) It's now becoming cringey for young geeks to work for FAANG ... which may be an interesting turning point.
2) Could the rising Big Tech Ethical Cringe Factor reverse the trend wherein nearly all the top AI PhDs and hackers join a small number of Big Tech companies?
Jan 16, 2021 6 tweets 3 min read
1) Wow, now the Big Tech censors are coming for Minds.com? No independent minds allowed, folks!!! (A brief overview of https://t.co/3j15uzdfTQ is here: techrepublic.com/article/is-min…)
2) Minds.com is different from Parler in a few ways -- for one thing, founder Bill Ottman is clearly opposed to white supremacy and alt-right tropes; see e.g. his work on "combating racism, violence, and authoritarianism", inquirer.com/news/new-jerse…
Jan 9, 2021 8 tweets 2 min read
1) While I have extremely little respect for Trump's intellect, character or literary prowess, it actually annoys me that he's been blocked from what is effectively a near-universal national US "communication utility" (Twitter). #TrumpBanned
2) It is just quite suboptimal to have centralized organizations of any kind control the flow of communication and information. If Twitter were replaced with a well-designed decentralized network, then one wouldn't need fractionation into Twitter, Parler and whatever else...
Oct 25, 2020 10 tweets 7 min read
@wooldridgemike 1) I'm not going to try to explain why you're almost surely dead wrong about AGI being far away, in the inadequate format of a series of tweets; however, in this thread I will give some links for folks who want to do some more in-depth reading/listening on the topic ...
@wooldridgemike 2) Re ur historical analogies, I won't insult the intelligence of the Twitterverse by giving a bunch of links regarding the concept of exponential accelerating change. But it's a real thing. The amount of change that used to take a century can (sometimes) now take just years.
Jun 2, 2020 7 tweets 1 min read
1) Part of how I'm thinking about the value of tech to help w/ global inequality -- 10 yrs from now, a huge amount of the value on the planet will be getting generated by new technologies that are now nonexistent or in nascent form.
2) If we could tweak these new technologies so that their benefits were distributed in a more egalitarian way, then we'd have a fairer society 10 yrs from now, without redistributing any current wealth and without requiring large changes in how people cope with legacy tech.
Feb 22, 2020 13 tweets 3 min read
1) I started poking around for a fairly comprehensive dynamical simulation model of the whole global financial system. Government economic agencies seem not to have this, from what I can tell.
2) Can you guess who are the only folks I chatted with who intimated they might possess such a thing ... or at least something in that direction? Yeah, some guys from Goldman Sachs, speaking off the record...
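To make "dynamical simulation model" concrete in the most minimal terms, here is a toy sketch (my own illustration, not anything from the thread or from any real agency or bank model): a few interlinked sectors whose balances evolve in discrete time via flows that depend on the current state. The sectors, parameters and update rule are invented purely for illustration.

```python
import numpy as np

sectors = ["households", "firms", "banks"]
state = np.array([100.0, 80.0, 50.0])  # toy net financial assets per sector

# flow[i, j]: fraction of sector j's balance that flows to sector i per step
# (all numbers are made up for illustration)
flow = np.array([
    [0.00, 0.05, 0.01],   # to households (wages, interest)
    [0.06, 0.00, 0.02],   # to firms (consumption, lending)
    [0.01, 0.02, 0.00],   # to banks (fees, repayments)
])

def step(x: np.ndarray) -> np.ndarray:
    """One discrete-time update: each sector gains its inflows and loses its outflows."""
    inflow = flow @ x
    outflow = flow.sum(axis=0) * x
    return x + inflow - outflow

for _ in range(10):
    state = step(state)
print(dict(zip(sectors, state.round(2))))
```

A "fairly comprehensive" model of the kind the tweet describes would need thousands of such coupled state variables, empirically fitted flows, and shock scenarios -- which is presumably why it's hard to find one in the open.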
Feb 18, 2020 17 tweets 4 min read
1) Ok so dramatic new revelations (not) -- After OpenAI became ClosedAI and sold out to Big Tech, it stopped being so open after all ... egads, my faith in the benevolence and transparency of Silicon Valley elites is shattered ;p technologyreview.com/s/615181/ai-op… @_KarenHao
2) Those with longer-than-average memories may recall that a few years ago, OpenAI was funded by Musk and Sam Altman with a narrative of guiding AGI development in an open and beneficial direction ... but from the start they were clear about their non-commitment to open source.