1) While I have extremely little respect for Trump's intellect, character or literary prowess, it actually annoys me that he's been blocked from what is effectively a near-universal national US "communication utility" (Twitter). #TrumpBanned
2) It is just quite suboptimal to have centralized organizations of any kind control the flow of communication and information. If Twitter were replaced with a well-designed decentralized network, then one wouldn't need fractionation into Twitter, Parler and whatever-else ...
3) ... one could have dynamically emergent and self-organizing subnetworks of the overall network, allowing people to get the communications they want labeled/analyzed in the ways they want ...
4) .... and one could also have networks that don't implicitly interpret "want" in the shallowest possible way but supply people with communications meeting the wants of the multiple layers of their being...
5) In a decentralized social media network, no single party is legally or morally responsible for filtering info -- each participant is locally responsible for their own information sphere and relationships. I.e. the network as a whole is emergently responsible for itself. (A small sketch of this local-filtering idea follows this thread.)
6) I'm aware the CEO of Twitter actually likes blockchain and decentralization. However, it's just hard to reconcile fundamental decentralization with a corporate business model, in the social media domain. The current social networks need to fall just as mainframe corps did
7) In SingularityNET we're working to contribute toward this end by building decentralized AI tools capable of serving as the AI infrastructure of tomorrow's decentralized social networks. Cardano is contributing key aspects of the needed infrastructure as well.
8) It's not a frivolous application because social networks are now key to shaping the psyche of the Global Brain, and -- among other points -- for better and worse it's the global brain of humanity that's going to share the psyche of the first superhuman AGIs we build
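To make the local-responsibility idea from tweets 3-5 a bit more concrete, here is a minimal, purely illustrative sketch (hypothetical classes and names, not SingularityNET or Twitter code) of a network in which each participant applies its own filters and labellers to incoming posts:

```python
# Purely illustrative sketch -- hypothetical classes, not SingularityNET or
# Twitter code. Each participant (Node) applies its own filters and
# labellers, so curation is local and no central party moderates the network.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    labels: List[str] = field(default_factory=list)

@dataclass
class Node:
    name: str
    peers: List["Node"] = field(default_factory=list)
    # Local policy: which posts to accept into the feed, and how to label them.
    filters: List[Callable[[Post], bool]] = field(default_factory=list)
    labellers: List[Callable[[Post], List[str]]] = field(default_factory=list)
    feed: List[Post] = field(default_factory=list)

    def publish(self, text: str) -> None:
        # Gossip the post to peers; each peer applies its own local rules.
        post = Post(author=self.name, text=text)
        for peer in self.peers:
            peer.receive(post)

    def receive(self, post: Post) -> None:
        # No central moderator: this node alone decides what enters its feed.
        if all(accept(post) for accept in self.filters):
            extra = [tag for lab in self.labellers for tag in lab(post)]
            self.feed.append(Post(post.author, post.text, post.labels + extra))

# Two participants with different local policies.
alice = Node("alice", filters=[lambda p: "spam" not in p.text.lower()])
bob = Node("bob", labellers=[lambda p: ["politics"] if "election" in p.text.lower() else []])
alice.peers, bob.peers = [bob], [alice]

alice.publish("Thoughts on the election and decentralized networks")
print(bob.feed)    # bob accepts it, labelled "politics" by his own labeller
bob.publish("Buy spam coins now!!!")
print(alice.feed)  # empty: alice's local filter drops it, no one else is involved
```

The point of the sketch is that moderation policy lives entirely in each node; removing or changing any one participant's policy doesn't change what the rest of the network can say or see.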
@wooldridgemike 1) I'm not going to try to explain why you're almost surely dead wrong about AGI being far away in the inadequate format of a series of tweets; however, in this thread I will give some links for folks who want to do some more in-depth reading/listening on the topic ...
@wooldridgemike 2) Re your historical analogies, I won't insult the intelligence of the Twitterverse by giving a bunch of links regarding the concept of exponentially accelerating change. But it's a real thing. The amount of change that used to take a century can (sometimes) now take just years.
@wooldridgemike 3) Indeed the bulk of commercial AI dev (including AGI-oriented dev) is not grounded in any deep theory of general intelligence. But there is a lot more depth in the AGI research community if you care to look. Deep NNs are not the most interesting stuff going on AGI-wise.
1) Part of how I'm thinking about the value of tech to help w/ global inequality: 10 yrs from now, a huge amount of the value on the planet will be getting generated by new technologies that are now nonexistent or in nascent form
2) If we could tweak these new technologies so that their benefits were distributed in a more egalitarian way, then we'd have a fairer society 10 yrs from now, without redistributing any current wealth and without requiring large changes in how people cope with legacy tech
3) This can be as obvious as low-cost solar power systems operating in a decentralized power grid ... or DIY CRISPR kits that can be used by physicians in third-tier developing-nation cities ...
1) I started poking around for a fairly comprehensive dynamical simulation model of the whole global financial system. Government economic agencies don't seem to have such a thing, from what I can tell.
2) Can you guess who are the only folks I chatted with who intimated they might possess such a thing? ... or at least something in that direction ... Yeah, some guys from Goldman Sachs, speaking off the record...
3) Holy information inequality Batman -- right? Having such a model gives the possessor a tremendous advantage in estimating the (absolute and conditional) probabilities of various future socioeconomic events
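As a purely toy illustration of why such a model confers that advantage (invented dynamics and thresholds, nothing resembling an actual macro-financial simulator), Monte Carlo runs of even a crude stochastic simulator let you read off the absolute and conditional probabilities mentioned above:

```python
# Toy illustration only -- invented dynamics and thresholds, not anyone's
# actual model of the global financial system. The point: given even a crude
# stochastic simulator, Monte Carlo runs yield absolute and conditional
# probabilities of future events.
import random

def simulate(years: int = 10, seed: int = 0) -> dict:
    rng = random.Random(seed)
    gdp_growth, credit = 0.02, 1.0          # toy state variables
    credit_crunch = recession = False
    for _ in range(years):
        shock = rng.gauss(0.0, 0.05)
        credit *= 1 + 0.5 * gdp_growth + shock              # pro-cyclical credit
        gdp_growth = 0.7 * gdp_growth + 0.05 * (credit - 1) + shock
        credit_crunch = credit_crunch or credit < 0.8
        recession = recession or gdp_growth < -0.01
    return {"credit_crunch": credit_crunch, "recession": recession}

runs = [simulate(seed=i) for i in range(20_000)]
p_recession = sum(r["recession"] for r in runs) / len(runs)
crunch_runs = [r for r in runs if r["credit_crunch"]]
p_recession_given_crunch = (
    sum(r["recession"] for r in crunch_runs) / len(crunch_runs)
    if crunch_runs else float("nan")
)
print(f"P(recession within 10y)      ≈ {p_recession:.2f}")
print(f"P(recession | credit crunch) ≈ {p_recession_given_crunch:.2f}")
```

Whoever holds a serious version of this (with real data, real structure, and far better dynamics) can condition on scenarios the rest of us can't even enumerate.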
1) Ok so dramatic new revelations (not) — After OpenAI became ClosedAI, and sold out to Big Tech, it stopped being so open after all … egads, my faith in the benevolence and transparency of Silicon Valley elites is shattered ;p technologyreview.com/s/615181/ai-op… @_KarenHao
2) Those with longer than average memories may recall that a few years ago, OpenAI was funded by Musk and Sam Altman with a narrative of guiding AGI development in an open and beneficial direction… but from the start they were clear about their non-commitment to open source
3) OpenAI was founded with a $100M dedication of funding from Musk, Altman and others, but it was always a bit waffly; i.e., it wasn't $100M put into an OpenAI account, it was some sort of MOU committing funds conditionally over time, based on needs and progress