Just 6 months ago Twitter was flooded with Tweets about #dalle. Then came #midjourney and #stablediffusion. 2 months ago, #chatgpt exploded. Now the chatter has mostly died down. I wonder: if you weren't on Twitter, would you even know of these novelties?
I don't know how these technologies captured users' attention on other social media platforms. LensaAI exploded somewhere else, but I don't know where. Information bubbles exist across platforms just as they exist within platforms.
We mistakenly assume that true novelties are no longer novel because we were exposed to them early in the adoption cycle. Text2Image is no longer novel enough to warrant a tweet! But what is the effect on people who were never exposed to it? For them, it's as if that novelty never existed.
Very soon, ChatGPT will recede from society's short attention span. It will fade just as GPT-3 did around 2 years ago. People forgot about it! Many never needed to forget, because they never heard of it in the first place!
Society cannot keep up with the information deluge. It takes years for new discoveries to diffuse through society. I wrote my book "Artificial Intuition" 5 years ago. I was pleased to see the concept articulated in the series "The Peripheral." amazon.com/Artificial-Int…
The problem with AI technology is that its progress is invisible. Technological progress before the 1970s was conspicuous: the difference between 1970 and 20 years earlier was visibly obvious. It's hard to see the difference between 2020 and 2000.
We only notice change when we are immersed in it. So if the chatter about #dalle and #gpt dies down, will general society even be aware of them? When this technology takes over everything, will everyone just be caught flatfooted, unaware of where it came from?
So damn fascinating that now that artificial intuition systems have solved language fluency, cognitive scientists are grasping for arguments as to why artificial fluency is not thinking! theatlantic.com/technology/arc…
Here's my problem with the explanations in the above essay. If there is something other than language fluency to thought, then what would we call this other capability? The essay proposes that it is "visual thinking": newyorker.com/magazine/2023/…
If true thinking is something other than language processing, why is there no commentary about diffusion models? Isn't something like DALL-E a kind of visual thinking? medium.com/intuitionmachi…
Curiously, very few people accept Nick Chater's "The Mind is Flat" theory. Too many overlook the mind's ability to confabulate explanations. medium.com/intuitionmachi…
Humans always carry incomplete models of reality in their heads. General intelligence is a consequence of just-in-time resolution of the inconsistencies in our mental models. We are cognitively nimble because we make stuff up!
Many cognitive scientists cannot imagine how an intuitive system (s1) may have reasoning (s2) as an emergent property. An s1 system becomes competent at s2 through the habits reinforced by cultural norms. We learn by adopting habits, not by the interpretation of instructions.
Playing with diffusion models leads to the awareness of concepts without words to describe them. Today, words don't precisely control image generation, but I expect this to improve over time. Soon, we might be expanding our vocabulary by generating #aiart!
Our interaction changes with every new version of an AI product. How one prompts in #dalle2 is different from #midjourney, each version of MidJourney requires different prompting, and each finetuned model of #stablediffusion prompts differently again. Prompting has no standards.
The constraints that drive the generation of an image depend on the relationship between many variables: the relationship of words in a prompt, the strength of the knobs, the iteration between images, the iteration between models, and the overall sequence itself.
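The tangle of variables above can be made concrete with a small sketch. The schema below is hypothetical (the field names, model names, and `guidance_scale` knob are illustrative assumptions, not any real API), but it shows how a generation session is really a dependency chain of prompts, knob settings, and image-to-image refinements:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GenerationStep:
    """One step in an iterative image-generation session (hypothetical schema)."""
    prompt: str                # the words constraining the image
    model: str                 # e.g. "stable-diffusion-v1.5" (illustrative name)
    guidance_scale: float      # "knob strength": how hard the prompt constrains output
    seed: int                  # fixes the starting noise
    init_from_step: Optional[int] = None  # index of a prior image used as img2img input

def describe_session(steps: List[GenerationStep]) -> List[str]:
    """Summarize the dependency chain: which steps iterate on which images."""
    lines = []
    for i, s in enumerate(steps):
        src = (f"refines step {s.init_from_step}"
               if s.init_from_step is not None else "from scratch")
        lines.append(f"step {i}: {s.model} @ scale {s.guidance_scale} ({src})")
    return lines
```

Change any one variable, or reorder the chain, and the final image changes: the "prompt" is really the whole session, not one string.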
Reinforcement Learning is an algorithmic representation of the 4E theory of cognition (i.e., Embodied, Embedded, Enactive, and Extended). But it is often framed from the God's-eye perspective of "Reward is Enough," a framing that contradicts the subjective nature of 4E.
2/n The flaw in RL is that the objective function does not originate from the interior of the agent. It is fabricated at the exterior by a designer. Real environments, unlike video games, do not render continuous rewards for an individual's actions.
3/n But RL has proven extremely successful (see: AlphaGo). What does an RL algorithm render explicit and obvious that an offline learning system cannot? I lucked upon this podcast with a @deepmind scientist that explains this (see the last part). braininspired.co/podcast/159/
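The point about externally fabricated objectives is easy to see in a minimal RL loop. This is a toy sketch (the environment, reward rule, and function names are invented for illustration): the reward function lives outside the agent, written by a designer, and the agent merely consumes it — it never originates its own objective.

```python
import random

def external_reward(state: int, action: int) -> float:
    """Defined by the designer, OUTSIDE the agent -- the thread's point.

    Here the designer arbitrarily decides the 'correct' action is state % 2.
    The agent has no say in this objective; it only receives the signal."""
    return 1.0 if action == state % 2 else 0.0

def run_episode(policy, steps: int = 10, seed: int = 0) -> float:
    """The agent (policy) acts; the reward is injected from the exterior."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(steps):
        state = rng.randrange(10)
        action = policy(state)
        total += external_reward(state, action)  # exterior signal, not interior drive
    return total
```

A real environment, unlike this toy one, does not hand an individual a dense reward after every action — which is exactly the flaw tweet 2/n points at.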
This is an intriguing analogy that distinguishes generative AI models from search engines. How is a fuzzy specification that is indexical (i.e., search) different from a fuzzy specification that is iconic (i.e., generative AI)?
Let me rephrase: how is a fuzzy and incomplete symbolic specification that is indexical (i.e., search) different from one that is iconic (i.e., generative AI)? The difference I'm seeking is with respect to enterprise value.
If all meaning-making is ultimately iconic, then generative AI extends a search engine by rendering new meaning onto a search result. Its added value is that it can reshape the original search query as well as modify the search result.
The extended mind hypothesis reveals something critically important that society and science ignore. Our consciousness has radically changed as a consequence of the paradigm shifts in religion and technology throughout history. 1/n
2/n Throughout history, the thought processes of the nobility were very different from those of the common man. Thus a small cadre of elites dictated what counted as acceptable public thought for the masses. But impositions don't change minds; only immersion does.
3/n When technology advances, it democratizes access. Technologies once exclusive to the elite become available to the masses. The accessibility of language, vehicles, literacy, communication, computers, and now AI significantly affects human consciousness.