#Bard appears to be lightning-quick. It also seems to know Alexander's properties of living structures, and it makes a good guess at the DL technique that matches the properties.
The context window doesn't appear to be as large as GPT-4's. I'm unable to paste my entire article as input, so I can't have it self-evaluate my capability maturity model. :-(
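One workaround is to split the article into pieces that fit the window and feed them in one at a time. Here's a rough sketch of that idea; the ~4 chars/token budget and the ask_model() call are placeholders I made up for illustration, not Bard's actual interface:

```python
# Rough sketch: split a long article into chunks that fit a small context
# window, then feed each chunk in turn. The ~4 chars/token estimate and
# ask_model() are illustrative placeholders, not a real Bard API.

def chunk_text(text: str, max_tokens: int = 1500, chars_per_token: int = 4) -> list[str]:
    """Split on paragraph boundaries so each chunk stays under the budget."""
    max_chars = max_tokens * chars_per_token
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def ask_model(prompt: str) -> str:
    """Placeholder for whatever chat interface you are pasting into."""
    raise NotImplementedError

def evaluate_article(article: str) -> list[str]:
    """Feed the article piecewise and collect the replies for a final pass."""
    replies = []
    for i, chunk in enumerate(chunk_text(article), start=1):
        replies.append(ask_model(
            f"Part {i} of my article follows. Evaluate it against my "
            f"capability maturity model and keep notes for a final summary.\n\n{chunk}"
        ))
    return replies
```

Splitting on paragraph boundaries keeps each chunk coherent, but the model still loses the global view of the article, which is exactly the limitation I'm complaining about.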
But the coolest thing about Bard is this:
Bard's table manipulations are also not as robust as GPT-4's, so it's a bit rough around the edges here. I expect Google to fix these bugs.
Here's #Bard with some definitions. It has a different character than GPT-4. The second image is GPT-4. Compare the two.
#Bard will hallucinate the contents of an external URL. Here it makes everything up. GPT-4 has a similar problem. But to be fair, let's not expect these bots to just ingest any URL.
Of the language AIs I've played with, I would rank them as follows: ChatGPT, Claude, Bard, and GPT-4. This is not a rigorous evaluation, and it will depend on your use case. But I've found GPT-4 more capable than Bard in its current instantiation.
For most casual use cases, I don't think there's much of a difference between these tools. For my purposes, though, I'll need the most capable one.
Bard's state handling is not even as good or as robust as ChatGPT's. It's as if Google has yet to build up enough infrastructure around the underlying neural network.
I also found that Bard interprets instructions differently from ChatGPT or GPT-4. In other words, you have to use different words to get a similar effect. It's like you need a different vocabulary!
This makes the job of the AI whisperer a lot more difficult! It's bad enough that MJ, SD, and DALL-E all have different kinds of prompts. Now we've got to deal with the differences between GPT, Bard, Poe, and whatever gets released in the future.
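One crude way to cope is to keep per-model phrasings of the same instruction and pick the right one at call time. The sketch below uses wordings I invented purely for illustration, not measured prompt-engineering results:

```python
# Illustrative sketch only: keep per-model phrasings of the same instruction,
# since each chatbot seems to respond to different wording. The wordings here
# are invented for illustration, not measured prompt differences.

PROMPTS = {
    "gpt-4":   "Summarize the following table in three bullet points:\n{table}",
    "chatgpt": "Read this table and give me a short three-point summary:\n{table}",
    "bard":    "Here is a table. Please describe its three most important findings:\n{table}",
}

def build_prompt(model: str, table: str) -> str:
    """Pick the phrasing a given model tends to follow, with a default fallback."""
    template = PROMPTS.get(model, PROMPTS["gpt-4"])
    return template.format(table=table)

if __name__ == "__main__":
    sample = "year, revenue\n2021, 10\n2022, 14\n2023, 21"
    print(build_prompt("bard", sample))
```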
• • •
Fearless Forecast: General Intelligence is orthogonal to Agency and Consciousness. Humans have a bias that AGI must be human-like because it is all we know. This is false, and I'm preregistering this claim now.
We have proto-AGI today in GPT-4. It has no agency and no consciousness. It has artificial intuition, fluency, and reasoning. It has evaluated itself under my capability maturity model. It still has several levels to go, but I doubt it will require consciousness.… twitter.com/i/web/status/1…
It's plausible that artificial agency or artificial consciousness will be created, but I believe it would be a dangerous feature to integrate. I also think it is much more difficult to achieve. medium.com/intuitionmachi…
Many AI experts are fools for failing to recognize that GPT-4 has an understanding that exceeds most human capabilities. Instead, they would rather focus on the knowledge gaps (e.g., human hands) and not its surprising capabilities.
We should not confuse errors due to a lack of knowledge with the folksy notion of a lack of common sense. The errors we recognize in diffusion and transformer models are simply manifestations of gaps in knowledge.
It is absurd to claim a gap in understanding without a good definition of "understanding". What does it mean to have a gap in something when one's definition of that something is non-existent, or at best impoverished?
Everyone keeps complaining about the deluge of information that AIs will generate. But only a few think of the antidote: AIs that help us build the ark of meaning.
Amidst the digital storm, we yearn for respite; can an AI bastion shield us, though it too is forged of the chaos that assails us? Are we paradoxically safeguarded?
Ensnared in the fathomless sea of knowledge, we find ourselves adrift. Can we rely on a creation of our own invention, an AI Nemo's Nautilus, to be both our anchor and compass?
1/n When you play around long enough with Stable Diffusion, you recognize the models' knowledge gaps. As an example, a person's face can only be rendered correctly when it is framed in a conventional way. You can't generate a face sideways or upside down. But why?
2/n This knowledge gap is most obvious with hands. The problem with hands is that, in reality, they point in multiple directions, and the configuration of a hand can express complex emotions. Humans recognize this, but it is a knowledge gap for Stable Diffusion.
3/n When a large language model says something nonsensical, it also results from a knowledge gap. But humans say nonsensical things all the time! Even in the highest places of government, people say nonsensical things! Why? It's a gap in knowledge.
AI is an incredible tool for developing patterns. Here's a pattern it created:
This pattern originates from a metaphor, "pull the ladder up on competition", which has several variants:
Patterns are like words: their value lies in the other patterns that recur with them in practice. Thus it's an iterative process to build a pattern language for a domain. Every pattern eventually becomes a portal, a jumping-off point to other patterns.
We struggle to discern the critical transition between memorization and generalized cognition. When an LLM regurgitates text verbatim, we claim it has memorized. Yet when an LLM renders a conceptual blend, we often mistake it for cognition. This continuum is difficult to grok.
It's analogous to the transition of an icon to an index and the transition of an index to a symbol. We have trouble with signs that sit in between both classifications. The trouble is that the continuum isn't one-dimensional, but multi-dimensional.
The same continuum can also be seen with analogy-making. There are mundane analogies so common and unremarkable that we treat making them as habitual, and there are remarkable analogies that are unexpected and unique. There's a spectrum of analogies in between.
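One crude, assumption-laden way to probe where an output sits on that continuum is to measure how much of it is verbatim n-gram overlap with known text. The toy corpus and thresholds below are mine, and the measure admittedly collapses a multi-dimensional continuum into a single number:

```python
# Rough sketch: estimate how much of a model's output overlaps verbatim with a
# known reference text, as a crude one-dimensional proxy for the memorization
# end of the continuum. The reference text and examples are toys.

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """All word n-grams in the text, lowercased."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def verbatim_overlap(output: str, reference: str, n: int = 5) -> float:
    """Fraction of the output's n-grams that appear verbatim in the reference."""
    out = ngrams(output, n)
    if not out:
        return 0.0
    return len(out & ngrams(reference, n)) / len(out)

if __name__ == "__main__":
    reference = "to be or not to be that is the question"
    regurgitated = "to be or not to be that is the question"
    blended = "whether existence is worth the trouble is the old question"
    print(verbatim_overlap(regurgitated, reference))  # ~1.0 -> looks memorized
    print(verbatim_overlap(blended, reference))       # ~0.0 -> looks like a blend
```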