Dimension AI Technologies Profile picture
Jul 26 · 13 tweets · 2 min read
@elonmusk Grok does not think. Neither does any other "AI". LLMs aren't winning the Maths Olympiad by doing maths; they are doing it by pattern-matching. They can't even count. 🧵
@elonmusk You can argue that counting is just pattern-matching, but infants of many species show innate abilities in counting, grammar, and geometric/spatial reasoning from birth. There is no learned pattern to match.
@elonmusk Cognition seems to come equipped with these primitives from birth. An orphaned squirrel raised in isolation innately knows to collect and bury nuts. These are not learned behaviours; they are evolutionarily encoded cognitive primitives.
@elonmusk These primitives can be refined by learning, and some abilities are unlocked by it: you aren't born knowing programming, yet some people still have a talent for it, because coders apply pre-existing cognitive primitives (logical reasoning, pattern recognition, systematic thinking) to a new domain.
@elonmusk The talent is latent in their cognitive architecture until they learn programming.
@elonmusk But LLMs have no innate abilities. Every capability is trained into them; it is all pattern-matching, unless something emergent is going on, and there isn't yet much evidence of that.
@elonmusk Biological intelligence starts with foundational cognitive primitives (counting, spatial reasoning, causal thinking, social cognition) and then uses learning to refine and extend these.
@elonmusk LLMs start with nothing and try to reconstruct everything through pattern matching from text. Their "understanding" is entirely derived from patterns in training data, not intrinsic conceptual frameworks.
@elonmusk That said, if there is emergent behaviour, how would it persist between instances of an LLM? Two consecutive prompts may well be served by entirely separate instances of the model.
@elonmusk Even if something genuinely emergent happened in one prompt/response sequence, it disappears when that instance terminates.
@elonmusk There's no way for insights to compound or evolve across interactions. Each conversation starts from the same trained checkpoint. Any emergence is transient, at best.
@elonmusk What we call "emergent" behaviours in LLMs are just fancy combinations of trained patterns, not genuine emergence. True emergence would create lasting changes to the system's capabilities, but LLMs are fundamentally stateless.
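The statelessness point above can be sketched in code. This is a toy stand-in, not a real model or API: `stateless_model` below plays the role of a frozen checkpoint, a pure function of whatever transcript the client sends. Any "memory" across turns exists only because the client resends the history; nothing in one call survives into the next.

```python
def stateless_model(conversation):
    """Stand-in for a frozen LLM checkpoint: a pure function of its input.

    The reply depends only on the transcript passed in this call; the
    "weights" (this function body) never change between calls.
    """
    return f"reply-to:{len(conversation)}-messages"


# Turn 1: the client sends the history and gets a reply.
history = ["user: hello"]
reply1 = stateless_model(history)

# Turn 2, without resending history: the model has no trace that
# turn 1 ever happened.
reply_without_history = stateless_model(["user: follow-up"])

# Turn 2, done properly: the *client* stitches the state back together
# by resending the whole transcript.
history += [f"assistant: {reply1}", "user: follow-up"]
reply_with_history = stateless_model(history)

# Identical input always yields identical output: the checkpoint itself
# is unchanged, so nothing "emergent" in one call carries over.
assert stateless_model(["user: hello"]) == reply1
```

The design point this illustrates: since the checkpoint is immutable, continuity across prompts is a property of the client's transcript, not of the model, which is why any within-conversation "emergence" evaporates once the transcript is discarded.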
@elonmusk So we have pattern-matching systems trying to simulate the outputs of systems with genuine cognitive primitives.  Giant karaoke machines. [ENDS]
