There is a vast difference in the architecture of a software-intensive system whose output must always be accurate and precise versus one whose output can just be good enough.
There is a vast difference in the architecture of a software-intensive system whose operation requires a hard real-time response versus one that may operate within the envelope of human patience.
There is a vast difference in the architecture of a software-intensive system that has only a few users versus one that may have millions upon millions of users with global, elastic, and chaotic operation.
The story of computing is written in its artifacts, in the organizations in which they were forged, and especially in the people who shaped them.
To inform my understanding of the stories of computing, I have studied several thousand books, here organized according to the places where computing and the human experience intersect.
As software engineers, we are descendants of the high priests of the sun god Ra.
Perhaps I should explain...
Imhotep is considered the first engineer; he lived in Egypt around the 27th century BCE and served as chancellor to the pharaoh Djoser, architect of the step pyramid, and high priest of the sun god Ra.
Mind you, being an engineer is not all fun and games.
"Do LLM understand?" is a question that yields passionate answers.
As for me and my house: no, LLMs do not reason and in fact are architecturally incapable of reasoning.
Let's unpack that.
In a recent video featuring Hinton and Ng, Ng observes that “to the extent that the [LLM] is building a world model, it’s learning some understanding of the world, but that’s just my current view”. Both go on to cast some stones at Bender and Gebru.
Hinton goes on to say that “we believe that this process of creating features of embeddings and then interactions between features is actually understanding” and then “once you’ve taken the raw data of symbol strings and then you can predict the next symbol...”
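To make the mechanism under debate concrete, here is a minimal sketch of that next-symbol loop. It is my own toy illustration, not Hinton's model and certainly not a real LLM: a bigram count table stands in for the learned embeddings and feature interactions he describes, but the core objective is the same, estimate a distribution over the next symbol and sample from it.

```python
# A toy next-symbol predictor: counts stand in for learned embeddings.
# This is an illustrative sketch, not any production LLM architecture.
from collections import Counter, defaultdict
import random

corpus = "the sun god ra the sun god ra the step pyramid of djoser".split()

# Count how often each token follows each other token.
# Wrap around to the start so every token has at least one successor.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    following[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Sample the next token in proportion to how often it followed `token`."""
    counts = following[token]
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# Generate text purely by repeated next-symbol prediction.
token = "the"
words = [token]
for _ in range(8):
    token = predict_next(token)
    words.append(token)
print(" ".join(words))
```

Nothing in that loop contains a model of truth; whether stacking enough learned features on top of it amounts to understanding is exactly where Hinton and I part ways.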
Upon reflection, I realize that I should not be so surprised or deeply disappointed that AIs such as ChatGPT distort reality by MSU (Making Shit Up).
You see, the AIs in our cameras have been doing this for years, distorting images to nudge them closer to some sort of illusory perfection: eliminating blemishes, smoothing tones, slimming the lumps and bumps.
In a manner of speaking, we’ve been conditioned to accept these subtle uses of visual MSU because we want to believe reality can be as good as we hope.