The story of computing is written in its artifacts, in the organizations in which they were forged, and especially in the people who shaped them.
To inform my understanding of the stories of computing, I have studied several thousand books, here organized according to the places where computing and the human experience intersect.
As software engineers, we are descendants of the high priests of the sun god Ra.
Perhaps I should explain...
Imhotep is considered the first engineer; he lived in Egypt around the 27th century BCE and served as chancellor to the pharaoh Djoser, architect of the step pyramid, and high priest of the sun god Ra.
Mind you, being an engineer is not all fun and games.
"Do LLM understand?" is a question that yields passionate answers.
As for me and my house: no, LLMs do not reason and in fact are architectural incapable of reasoning.
Let's unpack that.
In a recent video featuring Hinton and Ng, Ng observes that “to the extent that the [LLM] is building a world model it’s conveying learning some understanding of the world, but that’s just my current view”. Both go on to cast some stones at Bender and Gebru.
Hinton goes on to say that “we believe that this process of creating features of embeddings and then interactions between features is actually understanding” and then “once you’ve taken the raw data of symbol strings and then you can predict the next symbol...”
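To make the mechanics behind that quote concrete, here is a toy sketch of the pipeline Hinton is describing: raw symbols mapped to embeddings ("features"), a single attention-style interaction between those features, and a prediction of the next symbol. Everything here is an assumption for illustration, with a made-up five-word vocabulary and untrained random weights; it is not Hinton's model nor any production LLM, and whether such a mechanism amounts to "understanding" is precisely the point in dispute.

```python
# Hypothetical, minimal sketch of symbols -> embeddings -> feature interactions
# -> next-symbol prediction. Toy vocabulary and untrained random weights.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat"]       # toy symbol set (assumption)
d = 8                                            # embedding width (assumption)
E = rng.normal(size=(len(vocab), d))             # one feature vector per symbol
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
W_out = rng.normal(size=(d, len(vocab)))         # maps features back to symbol scores

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def predict_next(tokens):
    """Return a probability for each symbol in the vocabulary being the next one."""
    x = E[[vocab.index(t) for t in tokens]]      # raw symbol strings -> embeddings
    q, k, v = x @ W_q, x @ W_k, x @ W_v          # interactions between features
    attn = softmax(q @ k.T / np.sqrt(d))         # single attention head; no causal mask
    h = attn @ v                                 # needed since only the last row is read
    return softmax(h[-1] @ W_out)                # scores for the next symbol

probs = predict_next(["the", "cat", "sat"])
print({w: round(float(p), 3) for w, p in zip(vocab, probs)})
```

With random weights the printed distribution is meaningless; in a real model those weights are the product of training on enormous corpora, which is where the argument about world models and understanding actually lives.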
Upon reflection, I realize that I should not be so surprised or deeply disappointed that AIs such as ChatGPT distort reality by MSU (Making Shit Up).
You see, the AIs in our cameras have been doing this for years, distorting images to nudge them closer to some sort of illusory perfection: eliminating blemishes, smoothing tones, slimming the lumps and bumps.
In a manner of speaking, we’ve been conditioned to accept these subtle uses of visual MSU because we want to believe reality can be as good as we hope.
The thing is, for many web-centric, software-intensive systems at global elastic scale, they don’t have to be right; they just have to be good enough to grow engagement.
Take Twitter for example: I intentionally follow 512 people because they represent a broad set of interests and beliefs that challenge me and help me grow.
Despite Twitter’s apps changing my preferences at random from time-ordered to top tweets, I still rarely see the vast majority of those 512. I typically have to go to their feeds directly to find what they’ve said.