this is an extremely common view in tech. views range from "this is nothing (nonsense)" to "this is god (much worse nonsense)," but the people in the middle, who do not believe the terms are even adequately defined, have very little to say and are typically ignored.
i do not believe that the position that @mmitchell_ai or @mer__edith has staked out on LLMs is justified by the state of the evidence yet (i do not believe that a conclusion is justified), but it is certainly more reasonable than the MIRI position or even, for that matter, Sam Altman's.
every ontology has problems classifying synthetic aggregates which are never disaggregated in nature. "sentience" is almost certainly not a single property but a bundle of properties which virtually always co-occur in humans. LLMs may possess one or more of those properties, but not all of them.
if the probability of a correct prediction fell off as (1 − ε)^n, then any method which extends the length of the context vector would tend to reduce the accuracy of the prediction. we find the opposite,
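a minimal sketch of the compounding-error arithmetic behind that claim (the error rate and lengths here are made-up illustrative numbers, not anything measured):

```python
# naive compounding-error model: if each of n steps independently succeeds
# with probability (1 - eps), the whole prediction is correct with
# probability (1 - eps)**n, which shrinks as n grows.
# eps and the lengths below are illustrative, not measured values.

eps = 0.01  # assumed per-token error rate

for n in (10, 100, 1_000, 10_000):
    p_correct = (1 - eps) ** n
    print(f"n = {n:>6}: predicted accuracy = {p_correct:.3e}")

# under this model, longer context can only hurt; the observation in the
# thread is that accuracy improves with longer context, which is evidence
# against the independence assumption baked into (1 - eps)**n.
```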
the issue, i think, is that he is calculating the probability of monkeys correctly typing the entire script of Hamlet word-by-word, when in fact "Cliff's Notes version of Hamlet," "Rosencrantz and Guildenstern are Dead," and "Hamlet except his name is Dave" all satisfy the constraint.
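a rough back-of-the-envelope sketch of that counting point, with entirely made-up numbers (vocabulary size, script length, and the size of the "counts as Hamlet" set are all placeholders):

```python
from math import log10

# toy numbers, purely illustrative: a "script" of 30,000 words drawn
# uniformly at random from a 10,000-word vocabulary.
vocab = 10_000
length = 30_000

# log10 probability of typing one specific script word-by-word
log_p_exact = -length * log10(vocab)
print(f"exact Hamlet: ~10^{log_p_exact:.0f}")

# but the constraint is satisfied by a large set of distinct outputs
# (summaries, parodies, Hamlet-except-his-name-is-Dave), so the relevant
# probability is the sum over that whole set. if there were ~10^k
# acceptable variants (k is unknown; 100_000 is a made-up placeholder):
k = 100_000
log_p_any = log_p_exact + k
print(f"some acceptable variant: ~10^{log_p_any:.0f}")
```

the exact-match calculation answers a much narrower question than "did the monkeys produce Hamlet," which is why it understates the probability so badly.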