Language models are among the most interesting and promising research topics in AI. After all, communicating naturally with humans has long been considered *the* ultimate goal for AI (the Turing test).
However, even though large language models in particular are very good at generating new text, it remains an open debate how much of that ability is just "rote memorization" and how much is rooted in genuine, fundamental language understanding. 2/3
The interesting paper above tries to answer some of those questions. It would seem that language models are quite capable of coming up with genuinely novel text, especially for longer passages, but they still seem to lack a basic semantic understanding of language. 3/3
2/ A year ago I was approached with a unique and exciting opportunity: I was asked to help set up the Kaggle Open Vaccine competition, where the goal was to come up with a Machine Learning model for predicting the stability of RNA molecules.
3/ This is of pressing importance for the development of mRNA vaccines. The task seemed a bit daunting, since I had no prior experience with RNA or Biophysics, but I wanted to help out in any way I could.
One of the unfortunate consequences of Kaggle no longer hosting tabular data competitions will be that the fine art of feature engineering will slowly fade away. Feature engineering is rarely, if ever, covered in ML courses and textbooks. 1/
There is very little formal research on it, especially on how to come up with domain-specific nontrivial features. These features are often far more important for all aspects of the modeling pipeline than improved algorithms. 2/
I would certainly never have realized any of this were it not for tabular Kaggle competitions. There, over many years, the community accumulated a treasure trove of incredible tricks and insights, most of them unique. 3/
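To make the point about domain-specific features a bit more concrete, here is a minimal sketch of the kind of hand-crafted tabular features the thread has in mind. The table, column names, and values are invented purely for illustration; nothing here comes from an actual competition dataset.

```python
# Minimal sketch of hand-crafted tabular features on a hypothetical
# transactions table (columns and data invented for illustration).
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "amount":      [20.0, 35.0, 5.0, 7.5, 100.0],
    "timestamp":   pd.to_datetime([
        "2021-01-03", "2021-01-20", "2021-02-01", "2021-02-02", "2021-03-15",
    ]),
})

# Per-customer aggregates: typical spend statistics.
agg = df.groupby("customer_id")["amount"].agg(["mean", "std", "max"])
agg.columns = [f"amount_{c}" for c in agg.columns]
agg = agg.reset_index()

# A ratio feature: how unusual is each transaction relative to the
# customer's own typical spend? Features like this often move the
# needle more than swapping in a fancier algorithm.
df = df.merge(agg, on="customer_id")
df["amount_over_mean"] = df["amount"] / df["amount_mean"]

# A time-based feature: days since the customer's previous transaction.
df = df.sort_values(["customer_id", "timestamp"])
df["days_since_prev"] = (
    df.groupby("customer_id")["timestamp"].diff().dt.days.fillna(-1)
)

print(df.head())
```

The interesting part in practice is not the pandas calls but deciding, from domain knowledge, which ratios, aggregates, and time deltas are worth computing in the first place.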