In a highly anticipated move, @Google yesterday announced Bard, a conversational AI app based on their LaMDA model.
1/5
LaMDA - Language Model for Dialogue Applications - has been around for at least a year, but due to a variety of considerations it has never been accessible to the public.
2/5
Bard will be using a lighter, more computationally efficient version of LaMDA. Bard’s rollout will proceed gradually, and in a month or so Google will release an API for it to some trusted partners and developers.
3/5
According to Google’s CEO Sundar Pichai, the size of the largest AI models has been doubling every six months, far outpacing Moore’s Law. At the current rate, the largest AI models will be 10X larger than what we have today by the end of 2024.
4/5
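As a rough back-of-the-envelope check of that figure (the ~22-month window from early 2023 to the end of 2024 is my assumption, not from the thread):

```python
# Sketch: exponential growth with a 6-month doubling period, as cited above.
def growth_factor(months, doubling_period_months=6.0):
    """Factor by which model size grows over the given number of months."""
    return 2 ** (months / doubling_period_months)

# Early 2023 to the end of 2024 is roughly 22 months (assumed):
print(round(growth_factor(22), 1))  # ~12.7x, the same order as the 10X claim
```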
We are on the precipice of an extraordinary technological revolution.
Things seem to be moving at breakneck speed in the world of generative AI and large language models. In a surprise press event yesterday, @Microsoft announced a wide integration of @OpenAI tools into two of their major products,
1/4
the Bing search engine and the Edge web browser. In particular, this appears to be the first public use anywhere of OpenAI's next-generation LLM, GPT-4. Most of the new features are still fairly limited, and you'll need to join a waitlist for full access. 2/4
This announcement brings a whole new level of interest and enthusiasm to Bing and Edge. I have used them only occasionally over the years, but these new capabilities might make me use them on a regular basis.
3/4
Deep Learning and Neural Networks have become the default approaches to Machine Learning in recent years. However, despite their spectacular success in certain domains (vision and NLP in particular),
1/5
their use across the board, for all ML problems and all datasets, is problematic to say the least. Oftentimes better and more robust results can be obtained with simpler classical ML algorithms that are easier to train and deploy.
2/5
One such “traditional” approach was recently used to reevaluate sleep scoring on a few publicly available datasets. The results were published in the journal Biomedical Signal Processing and Control.
3/5
Nothing shocked me more when I entered industry from academia than this kind of attitude. I came from an environment where teaching and learning were the norm to one where giving help to “underperformers” was viewed with disdain, as a liability.
1/5
Fortunately, not all organizations and managers are this cutthroat, but this kind of mindset is pervasive, especially at startups. There is a widespread attitude that *it’s someone else’s responsibility to do the educating*: yours, your previous job’s, your college’s, etc.
2/5
And in some ways this is a *rational* attitude to have: there are hardly *any* incentives to help others get better, as it is almost never part of your performance evaluation.
3/5
Last week @DeepMind’s research on AlphaCode - a competitive programming system - was published in Science. AlphaCode was able to beat 54% of human participants on competitive coding challenges, putting it on par with many junior-level developers.
1/4
The original announcement from DeepMind came out in February, which in the fast-paced world of AI is already ancient history.
2/4
The explosive rise of generative AI over the past few months will almost certainly have a major impact, if it hasn’t already, on future versions of AlphaCode and similar AI-enabled coding tools.
3/4
Last week @OpenAI released ChatGPT - a large language AI model that interacts with users in a natural, conversational way. The chatbot is able to answer complex questions, even in highly technical domains.
1/7
It is also able to answer follow-up questions, backtrack on wrong assumptions, and provide other detailed resources, including code fragments.
2/7
Most people in tech consider this to be the greatest technological advancement of the year. Many of us consider it even more epochal, perhaps one of the biggest turning points in history.
3/7
PyTorch 2.0 is out! This major release upgrade brings about many new features, but the main improvements are under the hood.
1/6
The three main principles behind PyTorch:
1. High-performance eager execution
2. Pythonic internals
3. Good abstractions for Distributed, Autodiff, Data loading, Accelerators, etc.
PyTorch 2.0 is fully backward compatible with the previous versions of PyTorch.
2/6
The main new feature is torch.compile, "a feature that pushes PyTorch performance to new heights and starts the move for parts of PyTorch from C++ back into Python."
3/6