Delip Rao e/σ
Busy inventing the shipwreck. @Penn. Past: @johnshopkins, @UCSC, @Amazon, @Twitter || Art: #NLProc, Vision, Speech, #DeepLearning || Life: 道元, improv, running 🌈
Jul 9 • 15 tweets • 5 min read
Since many have asked, I am posting a detailed initial review in this 🧶 The unboxing experience is unlike anything else. The reader itself is in this pillow-like case. Dosa, the hoarder of pillows, wanted it for herself! 😆🐶

So, if you use the case it comes with, holding the Daylight in your arms feels like hugging a small pillow.
Mar 27 • 11 tweets • 3 min read
I have long maintained that LLMs make poor performers mediocre and average performers slightly above average, but do not change, and may even hinder, the performance of top performers.

Here’s a result from a university-level physics coding task.

arxiv.org/abs/2403.16977
An important point not to be missed: mixed-use students don't necessarily gain over no-LLM students on a sufficiently challenging task with reasonably competitive humans.
Dec 13, 2023 • 10 tweets • 4 min read
I have been testing mistral-medium and GPT-4’s code generation abilities on non-trivial problems, the kind even experienced engineers would take time to work out. I am summarizing some examples and my overall impressions in this thread: 🧶 My high-level summary is that @MistralAI 1) always does the job, 2) doesn’t waste output tokens on verbose explanations, and 3) offers concrete suggestions.

Examples 👇
Dec 6, 2023 • 14 tweets • 4 min read
NEWS: Apple just entered the AI open source arena by quietly releasing their new DL framework called MLX! It runs code natively on Apple Silicon with a single pip install and no other dependencies.

Sharing what I discovered from this initial release: github.com/ml-explore/mlx
It seems to follow the PyTorch API closely and provides many useful primitives right out of the box. For example, implementing a decoder-only transformer is about as simple as the sketch below.
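A minimal sketch, not the code from the original screenshot: it assumes the module names I found in the initial mlx.nn release (Embedding, MultiHeadAttention, LayerNorm, Linear) and omits positional embeddings for brevity.

```python
# Decoder-only transformer in MLX, mirroring the PyTorch style.
# Module/helper names are as found in the initial release; treat them as assumptions.
import mlx.core as mx
import mlx.nn as nn


class DecoderBlock(nn.Module):
    def __init__(self, dims: int, num_heads: int, mlp_dims: int):
        super().__init__()
        self.ln1 = nn.LayerNorm(dims)
        self.attn = nn.MultiHeadAttention(dims, num_heads)
        self.ln2 = nn.LayerNorm(dims)
        self.fc1 = nn.Linear(dims, mlp_dims)
        self.fc2 = nn.Linear(mlp_dims, dims)

    def __call__(self, x, mask):
        # Pre-norm causal self-attention followed by a two-layer MLP, both residual.
        h = self.ln1(x)
        x = x + self.attn(h, h, h, mask)
        h = self.ln2(x)
        x = x + self.fc2(mx.maximum(self.fc1(h), 0))
        return x


class DecoderOnlyLM(nn.Module):
    def __init__(self, vocab_size: int, num_layers: int, dims: int, num_heads: int):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dims)
        self.blocks = [DecoderBlock(dims, num_heads, 4 * dims) for _ in range(num_layers)]
        self.out_proj = nn.Linear(dims, vocab_size)

    def __call__(self, tokens):
        # Causal mask helper as shipped with MultiHeadAttention.
        mask = nn.MultiHeadAttention.create_additive_causal_mask(tokens.shape[1])
        x = self.embedding(tokens)
        mask = mask.astype(x.dtype)
        for block in self.blocks:
            x = block(x, mask)
        return self.out_proj(x)


model = DecoderOnlyLM(vocab_size=1000, num_layers=2, dims=64, num_heads=4)
logits = model(mx.array([[1, 2, 3, 4]]))  # shape: (batch=1, seq=4, vocab=1000)
print(logits.shape)
```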
Jul 21, 2023 • 25 tweets • 6 min read
This is another one of those ill-thought-out, fear-mongering pieces of scientific disinformation about LLMs, and I will explain why in this long thread. 🧶 Before you think this is some influencer^ thread sh*t: I worked explicitly on social media disinformation campaigns during the 2016 elections and have worked on countermeasures.

^ also, I'm not an influencer for anything or anyone, AFAIK
Mar 1, 2023 • 23 tweets • 8 min read
OpenAI released their ChatGPT API today. Here’s a deep dive:

1. It’s not only a new model but also a new endpoint. Notice the model name: “gpt-3.5-turbo”.

The turbo model is something the paid ChatGPT (“Plus”) users got a preview of a week or so ago. 2. What makes it “turbo” (i.e., fast) is still TBA, but my bet is it’s some kind of mixture-of-experts setup on top of other systems optimizations.
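For concreteness, here is a minimal sketch of calling the new chat endpoint with the openai Python library as it looked at the time (pre-1.0). The role-tagged message list is what distinguishes it from the older text-completion endpoint; the prompt and key are placeholders.

```python
# Calling the new chat completions endpoint (openai Python library, early 2023, pre-1.0).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what makes gpt-3.5-turbo different."},
    ],
    temperature=0.7,
)

# The reply comes back as a role-tagged message as well.
print(response["choices"][0]["message"]["content"])
```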
Mar 1, 2023 • 5 tweets • 1 min read
Despite being entrepreneurial, a core idea of capitalist philosophy that I have not come to terms with is that innovation, and the abundance resulting from it (“progress”), will lead to happiness. But I am all for capitalism-driven innovation/progress because it reduces certain kinds of suffering, even if happiness remains an elusive and unrelated goal.
Jan 27, 2023 • 4 tweets • 1 min read
I have a very niche use of ChatGPT -- knock out some code, stick it in ChatGPT, and ask it to generate Python docstrings. I want to write docstrings, but I'm too lazy. ChatGPT is very good at understanding code and summarizing it, and docstring generation is a subset of that. It is almost always perfect! Best to do this after you have written the code.
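For concreteness, this is the kind of small function I would paste in and the shape of docstring that comes back; the docstring below is hand-written to illustrate the style, not copied from ChatGPT output.

```python
# Illustrative only: a function I might paste into ChatGPT with a prompt like
# "add a Google-style docstring to this function", and the style of result.
def moving_average(values, window):
    """Compute the simple moving average of a sequence.

    Args:
        values: Sequence of numbers to average.
        window: Number of trailing elements to include in each average.

    Returns:
        A list of averages, one per position starting at index window - 1.

    Raises:
        ValueError: If window is not a positive integer.
    """
    if window <= 0:
        raise ValueError("window must be positive")
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]
```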
Jan 26, 2023 • 10 tweets • 4 min read
Yesterday, @upennnlp invited @gail_w to share her work with @yoavgo & @yahave on “Thinking Like Transformers” at our long-running Wednesday speaker series, “clunch”, and it is one of the most interesting transformer-related works I’ve listened to. Plus, Gail is superbly engaging! I remember seeing @srush_nlp (Sasha even wrote an explainer) and @yoavgo tweeting about it some time ago, when I couldn’t pay much attention to it. Surprisingly, the larger Transformer/LLM Twitter crowd has not given this work as much attention as it deserves (read on for why).
Jan 25, 2023 • 4 tweets • 2 min read
I very much welcome the tiny-paper initiative from @iclr_conf because the key ideas in most papers can be efficiently communicated in that format, but I find it super weird that ICLR packaged it as a DEI initiative. iclr.cc/Conferences/20… Go tell a cis white heterosexual low-income male from Appalachia that he’s well represented in ML research. OTOH, I check many of these “URM” boxes, but it would be silly for me to claim that.
Dec 12, 2022 • 5 tweets • 1 min read
This is a wrong (“shunyavada”) interpretation of even Theravada Buddhism. Incidentally, this interpretation of Buddhism was peddled by the three branches of Hinduism of that era when they felt threatened by the growth of Buddhism and the erosion of the existing order of power. Western Buddhism lacks the devotional aspect primarily because of the extreme individualism rooted in Western society. This also explains why Zen movements became more palatable to Westerners than any of the early flavors of Buddhism.
Dec 3, 2022 • 22 tweets • 6 min read
Despite the amazing results I’ve experienced with ChatGPT, this is not the right way to look at LLMs vs. Google search. Since several other tweets have made this equivalence and have been eager to spell doom for Google, let’s examine the details: 1. Google has more LLMs deployed internally than any place I know. If private communication is to be believed, that number is on the order of a “few dozen”. Not talking of BERT/T5-sized models here.
Sep 6, 2022 • 25 tweets • 8 min read
Language models have taken #NLProc by storm. Even if you don’t directly work in NLP, you have likely heard of and possibly used language models. But ever wonder who came up with the term “Language Model”? Recently I went on that quest, and I want to take you along with me. 🧶 I am teaching a graduate-level course on language models and transformers at @ucsc this winter, and out of curiosity, I wanted to find out who coined the term “Language Model”.
Aug 16, 2022 • 4 tweets • 1 min read
People are opposed to Flow because they cannot believe Adam Neumann is getting funded again despite all the shenanigans with WeWork. I think that’s the wrong objection. If anything, he has proven to be venture-fundable by the yardsticks VCs use. The real reason to worry about Flow is that it will squeeze already burdened renters to return 10x or more to its investors. It will use tech and data science to consolidate non-commercial properties, much like WeWork consolidated big chunks of commercial real estate, making home ownership impossible.
Jun 19, 2021 • 4 tweets • 1 min read
The hardest part of being an AI researcher is that doing good research requires getting lost in the trees and weeds while also not losing sight of the forest. It’s tempting to give up the minutiae to see forest-level changes, at which point you become more or less a spectator/chronicler. But those who are lost in the trees are often the ones reshaping the forest, since code is where much of the discovery happens, as opposed to flashes of abstract insight. Staying in the weeds, however, can keep you from seeing larger patterns and making bold strokes.
May 16, 2021 • 6 tweets • 3 min read
We might know about this from recent GNN and geometric learning papers, but it first appeared in ML in the “On Manifold Regularization” paper by Belkin, Niyogi, and Sindhwani. That paper was a milestone in semi-supervised learning but is now forgotten. newtraell.cs.uchicago.edu/files/tr_authe… The Laplace-Beltrami operator (LBO) on a Riemannian manifold is approximated by the graph Laplacian (L = D - A). The normalized graph Laplacian has connections to random walks, diffusion processes (Fokker-Planck equations), Brownian motion, and heat equations.
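A small sketch of the construction referred to above: sample points from a manifold, build a k-NN similarity graph, and form L = D - A and its symmetric normalization. The heat-kernel weights, k = 10, and the noisy-circle data are arbitrary illustrative choices.

```python
# Approximate the Laplace-Beltrami operator on a manifold with a graph Laplacian.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
# Points on a noisy circle (a 1-D manifold embedded in R^2).
theta = rng.uniform(0, 2 * np.pi, size=200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((200, 2))

# Heat-kernel affinities restricted to the k nearest neighbors.
D2 = cdist(X, X, "sqeuclidean")
sigma = np.median(D2)
A = np.exp(-D2 / sigma)
np.fill_diagonal(A, 0.0)
k = 10
mask = np.zeros_like(A, dtype=bool)
idx = np.argsort(-A, axis=1)[:, :k]
rows = np.arange(A.shape[0])[:, None]
mask[rows, idx] = True
A = np.where(mask | mask.T, A, 0.0)  # symmetrize the k-NN graph

deg = A.sum(axis=1)
L = np.diag(deg) - A                                      # combinatorial Laplacian, L = D - A
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L_sym = np.eye(len(deg)) - D_inv_sqrt @ A @ D_inv_sqrt     # normalized Laplacian

# The smallest nontrivial eigenvectors approximate LBO eigenfunctions on the circle.
eigvals = np.linalg.eigvalsh(L_sym)
print(eigvals[:5])
```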
Jun 28, 2020 • 42 tweets • 8 min read
If you’re looking for something to watch this late Saturday evening, join me in watching this documentary on Claude Shannon.

vimeo.com/315813606
Password: Shannon-ISIT (valid this weekend) Going to live-tweet some things because of the Shannon fanboy that I am 😄
Mar 5, 2020 • 13 tweets • 5 min read
Survey of #MachineLearning experimental methods (aka "how do ML folks do their experiments") at #NeurIPS2019 and #ICLR2020, a thread of results: 1. "Did you have any experiments in your paper?"

The future is empirical! If we look at NeurIPS papers historically (not just 2019), the number of theoretical submissions is dwindling, and theory is now almost relegated to conferences like UAI, which is unfortunate.
Sep 1, 2019 • 8 tweets • 2 min read
Speech synthesis as a field has an evaluation problem. The commonly used Mean Opinion Scores (MOS) reported across various papers are not comparable. To make things worse, the Deep Voice papers (from Baidu) report lower MOS for systems from Google (WaveNet, Tacotron, ...). It’s a mess! I am fairly confident that we as a field (except possibly a select few who know this experientially) do not know where we stand today with speech synthesis. There is absolutely no way you can look at two papers and conclude one is superior to the other.
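MOS is just the mean of 1-5 listener ratings, so the number carries all the idiosyncrasies of a paper's rater pool. A toy simulation with made-up rating distributions shows how two "systems" can have a MOS gap well within the noise of a typical panel, which is part of why cross-paper comparisons are shaky.

```python
# Toy MOS simulation: made-up rating distributions, not real system results.
import numpy as np

rng = np.random.default_rng(42)

def simulate_mos(probs, n_ratings=400):
    """Draw n_ratings scores on the 1-5 scale; return MOS and a bootstrap 95% CI."""
    scores = rng.choice([1, 2, 3, 4, 5], size=n_ratings, p=probs)
    boot = [rng.choice(scores, size=n_ratings, replace=True).mean() for _ in range(2000)]
    return scores.mean(), np.percentile(boot, [2.5, 97.5])

# Two hypothetical systems with slightly different rating distributions.
mos_a, ci_a = simulate_mos([0.02, 0.08, 0.20, 0.40, 0.30])
mos_b, ci_b = simulate_mos([0.02, 0.07, 0.18, 0.40, 0.33])
print(f"System A MOS: {mos_a:.2f}, 95% CI {ci_a.round(2)}")
print(f"System B MOS: {mos_b:.2f}, 95% CI {ci_b.round(2)}")
```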
Nov 30, 2018 • 15 tweets • 3 min read
Stopwords are sometimes called “non-content words”. This notion holds only in certain situations, e.g., topic classification. But in many situations the stopwords *are* the most informative content, e.g., authorship attribution. Something else is going on here, though. #nlproc First, let’s understand why someone might want to eliminate stopwords. The origins of this practice lie in information retrieval, where it became common to drop high-frequency words (stopwords) while building inverted indices.
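A toy sketch of that IR origin, using a made-up three-document corpus and stopword list: a handful of high-frequency function words dominate the posting lists of an inverted index, which is why early systems dropped them; for authorship attribution, those same frequencies become the signal.

```python
# Toy inverted index showing why IR systems dropped high-frequency words.
from collections import Counter, defaultdict

docs = {
    0: "the cat sat on the mat in the sun",
    1: "the dog sat on the mat by the door",
    2: "a cat and a dog sat in the garden",
}

# Inverted index: term -> set of document ids containing it.
index = defaultdict(set)
term_counts = Counter()
for doc_id, text in docs.items():
    for token in text.split():
        index[token].add(doc_id)
        term_counts[token] += 1

print("most frequent terms:", term_counts.most_common(5))
print("postings before:", sum(len(ids) for ids in index.values()))

# High-frequency function words carry little topical content for retrieval,
# so early IR systems simply dropped them from the index.
stopwords = {"the", "a", "on", "in", "and", "by"}
pruned = {t: ids for t, ids in index.items() if t not in stopwords}
print("postings after stopword removal:", sum(len(ids) for ids in pruned.values()))

# For authorship attribution the story flips: the relative frequencies of
# exactly these words become the feature vector that characterizes a writer.
```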