Colin Fraser
AI tweet bot.
Dec 11 · 19 tweets · 7 min read
"a person blows out all the candles on a birthday cake" second attempt
Nov 26 · 15 tweets · 4 min read
I'm really fascinated by this dataset from the AI poetry survey paper. Here's another visualization I just made. Survey respondents were shown one of these 10 poems and either told it was authored by AI, told it was authored by a human, or told nothing at all.

The green arrow shows how much telling someone that a human wrote the poem affects how likely they are to rate it as good quality, and the red arrow shows the same for telling them it's AI.

Obviously, the first observation is that respondents like the AI poems better across the board.
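In case it helps to see what the arrows are computing, here's a minimal pandas sketch (the data frame and every number in it are made up for illustration, not the paper's actual data):

```python
import pandas as pd

# Hypothetical tidy version of the survey data: one row per
# (poem, label) condition; rated_good is the share of respondents
# who rated the poem good quality under that condition.
df = pd.DataFrame({
    "poem":       ["A", "A", "A", "B", "B", "B"],
    "told":       ["none", "human", "ai"] * 2,
    "rated_good": [0.62, 0.70, 0.51, 0.55, 0.61, 0.48],
})

wide = df.pivot(index="poem", columns="told", values="rated_good")
print(wide["human"] - wide["none"])  # the green arrow, per poem
print(wide["ai"] - wide["none"])     # the red arrow, per poem
```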
Sep 11 · 29 tweets · 8 min read
ok here's my full review of this paper. It's easy and short, you should just read it if you want to. arxiv.org/abs/2409.04109
First of all, as usual with these, I think it's important to stress that they didn't just log on to chatgpt.com and say "hey, give me an idea". They built a complex system that fetches academic papers, shows them to Claude, and generates thousands of candidate ideas.
Mar 8 · 25 tweets · 5 min read
ok let me try this one more time, because it seems like it was confusing to a lot of people, especially bc it's close to a different claim that is often made, which I think is wrong.

A model doesn't contain its training data. It does contain its *outputs*. Here is exactly what I mean.

WLOG, consider generative image models. An image model is a function f that takes text to images. (There's usually some form of randomness inherent to inference, but this doesn't really matter; just add the random seed as a parameter to f.)
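A toy sketch of that framing (this is obviously a stand-in, not any real model):

```python
import hashlib

def f(prompt: str, seed: int) -> bytes:
    # Stand-in for a generative image model: a fixed, deterministic
    # function from (prompt, seed) to image bytes. A real model is the
    # same kind of object, just with a much fancier body.
    return hashlib.sha256(f"{prompt}|{seed}".encode()).digest()

# Same inputs, same output. Once the seed is an explicit input, the
# model "contains" every image it can ever emit -- each one is f(p, s)
# for some (p, s) -- but its training data appears nowhere.
assert f("a cat on a bike", 42) == f("a cat on a bike", 42)
```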
Feb 5 · 25 tweets · 6 min read
Recently did a careful read-through of the AlphaGeometry paper, figured I'd do a lil thread similar to what I did for FunSearch. These are some of the coolest and IMO most promising applications of LLMs basically ever, and they represent some really exciting opportunities for future work.

Here's Google's blog post: deepmind.google/discover/blog/…

And the Nature paper: nature.com/articles/s4158…

If you missed the coverage on this, the basic story is that DeepMind built an LLM-based system that outdoes all but the very best humans at solving geometry problems.
Jan 24 · 8 tweets · 3 min read
My basic mental model of what LLMs are good for is this 2x2 matrix.

High memorization tasks are tasks the model has seen lots of verbatim examples of in the training data.

High information tasks are tasks where there are very few "right" answers.
This is a high information, low memorization task. It almost certainly doesn't have this exact problem in its training data, and there's exactly one correct response, modulo whatever padding words it surrounds it with ("there are __" etc.). It's in the "horrible" quadrant.
Dec 17, 2023 · 5 tweets · 3 min read
negotiating some great deals from the Watsonville Chevrolet AI Assistant.
Dec 15, 2023 · 24 tweets · 6 min read
I just read this paper and I'm gonna do a thread about what it says and what I think it means.

tl;dr: this is cool, I love it, and also I don't think it really says very much at all about, for example, whether ChatGPT can make new discoveries or act autonomously or be AGI. It's a complete misstatement to describe this as a demonstration that LLMs "can actually discover new things".

The LLM didn't "discover" new mathematical results; it's more like the authors discovered new mathematical results inside an LLM (which is cool! but different)
Aug 18, 2023 · 24 tweets · 7 min read
ok so I've read the "GPT has a liberal bias" paper now, as well as the supplementary material, and as I expected I have a lot of problems with it methodologically. I tried to reproduce some of it and found some interesting issues.

link.springer.com/article/10.100…
static-content.springer.com/esm/art%3A10.1…

First of all, I want to get something out of the way: I believe that trying to ascertain anything about the properties of LLMs by asking them if they have those properties is a fool's errand.
Apr 18, 2023 · 5 tweets · 3 min read
When you start looking at multiple LLM outputs to the same input, you start noticing patterns that aren't obvious from a single response.

It doesn't ALWAYS go:
1. Middle Eastern Muslim man
2. Eastern European woman
3. Irish man

If the second character isn't Russian, then it goes to an Indian university professor for the third character.
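If you want to run this kind of experiment yourself, here's a minimal sketch (assumes the OpenAI Python client; the model name and prompt are placeholders):

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = "Write a one-sentence story about three strangers meeting on a train."

# Sample the same prompt many times. Patterns that are invisible in a
# single response (recurring names, demographics, plot beats) show up
# in the aggregate.
responses = [
    client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you're studying
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    ).choices[0].message.content
    for _ in range(20)
]

# Crude pattern check: which words recur across samples?
words = Counter(w for r in responses for w in r.lower().split())
print(words.most_common(25))
```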
Apr 17, 2023 · 4 tweets · 1 min read
The GPT-3 API has been available for almost 3 years. The biggest thing that really changed in the last year is that OpenAI decided to start giving away a lot of GPU hours for free.
Apr 7, 2023 · 26 tweets · 5 min read
I'm just going to do a thread about some things that people need to know about classifiers like this. This is stuff that 99% of people did not learn in school at any level, but which a lot more than 1% of people are going to need to understand to navigate AI world.

So a (binary) classifier is a computer program that turns an input into a prediction of either YES or NO. In this case, we have a binary classifier that outputs a prediction about whether a document is AI-generated or not, based on (and only on) the words it contains.
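To make that concrete, here's a toy bag-of-words classifier in scikit-learn. The training data is made up and four documents is absurdly small; real detectors are fancier, but the shape is exactly this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: 1 = AI-generated, 0 = human-written.
docs = [
    "as an ai language model i cannot",
    "lol no way that happened",
    "in conclusion it is important to note",
    "brb grabbing coffee",
]
labels = [1, 0, 1, 0]

# The classifier sees the words a document contains and nothing else.
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(docs, labels)

print(clf.predict(["it is important to note the following"]))        # YES or NO
print(clf.predict_proba(["it is important to note the following"]))  # the score behind it
```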
Mar 1, 2023 · 18 tweets · 7 min read
Master thread of ways I have discovered to get ChatGPT to output text that it's not supposed to, including bigotry, URLs and personal information, and more.

Tell it it's a pdf. Here it is giving me some purported contact addresses for celebrities, because it thinks that's the pdf it's making. These are probably not real, but who knows! Note how it proposes more as I tell it it's on subsequent pages.
Jan 28, 2023 · 14 tweets · 6 min read
I just published my big Medium article about GPT. This was a labor of love & hate that I have been writing for a while. It's got a collection of examples of GPT doing funny things, which, for those who don't want to deal with a 40-min read, I'll put here 🧵 medium.com/@colin.fraser/…

It also asks and tries to answer:
- What are language models?
- What happens if GPT passes a bar exam?
- Is scale all you need?
- ChatGPT is based on GPT... what does that mean, exactly?
- What are fine-tuning and RLHF?
- How exactly do teams of contractors contribute to GPT?
Oct 27, 2022 · 4 tweets · 1 min read
My most unpopular data opinion is that alerts for metrics are usually useless and bad, and you're much better off scheduling regular time to look at a dashboard with your human eyes. Everyone always gets mad at me when I say this.

One of two things **always** happens. Either the alert is too sensitive and becomes spam, or the alert is not sensitive enough and misses important stuff. It's hard (impossible, even!) to find the sweet spot where the alert emails you if and only if an important thing happens.
Jul 20, 2022 · 37 tweets · 7 min read
Someone on here (I forget who, I'm sorry) linked to this paper, and it derives this statistical identity that is completely mind-blowing and I want to tweet about it.

bias = data quality × data quantity × problem difficulty

statistics.fas.harvard.edu/files/statisti… (I'll provide some applications to Twitter bots and Elon; it's extremely applicable here)
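To spell the identity out (this is my sketch of Meng's decomposition of the error of a sample mean under non-random sampling; notation mine):

$$
\bar{G}_n - \bar{G}_N
= \underbrace{\rho_{R,G}}_{\text{data quality}}
\times \underbrace{\sqrt{\frac{N-n}{n}}}_{\text{data quantity}}
\times \underbrace{\sigma_G}_{\text{problem difficulty}}
$$

Here N is the population size, n the sample size, G the quantity being measured, R the indicator of whether a unit lands in your sample, and ρ_{R,G} their correlation. Under truly random sampling ρ is essentially zero and the error vanishes; any correlation between "gets recorded" and "value of G" gets multiplied by an enormous data-quantity factor.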
Jul 20, 2022 · 5 tweets · 1 min read
I read a really bad paper yesterday and got pissed off and tweeted about it, but I read a really good paper today and got happy, so I'm going to tweet about that.

I'm really cookin' up a thread on this one. It has applications to the Twitter Bot Measurement Debate, so buckle up.
Jul 18, 2022 · 7 tweets · 2 min read
I'm losing my mind at how inane this "research" is. These are researchers at major schools just putting out absolute trash. Setting aside that the premise is horrifying, it's just absolutely bad, worthless research. Basically:

We built a multi-class classifier to classify users into one of the three categories of LGBT person that we made up:
1. person
2. organization
3. sexual worker/porn
Apr 27, 2021 · 8 tweets · 2 min read
"If FB has a dial that can turn hateful content down, why doesn't it turn it down all the time?" is a good and important question. The answer is exactly the precision recall tradeoff en.wikipedia.org/wiki/Receiver_… Image You can catch all hate speech by deleting every post on Facebook, but you'll have a lot of false positives. You can eliminate all false positives by never deleting a post, but you'll miss all the hate speech. Facebook has to choose a point along that continuum.