Linus
thought & craft • ai @ notion
Mar 1 4 tweets 1 min read
Hypothesis: information work is overwhelmingly bottlenecked on the availability of high-signal context more than on correct inference over that context. If right, this implies a higher ROI-per-flop for context building than for pure logical inference. h/t @anandnk24

Also, virtually all of the valuable context is in the tail of the information distribution. h/t @paraga
Aug 11, 2023 5 tweets 2 min read
had a chance last night to meet with some of the best minds in AI to discuss the most pressing challenge facing society today:

✨how to afford attending the ERAS TOUR ✨

after much discussion, we've arrived at a breakthrough, what we've termed the "Taylor Swift Scaling Laws" 👇

the Taylor Swift Scaling Laws (TS2L) take inspiration from Scaling Laws for transformer-based LLMs, and apply the same log-log regression methodology to model and understand components of Taylor's ticket prices.

dare I say, we may have found something equally impactful
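If you're curious what that log-log regression looks like mechanically, here's a minimal sketch; every number in it is invented purely for illustration, not data from the thread.

```python
import numpy as np

# Fit log(price) = a * log(distance) + b, i.e. price ≈ e^b * distance^a,
# the same power-law fit used for LLM scaling laws. All numbers are made up.
distance_from_stage = np.array([5, 20, 80, 150, 300])       # hypothetical rows back
resale_price_usd = np.array([4200, 1800, 950, 600, 350])    # hypothetical prices

a, b = np.polyfit(np.log(distance_from_stage), np.log(resale_price_usd), 1)
print(f"price ≈ {np.exp(b):.0f} * distance^{a:.2f}")
```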
Feb 25, 2023 4 tweets 2 min read
I built a personal chatbot from my personal corpus[1] a couple weeks ago on fully open-source LMs. On a whim I gave it iMessage.

Didn't expect the iMessage bit to matter, but it made a huge difference in how it feels to interact. Much more natural.

[1] thesephist.com/posts/monocle/

Full write up hopefully coming soon, but I'm using cosmo-xl for text generation with my own prompt, retrieving from an in-memory vector DB with sentence-transformers embeddings, and using @sendbluedotco for iMessage.
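A rough sketch of that retrieve-then-generate loop, with heavy caveats: the embedding model, prompt format, and toy corpus below are stand-ins rather than the actual setup, and the @sendbluedotco iMessage layer is omitted entirely.

```python
from sentence_transformers import SentenceTransformer, util
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Stand-in "personal corpus"; the real bot indexes blogs, notes, journals, etc.
corpus = [
    "Note: most knowledge work is bottlenecked on context, not inference.",
    "Journal: spent the evening rewriting Monocle's ranking function.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")            # assumed embedding model
corpus_emb = embedder.encode(corpus, convert_to_tensor=True)  # the "in-memory vector DB"

tok = AutoTokenizer.from_pretrained("allenai/cosmo-xl")
lm = AutoModelForSeq2SeqLM.from_pretrained("allenai/cosmo-xl")

def reply(message: str) -> str:
    # Retrieve the most relevant snippet from the corpus...
    query = embedder.encode(message, convert_to_tensor=True)
    hit = util.semantic_search(query, corpus_emb, top_k=1)[0][0]
    context = corpus[hit["corpus_id"]]
    # ...then condition the generator on it (this prompt format is a placeholder).
    prompt = f"context: {context} question: {message}"
    ids = tok(prompt, return_tensors="pt").input_ids
    out = lm.generate(ids, max_new_tokens=64)
    return tok.decode(out[0], skip_special_tokens=True)

print(reply("what have I been working on lately?"))
```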
Nov 16, 2022 11 tweets 3 min read
Small rant about LLMs and how I see them being put, rather thoughtlessly IMO, into productivity tools. 📄

TL;DR — Most knowledge work isn't a text-generation task, and your product shouldn't ship an implementation detail of LLMs as the end-user interface

stream.thesephist.com/updates/166861…

The fact that LLMs generate text is not the point. LLMs are cheap, infinitely scalable black boxes for soft, human-like reasoning. That's the headline! The text I/O mode is just the API to this reasoning genie. It's a side effect of the training paradigm.
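One way to read that in code: keep the model call behind a typed function so the product surface is a decision, not a wall of generated prose. The client, model name, and label set here are assumptions for the sketch, not a recommendation from the thread.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; any LLM API would do

def next_action_for(note: str) -> str:
    """Classify a note as 'todo', 'schedule', or 'archive'; the user never sees prose."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer with exactly one word: todo, schedule, or archive."},
            {"role": "user", "content": note},
        ],
    )
    word = resp.choices[0].message.content.strip().lower()
    # The LLM is the hidden reasoning step; the interface is a structured label.
    return word if word in {"todo", "schedule", "archive"} else "archive"

print(next_action_for("Email the vendor about the contract renewal before Friday"))
```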
Nov 2, 2022 10 tweets 3 min read
NEW DEMO!

Exploring the "length" dimension in the latent space of a language model ✨

By scrubbing up/down across the text, I'm moving this sentence up and down a direction in the embedding space corresponding to text length — producing summaries w/ precise length control (1/n)

Length is one of many attributes that I can control by traversing the latent space of this model — others include style, emotional tone, context...

Here's "adding positivity" 🌈

It's a continuous space, so attributes can all be mixed/dialed more precisely than by rote prompting
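The demo runs on my own embedding model paired with a text decoder, but the core move, estimating a "length direction" and sliding an embedding along it, can be sketched with any off-the-shelf sentence encoder. The model and example texts below are stand-ins, and the decode-back-to-text step is not reproduced here.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in encoder

short_texts = ["Launch delayed.", "Meeting moved.", "It works now."]
long_texts = [
    "The launch has been delayed by two weeks while we finish the security review.",
    "The meeting was moved to Thursday afternoon so the design team can attend.",
    "After rewriting the caching layer, the service now responds correctly under load.",
]

# Length direction ≈ mean(long embeddings) - mean(short embeddings)
length_dir = model.encode(long_texts).mean(axis=0) - model.encode(short_texts).mean(axis=0)
length_dir /= np.linalg.norm(length_dir)

emb = model.encode("The report summarizes our third-quarter results in detail.")
shorter = emb - 0.5 * length_dir   # "scrub down": nudge toward terser phrasings
longer = emb + 0.5 * length_dir    # "scrub up": nudge toward longer phrasings
# Turning `shorter`/`longer` back into text requires an embedding-to-text decoder
# like the one behind the demo; that part is not reproduced here.
```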
Sep 14, 2022 8 tweets 2 min read
Good tools admit virtuosity — they have low floors and high ceilings, and are open to beginners but support mastery, so that experts can deftly close the gap between their taste and their craft.

Prompt engineering does not admit virtuosity. We need something better.

Tools like Logic, Photoshop, or even the venerable paintbrush can be *mastered*, so that there is no ceiling imposed by the tool on how good you can get at going from the image in your mind -> output. Masters of these tools wield them as extensions of themselves.
May 22, 2022 9 tweets 3 min read
Hyperlink maximalism: everything should be a hyperlink — and what happens afterwards ⚔️

stream.thesephist.com/updates/165317…

As computers become superhuman at understanding language, I think it'll become more and more foolish to build knowledge tools that depend solely on human authors to make connections between everything you know and read.
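As a toy version of machines proposing those connections, here's one naive recipe using plain embedding similarity; the post argues the principle rather than this particular implementation, and the model and notes below are made up.

```python
from sentence_transformers import SentenceTransformer, util

# Invented notes standing in for a real knowledge base.
notes = {
    "context-bottleneck": "Knowledge work is bottlenecked on high-signal context.",
    "spatial-interfaces": "Software is stuck inside 2D rectangles on a screen.",
    "personal-search": "Monocle indexes my blogs, tweets, journals, and notes.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in model
titles = list(notes)
emb = model.encode([notes[t] for t in titles], convert_to_tensor=True)

# For each note, surface the most similar other note as a suggested hyperlink.
for i, title in enumerate(titles):
    scores = util.cos_sim(emb[i], emb)[0]
    scores[i] = -1  # ignore self-similarity
    print(title, "->", titles[int(scores.argmax())])
```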
Jan 4, 2022 10 tweets 2 min read
To expand on this a bit: some early-stage thoughts about notation and language 🧠

The core value of notation is that it takes abstractions that are useful to generalize, and makes them concrete. Natural language is expressive + ambiguous; programming languages are narrow + specific.

By making abstract ideas concrete, good notation frees up our squishy biological brains to work with those abstractions the same way we work with sticks and stones and physical objects. This is why it helps to draw/write things down when we think.

Every notation makes tradeoffs:
Jan 2, 2022 7 tweets 2 min read
2022's first email dispatch is about intelligence: before we try to augment it, we should first understand what it is. How could we define and measure intelligence? I pick on three perspectives

- Run-time adaptation
- Generalization
- Data compression

💌 linus.zone/latest

1. Intelligence is "run-time adaptation", as opposed to "compile-time adaptation" of some system. The ability to learn and adapt to new/changing environments is sort of the ultimate evolutionary advantage.

Intelligence means learning + adapting without a need for re-design.
Nov 3, 2021 7 tweets 4 min read
Many things I've read recently focus on an interesting theme of "spatial interfaces". I'm collecting them below👇

Main 💡—we interact with the physical world in a huge range of ways, but all our software lives in 2D rectangles, limiting what it can do.

stream.thesephist.com/updates/163589…

My top recommended read on this topic is simply this blog on "Spatial Interfaces".

It invites you to notice all the things that most "normal" software we use either can't do today or makes very unergonomic, because basically every app is just a 2D window.

darkblueheaven.com/spatialinterfa…
Jul 8, 2021 8 tweets 4 min read
NEW PROJECT — I made a "personal search engine" that lets me search all my blogs, tweets, journals, notes, contacts, & more at once 🚀

It's called Monocle, and features a full text search system written in Ink 👇

GitHub ⌨️ github.com/thesephist/mon…
Demo 🔍 monocle.surge.sh

One of my goals for this project was to learn about full text search systems, and how a basic FTS engine works. So I wrote an FTS engine in Ink.

The project's readme goes into a little detail about how each step works, and how it all fits together.

📖 github.com/thesephist/mon…
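The engine itself is written in Ink and documented in the readme; the sketch below (in Python, not Ink) only mirrors the general shape of a basic FTS pipeline, tokenize, build an inverted index, rank with TF-IDF, and not Monocle's exact tokenizer or scoring.

```python
import math
import re
from collections import defaultdict

def tokenize(text: str) -> list[str]:
    # Naive tokenizer: lowercase alphanumeric runs; real engines also stem, etc.
    return re.findall(r"[a-z0-9]+", text.lower())

class SearchIndex:
    def __init__(self):
        self.docs = {}                  # doc_id -> token list
        self.index = defaultdict(set)   # token -> set of doc_ids containing it

    def add(self, doc_id: str, text: str):
        tokens = tokenize(text)
        self.docs[doc_id] = tokens
        for t in tokens:
            self.index[t].add(doc_id)

    def search(self, query: str, k: int = 5):
        scores = defaultdict(float)
        for term in tokenize(query):
            matching = self.index.get(term, set())
            if not matching:
                continue
            idf = math.log(len(self.docs) / len(matching))  # rarer terms weigh more
            for doc_id in matching:
                tf = self.docs[doc_id].count(term) / len(self.docs[doc_id])
                scores[doc_id] += tf * idf
        return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

idx = SearchIndex()
idx.add("tweet-1", "exploring the latent space of a language model")
idx.add("note-2", "notes on building a personal search engine in Ink")
print(idx.search("personal search"))
```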
Sep 16, 2020 20 tweets 4 min read
1. I've been thinking/talking lots about /community building/
2. I've been meaning to write these in a blog/book but it seems like it's gonna take a while, so instead—

✨✨✨

3. Here's the sparknotes version in a mega-thread for now:

💌 How to build a community 💌

1/ A community is a group of people who come together because of some shared identity. That identity can be generational like "gen z" or geographical like "Boston". Commonly it's about some shared experience, like "We went to Cal."