One of the most profound books I've ever read is called "How to Read a Book". It's where I learned that there are levels to reading I'd never even imagined.
It changed my life.
Read this if you want it to change yours:
Imagine you're passionate about a topic or question, but there isn't a single book dedicated to that subject. Instead, you have a lengthy list of potentially relevant books.
How should you proceed?
According to "How to Read a Book", the answer is syntopical reading.
It is the highest level of reading.
It is a type of reading that places *you* and your question at the center of the process.
The syntopical method is simple to describe:
1. Pick the books
2. Find the relevant passages
3. Reconcile terminologies
4. Identify the core questions
5. Compare answers
6. Analyze the discussion
1. PICK THE BOOKS. Quickly skim each book. Read the:
- title
- preface
- introduction
- table of contents
- chapter headings
- book index
Scan any summaries. This should take ten minutes or less. The goal is to make a binary decision about whether to put more time into the book.
2. FIND THE RELEVANT PASSAGES. Skim the books again more slowly. Identify the specific passages that speak to your topic. Don't read them. Just skim. At this point, you're just making a binary decision on whether to read the passages more deeply later.
There are lots of ways to keep track of the passages, but one of the simplest is to create a spreadsheet where the columns are the book title, the page number, and a short, hashtag-like description of the content.
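If you'd rather keep that tracker in plain text than in a spreadsheet app, here's a minimal sketch using Python's csv module. The column names mirror the ones suggested above; the sample book titles, pages, and tags are invented placeholders:

```python
import csv
from io import StringIO

# One row per relevant passage: book title, page number, hashtag-like tags.
# These rows are illustrative only.
passages = [
    {"book": "Book A", "page": 42, "tags": "#definition #scope"},
    {"book": "Book B", "page": 17, "tags": "#counterexample"},
]

buf = StringIO()  # swap in open("passages.csv", "w", newline="") to save a file
writer = csv.DictWriter(buf, fieldnames=["book", "page", "tags"])
writer.writeheader()
writer.writerows(passages)
print(buf.getvalue())
```

The tag column is the workhorse: when you reach the later steps, filtering rows by tag gives you all the passages on one sub-question at a glance.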
3. RECONCILE TERMINOLOGIES. Use your collection of passages to find out how different writers define things and come up with a shared language for your authors. It's important that this language be neutral but comprehensive so as to be fair to all viewpoints.
Some authors will appear to agree because they use the same words for different things. Others will appear to disagree because they use different words for the same thing. Constructing a shared language is the best way to avoid confusion.
4. IDENTIFY THE CORE QUESTIONS. Any collection of writers on a single topic is usually wrestling with a particular set of big questions.
Your goal at this stage is to identify this list of shared questions for your chosen set of authors.
5. COMPARE ANSWERS. Identify how your authors answer the central questions and, in particular, how they differ. The places where they differ most will tend to be the most important questions.
6. ANALYZE THE DISCUSSION. This is where you answer the question you had at the start, the one that launched you on this journey. Having constructed a shared language and identified the big questions, describe what everybody is saying to each other and most importantly WHY.
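If it helps to picture steps 4 through 6 as a data structure, here's a toy sketch: map each shared question to each author's answer (in your neutral shared language), then surface where the authors split. The question, authors, and positions are invented placeholders:

```python
# question -> {author -> answer, phrased in the shared language from step 3}
answers = {
    "What is the central question?": {
        "Author A": "position 1",
        "Author B": "position 2",
        "Author C": "position 1",
    },
}

report = []
for question, by_author in answers.items():
    positions = set(by_author.values())
    # Questions with multiple distinct positions are where the real debate is.
    status = "disagreement" if len(positions) > 1 else "consensus"
    report.append(f"{question} -> {status} ({len(positions)} distinct positions)")

print("\n".join(report))
```

The points of disagreement this surfaces are exactly the discussion you analyze in step 6.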
I can't fully describe it, but when you get to this stage with your topic, you will feel a deep sense of accomplishment, like arriving at the end of a long hike.
Your knowledge on the topic will feel authoritative, grounded and well-earned.
You will feel like a scholar.
TO SUMMARIZE:
The process of syntopical reading is a beautiful thing. When done correctly, it is an immersive experience. It's like attending a party where all of your favorite thinkers are debating your chosen topic.
Best of all, you'll be the host.
Thanks for reading!
If you like this kind of content, follow me so you don’t miss out on upcoming threads.
You can also support me by liking and retweeting the thread.
• • •
I've been thinking about different mathematical constructs as metaphor. You can often tell what kind of mathematical scientist a person is by the flavor of their mathematical metaphors.
Frequentists use mathematical metaphors for frequency. Bayesians use mathematical metaphors for belief.
Physicists use metaphors for momentum, energy and entropy. They do this even when modeling completely unphysical things like social networks or political preferences.
I used to react very negatively to this style of mathematical modeling.
This diagram outlines a mistake I see people making where they basically assume that large language models ought to produce output that has some kind of “intent” behind it.
I don’t think there is any intent.
It’s just words. There is no purpose or plan.
Extrapolation happens at the textual level. That’s why they are called *language* models.
There is no explicit attempt to model the intent behind the text, which means any intent that can be discerned in their outputs must be there almost by accident.
An LLM’s output is like the output of an evolutionary process. It appears as if there is purpose in the design, but there mostly isn't.
To the extent there seems to be, it’s second-hand “purpose” coming from the humans that created the training data.