I’ve been studying the dynamics of reader memory with the mnemonic medium, running experiments on interventions, etc. A big challenge has been that I’m trying to understand changes in a continuous value (depth of encoding) through discrete measurements (remembered / didn’t).
I can approximate a continuous measure by looking at populations: “X% of users in situation Y remembered.” Compare that % for situations Y and Y’ to sorta measure an effect. This works reasonably well when many users are “just on the edge” of remembering, and poorly otherwise…
It’s a threshold function on the underlying distribution. Imagine that a person will remember something iff their depth-of-encoding (a hidden variable)—plus some random noise (situation)—is greater than some threshold. Our population measure can distinguish A vs A’ (mass near the threshold), but not B vs B’ (mass far from it).
So it works pretty well initially, when the distribution’s spread out. e.g.: I’ve been running an RCT on retry mechanics. Of readers who forget an answer while reading an essay, about 20% more will succeed in their first review if the in-essay prompt gave them a chance to retry.
But it doesn’t work well when the distribution’s skewed to one side. e.g.: I’ve run RCTs manipulating schedules. You might think shortened intervals would help struggling readers, but they have little effect on the population measure—likely just nudging some readers closer to the threshold.
Lack of a good continuous measure makes it hard to characterize the dynamics of what’s going on, which makes it hard to make iterative improvements. I’ll need to find some good solution here. Unfortunately, response times are (AFAICT) not a strong enough predictor to use.
Incidentally, this is part of why Ebbinghaus used nonsense syllables: he was memorizing sequences he’d *never* remember on the first try in subsequent tests. But it’d take less time to re-learn well-rehearsed sequences—time savings as a continuous proxy for depth of encoding.
(Yes, I’m aware that some memory systems ask users to subjectively “grade” their memory 1-5, which would be slightly less discrete. I suspect it probably doesn’t add enough measurement resolution to be worth the user burden, but could be worth trying.)
The thing I have to keep reminding myself about a statement like this is that it does *not* mean that the mechanic causes a 20% increase in depth-of-encoding. More likely, it’s a fairly small increase for a large number of people right below the threshold.
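The latent-variable picture above can be illustrated with a tiny simulation. This is a minimal sketch, not the actual RCT model: the Gaussian shape, the threshold at 0, and the size of the shift are all illustrative assumptions.

```python
import random

random.seed(0)

def recall_rate(mean_encoding, threshold=0.0, noise_sd=1.0, n=100_000):
    """Fraction of simulated readers whose hidden depth-of-encoding,
    plus situational noise, clears the recall threshold."""
    hits = sum(
        1 for _ in range(n)
        if random.gauss(mean_encoding, noise_sd) > threshold
    )
    return hits / n

# Population centered near the threshold: a small shift in the hidden
# variable moves many readers across it, so the population measure
# registers a large effect...
near = recall_rate(mean_encoding=0.0)
near_shifted = recall_rate(mean_encoding=0.5)

# ...but the same small shift barely registers when the population
# sits far below the threshold.
far = recall_rate(mean_encoding=-3.0)
far_shifted = recall_rate(mean_encoding=-2.5)

print(f"near threshold: {near:.2f} -> {near_shifted:.2f}")  # large jump
print(f"far below:      {far:.3f} -> {far_shifted:.3f}")    # tiny jump
```

The point of the sketch: the same underlying improvement in encoding produces a ~20-point swing in one regime and almost none in the other, which is exactly why the population measure says so little about effect size on the hidden variable.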
In practice, now with ~5 substantive texts written in the medium, the pattern is pretty consistent: ~2-5% of readers engage with the prompts; of those, 25-50% answer ~all of them (very length-dependent); around half of those go on to do any reviews.
What are the implications for authors and their incentives?
If you have thousands of readers, only a few tens might actually review your material over time. Writing those prompts takes a lot of effort—is it "worth it"?
It's an easier case to make for "platform knowledge" like Quantum Country, which can draw 100k's of readers.
But of course "visitor" numbers are misleading. For every 100 unique visitors an article's analytics count, it wouldn't surprise me if 80 bounce without reading much and 10+ read shallowly. So maybe this is actually reaching most of the serious readers.
Team environments contain lots of activities (meetings, answering questions, reports, etc.) which are quite tricky because they’re *sometimes* very valuable. That makes it easy not to notice when you’re really doing them to hide from aversion to tough creative problems.
I’ve noticed that displacement activities are much more obvious when working alone. There’s no mailing list of questions to answer, so hiding often looks more like surfing the internet, cleaning the house, etc. It’s much harder to accidentally convince yourself that that’s work!
It’s funny: if I get into a good awareness state while in the middle of some tough creative work, I can feel a noticeable “baud rate” of aversive impulses, graspings for easy escapes, etc. Many times a minute! Usually I can’t de-identify enough to see that.
One favorite detail comports with my experience: not feeling “beholden” to members, but finding that they “formalize” my activities—a sense of seriousness and earnest responsibility.
@craigmod Interesting to see how much more serious Craig is about promoting and enriching his membership program than I am. I wonder sometimes about how much I “leave on the table” by keeping mine at greater distance—but I’m terrified of the “cage” he describes, and I feel its proximity.
@craigmod On cynical days, I fear that almost everyone everywhere (incl me) is accidentally spending most of their time pursuing fake goals (being an artist -> “doing” a membership program), and that one must summon tremendous obstinacy, determination, and inconvenience to do otherwise!
I was surprised by some very odd typographic choices in Tufte’s new book. Halfway through, he explains: “Systematic regularity of text paragraphs is universally inconvenient for readers… Idiosyncratic paragraphs assist memory and retrieval” A fascinating idea—I’m not sure!
The tyranny of the grid! The tyranny of text-in-boxes! The oppressive constancy of text-in-boxes-in-rectangles! It is good to see attempts to systematically break this.
“Nearly every paragraph in this book is deliberately visually unique.”
Unsurprisingly, he draws a great deal on typographic ideas from poetry, but his ideas about “text matrices” seem mostly influenced by principles of information architecture.
It’s an odd phenomenon: if I tweet anything even slightly crypto-adjacent, my inbox suddenly overflows with grifters—along with thoughtful, well-intentioned people to be sure, but the grifter quotient is quite noticeable. What produces this effect?
In general I really like the variability that comes from having a tweet amplified into different communities.
e.g. sometimes I’ll get retweeted into meditation/philosophy spaces and get lots of great responses with a wonderfully different way of seeing the world
Sometimes Weird Anonymous Twitter will notice something I’ve written, and I’ll get to bear witness to a ton of inscrutable but fascinating conversation reliant on mysterious memes!
No one's yet built a workable solution for web micropayments, but one aspirational design metaphor I like is an electricity meter.
I don't think about running my dishwasher as a transaction with a price and a receipt: I just do things, and I get a bill at the end of the month.
Prices are (fortunately!) calibrated so that the monthly bill is not usually a big deal. If it seems high, I might dig in: hey, this appliance is wasteful! Or maybe I need to turn off the mining bots or whatever. But default-batched transactions really lower friction.
It's interesting to think about monetizing web content along these lines: you just read things; small charges accumulate; you pay the bill at the end of the month and maybe change future behavior if it seems too high. You could set a cap if you wanted. Aim for effortlessness.
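The metered-reading flow above could look something like this sketch. Everything here (the class name, the prices, the cap behavior) is a hypothetical illustration of the metaphor, not any real payments API.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Meter:
    """Electricity-meter billing metaphor: small per-article charges
    accumulate silently; the reader settles one batched bill at the
    end of the month, optionally under a self-imposed cap."""
    cap_cents: Optional[int] = None                  # optional monthly spending cap
    charges: List[Tuple[str, int]] = field(default_factory=list)

    def read(self, article: str, price_cents: int) -> bool:
        """Record a charge unless it would exceed the cap."""
        if self.cap_cents is not None and self.total() + price_cents > self.cap_cents:
            return False                             # cap reached; decline the charge
        self.charges.append((article, price_cents))
        return True

    def total(self) -> int:
        return sum(price for _, price in self.charges)

    def monthly_bill(self) -> str:
        """One batched statement instead of per-article transactions."""
        lines = [f"{article}: {price}¢" for article, price in self.charges]
        lines.append(f"total: {self.total()}¢")
        return "\n".join(lines)

meter = Meter(cap_cents=500)
meter.read("essay-on-memory", 12)
meter.read("long-read", 30)
print(meter.monthly_bill())
```

The design point is that no individual read presents a price or a receipt; the only moment of attention is the monthly statement, where high totals prompt investigation rather than blocking the reading itself.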