In SRS design, Anki and Quantum Country ask you to think of the answer; Duolingo and Execute Program ask you to input an answer.
I’d thought the latter was likely more effective, but annoying & slow. Surprised to see these studies found little difference in recall: andymatuschak.org/files/papers/L…
(See chapter 6, which describes the three experiments. Some limitations: targets were Swahili–English word associations; performed on smallish sample of undergrads; maximum retrieval interval of a week. This thesis is intensely interesting throughout!)
One fun oddball: at least as of Dec ’17, Quizlet presents a multiple-choice input the first time, then transitions to text input / self-graded afterwards. The theory is that recognition is easier than recall, so maybe it makes sense to “bootstrap” that way. quizlet.com/blog/selecting…
I wonder how this turned out! The cogsci as I understand it could go either way:
+: performance on initial trial strongly affects subsequent forgetting
–: “desirable difficulty”; recall promotes slower forgetting than recognition
• • •
One recent way this helped me: I think we can make inboxes (email, tasks, tabs, reading lists) less burdensome by replacing high-stakes mechanics (“close tab”) w/ low-stakes ones (“not right now”, decay). The insight comes from understanding these as queueing systems! (cont)
Inboxes only “work” if you trust how they’re drained. From a queue-processing perspective: the arrival rate must stay below the departure rate. Inbox Zero “works” by aggressively increasing the departure rate (via defer, delegate, drop)—blunt, but it ensures that departure rate > arrival rate.
This tactic requires you to make a decision about every item in the inbox. Maybe fine when the queue is small, but explicitly deferring a task imposes an emotional cost, possibly unnecessarily: “inbox zero” is only necessary if the arrival rate *always* exceeds the departure rate.
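The stability condition is easy to see in a toy simulation (a minimal sketch; the `simulate_inbox` helper, the step model, and the rates are all my illustrative assumptions, not from the thread):

```python
import random

def simulate_inbox(arrival_rate, departure_rate, steps=10_000, seed=0):
    """Toy single-server queue: each step, an item arrives with probability
    arrival_rate, and (if the inbox is non-empty) one item is processed
    with probability departure_rate. Returns the final backlog size."""
    rng = random.Random(seed)
    size = 0
    for _ in range(steps):
        if rng.random() < arrival_rate:
            size += 1
        if size > 0 and rng.random() < departure_rate:
            size -= 1
    return size

# Stable inbox: departures outpace arrivals, so the backlog stays small.
stable = simulate_inbox(arrival_rate=0.3, departure_rate=0.5)

# Unstable inbox: arrivals outpace departures; backlog grows roughly
# linearly (~0.2 items per step here), no matter how you triage.
unstable = simulate_inbox(arrival_rate=0.5, departure_rate=0.3)
```

The point of the sketch: when arrivals exceed departures, no per-item policy keeps the queue bounded, which is why Inbox Zero resorts to blunt rate-increasing moves.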
In cogsci, Marr suggests 3 levels at which a system (e.g. vision) can be analyzed: computational (the fundamental problem being solved), algorithmic (how it’s solved, abstractly), and implementation (hardware details).
It’s an interesting taxonomy for analyzing tools for thought!
e.g. for memory systems, three kinds of analysis:
* computational: the dynamics of human memory
* algorithmic: schedules which optimize learning relative to those dynamics
* implementation: details of software implementing those schedules
All important and intertwined!
One thing I like about this approach (same motivation for Marr in cogsci): it pushes you to characterize the computational task your system is performing.
e.g. if you’re designing creativity support systems, you’ll benefit from insights about what creative problem-solving *is*
In practice, now with ~5 substantive texts written in the medium, it's pretty consistent that ~2-5% of readers engage with the prompts; of those, 25-50% answer ~all of them (very length dependent); around half of those do any reviews.
What are the implications for authors and their incentives?
If you have thousands of readers, only a few tens might actually review your material over time. Writing those prompts takes a lot of effort—is it "worth it"?
It's an easier case to make for "platform knowledge" like Quantum Country, which can draw 100k's of readers.
But of course "visitor" numbers are misleading. For every 100 unique visitors an article's analytics count, it wouldn't surprise me if 80 bounce without reading much and 10+ read shallowly. So maybe this is actually reaching most of the serious readers.
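For concreteness, here's the funnel arithmetic under the rough percentages above (all figures are illustrative low-end estimates from the thread, not measured data):

```python
# Hypothetical funnel for a "platform knowledge" text with 100k visitors,
# using the rough percentages from the thread.
visitors = 100_000

bounced = int(visitors * 0.80)            # bounce without reading much
shallow = int(visitors * 0.10)            # read shallowly
serious = visitors - bounced - shallow    # ~10% serious readers

engaged = int(visitors * 0.02)            # ~2-5% engage with prompts (low end)
answer_all = int(engaged * 0.25)          # 25-50% of those answer ~all (low end)
reviewers = answer_all // 2               # about half of those do any reviews
```

So even at 100k visitors, low-end rates yield only a few hundred reviewers; at "thousands of readers" they yield the "few tens" (or fewer) the thread mentions.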
I’ve been studying dynamics of reader memory with the mnemonic medium, running experiments on interventions, etc. A big challenge has been that I'm roughly trying to understand changes in a continuous value (depth of encoding) through discrete measurements (remembered / didn’t).
I can approximate a continuous measure by looking at populations: “X% of users in situation Y remembered.” Compare that % for situations Y and Y’ to sorta measure an effect. This works reasonably well when many users are “just on the edge” of remembering, and poorly otherwise…
It’s a threshold function on the underlying distribution. Imagine that a person will remember something iff their depth-of-encoding (a hidden variable)—plus some random noise (situation)—is greater than some threshold. Our population measure can distinguish pairs near the threshold (A vs A’) but not pairs far above it (B vs B’).
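The threshold model above can be sketched in a few lines (a hypothetical sketch: the Gaussian noise, the `recall_rate` helper, and all parameter values are my assumptions for illustration):

```python
import random

def recall_rate(mean_depth, threshold=0.0, noise_sd=1.0, n=100_000, seed=1):
    """Fraction of a simulated population that recalls an item:
    recall iff depth-of-encoding + situational noise > threshold."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if mean_depth + rng.gauss(0, noise_sd) > threshold)
    return hits / n

# Near the threshold, a small shift in encoding depth visibly moves
# the measured recall % (roughly 42% vs 58% here):
a, a_prime = recall_rate(-0.2), recall_rate(0.2)

# Far above the threshold, the same-sized shift is invisible --
# both groups recall essentially always:
b, b_prime = recall_rate(3.0), recall_rate(3.4)
```

This is why the population comparison works when many users are "just on the edge" and fails otherwise: the binary measurement saturates once everyone clears the threshold.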
Team environments contain lots of activities (meetings, answering questions, reports, etc) which are quite tricky because they’re *sometimes* very valuable. That makes it easy not to notice when you’re only doing them to hide from aversion to tough creative problems.
I’ve noticed that displacement activities are much more obvious when working alone. There’s no mailing list of questions to answer, so hiding often looks more like surfing the internet, cleaning the house, etc. It’s much harder to accidentally convince yourself that that’s work!
It’s funny: if I get into a good awareness state while in the middle of some tough creative work, I can feel a noticeable “baud rate” of aversive impulses, graspings for easy escapes, etc. Many times a minute! Usually I can’t de-identify enough to see them.
One favorite detail comports with my experience: not feeling “beholden” to members, but that they “formalize” my activities—a sense of seriousness and earnest responsibility.
@craigmod Interesting to see how much more serious Craig is about promoting and enriching his membership program than I am. I wonder sometimes about how much I “leave on the table” by keeping mine at greater distance—but I’m terrified of the “cage” he describes, and I feel its proximity.
@craigmod On cynical days, I fear that almost everyone everywhere (incl me) is accidentally spending most of their time pursuing fake goals (being an artist -> “doing” a membership program), and that one must summon tremendous obstinacy, determination, and inconvenience to do otherwise!