I've mentioned this study before, but Sweller et al. (1998) point out that humans are bad at complex reasoning, particularly long chains of reasoning held in working memory. We're esp. bad when we have no previous experience to reference. +
Sweller & co looked at research where chess players were asked to reproduce board configurations. Experts could reproduce the boards more accurately than novices, as long as those configs came from real games. If the experts were given random +
board configs, they fared no better than the novices. The experts were relying on prior experience stored in long-term memory to reproduce the boards. The novices had to use means-end analysis (MEA) to logic their way to reproducing the board configs. +
That MEA maxes out or exceeds the capacity of working memory, which we call cognitive overload. Working memory is limited, so once you burn it up, that's it. Your brain can't brain beyond that limit. Long-term memory comes in w/ the assist by automating chunks of knowledge.
It's why reading letters & sounds as a kid is so taxing, but as adults we read & produce words much more effortlessly. It's why driving at first feels like there's so much to pay attention to, but as we reach automaticity w/ parts of the process, it imposes less cognitive load.
Threat modeling is also a complex process. It requires reproducing the architecture, knowing where & how to recognize what is at risk & why, & how to mitigate those threats. A dev might know their architecture (part of it is more likely than all of it), but knowing what +
is at risk, why, & how to mitigate that risk is a different domain of knowledge. So pulling from the infinite chasm of "what is a threat" w/ little previous experience to draw from means they're likely engaging in MEA to figure it out, which leads to cognitive overload. +
Threat modeling, esp. for those of us w/ wild brains, can venture to all sorts of places when you ask "What is a threat?" The 1st time I did it, my answer was "Elephants". It should be no surprise that tasks requiring lots of cognitive effort tend to be avoided. +
As a dev, I can diagram the architecture, but if all the steps beyond that are amorphous & overwhelming, chances are I won't do it. Something to consider next time you're in #infosec wondering why devs don't do more threat modeling.
I've been saving this because I just did a lit review about cognitive apprenticeship in pair programming, but much of the research talked about pair programming generally.
Also, if you've recently started following me, I just started my PhD studies because I want to improve scaffolds around pair programming. Follow for more pair programming and other random learning sciences/cognitive science content... To the research!
There is research that both supports and refutes the usefulness of pair programming, more in support than against. The challenge in pair programming research is how you measure success (research design & methodological rigor) and who is doing the pairing.
My tech folks interested in learning & mentoring, this one's for you. A while back someone tweeted asking, "when do we introduce abstracts?" I explained a bit at websonthewebs.com/tackling-the-a… but now I have research to back it up.
In the article "Cognitive architecture and instructional design," Sweller, van Merriënboer, & Paas review research on the difference between novice & grandmaster chess players. When asked to re-create board configurations from previous games, chess grandmasters could do so easily.
However, when asked to re-create random board configurations, the grandmasters were no better than novice players.