Webs
3 Nov, 13 tweets, 3 min read
I've been saving this because I just did a lit review about cognitive apprenticeship in pair programming, but much of the research talked about pair programming generally.
Also, if you've recently started following me, I just started my PhD studies because I want to improve scaffolds around pair programming. Follow for more pair programming and other random learning sciences/cognitive science content... To the research!
There is research that both supports and refutes the usefulness of pair programming — more supporting than refuting. The challenge in pair programming research is how you measure success (research design & methodological rigor) and who is doing the pairing?
A lot of the research is done w/ college students or middle & high school students on class assignments. I've had trouble finding research done with professional developers, let alone professionals working in a mature codebase. The even more challenging thing: how do you measure success?
Sun et al. (2019), in Search & Research on Economics of Pair Programming, model the economics algebraically using defect removal time, product size, defect density, work time, number of developers, number of pairs, and developer salary.
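To make the flavor of that kind of economics model concrete, here's a toy sketch. The cost formula, variable names, and all the numbers below are illustrative assumptions of mine — not Sun et al.'s actual equations.

```python
# Toy economics-style comparison of pair vs. solo programming.
# Cost = (development hours + defect-removal hours) * headcount * hourly salary.
# This is a hypothetical sketch, NOT the model from Sun et al. (2019).

def total_cost(work_hours, defect_count, defect_removal_hours_each,
               developers, hourly_salary):
    hours = work_hours + defect_count * defect_removal_hours_each
    return hours * developers * hourly_salary

# Made-up numbers: suppose the pair writes code a bit more slowly
# but injects far fewer defects than the solo developer.
solo = total_cost(work_hours=100, defect_count=20,
                  defect_removal_hours_each=2, developers=1, hourly_salary=50)
pair = total_cost(work_hours=115, defect_count=8,
                  defect_removal_hours_each=2, developers=2, hourly_salary=50)

print(solo)  # 7000
print(pair)  # 13100
```

Notice that with these particular (made-up) inputs the pair costs more — flip the defect rates or salary assumptions and the conclusion flips too, which is exactly why "is pairing worth it?" resists a single answer.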
Smith et al. (2018) in Long Term Effects of Pair Programming examine partnership participation, project and exam scores, withdrawal rates (the participants were college students), time between courses, GPA, and gender.
Plonka et al. (2014) look at knowledge transfer in Knowledge transfer in pair programming: an in-depth analysis. They point out that Pandey et al. (2003) suggest that spreading knowledge among team members can reduce project risk by reducing reliance on any single individual on the team.
Baheti (2002) in Assessing distributed pair programming measures productivity, software quality, and student feedback. Depending on the study, outcomes can vary because there's no consensus on how to measure whether pair programming is "successful".
I was discussing with my advisor how I plan to measure "success" when studying pair programming. One of the topics that came up was code or program quality. Some studies will create a test suite & measure time to completion, defined as getting all tests passing.
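That "time until the whole suite passes" metric can be sketched in a few lines. The log format here (timestamped test-run results) is an assumption for illustration, not any particular study's instrument.

```python
# Sketch of a "time to all tests green" success metric.
# runs: list of (timestamp_minutes, passed_count, total_count) tuples,
# one per recorded test-suite run. The format is a hypothetical example.

def time_to_all_green(runs, start_time):
    """Return elapsed time until the first fully-passing run, or None."""
    for ts, passed, total in runs:
        if passed == total:
            return ts - start_time
    return None  # the pair never got the suite fully green

# Example: the suite first goes fully green at minute 42.
log = [(10, 3, 5), (25, 4, 5), (42, 5, 5), (50, 5, 5)]
print(time_to_all_green(log, start_time=0))  # 42
```

Even this simple metric bakes in choices — a pre-written suite defines "done" for the researcher, not the pair — which is part of why measured "success" varies so much across studies.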
Pairing is a highly complex activity that does offer benefits; however, quantifying those benefits to determine whether pairing is worth it is very challenging. Because it depends. It depends on the environment, the team, the individuals, the organization, power dynamics, and so on...
Even the original tweet's claim that a "pair must deliver more work than an individual" raises questions: which individuals? Which pairs? How are we quantifying work? Because I can definitely sink a bunch of hours into writing a pretty simple program 🤣😂 Did I deliver "more work"?
Also, in my opinion, if the only dimension you're looking at is work output, that's a pretty limited view. We do know that people working in pairs (generally) find the work more enjoyable (Xinogalos et al., 2017; Celepkolu & Boyer, 2018). Happy people tend to stay where they're happy.
Anyway, I love this topic and could talk about it for a while. Can't wait to share what I find next. :)

