Brooke N. Macnamara
Jul 24 · 49 tweets · 9 min read
Our systematic review & pre-registered meta-analysis of growth mindset interventions on academic achievement and our reply to commentaries is now out in Psychological Bulletin.

A (long) thread. psycnet.apa.org/record/2023-14…
First, some findings from the systematic review:

• studies authored by researchers with financial incentives to report positive effects were > 2.5x as likely to report positive effects

• > 90% of studies had confounds in their study design
• some studies found null results but interpreted them as significant anyway (inc. highly-cited studies)

• some studies didn’t adjust for clustering, leading them to erroneously report significant effects (inc. a highly-cited study)
• 97% of samples were not preregistered. In fact, more studies described themselves as preregistered without actually being preregistered than were genuinely preregistered.
• many studies never tested whether students’ mindsets were affected by the intervention

• of the studies that tested whether students’ mindsets were affected by the intervention, many found no evidence of a mindset change
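The clustering point above is worth unpacking: students are nested in classrooms, and ignoring that nesting shrinks standard errors and inflates significance. A minimal sketch of the standard design-effect correction, with entirely hypothetical numbers (none of these values come from the reviewed studies):

```python
import math

def effective_n(n_students: float, cluster_size: float, icc: float) -> float:
    """Effective sample size after the design effect, 1 + (m - 1) * ICC."""
    deff = 1 + (cluster_size - 1) * icc
    return n_students / deff

# Hypothetical classroom-based trial: 1,000 students in classes of 25,
# with an intraclass correlation of 0.15.
n_eff = effective_n(1000, 25, 0.15)          # ~217 effective students
se_understatement = math.sqrt(1000 / n_eff)  # SEs understated by ~2.1x
print(f"effective n = {n_eff:.0f}; unadjusted SEs too small by {se_understatement:.1f}x")
```

Treating the 1,000 students as independent would make standard errors roughly half their correct size, which is how an unadjusted analysis can report a "significant" effect that disappears once clustering is modeled.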
Meta-analysis 1: Across all studies, we found a small effect.

We tested for heterogeneity in effects by age, SES, level of challenge (risk), and how long the intervention had to take effect before the outcome measure.

No theoretically-meaningful moderators were significant.
We tested for publication bias using multiple approaches (Egger’s, Duval & Tweedie’s, PET-PEESE).

All suggested publication bias.

When correcting for publication bias, the overall effect was non-significant.
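For readers unfamiliar with the first of these tests: Egger's regression checks funnel-plot asymmetry by regressing standardized effects on precision, and an intercept far from zero signals small-study bias. A toy sketch with invented effect sizes (not data from the meta-analysis):

```python
import numpy as np

def egger_intercept(d, se):
    """Regress d_i/se_i on 1/se_i; the intercept estimates funnel asymmetry."""
    d, se = np.asarray(d, float), np.asarray(se, float)
    y = d / se    # standardized effects
    x = 1.0 / se  # precision
    slope, intercept = np.polyfit(x, y, 1)
    return intercept

# Hypothetical pattern: small (high-SE) studies report larger effects,
# the classic small-study signature of publication bias.
se = np.array([0.05, 0.08, 0.10, 0.15, 0.20, 0.30])
d  = np.array([0.02, 0.04, 0.06, 0.15, 0.25, 0.40])
print(f"Egger intercept: {egger_intercept(d, se):.2f}")  # well above 0 => asymmetry
```

When no small-study bias is present, effects do not grow with standard error and the intercept sits near zero.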
Meta-analysis 2: We next tested if any effects from growth mindset interventions were from the assumed cause—change in students’ mindsets from the intervention.
Here, we included all studies that demonstrated the intervention changed treatment students’ mindsets.
< 25% of studies demonstrated the intervention changed treatment students’ mindsets. For these studies, the overall effect was non-significant.

Again, no theoretically-meaningful moderators were significant.
Meta-analysis 3: We focused on the highest quality evidence. We aimed to only include interventions that changed students’ mindsets & met 100% best practices—e.g., no confounds, full blinding, active control group, no authors with financial COIs.
No study met these criteria.
We had to considerably lower the threshold for what was considered the highest-quality studies in the growth mindset intervention literature.

Among the highest-quality studies available, the effect on academic achievement was not significant.
We then conducted over 200 meta-analytic models examining adherence to every combination of best practice criteria. As the number of best practices adhered to increased, the number of significant models decreased.
No model was significant after correcting for publication bias.
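The "over 200 models" figure follows naturally from enumerating combinations of criteria. Assuming, purely for illustration, eight criteria (the names below are paraphrased from this thread, not the paper's exact coding scheme), there are 255 non-empty subsets:

```python
from itertools import combinations

# Hypothetical criterion labels, paraphrased from the thread.
criteria = [
    "active_control", "isolated_mindset_variable", "full_blinding",
    "no_financial_COI", "random_assignment", "a_priori_power",
    "preregistered", "no_confounds",
]

# One meta-analytic model per non-empty combination of criteria.
subsets = [c for k in range(1, len(criteria) + 1)
           for c in combinations(criteria, k)]
print(len(subsets))  # 2**8 - 1 = 255 combinations
```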
In a testament to the popularity of mindset, another meta-analysis on growth mindset interventions was submitted to PsychBull around the same time. Their interpretation of results was more favorable.

3 commentaries on the 2 meta-analyses were then accepted. psycnet.apa.org/record/2023-10…
Commentary 1: Yan & Schuetze discussed the problems with mindset theory and its measurement. They commented that our meta-analysis took a more theoretically-driven approach to examining moderators than the other meta-analysis. psycnet.apa.org/fulltext/2023-…
Commentary 2: Oyserman explained how growth mindset is intuitively appealing because it fits w culture-based assumptions. Growth mindset “feels right” and so there is a risk of adopting a less critical lens when evaluating the theory.
psycnet.apa.org/record/2023-90…
Commentary 3: Tipton et al. made 5 main claims, which we addressed in our reply: https://t.co/O92Xrlg1Xn psycnet.apa.org/doiLanding?doi…
psycnet.apa.org/record/2023-90…
Tipton et al. claim 1: The two meta-analyses differed primarily because the analytical approaches differed.

Ex. 1: They claimed we only included one effect size per study. This is false. We included multiple effect sizes per study.
Ex. 2: We separately analyzed moderators. Tipton et al. state that meta-analysts should conduct simultaneous moderator analysis. BUT, this approach was inappropriate for our dataset given the effects-to-moderators ratio. (Tipton also uses separate moderator analyses herself.)
Why did the 2 meta-analyses differ? Many reasons. Here are 3.

1st: We included 63 studies; the other meta-analysis only included a subset of studies of academic achievement (32 studies).
2nd: We specified characteristics of subgroups proposed by mindset theory to demonstrate greater treatment effects (e.g., high risk, low SES), and preregistered these characteristics; the other meta created focal groups with a mix of characteristics that differed study to study.
3rd: We aimed to reduce bias in our analyses and to evaluate bias in the literature. We preregistered hypotheses, search protocol, moderators, analyses, and a set of best practice criteria; the other meta-analysis did not preregister any component of their meta-analysis.
Our reply has more details on the differences between the two meta-analyses. Please read it here: psycnet.apa.org/record/2023-90… (Non-paywall versions of all the articles appear at the end of the thread.)
Tipton et al. claim 2: Re-analyzing our dataset with the analytic approach from the other meta-analysis leads to a different conclusion “from the exact same data set.”

BUT, Tipton et al. *changed* the dataset. These changes were inconsistent and often without explanation.
Ex. 1: Tipton et al. changed effect sizes: Sometimes they changed effects to no longer account for baseline performance and sometimes not; sometimes they included low-SES subgroups and sometimes they excluded them.
The changes appeared to favor larger effects.
Ex. 2: Tipton et al. changed at-risk statuses. Changes were inconsistent and often unexplained. E.g., of Yeager’s studies with 9th-graders transitioning to a new school, samples w smaller effects were changed to “low risk,” studies w larger effects kept the “medium risk” status.
Tipton et al.’s changes to effect sizes and risk statuses appeared to favor including larger effects for at-risk students.

Their re-analysis also introduced multiple errors.

For example,
Tipton et al. coded several studies as the same when authors’ names shared the 1st letter. E.g., they coded Peterson (2018) as the same study as Paunesku et al. (2015) and coded Schubert (2017) as the same study as Saunders (2013), erroneously increasing w/in-study heterogeneity.
What happens when we re-analyze the original dataset using Tipton et al.’s approach?
It confirms our previous results:
1) Small effect for all studies
2) No sig effect for studies w a minimal standard of evidence
3) No sig effect when examining the best available evidence
Also, though Tipton et al. stated that our moderators should have been simultaneously analyzed, they did not simultaneously analyze our set of moderators, likely because it was not possible with our dataset.
Tipton et al. claim 3: They take issue with several of our preregistered best practice criteria. Many growth mindset intervention studies failed to follow best practices.
E.g.,
• 42% of samples failed to compare their treatment to an active control group
• 94% of samples failed to isolate the key treatment variable of interest (i.e., mindset)
• 72% of samples failed to blind students, study administrators, and teachers to condition.
We should be concerned with the pattern of threats to internal validity. Studies that don’t follow best practices in design/reporting/avoiding bias may be more likely to present invalid results, skewing conclusions about the benefits of growth mindset interventions.
Tipton et al. did not comment on the above best practices.

Instead they argued against the inclusion of other best practice criteria, such as conducting an a priori power analysis and randomly assigning students to condition.
Despite seeming to argue against preregistration as a criterion, Tipton et al. imply that their own preregistered studies, e.g., Yeager et al.’s (2019) “National Learning Mindset Study,” offer the best evidence in support of growth mindset in part because they were preregistered.
BUT, the Yeager et al. (2019) “preregistration” is a document the authors wrote after analyzing a portion of the data to help “inform” the preregistration.

Analyzing data before writing the preregistration violates the fundamental purpose of the preregistration.
Tipton et al. claim 4: Authors coded as having a financial incentive to report positive effects do not have a financial incentive. E.g., they argue Dweck has no financial incentives because she divested from the for-profit company she founded that sells GM intervention products.
BUT, Dweck is registered with speakers’ bureaus where she charges $20,000-$50,000 per motivational/keynote talk on GM, is a corporate consultant, and has earned an estimated >$4 million in royalties for her book “Mindset.”
They changed multiple authors’ financial incentive statuses when they re-analyzed our data.
These changes were inconsistent, sometimes countering their own arguments.
For example,
Tipton et al. argue 2 authors do not have a financial incentive for the same reason, but only change the status of 1 in their re-analysis.
The changes to statuses Tipton et al. made appear to favor equalizing effect sizes between authors with and without financial incentives.
Tipton et al. claim 5: When re-analyzing our model of the highest quality evidence, there is a significant effect of growth mindset interventions on academic achievement.
BUT, the data Tipton et al. included in this model bears almost no resemblance to the original model.
1st, Tipton et al. dropped one of the two inclusion criteria for this model without explanation. This more than doubled the number of studies Tipton et al. included.
Average d of Tipton et al.’s added studies from this change = 0.12.
2nd, Tipton et al. reverse coded financial COIs w/o explanation. Studies where author(s) had financial incentives were coded as better; studies where authors had no financial incentives were docked in their quality rating by Tipton et al.
They removed 2 studies (ds = –0.33 and 0.00).
3rd, Tipton et al. removed a study with d = –0.68, claiming it didn’t have an a priori power analysis and so should not have been included.

But, it *did* have an a priori power analysis.
4th, Tipton et al. added two studies claiming they were preregistered and so should be included (average d = .07).

But, they were *not* preregistered.
Only 3 of the 13 studies Tipton et al. included in their “re-analysis” of this model were the same as in our model.
For 10 of 13 there was no explanation.
They excluded the negative and 0 effects.
Conclusion 1: Meta-analytic decisions should be a priori, transparent, and consistent.
Conclusion 2: Bias is a potential issue in the growth mindset intervention literature.
Significant effects appear most likely to emerge when researchers make post-hoc, selective, and problematic study design and reporting decisions.
Conclusion 3: Yan & Schuetze’s and Oyserman’s commentaries contextualize the problems in study design, reporting, and avoiding bias we found in the growth mindset intervention literature. Tipton et al.’s re-analysis illustrates these issues.
Open-access versions:
Metas
Macnamara & Burgoyne
Burnette et al. https://t.co/0Fj9ucMX2F
Commentaries
Yan & Schuetze https://t.co/OfFjSJpbO4
Oyserman https://t.co/RRR1B7pjAj
Tipton et al. https://t.co/KBObVEt4Uc
Reply
Macn. & Burg. https://t.co/pxvmPsMi6M psyarxiv.com/ba7pe
tinyurl.com/yhfxcrms
psyarxiv.com/mp84a
psyarxiv.com/pgswh
tinyurl.com/msx2x6tk
psyarxiv.com/embhs
