During graduate school I worked for Mindset Works, a small, globally renowned growth mindset (GM) training company co-founded by Dr. Carol Dweck, the original researcher of mindsets and their impact on academic achievement. I was most familiar with Brainology, Mindset Works’ GM intervention for middle schoolers, on which I was conducting efficacy and UX research. However, in my courses I learned of other interventions, some of which looked remarkably different from Brainology yet were also labeled as Growth Mindset Interventions. I wondered: what do other GM interventions look like? Are some types, or designs, more effective than others?
Research questions
I knew what Brainology consisted of. But how did other interventions compare? What did they look like? How were they used, where, and with whom?
RQ1: Do GM interventions differ in qualitative factors not previously examined in meta-analyses, such as their instructional design or the samples tested?
RQ2: Are there certain features in a GM intervention that are associated with higher performance outcomes?
RQ3: What are design suggestions for creating a successful intervention?
Methods
I conducted a qualitative review of 19 GM interventions, spending dozens of hours examining their features, instructional materials, intervention procedures, the demographics they were used with, and their outcomes.
Process
This involved a great deal of labeling, categorizing, and sorting. Picture an iterative process of piling items into categories, refining the categories, re-piling, and refining again, until I arrived at a clear way to distinguish the successful interventions from the no-impact interventions and from the worse-outcome interventions, and to understand how design and demographic features related within each category and differed between categories.
Findings
What I found was:
RQ1: Do GM interventions differ in qualitative factors?
A: Yes. The 19 GM interventions I studied (sourced from a well-known prior meta-analysis of GM intervention efficacy) differed on 7 major dimensions that had not been fully examined before. These dimensions fall under two major Factor Categories: 1) Intervention Design Factors and 2) Student Factors. I created the labels for these dimensions, listed below.
7 Dimensions on which the 19 GM Interventions differed
Intervention Design Factors
1. Instruction of GM
2. Subcomponents
3. Presence of an advising (or similar) activity
4. Length
Student Factors
5. Country
6. Age Range
7. Sample (at-risk or normal)
RQ2: Are there features in a GM intervention that are associated with higher performance outcomes?
In other words, what are the major distinguishing factors within each outcome category?
A: My review found that:
✅ Successful Interventions (58%) featured one of the following:
Reading a brief GM-framed story
Creating an artifact (such as a letter, a testimonial, or a website) that advises another person about GM
Reading briefly about GM, then making it self-relevant through a follow-up activity
⭕️ No-Impact Interventions (no significant difference) (32%) shared one or more of these themes:
Used only a portion of Brainology
Took the form of a workshop
Often involved student factors that were not ideal for the intervention
❌ Worse-Outcome Interventions (10%):
Were both long-term, workshop-style interventions
RQ3: What are design suggestions for creating a successful GM intervention?
A: [Coming Soon! - Read the paper, linked below, to learn more.]
To be continued…