THREAD on the cancellation of the 2018-19 Game Outcomes Project.

I want to thank everyone who participated in the 2018-19 Game Outcomes Project survey. We've decided to cancel it, and I wanted to write a Twitter thread to explain the reasons.
First, before we get into it: we just made a TON of new charts available online for the 2014-15 Game Outcomes Project. See the linked thread for the charts, along with an explanation of how to read them.
To be fair, the decision to cancel was not unanimous. We did get some very interesting results, and some members of the survey team felt we should go ahead and publish a summary of the findings.

But the TL;DR is this:
We want to hold the study to increasingly higher standards with each iteration. We wanted a new version of the survey that would not only build on the 2014-15 Game Outcomes Project, but would also give us significantly more robust and more defensible conclusions.
This meant we needed more total data points (more survey responses), more meaningful and insightful questions, AND better methodology and analysis of the results.
Despite an enormous amount of effort on our part -- including translating the survey into six different languages, and quite a bit of time spent redesigning the survey and collecting and analyzing the results -- the 2018-19 survey did not quite meet these goals.
Five steps forward and two steps back isn't really good enough.

We're not going to do a full write-up of the 2018-19 survey, but we wanted to discuss some of the reasons -- essentially a quick postmortem -- in this thread.
First, let's talk about what worked, and the ways the new study DID improve on the 2014-15 survey.
1. We added a new pair of outcome questions regarding the effect of the project on the dev team and the individuals on the team. After all, success is not just shipping a successful game, but growing a team of professional developers who can stick together to make a sequel ...
... or at least continue on to develop some other successful game. It also means team members shouldn't be so burned out by the end of the project that they suffer from snowballing physical and mental health crises as a result of development.
The addition of this new pair of outcome questions was incredibly useful. Essentially everything that correlated with better project outcomes correlated with team / individual growth outcomes in some interesting way.
Typically, this is what you would expect -- things that positively influenced the outcome of the project also positively influenced team growth and individual growth.

In a few cases, however, they were polar opposites. The details are beyond the scope of this thread, but ...
.... we could see clear evidence of a subset of teams that burned out badly while trying to make a great project, and factors that seemed to be correlated with better project outcomes but worse outcomes for the teams and individuals who made them.
2. We narrowed and consolidated our set of survey questions significantly. We received some feedback that the 2014-15 survey took too long to complete, and we wanted to make sure the 2018-19 iteration was shorter.
We looked at questions where there was significant overlap (two questions that correlated strongly with each other), and removed the questions with weaker correlations as they appeared to be redundant.
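This kind of redundancy pruning can be sketched in a few lines of pandas. Everything below is illustrative, not the study's actual code: the question names, the synthetic data, and the 0.8 threshold are all assumptions, chosen only to show the mechanic of flagging highly-correlated question pairs as consolidation candidates.

```python
import numpy as np
import pandas as pd

# Hypothetical Likert-scale responses (1-7) for three culture questions;
# q_empower and q_voice are deliberately near-duplicates of each other.
rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=200)
df = pd.DataFrame({
    "q_empower": base,
    "q_voice": np.clip(base + rng.integers(-1, 2, size=200), 1, 7),
    "q_crunch": rng.integers(1, 8, size=200),
})

# Flag question pairs whose pairwise correlation exceeds a threshold --
# candidates for dropping whichever member correlates less with outcomes.
corr = df.corr()
redundant = [
    (a, b)
    for i, a in enumerate(corr.columns)
    for b in corr.columns[i + 1:]
    if corr.loc[a, b] > 0.8
]
print(redundant)  # -> [('q_empower', 'q_voice')]
```

In a real pass you would then compare each flagged question's correlation with the outcome measures and keep the stronger one, which is the step the thread describes.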
3. We redesigned the survey to make it clearer & less intimidating. This, in combination with the smaller overall survey size, probably helps account for the higher completion rate (tho the total number of responses was lower, as we'll discuss later in the thread).
4. We made the answers to several questions more consistent, especially the four outcome questions (regarding the game's return on investment (ROI), critical reception (MetaCritic score / etc), project delays, and team satisfaction). This helped us get a more robust analysis.
5. We improved our approach to data analysis. The 2014-15 study allowed survey respondents to leave a certain small % of the culture questions blank (esp. some of the outcome Q's & the huge page of Likert scale questions from "1 - Strongly Disagree" to "7 - Strongly Agree").
Although this was a very small percentage of the total set of culture questions in the 2014-15 survey (roughly around 5% at most of questions skipped), some respondents did skip a few of these, and it made data analysis more difficult, as it forced us to do some subtle ...
... data substitution. Although we went to extreme efforts to ensure that we substituted data that was as accurate as possible & consistent with that respondent's other survey answers, this undoubtedly affected the final correlations slightly. We'd really rather not do that.
In the 2018-19 survey, we forced all questions to be answered. We ended up with slightly weaker correlations as a result, but the result was more defensible.
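The difference between the two approaches can be sketched as follows. The actual 2014-15 substitution method was more involved than this; imputing a skipped answer from the respondent's own mean is just one simple stand-in, and the data here is invented.

```python
import numpy as np
import pandas as pd

# Hypothetical responses where respondent 2 skipped question q1 (NaN).
df = pd.DataFrame({
    "q1": [5, 6, np.nan],
    "q2": [4, 6, 2.0],
    "q3": [5, 7, 2.0],
})

# 2014-15 style: substitute a skipped answer using the respondent's own
# answers (here, their mean across the questions they did answer -- an
# assumed stand-in for the study's actual substitution method).
imputed = df.apply(lambda row: row.fillna(row.mean()), axis=1)
print(imputed.loc[2, "q1"])  # -> 2.0

# 2018-19 style: require complete responses, so no substitution is needed.
complete = df.dropna()
print(len(complete))  # -> 2
```

The trade-off the thread describes falls out directly: forcing complete responses keeps the correlations free of imputation artifacts, at the cost of discarding (or deterring) partial respondents.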
Also, it's important to note that in almost all cases, our results closely matched the results of the 2014-15 study. At least 95% of our correlations were essentially identical in both studies in cases where we had similar input questions and similar outcome questions.
6. We kicked a few questions to the end of the survey & made them optional - such as the game's genre, and the game engine that was used to create it. These questions are interesting but showed too little correlation w/ outcomes in the 2014-15 study to be kept as mandatory ...
... questions, & kicking them to the optional questions section shortened the survey & improved our response rate. Also, in the case of the game's genre category, the data points were spread among so many categories that there was no statistical significance.
However, in both cases, the responses in 2018-19 were entirely consistent with those in 2014-15.

7. We extended the survey with a number of new questions. While not all of these were useful, a few of them showed strong correlations with outcomes. In particular:
"When the team found itself in a crisis -- missing a milestone or in a situation that threatened to miss a project milestone -- how did it typically respond to that crisis? Select ALL that apply."
"How would you describe the noise level in the working environment?"
"How would you describe the physical layout of the working space?"
"Was there any sort of review process for new work or changes to existing work in any of the following disciplines on the team? ..."
All 4 of these showed at least some interesting correlations with the project outcome questions or the two new questions around team and individual growth.
8. We looked much more closely at team size & budget in our (unpublished) analysis when teasing out the effects of various factors. The details are beyond the scope of this thread, but we were able to roughly estimate project budgets based on the questions we asked around ...
... team size & project duration. The median estimated budget was ~$19 million. We found very different dynamics for small ("indie") projects vs large ("AAA") projects across a wide range of parameters, even more so than in 2014-15.
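A rough estimate like the one described above might look like this. The cost-per-developer-month figure is an assumed placeholder, not a rate from the study, and the sample projects are invented; the point is only the shape of the calculation (team size x duration x loaded cost) and taking the median.

```python
import statistics

# Assumed fully-loaded monthly cost per developer -- a placeholder figure,
# not the rate the Game Outcomes Project actually used.
COST_PER_DEV_MONTH = 10_000

def estimate_budget(team_size: int, duration_months: int) -> int:
    """Rough project budget from team size and duration survey answers."""
    return team_size * duration_months * COST_PER_DEV_MONTH

# Toy sample of (team size, duration in months) survey responses.
projects = [(5, 18), (40, 30), (120, 36), (15, 24), (60, 30)]
budgets = [estimate_budget(size, months) for size, months in projects]
print(statistics.median(budgets))  # -> 12000000
```

Splitting the sample at the median like this is what lets the indie-vs-AAA comparison mentioned above be made without ever asking respondents for a dollar figure directly.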
------------------------------

Having said all that, there were enough problems that we ultimately had to decide not to go forward with publishing the detailed results of this new version of the Game Outcomes Project study.

The reasons:
1. We had issues getting enough responses.

This came down to issues with our promotion of the survey in the IGDA newsletter. The survey URL was mangled in the IGDA mailer we paid for, which prevented us from getting responses.
The IGDA then included us in a subsequent second mailer for free, but the URL was mangled there, too.

We didn't get the correct URL until the third mailer, but by then it was likely too late.
As a result, we got fewer total usable responses this time around (only 188, compared to 273 from the 2014-15 study). This in spite of a higher survey completion rate for those who began taking the survey, a much broader reach on Twitter via @GameOutcomes, more total $ ...
... spent promoting the survey, translating the survey into six different languages, and setting up a dedicated mailing list on MailChimp.
2. Our discussion of crunch in the 2014-15 study was successful -- probably TOO successful, to the point where it overshadowed much of the rest of our study.
The core of the Game Outcomes Project is about what factors really pay off and make games better: how to build high-performance game development teams, support them, maintain them, & set them up for success in every way possible.
Ultimately, that's what will improve your odds of getting projects done on time, and reduce the need for crunch in the first place. The best teams in the game industry have an enormous amount to teach all the rest.
Crunch was intended to be one part of the study, but it ended up becoming a distraction.
We plugged into the red-hot circuit of the fractious industry debate around crunch. This brought more visibility, but it also pulled attention away from the study's broader goals and intentions.
The nuts & bolts of learning good management skills and the craft of building high-performance teams are less sexy and less controversial, but ultimately far more important to avoiding the need for crunch in the first place and minimizing its impact when it does occur.
While there is a lot to be said for addressing the symptoms of crunch, especially where it is extreme or chronic, addressing the underlying causes is much more effective in the long run.
At the end of the day, our goal has always been to offer insights in the hope of improving the performance and professionalism of teams across the industry.
We want not only healthier individuals, but more professional teams, better management, better products, and happier customers and shareholders. And discussion around crunch has turned into a counterproductive distraction from that.
3. As mentioned in #7 above, we added a number of new questions to the study in the hope of exploring new territory. While some of these paid off, there were a few that did not:

-Whether the team had worked together before.
-Questions around short & long-term tasking that replaced the old questions around scrum, agile, & waterfall production approaches.
-Questions around the ethnic & gender composition of the team.
-Whether discussion of political topics proved to be a distraction for the team.
For the most part, these new questions did not show significant correlations with outcomes. While this lack of correlation is itself an interesting finding, we can't help but feel that we should have perhaps omitted these questions ...
... or, in the case of the production questions, we should have substituted different questions entirely that might have given us more interesting correlations.
4. We wanted to get more quantitative around the issue of crunch and overtime. After all, different people have radically different definitions for the # hours/week at which overtime becomes "crunch."
So we completely replaced the crunch questions with a new set of questions that would specifically ask about the number of hours per week of overtime in the middle and end of the project, as well as the REASONS for crunching.
The questions about the reasons for implementing crunch worked very nicely, giving us lots of interesting correlations with outcomes (the details of which are outside the scope of this thread).
However, we were surprised to find that the questions about the amount of overtime involved on each project did not offer any particularly new insights. We were left feeling that our design of the questions in this area could have been improved.
It's important to note that our data pointed to overall conclusions around crunch in the 2018-19 survey that would have been ENTIRELY CONSISTENT with the 2014-15 study.
We did not find ANYTHING in the 2018-19 study that was inconsistent with the findings of the 2014-15 study in any way or that would have contradicted any of the conclusions of the previous study.
Crunch was still clearly a net negative, and also showed an extremely strong negative correlation with the 2 new outcome questions we added regarding the after-effects of the project's development cycle on the health, growth, and integrity of the teams and team members.
5. Our survey design and analysis team shrank, and it took too long to start round 2. I was much too busy developing @AvenColony to kick off the Game Outcomes Project again until late 2018.
That 4-year delay meant a loss of momentum, and we also lost a few key survey team members in the process who just weren't available to participate the second time around.
6. The concept of the "aggregate outcome score" itself is a bit problematic.
Most game developers would agree that a game with a high ROI that ships on schedule, is well-received by critics, and leaves the team satisfied generally constitutes a more "successful" game than one without these attributes.
However, there was some discussion of the notion that simply adding

(ROI outcome)+(critical reception outcome)+(project delays outcome)+(team satisfaction outcome)

... in an unweighted linear combination did not itself add up to a robust measure of a "successful outcome."
If we end up conducting future iterations of the Game Outcomes Project, we will likely replace this with an additional outcome question such as:
"In light of the previous questions (ROI, critical reception, project delays, and team satisfaction), to what degree do you feel the project could be considered a success?"
While the answer to this question will of course still be subjective, just like many of the other outcome questions, it at least comes more directly from those responding to the survey, without the artificially calculated "aggregate outcome" ...
... which we cannot strictly defend as being based on any objective criteria other than "it's straightforward and it correlates better with everything else than using any sort of weighted combination."
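The contested aggregate can be written down in a couple of lines. This is illustrative only: the 1-7 scale and the sample values are assumptions, and the point is simply that the combination is unweighted, which is exactly the property being questioned above.

```python
def aggregate_outcome(roi: int, critical: int, delays: int, satisfaction: int) -> int:
    """Unweighted aggregate outcome score, as described in the thread.

    Each argument is assumed to be an outcome answer normalized to the
    same scale (e.g. 1-7); every outcome implicitly gets equal weight.
    """
    return roi + critical + delays + satisfaction

print(aggregate_outcome(6, 5, 3, 7))  # -> 21
```

Any weighted alternative would have to justify its coefficients, which is the defensibility problem the thread raises; the proposed fix is to drop the computed score and ask respondents the overall-success question directly.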
7. We need to do a better job discussing correlation and causation in future versions of the study.
8. We are an unpaid volunteer effort. All of us on the Game Outcomes Project team are hard-working professional game developers, and our work on this project is solely to help make the industry a better place. We have no industry or academic funding or support for this effort.
As the visibility of the Game Outcomes Project has grown, so too has the need to discuss it, defend the results, and explain what interpretations of the study can be supported by the data and which ones cannot.
Also, while the feedback on the 2014-15 study was overwhelmingly positive, it's also clear that we need to ensure that our approach the next time around is not only more robust, but robust by a wide enough margin to make it truly helpful in moving the industry forward.
We feel thankful to have been able to create the 2014-15 Game Outcomes Project study & publish the results on @gamasutra.
We'd wanted to open up a discussion of key issues in game culture, management, leadership, production, overtime, and the art of building high-performance gamedev teams.

In that regard, mission very much accomplished.
We do not rule out the idea of revisiting the survey again in the future, but we want to ensure that the next iteration takes all of the above into account, and not only builds on the 2014-15 study, but exceeds it by a wide margin.

Thank you!