06-29-2019, 01:38 PM
Don’t Miss: The Game Outcomes Project: Learning how teams succeed and fail
<div style="margin: 5px 5% 10px 5%;"><img src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail.png" width="646" height="646" title="" alt="" /></div><div><p><strong><i><small> The following blog post, unless otherwise noted, was written by a member of Gamasutras community.<br />The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company. </small></i></strong> </p>
<hr>
<p><em>This article is the first in a 5-part series.</em></p>
<p><em>The Game Outcomes Project team includes Paul Tozour, David Wegbreit, Lucien Parsons, Zhenghua “Z” Yang, NDark Teng, Eric Byron, Julianna Pillemer, Ben Weber, and Karen Buro.</em></p>
<p><strong>The Game Outcomes Project, Part 1: The Best and the Rest</strong></p>
<p>What makes the best teams so effective?</p>
<p><span>Veteran developers who have worked on many different teams often remark that they see vast cultural differences between them. </span>Some teams seem to run like clockwork, and are able to craft world-class games while apparently staying happy and well-rested. Other teams struggle mightily and work themselves to the bone in nightmarish overtime and crunch of 80-90 hour weeks for years at a time, or in the worst case, burn themselves out in a chaotic mess. Some teams are friendly, collaborative, focused, and supportive; others are unfocused and antagonistic. A few even seem to be hostile working environments or political minefields with enough sniping and backstabbing to put a game of <em>Team Fortress 2 </em>to shame.</p>
<p>What causes the differences between those teams? <span>What factors separate the best from the rest?</span></p>
<p>As an industry, are we even trying to figure that out?</p>
<p>Are we even asking the right questions?</p>
<p>These are the kinds of questions that led to the development of the Game Outcomes Project. In October and November of 2014, our team conducted a large-scale survey of hundreds of game developers. The survey included roughly 120 questions on teamwork, culture, production, and project management. We suspected that we could learn more from a side-by-side comparison of many game projects than from any single project by itself, and we were convinced that finding out what great teams do that lesser teams don’t do – and vice versa – could help everyone raise their game.</p>
<p>Our survey was inspired by several of the classic works on team effectiveness. We began with the 5-factor team effectiveness model described in the book <a href="http://www.amazon.com/Leading-Teams-Setting-Stage-Performances/dp/1578513332/ref=sr_1_1ie=UTF8&qid=1415287077&sr=8-1&keywords=Leading+teams%3A+Setting+the+stage+for+great+performances"><em>Leading Teams: Setting the Stage for Great Performances</em></a>. We also incorporated the 5-factor team effectiveness model from the famous management book <a href="http://www.amazon.com/The-Five-Dysfunctions-Team-Leadership/dp/0787960756/ref=sr_1_1?ie=UTF8&qid=1414819847&sr=8-1&keywords=the+five+dysfunctions+of+team"><em>The Five Dysfunctions of a Team: A Leadership Fable</em></a> and the 12-factor model from <a href="http://www.amazon.com/12-The-Elements-Great-Managing/dp/159562998X/ref=sr_1_3?ie=UTF8&qid=1414819902&sr=8-3&keywords=12"><em>12: The Elements of Great Managing</em></a><em>,</em> which is derived from aggregate Gallup data from 10 million employee and manager interviews. We felt certain that at least <em>one </em>of these three models would turn out to be relevant to game development in some way.</p>
<p>We also added several categories with questions specific to the game industry that we felt were likely to show interesting differences.</p>
<p>On the second page of the survey, we added a number of more generic background questions. These asked about team size, project duration, job role, game genre, target platform, financial incentives offered to the team, and the team’s production methodology.</p>
<p>We then faced the broader problem of how to quantitatively measure a game project’s outcome.</p>
<p>Ask any five game developers what constitutes “success,” and you’ll likely get five different answers. Some developers care only about the bottom line; others care far more about their game’s critical reception. Small indie developers may regard “success” as simply shipping their first game as designed regardless of revenues or critical reception, while developers working under government contract, free from any market pressures, might define “success” simply as getting it done on time (and we did receive a few such responses in our survey).</p>
<p>Lacking any objective way to define “success,” we decided to quantify each project’s outcome through four different lenses. We asked the following four outcome questions, each with a 6-point or 7-point scale:</p>
<ul>
<li><span>“To the best of your knowledge, what was the game’s financial return on investment (ROI)? In other words, what kind of profit or loss did the company developing the game take as a result of publication?”</span></li>
<li>“For the game’s primary target platform, was the project ever delayed from its original release date, or was it cancelled?”</li>
<li>“What level of critical success did the game achieve?”</li>
<li>“Finally, did the game meet its internal goals? In other words, to what extent did the team feel it achieved something at least as good as it was trying to create?”</li>
</ul>
<p>We hoped that we could correlate the answers to these four outcome questions against all the other questions in the survey to see which input factors had the most actual influence over these four outcomes. We were somewhat concerned that all of the “noise” in project outcomes (fickle consumer tastes, the moods of game reviewers, the often unpredictable challenges inherent in creating high-quality games, and various acts of God) would make it difficult to find meaningful correlations. But with enough responses, perhaps the correlations would shine through the inevitable noise.</p>
<p>We then created an aggregate “outcome” value that combined the results of all four of the outcome questions as a broader representation of a game project’s level of success. This turned out to work nicely, as it correlated very strongly with the results of each of the individual outcome questions. <span>Our </span><a href="http://intelligenceengine.blogspot.com/2014/11/game-outcomes-project-methodology-in.html">Methodology</a><span> blog page has a detailed description of how we calculated this aggregate score.</span></p>
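<p><em>For readers who want to experiment with a similar scoring approach, here is a minimal sketch of how four per-question outcome ratings could be normalized and averaged into one composite score. The column names, equal weighting, and 0-100 rescaling are illustrative assumptions for this example, not the project’s exact formula, which is described in the Methodology post linked above.</em></p>
<pre><code class="language-python"># Illustrative sketch only: combining four outcome ratings into one composite
# score. Column names, equal weights, and the 0-100 scale are assumptions,
# not the Game Outcomes Project's exact formula.
import pandas as pd

def composite_outcome(df: pd.DataFrame) -> pd.Series:
    outcome_cols = ["roi", "delay", "critical_success", "internal_goals"]  # hypothetical names
    # Rescale each question to 0..1 so the 6-point and 7-point scales are comparable.
    normalized = pd.DataFrame({
        col: (df[col] - df[col].min()) / (df[col].max() - df[col].min())
        for col in outcome_cols
    })
    # Equal-weighted average, rescaled to 0..100.
    return normalized.mean(axis=1) * 100.0

responses = pd.read_csv("game_outcomes_responses.csv")  # hypothetical file
responses["composite_outcome"] = composite_outcome(responses)
</code></pre>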
<p>We worked carefully to refine the survey through many iterations, and we solicited responses through forum posts, Gamasutra posts, Twitter, and IGDA mailers. We received 771 responses, of which 302 were completed, and 273 were related to completed projects that were not cancelled or abandoned in development.</p>
<p><strong>The Results</strong></p>
<p>So what did we find?</p>
<p>In short, a gold mine. The results were staggering.</p>
<p>More than 85% of our 120 questions showed a statistically significant correlation with our aggregate outcome score, with a <a href="http://en.wikipedia.org/wiki/P-value">p-value</a> under 0.05 (the p-value gives the probability of observing data at least as extreme as our sample if the variables were truly independent; a small p-value can therefore be interpreted as evidence against the assumption of independence). This correlation was moderate or strong in most cases (absolute value > 0.2), and most of the p-values were in fact well below 0.001. We were even able to develop a linear regression model whose predictions showed an astonishing 0.82 correlation with the combined outcome score (shown in Figure 1 below).</p>
<p><img align="left" alt height="646" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail.png" width="646"></p>
<p><span><strong>Figure 1</strong>.</span><em> Our linear regression model (horizontal axis) plotted against the composite game outcome score (vertical axis). The black diagonal line is a best-fit trend line. 273 data points are shown.</em></p>
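<p><em>As a rough illustration of the kind of analysis behind Figure 1, the sketch below fits an ordinary least-squares model to a set of survey factors and measures how well its predictions track the composite outcome score. The data file and column names are hypothetical, and the published model’s exact inputs and fitting procedure are not reproduced here.</em></p>
<pre><code class="language-python"># Sketch of a regression analysis in the spirit of Figure 1. File and column
# names are hypothetical; this is not the project's actual model.
import pandas as pd
from sklearn.linear_model import LinearRegression
from scipy.stats import pearsonr

responses = pd.read_csv("game_outcomes_responses.csv")  # hypothetical file
factor_columns = [c for c in responses.columns if c.startswith("q_")]  # assumed naming

X = responses[factor_columns].to_numpy()
y = responses["composite_outcome"].to_numpy()

model = LinearRegression().fit(X, y)
predicted = model.predict(X)

# The article reports a correlation of about 0.82 between its model and the outcome score.
r, p = pearsonr(predicted, y)
print(f"model vs. outcome correlation: r = {r:.2f}, p = {p:.3g}")
</code></pre>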
<p>To varying extents, all three of the team effectiveness models (Hackman’s “Leading Teams” model, Lencioni’s “Five Dysfunctions” model, and the Gallup “12” model) proved to correlate strongly with game project outcomes.</p>
<p>We can’t say for certain how many relevant questions we <em>didn’t </em>ask. There may well be many more questions waiting to be asked that would have shined an even stronger light on the differences between the best teams and the rest.</p>
<p><span>But the correlations and statistical significance we discovered are strong enough that it’s very clear that we have, at the very least, discovered an excellent partial answer to the question of what makes the best game development teams so successful.</span></p>
<p><strong>The Game Outcomes Project Series</strong></p>
<p><span>Due to space constraints, we’ll be releasing our analysis as a series of several articles, with the remaining 3 articles released at 1-week intervals beginning in January 2015. We’ll leave off detailed discussion of our three team effectiveness models until the second article in our series to allow these topics the thorough analysis they deserve.</span></p>
<p>This article will focus solely on introducing the survey and combing through the background questions asked on the second survey page. And although we found relatively few correlations in this part of the survey, the areas where we <em>didn’t </em>find a correlation are just as interesting as the areas where we did.</p>
<p><strong>Project Genre and Platform </strong><strong>Target(s)</strong></p>
<p>First, we asked respondents to tell us what genre of game their team had worked on. Here, the results are all across the board.</p>
<p align="center"><img alt height="878" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-1.png" width="646"></p>
<p><strong>Figure 2</strong>.<em> Game genre (vertical axis) vs. composite game outcome score (horizontal axis). Higher data points (green dots) represent more successful projects, as determined by our composite game outcome score.</em></p>
<p>We see remarkably little correlation between game genre and outcome. In the few cases where a game genre appears to skew in one direction or another, the sample size is far too small to draw any conclusions, with all but a handful of genres having fewer than 30 responses.</p>
<p>(Note that Figure 2 uses a box-and-whisker plot, as described <a href="http://www.tableausoftware.com/new-features/box-and-whisker-plot">here</a>).</p>
<p>We also asked a similar question regarding the product’s target platform(s), including responses for desktop (PC or Mac), console (Xbox/PlayStation), mobile, handheld, and/or web/Facebook. We found no statistically significant results for any of these platforms, nor for the total number of platforms a game targeted.</p>
<p><strong>Project Duration and Team Size</strong></p>
<p>We asked about the total months and years in development; based on this, we were able to calculate each project’s total development time in months:</p>
<p><img align="left" alt height="711" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-2.png" width="646"></p>
<p><span><strong>Figure 3</strong>.</span><em> Total months in development (horizontal axis) vs game outcome score (vertical). The black diagonal line is a trend line.</em></p>
<p>As you can see, there’s a small negative correlation (-0.229, using the <a href="http://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient">Spearman</a> correlation coefficient), and the p-value is 0.003. This negative correlation is not too surprising, as troubled projects are more likely to be delayed than projects that are going smoothly.</p>
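<p><em>A rank correlation like the one quoted above can be computed on any comparable dataset with a one-line SciPy call; the sketch below assumes hypothetical column and file names.</em></p>
<pre><code class="language-python"># Sketch: Spearman rank correlation between development time and outcome score.
# Column and file names are assumptions for illustration.
import pandas as pd
from scipy.stats import spearmanr

responses = pd.read_csv("game_outcomes_responses.csv")  # hypothetical file
rho, p_value = spearmanr(responses["months_in_development"], responses["composite_outcome"])
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")  # the article reports -0.229, p = 0.003
</code></pre>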
<p>We also asked about the size of the team, both in terms of the average team size and the final team size. Average team size ranged from 1 to 500 with a mean of 48.6; final team size ranged from 1 to 600 with a mean of 67.9. Both showed a slight positive correlation with project outcomes, as shown below, but in both cases the p-value is well over 0.1, so neither correlation is statistically significant or noteworthy.</p>
<p>Note that in both figures below, the horizontal axis is shown on a logarithmic scale, which makes the linear trend line appear curved.</p>
<p align="center"><img alt height="675" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-3.png" width="646"></p>
<p><strong><span>Figure</span><span> 4</span></strong><span>.</span><em> Average team size correlated against game project outcome (vertical axis).</em></p>
<p><img align="left" alt height="671" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-4.png" width="646"></p>
<p><strong><span>Figure</span><span> 5</span></strong><span>.</span><em> Final team size correlated against game project outcome (vertical axis).</em></p>
<p>We also analyzed the ratio of average to final team size, but we found no meaningful correlations here.</p>
<p><span><b>Game Engines</b></span></p>
<p><span>We asked about the technology solution used: whether it was a new engine built from scratch; core technology from a previous version of a similar game or another game in the same series; an in-house / proprietary engine (such as EA Frostbite); or an externally-developed engine (such as <a href="http://unity3d.com/">Unity</a>, <a href="https://www.unrealengine.com/what-is-unreal-engine-4">Unreal</a>, or <a href="http://cryengine.com/">CryEngine</a>).</span></p>
<p>The results are as follows:</p>
<p><img align="left" alt height="727" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-5.png" width="646"></p>
<p><strong><span>Figure 6</span></strong><span>.</span><em> Game engine / core technology used (horizontal axis) vs game project outcome (vertical axis), using a <a href="http://www.tableausoftware.com/new-features/box-and-whisker-plot">box-and-whisker</a> plot.</em></p>
<table border="0" cellpadding="0" cellspacing="0" width="96%">
<tbody>
<tr>
<td> </td>
<td>
<p><strong>Average composite score</strong></p>
</td>
<td>
<p><strong>Standard Deviation</strong></p>
</td>
<td>
<p><strong>Number of responses</strong></p>
</td>
</tr>
<tr>
<td>
<p><strong>New engine/tech</strong></p>
</td>
<td>
<p>53.3</p>
</td>
<td>
<p>18.3</p>
</td>
<td>
<p>41</p>
</td>
</tr>
<tr>
<td>
<p><strong>Engine from previous version of same or similar game</strong></p>
</td>
<td>
<p>64.8</p>
</td>
<td>
<p>15.8</p>
</td>
<td>
<p>58</p>
</td>
</tr>
<tr>
<td>
<p><strong>Internal/proprietary engine / tech (such as EA Frostbite)</strong></p>
</td>
<td>
<p>60.7</p>
</td>
<td>
<p>19.4</p>
</td>
<td>
<p>46</p>
</td>
</tr>
<tr>
<td>
<p><strong>Licensed game engine (Unreal, Unity, etc.)</strong></p>
</td>
<td>
<p>55.6</p>
</td>
<td>
<p>17.5</p>
</td>
<td>
<p>113</p>
</td>
</tr>
<tr>
<td>
<p><strong>Other</strong></p>
</td>
<td>
<p>55.5</p>
</td>
<td>
<p>19.5</p>
</td>
<td>
<p>15</p>
</td>
</tr>
</tbody>
</table>
<p>The results here are less striking the more you look at them. The highest score was for projects that used an engine from a previous version of the same game or a similar one – but that’s exactly what one would expect to be the case, given that teams in this category clearly already had a head start in production, much of the technical risk had already been stamped out, and there was probably already a veteran team in place that knew how to make that type of game!</p>
<p>We analyzed these results using a <a href="http://en.wikipedia.org/wiki/Kruskal%E2%80%93Wallis_one-way_analysis_of_variance">Kruskal-Wallis one-way analysis of variance</a>, and we found that this question was only statistically significant on account of that very option (engine from a previous version of the same game or similar), with a p-value of 0.006. Removing the data points related to this answer category caused the p-value for the remaining categories to shoot up above 0.3.</p>
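<p><em>For reference, a Kruskal-Wallis test of this kind can be run with SciPy as sketched below; the grouping column and file name are illustrative assumptions.</em></p>
<pre><code class="language-python"># Sketch: Kruskal-Wallis one-way analysis of variance across engine categories.
# Column and file names are assumptions for illustration.
import pandas as pd
from scipy.stats import kruskal

responses = pd.read_csv("game_outcomes_responses.csv")  # hypothetical file
groups = [
    group["composite_outcome"].to_numpy()
    for _, group in responses.groupby("engine_category")
]
h_stat, p_value = kruskal(*groups)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
</code></pre>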
<p>Our interpretation of the data is that the best option for the game engine <em>depends entirely on the game being made and what options are available for it,</em> and that any one of these options can be the “best” choice given the right set of circumstances. In other words, the most reasonable conclusion is that there is no universally “correct” answer separate from the actual game being made, the team making it, and the circumstances surrounding the game’s development. That’s not to say the choice of engine isn’t terrifically important, but the data clearly shows that there are plenty of successes and failures in all categories, with only minimal differences in outcomes between them, indicating that each of these four options is entirely viable in some situations.</p>
<p>We also did not ask which <em>specific</em> technology solution a respondent’s dev team was using. Future versions of the study may include questions on the specific game engine being used (Unity, Unreal, CryEngine, etc.).</p>
<p><strong>Team Experience</strong></p>
<p>We also asked a question on this page regarding the team’s average experience level, along a scale from 1 to 5 (with a ‘1’ indicating less than 2 years of average development experience, and a ‘5’ indicating a team of grizzled game industry veterans with an average of 8 or more years of experience).</p>
<p align="center"><img alt height="707" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-6.png" width="646"></p>
<p><strong>Figure 7</strong>.<em> Team experience level ranking (horizontal axis, by category listed above) mapped against game outcome score (vertical axis)</em></p>
<p>Here, we see a correlation of 0.19 (and p-value under 0.001). Note in particular the complete absence of dots in the upper-left corner (which would indicate wildly successful teams with no experience) and the lower-right corner (which would indicate very experienced teams that failed catastrophically).</p>
<p>So our study clearly confirms the common knowledge in the industry that experienced teams are significantly more likely to succeed. This is not at all surprising, but it’s reassuring that the data makes the point so clearly. And as much as we may all enjoy stories of random individuals with minimal game development experience becoming wildly successful with games developed in just a few days (as with <a href="http://en.wikipedia.org/wiki/Flappy_Bird">Flappy Bird</a>), our study shows clearly that such cases are extreme outliers.</p>
<p><a id="Incentives" name="Incentives"><strong>Surprise #1: Incentives</strong></a></p>
<p>This part of our survey also revealed two major surprises.</p>
<p>The first surprise was financial incentives. The survey included a question: “Was the team offered any financial incentives tied to the performance of the game, the team, or your performance as individuals? Select all that apply.” The question provided check boxes so that respondents could mark “yes” or “no” for any combination of financial incentives offered to the team.</p>
<p>The correlations are as follows:</p>
<p align="center"><img alt height="837" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-7.png" width="646"></p>
<p><span><strong>Figure 8</strong>.</span><em> Incentives (horizontal axis) plotted against game outcome score (vertical axis) for the five different types of financial incentives, using a <a href="http://www.tableausoftware.com/new-features/box-and-whisker-plot">box-and-whisker plot</a>. From left to right: incentives based on individual performance, team performance, royalties, incentives based on game reviews/MetaCritic scores, and miscellaneous other incentives. For each category, we split all 273 data points into those excluding the incentive (left side of each box) and those including the incentive (right side of each box).</em></p>
<p>Of these five forms of incentives, only individual incentives showed statistical significance. Game projects offering individually-tailored compensation (64 out of the 273 responses) had an average score of 63.2 (standard deviation 18.6), while those that did <em>not </em>offer individual compensation had a mean game outcome score of 56.5 (standard deviation 17.7). A <a href="http://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test">Wilcoxon rank-sum test</a> for individual incentives gave a p-value of 0.017 for this comparison.</p>
<p>All the other forms of incentives – those based on team performance, based on royalties, based on reviews and/or MetaCritic ratings, and any miscellaneous “other” incentives – show p-values that indicate that there was no meaningful correlation with project outcomes (p-values 0.33, 0.77, 0.98, and 0.90, respectively, again using a <a href="http://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test">Wilcoxon rank-sum test</a>).</p>
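<p><em>The same comparison can be sketched with SciPy’s Mann-Whitney U implementation of the rank-sum test, as below; the incentive flag and file name are illustrative assumptions.</em></p>
<pre><code class="language-python"># Sketch: rank-sum (Mann-Whitney U) comparison of outcome scores with and
# without individual incentives. Column and file names are assumptions.
import pandas as pd
from scipy.stats import mannwhitneyu

responses = pd.read_csv("game_outcomes_responses.csv")  # hypothetical file
with_incentive = responses.loc[responses["individual_incentive"] == 1, "composite_outcome"]
without_incentive = responses.loc[responses["individual_incentive"] == 0, "composite_outcome"]

u_stat, p_value = mannwhitneyu(with_incentive, without_incentive, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")  # the article reports p = 0.017 for this comparison
</code></pre>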
<p>This is a very surprising finding. Incentives are usually offered under the assumption that they are a huge motivator for a team. However, our results indicate that only individual incentives seem to have the desired effect, and even then, to a much smaller degree than expected.</p>
<p>One possible explanation is that perhaps the <a href="http://www.ted.com/talks/dan_pink_on_motivation?language=en">psychological phenomenon popularized by Dan Pink</a> may be playing itself out in the game industry – that financial rewards are (according to a great deal of recent research) usually a completely ineffective motivational tool, and actually backfire in many cases.</p>
<p>We also speculate that in the case of royalties and MetaCritic reviews in particular, the sense of helplessness that game developers can feel when dealing with factors beyond their control – such as design decisions they disagree with, or other team members falling down on the job – potentially cancels out any motivating effect that incentives may have had. With individual incentives, on the other hand, individuals may feel that their own efforts are more likely to be noticed and rewarded appropriately. However, without more data, this all remains pure speculation on our part.</p>
<p>Whatever the reason, our results seem to indicate that individually tailored incentives, such as <a href="http://en.wikipedia.org/wiki/Performance-related_pay">Pay For Performance</a> (PFP) plans, seem to achieve meaningful results where royalties, team incentives, and other forms of financial incentives do not.</p>
<p><a id="ProductionMethodologies" name="ProductionMethodologies"><strong>Surprise #2: Production Methodologies</strong></a></p>
<p>Our second big surprise was in the area of production methodologies, a topic of frequent discussion in the game industry.</p>
<p>We asked what production methodology the team used – 0 (don’t know), 1 (<a href="http://en.wikipedia.org/wiki/Waterfall_model">waterfall</a>), 2 (<a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0CD8QFjAB&url=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FAgile_software_development&ei=JlGIVJTuBcT2yQSizYGAAw&usg=AFQjCNHHClAN2UfGQdH_MuFIDxYa3T86jA&sig2=Kx9achgBwDJFKe6q415LPQ&bvm=bv.81456516,d.aWw">agile</a>), 3 (agile using “<a href="http://scrummethodology.com/">Scrum</a>”), and 4 (other/ad-hoc). We also provided a detailed description with each answer so that respondents could pick the closest match according to the description even if they didn’t know the exact name of the production methodology. The results were shocking.</p>
<p><img align="left" alt height="779" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-8.png" width="646"></p>
<p><span><strong>Figure 9</strong>.</span><em> Production methodology vs game outcome score.</em></p>
<p>Here’s a more detailed breakdown showing the mean and standard deviation for each category, along with the number of responses in each:</p>
<table border="0" cellpadding="0" cellspacing="0" width="564">
<tbody>
<tr>
<td> </td>
<td>
<p><strong>Average composite score</strong></p>
</td>
<td>
<p><strong>Standard Deviation</strong></p>
</td>
<td>
<p><strong>Number of responses</strong></p>
</td>
</tr>
<tr>
<td>
<p><strong>Unknown</strong></p>
</td>
<td>
<p>50.6</p>
</td>
<td>
<p>17.4</p>
</td>
<td>
<p>7</p>
</td>
</tr>
<tr>
<td>
<p><strong>Waterfall</strong></p>
</td>
<td>
<p>55.4</p>
</td>
<td>
<p>17.9</p>
</td>
<td>
<p>53</p>
</td>
</tr>
<tr>
<td>
<p><strong>Agile</strong></p>
</td>
<td>
<p>59.1</p>
</td>
<td>
<p>19.4</p>
</td>
<td>
<p>94</p>
</td>
</tr>
<tr>
<td>
<p><strong>Agile using Scrum</strong></p>
</td>
<td>
<p>59.7</p>
</td>
<td>
<p>16.9</p>
</td>
<td>
<p>75</p>
</td>
</tr>
<tr>
<td>
<p><strong>Other / Ad-hoc</strong></p>
</td>
<td>
<p>57.6</p>
</td>
<td>
<p>17.6</p>
</td>
<td>
<p>44</p>
</td>
</tr>
</tbody>
</table>
<p>What’s remarkable is just how tiny these differences are. <em>They almost don’t even exist.</em></p>
<p>Furthermore, a <a href="http://en.wikipedia.org/wiki/Kruskal%E2%80%93Wallis_one-way_analysis_of_variance">Kruskal-Wallis H test</a> indicates a very high p-value of 0.46 for this category, meaning that we truly can’t infer any relationship between production methodology and game outcome. Further testing of the production methodology against each of the four game project outcome factors individually gives identical results.</p>
<p>Given that production methodologies seem to be a game development holy grail for some, one would expect to see major differences, and that Scrum in particular would be far out in the lead. But these differences are tiny, with a huge amount of variation in each category, and the correlation between production methodology and outcome score has a p-value too high for us to reject the hypothesis that the two are independent. Scrum, agile, and “other” in particular are essentially indistinguishable from one another. “Unknown” is far higher than one would expect, while “Other/ad-hoc” is also remarkably high, indicating that there are effective production methodologies available that aren’t on our list (interestingly, we asked those in the “other” category for more detail, and the <a href="http://www.slideshare.net/holtt/cerny-method">Cerny method</a> was listed as the production methodology for the top-scoring game project in that category).</p>
<p>Also, unlike our question regarding game engines, we can’t simply write this off as some methodologies being more appropriate for certain kinds of teams. Production methodologies are generally intended to be universally useful, and our results show no meaningful correlations between the methodology and the game genre, team size, experience level, or any other factors.</p>
<p>This raises the question: where’s the payoff?</p>
<p>We’ve seen several significant correlations in this article, and we will describe many more throughout our study. Articles 2 and 3 in particular will illustrate many remarkable correlations between many different cultural factors and game outcomes, with more than 85% of our questions showing a statistically significant correlation.</p>
<p>So where there were significant drivers of project outcomes, they stood out very clearly. Our results were not shy. And if the specific production methodology a team uses were really vitally important, we would expect it to have shown up in the outcome correlations as well.</p>
<p>But it’s simply not there.</p>
<p>It seems that in <span>spite of all the attention paid to the subject, </span>the particular type of production methodology a team uses is not terribly important, and it is not a significant driver of outcomes. E<span>ven the much-maligned “Waterfall” approach can apparently be made to work well</span>.</p>
<p><span>Our third article will detail a number of additional questions we asked around production that give some hints as to what aspects of production actually impact project outcomes regardless of the specific methodology the team uses — although these correlations are still significantly weaker on average than any of our other categories concerning culture.</span></p>
<p><strong>Conclusions</strong></p>
<p>We are beginning to crack open the differences that separate the best teams from the rest.</p>
<p>We have seen that four factors – total project duration, team experience level, financial incentives based on individual performance, and re-use of an existing game engine from a similar game – have clear correlations with game project outcomes.</p>
<p>Our study found several surprises, including a complete lack of correlation with outcomes for factors that one would assume should have a large impact, such as team size, game genre, target platforms, the production methodology the team used, or any additional financial incentives the team was offered beyond individual performance compensation.</p>
<p>In the <a href="http://gamasutra.com/blogs/PaulTozour/20150106/233254/The_Game_Outcomes_Project_Part_2_Building_Effective_Teams.php">second article in the series</a>, we discuss the three team effectiveness models that inspired our study in detail and illustrate their correlations with the aggregate outcome score and each of the individual outcome questions. We see far stronger correlations than anything presented in this article.</p>
<p><span>Following that, the <a href="http://www.gamasutra.com/blogs/PaulTozour/20150113/233922/The_Game_Outcomes_Project_Part_3_Game_Development_Factors.php">third article</a> explores additional findings around many other factors specific to game development, including</span><span> technology risk management, design risk management, crunch / overtime, team stability, project planning, communication, outsourcing, respect, collaboration / helpfulness, team focus, and</span><span> organizational perceptions of failure. We</span><span> also provide a self-reflection tool that teams can use for postmortems and self-analysis.</span></p>
<p>Finally, our <a href="http://gamasutra.com/blogs/PaulTozour/20150120/234443/The_Game_Outcomes_Project_Part_4_Crunch_Makes_Games_Worse.php">fourth article</a> brings our data to bear on the controversial issue of crunch and draws unambiguous conclusions, and our <a href="http://www.gamasutra.com/blogs/PaulTozour/20150126/235024/The_Game_Outcomes_Project_Part_5_What_Great_Teams_Do.php">fifth article</a> summarizes our results.</p>
<p><em>The Game Outcomes Project team would like to thank the hundreds of current and former game developers who made this study possible through their participation in the survey. We would also like to thank IGDA Production SIG members Clinton Keith and Chuck Hoover for their assistance with question design; Kate Edwards, Tristin Hightower, and the IGDA for assistance with promotion; and Christian Nutt and the Gamasutra editorial team for their assistance in promoting the survey.</em></p>
<p><em>For announcements regarding our project, follow us on Twitter at <a href="https://twitter.com/GameOutcomes">@GameOutcomes</a></em></p>
</div>
<div style="margin: 5px 5% 10px 5%;"><img src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail.png" width="646" height="646" title="" alt="" /></div><div><p><strong><i><small> The following blog post, unless otherwise noted, was written by a member of Gamasutras community.<br />The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company. </small></i></strong> </p>
<hr>
<p><em>This article is the first in a 5-part series.</em></p>
<p><em>The Game Outcomes Project team includes Paul Tozour, David Wegbreit, Lucien Parsons, Zhenghua “Z” Yang, NDark Teng, Eric Byron, Julianna Pillemer, Ben Weber, and Karen Buro.</em></p>
<p><strong>The Game Outcomes Project, Part 1: The Best and the Rest</strong></p>
<p>What makes the best teams so effective?</p>
<p><span>Veteran developers who have worked on many different teams often remark that they see vast cultural differences between them. </span>Some teams seem to run like clockwork, and are able to craft world-class games while apparently staying happy and well-rested. Other teams struggle mightily and work themselves to the bone in nightmarish overtime and crunch of 80-90 hour weeks for years at a time, or in the worst case, burn themselves out in a chaotic mess. Some teams are friendly, collaborative, focused, and supportive; others are unfocused and antagonistic. A few even seem to be hostile working environments or political minefields with enough sniping and backstabbing to put a game of <em>Team Fortress 2 </em>to shame.</p>
<p>What causes the differences between those teams? <span>What factors separate the best from the rest?</span></p>
<p>As an industry, are we even trying to figure that out?</p>
<p>Are we even asking the right questions?</p>
<p>These are the kinds of questions that led to the development of the Game Outcomes Project. In October and November of 2014, our team conducted a large-scale survey of hundreds of game developers. The survey included roughly 120 questions on teamwork, culture, production, and project management. We suspected that we could learn more from a side-by-side comparison of many game projects than from any single project by itself, and we were convinced that finding out what great teams do that lesser teams don’t do – and vice versa – could help everyone raise their game.</p>
<p>Our survey was inspired by several of the classic works on team effectiveness. We began with the 5-factor team effectiveness model described in the book <a href="http://www.amazon.com/Leading-Teams-Setting-Stage-Performances/dp/1578513332/ref=sr_1_1ie=UTF8&qid=1415287077&sr=8-1&keywords=Leading+teams%3A+Setting+the+stage+for+great+performances"><em>Leading Teams: Setting the Stage for Great Performances</em></a>. We also incorporated the 5-factor team effectiveness model from the famous management book <a href="http://www.amazon.com/The-Five-Dysfunctions-Team-Leadership/dp/0787960756/ref=sr_1_1?ie=UTF8&qid=1414819847&sr=8-1&keywords=the+five+dysfunctions+of+team"><em>The Five Dysfunctions of a Team: A Leadership Fable</em></a> and the 12-factor model from <a href="http://www.amazon.com/12-The-Elements-Great-Managing/dp/159562998X/ref=sr_1_3?ie=UTF8&qid=1414819902&sr=8-3&keywords=12"><em>12: The Elements of Great Managing</em></a><em>,</em> which is derived from aggregate Gallup data from 10 million employee and manager interviews. We felt certain that at least <em>one </em>of these three models would surely turn out to be relevant to game development in some way.</p>
<p>We also added several categories with questions specific to the game industry that we felt were likely to show interesting differences.</p>
<p>On the second page of the survey, we added a number of more generic background questions. These asked about team size, project duration, job role, game genre, target platform, financial incentives offered to the team, and the team’s production methodology.</p>
<p>We then faced the broader problem of how to quantitatively measure a game project’s outcome.</p>
<p>Ask any five game developers what constitutes “success,” and you’ll likely get five different answers. Some developers care only about the bottom line; others care far more about their game’s critical reception. Small indie developers may regard “success” as simply shipping their first game as designed regardless of revenues or critical reception, while developers working under government contract, free from any market pressures, might define “success” simply as getting it done on time (and we did receive a few such responses in our survey).</p>
<p>Lacking any objective way to define “success,” we decided to quantify the outcome through the lenses of four different kinds of outcomes. We asked the following four outcome questions, each with a 6-point or 7-point scale:</p>
<ul>
<li><span>“To the best of your knowledge, what was the game’s financial return on investment (ROI)? In other words, what kind of profit or loss did the company developing the game take as a result of publication?”</span></li>
<li>“For the game’s primary target platform, was the project ever delayed from its original release date, or was it cancelled?”</li>
<li>“What level of critical success did the game achieve?”</li>
<li>“Finally, did the game meet its internal goals? In other words, to what extent did the team feel it achieved something at least as good as it was trying to create?”</li>
</ul>
<p>We hoped that we could correlate the answers to these four outcome questions against all the other questions in the survey to see which input factors had the most actual influence over these four outcomes. We were somewhat concerned that all of the “noise” in project outcomes (fickle consumer tastes, the moods of game reviewers, the often unpredictable challenges inherent in creating high-quality games, and various acts of God) would make it difficult to find meaningful correlations. But with enough responses, perhaps the correlations would shine through the inevitable noise.</p>
<p>We then created an aggregate “outcome” value that combined the results of all four of the outcome questions as a broader representation of a game project’s level of success. This turned out to work nicely, as it correlated very strongly with the results of each of the individual outcome questions. <span>Our </span><a href="http://intelligenceengine.blogspot.com/2014/11/game-outcomes-project-methodology-in.html">Methodology</a><span> blog page has a detailed description of how we calculated this aggregate score.</span></p>
<p>We worked carefully to refine the survey through many iterations, and we solicited responses through forum posts, Gamasutra posts, Twitter, and IGDA mailers. We received 771 responses, of which 302 were completed, and 273 were related to completed projects that were not cancelled or abandoned in development.</p>
<p><strong>The Results</strong></p>
<p>So what did we find?</p>
<p>In short, a gold mine. The results were staggering.</p>
<p>More than 85% of our 120 questions showed a statistically significant correlation with our aggregate outcome score, with a <a href="http://en.wikipedia.org/wiki/P-value">p-value</a> under 0.05 (the p-value gives the probability of observing such data as in our sample if the variables were be truly independent; therefore, a small p-value can be interpreted as evidence against the assumption that the data is independent). This correlation was moderate or strong in most cases (absolute value > 0.2), and m<span>ost of the p-values were in fact well below 0.001</span>. We were even able to develop a linear regression model that showed an astonishing 0.82 correlation with the combined outcome score (shown in Figure 1 below).</p>
<p><img align="left" alt height="646" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail.png" width="646"></p>
<p><span><strong>Figure 1</strong>.</span><em> Our linear regression model (horizontal axis) plotted against the composite game outcome score (vertical axis). The black diagonal line is a best-fit trend line. 273 data points are shown.</em></p>
<p>To varying extents, all three of the team effectiveness models (Hackman’s “Leading Teams” model, Lencioni’s “Five Dysfunctions” model, and the Gallup “12” model) proved to correlate strongly with game project outcomes.</p>
<p>We can’t say for certain how many relevant questions we <em>didn’t </em>ask. There may well be many more questions waiting to be asked that would have shined an even stronger light on the differences between the best teams and the rest.</p>
<p><span>But the correlations and statistical significance we discovered are strong enough that it’s very clear that we have, at the very least, discovered an excellent partial answer to the question of what makes the best game development teams so successful.</span></p>
<p><strong>The Game Outcomes Project Series</strong></p>
<p><span>Due to space constraints, we’ll be releasing our analysis as a series of several articles, with the remaining 3 articles released at 1-week intervals beginning in January 2015. We’ll leave off detailed discussion of our three team effectiveness models until the second article in our series to allow these topics the thorough analysis they deserve.</span></p>
<p>This article will focus solely on introducing the survey and combing through the background questions asked on the second survey page. And although we found relatively few correlations in this part of the survey, the areas where we <em>didn’t </em>find a correlation are just as interesting as the areas where we did.</p>
<p><strong>Project Genre and Platform </strong><strong>Target(s)</strong></p>
<p>First, we asked respondents to tell us what genre of game their team had worked on. Here, the results are all across the board.</p>
<p align="center"><img alt height="878" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-1.png" width="646"></p>
<p><strong>Figure 2</strong>.<em> Game genre (vertical axis) vs. composite game outcome score (horizontal axis). Higher data points (green dots) represent more successful projects, as determined by our composite game outcome score.</em></p>
<p>We see remarkably little correlation between game genre and outcome. In the few cases where a game genre appears to skew in one direction or another, the sample size is far too small to draw any conclusions, with all but a handful of genres having fewer than 30 responses.</p>
<p>(Note that Figure 2 uses a box-and-whisker plot, as described <a href="http://www.tableausoftware.com/new-features/box-and-whisker-plot">here</a>).</p>
<p>We also asked a similar question regarding the product’s target platform(s), including responses for desktop (PC or Mac), console (Xbox/PlayStation), mobile, handheld, and/or web/Facebook. We found no statistically significant results for any of these platforms, nor for the total number of platforms a game targeted.</p>
<p><strong>Project Duration and Team Size</strong></p>
<p>We asked about the total months and years in development; based on this, we were able to calculate each project’s total development time in months:</p>
<p><img align="left" alt height="711" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-2.png" width="646"></p>
<p><span><strong>Figure 3</strong>.</span><em> Total months in development (horizontal axis) vs game outcome score (vertical). The black diagonal line is a trend line.</em></p>
<p>As you can see, there’s a small negative correlation (-0.229, using the <a href="http://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient">Spearman</a> correlation coefficient), and the p-value is 0.003. This negative correlation is not too surprising, as troubled projects are more likely to be delayed than projects that are going smoothly.</p>
<p>We also asked about the size of the team, both in terms of the average team size and the final team size. Average team size was between 1 and 500 with an average of 48.6; final team size was between 1 and 600 with an average of 67.9. Both showed a slight positive correlation with project outcomes, as shown below, but in both cases the p-value is well over 0.1, indicating there’s not enough statistical significance to make this correlation useful or noteworthy.</p>
<p>Note that in both figures below, the horizontal axis is shown on a logarithmic scale, which makes the linear trend line appear curved.</p>
<p align="center"><img alt height="675" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-3.png" width="646"></p>
<p><strong><span>Figure</span><span> 4</span></strong><span>.</span><em> Average team size correlated against game project outcome (vertical axis).</em></p>
<p><img align="left" alt height="671" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-4.png" width="646"></p>
<p><strong><span>Figure</span><span> 5</span></strong><span>.</span><em> Final team size correlated against game project outcome (vertical axis).</em></p>
<p>We also analyzed the ratio of average to final team size, but we found no meaningful correlations here.</p>
<p><span><b>Game Engines</b></span></p>
<p><span>We asked about the technology solution used: whether it was a new engine built from scratch; core technology from a previous version of a similar game or another game in the same series; an in-house / proprietary engine (such as EA Frostbite); or an externally-developed engine (such as <a href="http://unity3d.com/">Unity</a>, <a href="https://www.unrealengine.com/what-is-unreal-engine-4">Unreal</a>, or <a href="http://cryengine.com/">CryEngine</a>).</span></p>
<p>The results are as follows:</p>
<p><img align="left" alt height="727" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-5.png" width="646"></p>
<p><strong><span>Figure 6</span></strong><span>.</span><em> Game engine / core technology used (horizontal axis) vs game project outcome (vertical axis), using a <a href="http://www.tableausoftware.com/new-features/box-and-whisker-plot">box-and-whisker</a> plot.</em></p>
<table border="0" cellpadding="0" cellspacing="0" width="96%">
<tbody>
<tr>
<td> </td>
<td>
<p><strong>Average composite score</strong></p>
</td>
<td>
<p><strong>Standard Deviation</strong></p>
</td>
<td>
<p><strong>Number of responses</strong></p>
</td>
</tr>
<tr>
<td>
<p><strong>New engine/tech</strong></p>
</td>
<td>
<p>53.3</p>
</td>
<td>
<p>18.3</p>
</td>
<td>
<p>41</p>
</td>
</tr>
<tr>
<td>
<p><strong>Engine from previous version of same or similar game</strong></p>
</td>
<td>
<p>64.8</p>
</td>
<td>
<p>15.8</p>
</td>
<td>
<p>58</p>
</td>
</tr>
<tr>
<td>
<p><strong>Internal/proprietary engine / tech (such as EA Frostbite)</strong></p>
</td>
<td>
<p>60.7</p>
</td>
<td>
<p>19.4</p>
</td>
<td>
<p>46</p>
</td>
</tr>
<tr>
<td>
<p><strong>Licensed game engine (Unreal, Unity, etc.)</strong></p>
</td>
<td>
<p>55.6</p>
</td>
<td>
<p>17.5</p>
</td>
<td>
<p>113</p>
</td>
</tr>
<tr>
<td>
<p><strong>Other</strong></p>
</td>
<td>
<p>55.5</p>
</td>
<td>
<p>19.5</p>
</td>
<td>
<p>15</p>
</td>
</tr>
</tbody>
</table>
<p>The results here are less striking the more you look at them. The highest score was for projects that used an engine from a previous version of the same game or a similar one – but that’s exactly what one would expect to be the case, given that teams in this category clearly already had a head start in production, much of the technical risk had already been stamped out, and there was probably already a veteran team in place that knew how to make that type of game!</p>
<p>We analyzed these results using a <a href="http://en.wikipedia.org/wiki/Kruskal%E2%80%93Wallis_one-way_analysis_of_variance">Kruskal-Wallis one-way analysis of variance</a>, and we found that this question was only statistically significant on account of that very option (engine from a previous version of the same game or similar), with a p-value of 0.006. Removing the data points related to this answer category caused the p-value for the remaining categories to shoot up above 0.3.</p>
<p>Our interpretation of the data is that the best option for the game engine <em>depends entirely on the game being made and what options are available for it,</em> and that any one of these options can be the “best” choice given the right set of circumstances. In other words, the most reasonable conclusion is there is no universally “correct” answer separate from the actual game being made, the team making it, and the circumstances surrounding the game’s development. That’s not to say the choice of engine isn’t terrifically important, but the data clearly shows that there plenty of successes and failures in all categories with only minimal differences in outcomes between them, clearly indicating that each of these four options is entirely viable in some situations.</p>
<p>We also did not ask which <em>specific</em> technology solution a respondent’s dev team was using. Future versions of the study may include questions on the specific game engine being used (Unity, Unreal, CryEngine, etc.)</p>
<p><strong>Team Experience</strong></p>
<p>We also asked a question on this page regarding the team’s average experience level, along a scale from 1 to 5 (with a ‘1’ indicating less than 2 years of average development experience, and a ‘5’ indicating a team of grizzled game industry veterans with an average of 8 or more years of experience).</p>
<p align="center"><img alt height="707" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-6.png" width="646"></p>
<p><strong>Figure 7</strong>.<em> Team experience level ranking (horizontal axis, by category listed above) mapped against game outcome score (vertical axis)</em></p>
<p>Here, we see a correlation of 0.19 (and p-value under 0.001). Note in particular the complete absence of dots in the upper-left corner (which would indicate wildly successful teams with no experience) and the lower-right corner (which would indicate very experienced teams that failed catastrophically).</p>
<p>So our study clearly confirms the common knowledge in the industry that experienced teams are significantly more likely to succeed. This is not at all surprising, but it’s reassuring that the data makes the point so clearly. And as much we may all enjoy stories of random individuals with minimal game development experience becoming wildly successful with games developed in just a few days (as with <a href="http://en.wikipedia.org/wiki/Flappy_Bird">Flappy Bird</a>), our study shows clearly that such cases are extreme outliers. </p>
<p><a id="Incentives" name="Incentives"><strong>Surprise #1: Incentives</strong></a></p>
<p>This first page of our survey also revealed two major surprises.</p>
<p>The first surprise was financial incentives. The survey included a question: “Was the team offered any financial incentives tied to the performance of the game, the team, or your performance as individuals? Select all that apply.” We offered multiple check boxes to say “yes” or “no” to any combination of financial incentives that were offered to the team.</p>
<p>The correlations are as follows:</p>
<p align="center"><img alt height="837" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-7.png" width="646"></p>
<p><span><strong>Figure 8</strong>.</span><em> Incentives (horizontal axis) plotted against game outcome score (vertical axis) for the five different types of financial incentives, using a <a href="http://www.tableausoftware.com/new-features/box-and-whisker-plot">box-and-whisker plot</a>. From left to right: incentives based on individual performance, team performance, royalties, incentives based on game reviews/MetaCritic scores, and miscellaneous other incentives. For each category, we split all 273 data points into those excluding the incentive (left side of each box) and those including the incentive (right side of each box).</em></p>
<p>Of these five forms of incentives, only individual incentives showed statistical significance. Game projects offering individually-tailored compensation (64 out of the 273 responses) had an average score of 63.2 (standard deviation 18.6), while those that did <em>not </em>offer individual compensation had a mean game outcome score of 56.5 (standard deviation 17.7). A <a href="http://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test">Wilcoxon rank-sum test</a> for individual incentives gave a p-value of 0.017 for this comparison.</p>
<p>All the other forms of incentives – those based on team performance, based on royalties, based on reviews and/or MetaCritic ratings, and any miscellaneous “other” incentives – show p-values that indicate that there was no meaningful correlation with project outcomes (p-values 0.33, 0.77, 0.98, and 0.90, respectively, again using a <a href="http://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test">Wilcoxon rank-sum test</a>).</p>
<p>This is a very surprising finding. Incentives are usually offered under the assumption that they are a huge motivator for a team. However, our results indicate that only individual incentives seem to have the desired effect, and even then, to a much smaller degree than expected.</p>
<p>One possible explanation is that perhaps the <a href="http://www.ted.com/talks/dan_pink_on_motivation?language=en">psychological phenomenon popularized by Dan Pink</a> may be playing itself out in the game industry – that financial rewards are (according to a great deal of recent research) usually a completely ineffective motivational tool, and actually backfire in many cases.</p>
<p>We also speculate that in the case of royalties and MetaCritic reviews in particular, the sense of helplessness that game developers can feel when dealing with factors beyond their control – such as design decisions they disagree with, or other team members falling down on the job – potentially compensates for any motivating effect that incentives may have had. With individual incentives, on the other hand, individuals may feel that their individual efforts are more likely to be noticed and rewarded appropriately. However, without more data, this all remains pure speculation on our part.</p>
<p>Whatever the reason, our results seem to indicate that individually tailored incentives, such as <a href="http://en.wikipedia.org/wiki/Performance-related_pay">Pay For Performance</a> (PFP) plans, seem to achieve meaningful results where royalties, team incentives, and other forms of financial incentives do not.</p>
<p><a id="ProductionMethodologies" name="ProductionMethodologies"><strong>Surprise #2: Production Methodologies</strong></a></p>
<p>Our second big surprise was in the area of production methodologies, a topic of frequent discussion in the game industry.</p>
<p>We asked what production methodology the team used – 0 (don’t know), 1 (<a href="http://en.wikipedia.org/wiki/Waterfall_model">waterfall</a>), 2 (<a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0CD8QFjAB&url=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FAgile_software_development&ei=JlGIVJTuBcT2yQSizYGAAw&usg=AFQjCNHHClAN2UfGQdH_MuFIDxYa3T86jA&sig2=Kx9achgBwDJFKe6q415LPQ&bvm=bv.81456516,d.aWw">agile</a>), 3 (agile using “<a href="http://scrummethodology.com/">Scrum</a>”), and 4 (other/ad-hoc). We also provided a detailed description with each answer so that respondents could pick the closest match according to the description even if they didn’t know the exact name of the production methodology. The results were shocking.</p>
<p><img align="left" alt height="779" src="http://www.sickgaming.net/blog/wp-content/uploads/2019/06/dont-miss-the-game-outcomes-project-learning-how-teams-succeed-and-fail-8.png" width="646"></p>
<p><span><strong>Figure 9</strong>.</span><em> Production methodology vs game outcome score.</em></p>
<p>Here’s a more detailed breakdown showing the mean and standard deviation for each category, along with the number of responses in each:</p>
<table border="0" cellpadding="0" cellspacing="0" width="564">
<tbody>
<tr>
<td> </td>
<td>
<p><strong>Average composite score</strong></p>
</td>
<td>
<p><strong>Standard Deviation</strong></p>
</td>
<td>
<p><strong>Number of responses</strong></p>
</td>
</tr>
<tr>
<td>
<p><strong>Unknown</strong></p>
</td>
<td>
<p>50.6</p>
</td>
<td>
<p>17.4</p>
</td>
<td>
<p>7</p>
</td>
</tr>
<tr>
<td>
<p><strong>Waterfall</strong></p>
</td>
<td>
<p>55.4</p>
</td>
<td>
<p>17.9</p>
</td>
<td>
<p>53</p>
</td>
</tr>
<tr>
<td>
<p><strong>Agile</strong></p>
</td>
<td>
<p>59.1</p>
</td>
<td>
<p>19.4</p>
</td>
<td>
<p>94</p>
</td>
</tr>
<tr>
<td>
<p><strong>Agile using Scrum</strong></p>
</td>
<td>
<p>59.7</p>
</td>
<td>
<p>16.9</p>
</td>
<td>
<p>75</p>
</td>
</tr>
<tr>
<td>
<p><strong>Other / Ad-hoc</strong></p>
</td>
<td>
<p>57.6</p>
</td>
<td>
<p>17.6</p>
</td>
<td>
<p>44</p>
</td>
</tr>
</tbody>
</table>
<p>What’s remarkable is just how tiny these differences are. <em>They almost don’t even exist.</em></p>
<p>Furthermore, a <a href="http://en.wikipedia.org/wiki/Kruskal%E2%80%93Wallis_one-way_analysis_of_variance">Kruskal-Wallis H test</a> indicates a very high p-value of 0.46 for this category, meaning that we truly can’t infer any relationship between production methodology and game outcome. Further testing of the production methodology against each of the four game project outcome factors individually gives identical results.</p>
<p>Given that production methodologies seem to be a game development holy grail for some, one would expect to see major differences, and that Scrum in particular would be far out in the lead. But these differences are tiny, with a huge amount of variation in each category, and the correlations between the production methodology and the score have a p-value too high for us to deny the assumption that the data is independent. Scrum, agile, and “other” in particular are essentially indistinguishable from one another. “Unknown” is far higher than one would expect, while “Other/ad-hoc” is also remarkably high, indicating that there are effective production methodologies available that aren’t on our list (interestingly, we asked those in the “other” category for more detail, and the <a href="http://www.slideshare.net/holtt/cerny-method">Cerny method</a> was listed as the production methodology for the top-scoring game project in that category).</p>
<p>Also, unlike our question regarding game engines, we can’t simply write this off as some methodologies being more appropriate for certain kinds of teams. Production methodologies are generally intended to be universally useful, and our results show no meaningful correlations between the methodology and the game genre, team size, experience level, or any other factors.</p>
<p>This begs the question: where’s the payoff?</p>
<p>We’ve seen several significant correlations in this article, and we will describe many more throughout our study. Articles 2 and 3 in particular will illustrate many remarkable correlations between many different cultural factors and game outcomes, with more than 85% of our questions showing a statistically significant correlation.</p>
<p>So it’s very clear that where there were significant drivers of project outcomes, they stood out very clearly. Our results were not shy. And if the specific production methodology a team uses is really vitally important, we would expect that it absolutely should have shown up in the outcome correlations as well.</p>
<p>But it’s simply not there.</p>
<p>It seems that in <span>spite of all the attention paid to the subject, </span>the particular type of production methodology a team uses is not terribly important, and it is not a significant driver of outcomes. E<span>ven the much-maligned “Waterfall” approach can apparently be made to work well</span>.</p>
<p><span>Our third article will detail a number of additional questions we asked around production that give some hints as to what aspects of production actually impact project outcomes regardless of the specific methodology the team uses — although these correlations are still significantly weaker on average than any of our other categories concerning culture.</span></p>
<p><strong>Conclusions</strong></p>
<p>We are beginning to crack open the differences that separate the best teams from the rest.</p>
<p>We have seen that four factors – total project duration, team experience level, financial incentives based on individual performance, and re-use of an existing game engine from a similar game – have clear correlations with game project outcomes.</p>
<p>Our study found several surprises, including a complete lack of any correlations between factors that one would assume should have a large impact, such as team size, game genre, target platforms, the production methodology the team used, or any additional financial incentives the team was offered beyond individual performance compensation.</p>
<p>In the <a href="http://gamasutra.com/blogs/PaulTozour/20150106/233254/The_Game_Outcomes_Project_Part_2_Building_Effective_Teams.php">second article in the series</a>, we discuss the three team effectiveness models that inspired our study in detail and illustrate their correlations with the aggregate outcome score and each of the individual outcome questions. We see far stronger correlations than anything presented in this article.</p>
<p><span>Following that, the <a href="http://www.gamasutra.com/blogs/PaulTozour/20150113/233922/The_Game_Outcomes_Project_Part_3_Game_Development_Factors.php">third article</a> explores additional findings around many other factors specific to game development, including</span><span> technology risk management, design risk management, crunch / overtime, team stability, project planning, communication, outsourcing, respect, collaboration / helpfulness, team focus, and</span><span> organizational perceptions of failure. We</span><span> also provide a self-reflection tool that teams can use for postmortems and self-analysis.</span></p>
<p>Finally, our <a href="http://gamasutra.com/blogs/PaulTozour/20150120/234443/The_Game_Outcomes_Project_Part_4_Crunch_Makes_Games_Worse.php">fourth article</a> brings our data to bear on the controversial issue of crunch and draws unambiguous conclusions, and our <a href="http://www.gamasutra.com/blogs/PaulTozour/20150126/235024/The_Game_Outcomes_Project_Part_5_What_Great_Teams_Do.php">fifth article</a> summarizes our results.</p>
<p><em>The Game Outcomes Project team would like to thank the hundreds of current and former game developers who made this study possible through their participation in the survey. We would also like to thank IGDA Production SIG members Clinton Keith and Chuck Hoover for their assistance with question design; Kate Edwards, Tristin Hightower, and the IGDA for assistance with promotion; and Christian Nutt and the Gamasutra editorial team for their assistance in promoting the survey.</em></p>
<p><em>For announcements regarding our project, follow us on Twitter at <a href="https://twitter.com/GameOutcomes">@GameOutcomes</a></em></p>
</div>