Prediction of the EU and Americas Summer Championships Based on the Data Reaper's 1,000,000 Simulations

Introduction

Since its launch, the Data Reaper Project has accumulated a large enough sample to assess the common matchups in the current meta. To expand our reach to the tournament scene, we decided to try to predict the results of the Americas Championship, which took place last weekend. Once we had information regarding all of the players' lineups and the bracket, we launched a Monte Carlo simulation based on that information and the Data Reaper's matchup win rates.

We simulated one million tournaments to generate an empirical distribution of the possible outcomes. The results yield a matrix reporting each player's likelihood of reaching the semi-finals, reaching the finals, and winning the entire tournament.
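
To make the procedure concrete, here is a minimal sketch of what such a simulation can look like for an eight-player single-elimination bracket. Everything in it is illustrative rather than our actual code: the player names are placeholders, and simulate_match is a coin-flip stand-in for the real match model (a fuller sketch of the match itself follows the list of assumptions below).

```python
import random
from collections import Counter

def simulate_match(p1, p2):
    """Placeholder match: a coin flip. A Conquest-aware version is
    sketched after the assumptions below."""
    return p1 if random.random() < 0.5 else p2

def simulate_bracket(players):
    """Play out one 8-player bracket; return each player's finishing spot."""
    finish, alive = {}, list(players)
    for spot in ("5-8", "3-4"):            # QF losers finish 5-8, SF losers 3-4
        winners = []
        for i in range(0, len(alive), 2):
            w = simulate_match(alive[i], alive[i + 1])
            finish[alive[i + 1] if w == alive[i] else alive[i]] = spot
            winners.append(w)
        alive = winners
    w = simulate_match(alive[0], alive[1])
    finish[alive[0] if w == alive[1] else alive[1]] = "2"
    finish[w] = "1"
    return finish

players = [f"Player{i}" for i in range(1, 9)]  # seeded in bracket order
N = 1_000_000
tallies = {p: Counter() for p in players}
for _ in range(N):
    for player, spot in simulate_bracket(players).items():
        tallies[player][spot] += 1

for p in players:                              # empirical finish distribution
    print(p, {s: round(100 * c / N, 2) for s, c in tallies[p].items()})
```

The tallies are exactly the matrix described above: each cell is the share of the one million simulated tournaments in which a player finished in a given spot.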

Like any prediction model, this one incorporates assumptions, both explicit and implicit, and one must be aware of them. Here are a number of them:

  1. Because of the tournament's format – best-of-7 Conquest with a ban – we had to include a ban rule. For the purpose of the simulation, we assumed that each player bans the deck in his opponent's lineup that performs best against his own lineup (based on the Data Reaper win rates); a sketch of this rule follows the list.
  2. We assumed a random order of games within a match.
  3. Since we use the Data Reaper’s win rates, tech choices in players’ builds that may affect certain matchups are not taken into account.
  4. The matchup win rates assume opponents are of equal skill, which means individual performance may also have an effect on results.
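
Here is a minimal sketch of the match model these assumptions imply, refining the coin-flip placeholder above. The deck names and win rates are invented for illustration; the real model reads them from the Data Reaper's matchup data.

```python
import itertools
import random

DECKS = ["Shaman", "Warrior", "Druid", "Hunter", "Warlock", "Rogue"]

# Invented matchup table: WINRATES[(a, b)] = P(deck a beats deck b).
random.seed(0)
WINRATES = {(d, d): 0.5 for d in DECKS}
for a, b in itertools.combinations(DECKS, 2):
    p = random.uniform(0.35, 0.65)
    WINRATES[(a, b)], WINRATES[(b, a)] = p, 1 - p

def ban(own, opp):
    """Assumption 1: ban the opponent's deck with the best total
    win rate against our own lineup."""
    return max(opp, key=lambda d: sum(WINRATES[(d, o)] for o in own))

def simulate_match(lineup1, lineup2):
    """Best-of-7 Conquest with one ban: win once with every unbanned deck."""
    left1 = [d for d in lineup1 if d != ban(lineup2, lineup1)]
    left2 = [d for d in lineup2 if d != ban(lineup1, lineup2)]
    while left1 and left2:
        d1, d2 = random.choice(left1), random.choice(left2)  # assumption 2
        if random.random() < WINRATES[(d1, d2)]:
            left1.remove(d1)   # a deck is retired once it wins a game
        else:
            left2.remove(d2)
    return 1 if not left1 else 2  # first to win with all four decks

print(simulate_match(DECKS[:5], DECKS[1:]))
```

Plugging this in place of the coin flip above turns the bracket loop into the full tournament model.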

Americas Region

These are the results that we got for the Americas Championship held on September 17-18, 2016. Players are ordered according to the bracket, with the top four and bottom four players making up its two sides. Each column reports the frequency (out of 1 million simulations) with which the player finished the tournament in that spot. So, for example, Abar finished 52.07% of the simulations in the 5/6/7/8 spots, meaning he did not get past the first round; in other words, his quarter-final opponent, Tarei, won 52.07% of their simulated matches.

[Image: Americas Championship prediction matrix]

We can see that HotMEOWTH and Tarei had relatively superior lineups against the field and, as a result, had the best chance of reaching the finals from their respective sides of the bracket and of winning the tournament. Considering that the field had eight participants (a 12.5% baseline), HotMEOWTH's 20% chance of winning the event was relatively high.

The Monte Carlo simulation ended up correctly predicting the winner in 6 of the 7 matches that took place, including both finalists and the champion. The only result that went against the odds was Dude's victory over Topablo in the quarter-finals, where the model gave Dude a 43% chance.

[Image: Americas Championship bracket results]

While the model performed quite well, we should be transparent and let you know that we did not publish this article last week because we did not want to put extra pressure on our own member of the Data Reaper Team, HotMEOWTH. We therefore kept the results internal and crossed our fingers that the final outcome would match our model's prediction.

Europe Region

Once we had the information regarding the European Championship bracket, we ran another Monte Carlo simulation for it. Here are our predictions for the European tournament, to be held on September 24-25, 2016, a few hours after the publication of this article.

[Image: European Championship prediction matrix]

Again, the chart above reports the frequency with which players finished the tournament in the corresponding spot. We can see that, unlike at the Americas Championship, there isn't a lineup that stands out statistically from the rest. Players are much closer in their chances of winning the tournament, and most quarter-finals are 50-50 matchups. It will be interesting to see whether these very slight advantages end up mattering.

In the future, the Data Reaper Project will develop more tools to evaluate tournament brackets and lineups. We're thinking of ways to create tournament models that can help players build lineups that may have an advantage over the predicted field.

As always, we thank our track-o-bot contributors for making all of this possible. We’re continuing to analyze the game of Hearthstone in ways that have never been done before to further increase the knowledge of the community. A lot is said about Hearthstone being a game of luck, but the truth is that it’s a game of numbers. Knowing how to leverage these numbers to increase your probability of winning will help you become a better player in the long run.

 

This article was brought to you by the Data Reaper Team.


Comments

  1. Hey. This is a very interesting article. Also, your assumptions seem reasonable. I did much the same thing a while ago, using the win rates given at tempostorm.com, but I took a different approach. Instead of running simulations, I evaluated lineups with a game-theoretic ansatz. Given that, you can compute winning percentages based on your first deck pick (and on the opponent's) to get an optimal strategy for what to pick first, as well as an overall likelihood of winning the whole series. I also modified the model to give the “right” ban in a match including 1 ban (which is almost always the deck performing best against one's lineup, regardless of what the opponent might ban).
    Since your data seems to be more accurate than the numbers given at tempostorm, I thought this method could also be interesting for you, since it does not require simulations (though it does require lengthy computations, especially for everything beyond BO5, at least on a single core).
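
As a rough illustration of the no-simulation idea described in the comment above (simplified to this article's random-order assumption rather than full game-theoretic picks, and with invented deck names and win rates), the win probability of a Conquest series can be computed exactly by recursing over the decks each player still has to win with:

```python
from functools import lru_cache

# Invented win rates: P[(a, b)] = probability that deck a beats deck b.
P = {("A1", "B1"): 0.55, ("A1", "B2"): 0.48,
     ("A2", "B1"): 0.60, ("A2", "B2"): 0.45}

@lru_cache(maxsize=None)
def win_prob(left1, left2):
    """P(player 1 wins with every deck in left1 before player 2 does
    with left2), with each game's decks picked uniformly at random."""
    if not left1:
        return 1.0
    if not left2:
        return 0.0
    total = 0.0
    for d1 in left1:
        for d2 in left2:
            p = P[(d1, d2)]
            total += p * win_prob(tuple(x for x in left1 if x != d1), left2)
            total += (1 - p) * win_prob(left1, tuple(x for x in left2 if x != d2))
    return total / (len(left1) * len(left2))

print(win_prob(("A1", "A2"), ("B1", "B2")))  # exact series win probability
```

With memoization the recursion stays small even for longer series.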
