
How the Ratings Program Works

This was originally written to explain the Midget AAA rankings, but it applies equally to the high school rankings. The only real difference is that the high school rankings have a 'Last 5' column indicating the average rating for the last five games, whereas the Midget AAA rankings have a 'Last 10' column instead. Simply stated, each team's rating can be thought of as the average rating of all of its opponents plus its average goal differential in those games. The reality is a bit more complex: the goal differential considered for any one game is limited to four, the average opponent rating is weighted by the number of games played against each opponent, and, since every team's rating therefore depends on all of its opponents' ratings, a recursive algorithm adjusts ratings based on scores until a minimum difference between expected and actual goal differentials is reached. The program starts all teams at an equal arbitrary value and then adjusts them all recursively based on game scores until the lowest cumulative error between expected and actual goal differentials is reached (convergence); a rough code sketch of this process appears at the end of this section. One effect often noticed is that a team's rating can change slightly without the team playing, because its previous opponents' games affect their ratings, which in turn affect the team's own rating, and so on.

The Rank column simply lists each team's rank; ranks are determined by which team has the highest Total. The Total column is actually a better indicator of where a team stands than the Rank column. For example, at the time of this writing the team ranked 5th is as close in Total to 16th as it is to 3rd, and would therefore find it easier to slip down several spots than to move up two. The Total column is in goal units, and the difference in this value between any two teams is the predicted goal differential for a game played between them at a neutral site. It should be noted that, due to the goal differential limits set in the ranking program, these predicted goal differentials are only accurate when comparing teams rated within two goals of each other. The predicted differential becomes more and more understated the farther apart the two teams are in the rankings.

The >4 column indicates the number of ignored games for each team against teams rated more than 4 goals higher or lower. As long as the actual margin of victory in such a game was 3 or more goals, it is ignored in determining team ratings. Ignoring these mismatches eliminates any effect they could have on the rankings. Note that these >4 games are not used for any of the values displayed on the rankings page other than the W-L-T record.

The GmPerf column shows the team's average goal differential, calculated using a maximum of +/- 4 for any one game and listed to the nearest 0.1 goals. The Sched column is the average Total rating of the team's opponents, weighted by the number of games played against each, and listed to the nearest 0.1 goals. The Total column is the sum of GmPerf and Sched, listed to the nearest 0.01 goals.

The most common misperception about these rankings is that the Sched rating is an input. It is actually an output. The only inputs to the rankings program are game scores; everything else is generated interdependently by the program. Brute force recursion is the 'secret': all team Totals are adjusted based on game results until the smallest possible total additive absolute error between actual scores and the scores expected from the ratings is reached. The Total is actually calculated before and without the aid of the Sched and GmPerf values; they are merely generated at the end because they are interesting components of the Total to look at for each team.

A team's listed W-L-T record reflects only those games that have been reported and are in the scores database. Apologies in advance for not being able to accept scores that are not reported in the proper format. It should be noted that these rankings are an average performance value for each team over the entire season; all games are equally weighted, so a team's latest game may make up only 1/70th (e.g., Midget AAA) of its rating at the end of the season. The Last10 column shows the team's average performance in its last ten games. Games decided in a shootout are considered ties for ratings purposes.
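
For readers who want something more concrete, here is a minimal sketch of that recursive adjustment in Python. It is not the site's actual program (which has never been published); every name here (Game, compute_ratings, GOAL_DIFF_CAP, and so on), the fixed pass limit, and the re-centering step are illustrative assumptions used to keep the example short and stable.

    # A minimal sketch of the recursive rating adjustment described above.
    # NOT the site's actual code; all names and convergence details are assumptions.

    from dataclasses import dataclass

    GOAL_DIFF_CAP = 4      # goal differential counted for any one game is capped at 4
    MISMATCH_GAP = 4       # ">4" rule: rating gap beyond which lopsided games are dropped...
    MISMATCH_MARGIN = 3    # ...when the actual margin of victory was 3 or more goals

    @dataclass(frozen=True)
    class Game:
        home: str
        away: str
        home_goals: int
        away_goals: int

    def capped_diff(game, team):
        """Goal differential for `team` in `game`, capped at +/- GOAL_DIFF_CAP."""
        diff = game.home_goals - game.away_goals
        if team == game.away:
            diff = -diff
        return max(-GOAL_DIFF_CAP, min(GOAL_DIFF_CAP, diff))

    def compute_ratings(games, max_passes=500, tol=1e-6):
        teams = {t for g in games for t in (g.home, g.away)}
        ratings = {t: 0.0 for t in teams}   # all teams start at the same arbitrary value

        for _ in range(max_passes):
            # Drop the ">4" mismatches: games between teams currently rated more than
            # MISMATCH_GAP goals apart in which the margin was MISMATCH_MARGIN or more.
            used = [g for g in games
                    if not (abs(ratings[g.home] - ratings[g.away]) > MISMATCH_GAP
                            and abs(g.home_goals - g.away_goals) >= MISMATCH_MARGIN)]

            new_ratings = {}
            for t in teams:
                played = [g for g in used if t in (g.home, g.away)]
                if not played:
                    new_ratings[t] = ratings[t]
                    continue
                # GmPerf: average capped goal differential over the team's games.
                gmperf = sum(capped_diff(g, t) for g in played) / len(played)
                # Sched: opponents' average rating, weighted by games played against
                # each opponent (averaging over games gives exactly that weighting).
                sched = sum(ratings[g.away if g.home == t else g.home]
                            for g in played) / len(played)
                new_ratings[t] = gmperf + sched   # Total = GmPerf + Sched

            # Re-center so the numbers do not drift; only differences between teams
            # matter for predictions (the real program may fix its scale differently).
            mean = sum(new_ratings.values()) / len(new_ratings)
            new_ratings = {t: r - mean for t, r in new_ratings.items()}

            # Stop when ratings have essentially stopped moving, a simple stand-in
            # for "smallest possible cumulative error" described above.
            if max(abs(new_ratings[t] - ratings[t]) for t in teams) < tol:
                ratings = new_ratings
                break
            ratings = new_ratings

        return ratings

    def predicted_margin(ratings, team_a, team_b):
        """Predicted neutral-site goal differential: the difference in Totals."""
        return ratings[team_a] - ratings[team_b]

In this sketch, predicted_margin(ratings, 'Team A', 'Team B') returns the expected neutral-site goal differential, subject to the same caveat as above: the prediction is only meaningful for teams rated within a couple of goals of each other.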

How accurate are the rankings?

The past predictive accuracy, the ability of the ratings to explain winners of games that have already taken place, is usually in the 80-90% range (excluding ties). In 2000-01, for example, the past predictive values for Minnesota and Wisconsin HS (two of the states for which the score databases were most complete) were 84.1% and 89.4% respectively. The future predictive accuracy, the ability of the ratings to predict the outcome of future games, is generally about the same or a couple of percentage points lower. But perhaps the best indication of how accurate the ratings actually are is not a prediction percentage but the direct comparison between the ratings' predictions and the coaches' playoff seedings. Over the past thirteen years of comparison with the coaches' seedings in Minnesota and Wisconsin HS (13 x 2 = 26 seasons), these ratings (frozen before the start of the playoffs) have been beaten by the coaches' seedings just twice in predicting playoff game winners, despite the inherent disadvantage that home ice is assigned based on the coaches' seedings.
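
To make the percentage above concrete, the short sketch below counts how often the higher-rated team actually won, excluding ties, using the hypothetical Game records and ratings dictionary from the sketch in the previous section. It illustrates the idea of past predictive accuracy only; it is not the site's actual evaluation code.

    def past_predictive_accuracy(games, ratings):
        """Share of decided games (ties excluded) in which the higher-rated team won."""
        correct = decided = 0
        for g in games:
            margin = g.home_goals - g.away_goals
            if margin == 0:
                continue                       # ties are excluded from the percentage
            decided += 1
            if (ratings[g.home] > ratings[g.away]) == (margin > 0):
                correct += 1
        return correct / decided if decided else float("nan")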