RATINGS
 


The most important thing to understand about how our computer power ratings system works is that it is 100% objective.  The size of the schools, their division, the league they're in, how good the school* or league is historically, their geographic location, how well liked the school is, how good the league is in other sports--none of these things are programmed into the system.  None of them are there to bias it the way they inevitably bias humans saddled with the daunting task of trying to figure out who belongs in post-season play.  To the system, they are just a bunch of teams with a bunch of results.  Cold, hard, and unfeeling, yes...but as accurate, objective, and fair as is possible.

*For 11-man football ratings only, how good the team appears to be entering the season (based on players graduated/returning) is a factor in the ratings early in the year, in order to improve the early-season accuracy of the ratings, but this factor is phased out as the season goes on.  For more information on this, please click here.


We will start by explaining how the ratings work when margin of victory is used as a factor, since the system is much easier to explain that way.

When margins are used, the difference in ratings between two teams is roughly a measure of how many points better one team is than the other.  An 80 should beat a 60 by 20, and so on.

Example:

Assume the following starting ratings.  Don't worry about how they got to this point for now- that will be explained in a minute.

Team A's rating is 10.
Team B's rating is 0.
Team C's rating is -5.
Team D's rating is -8.
Team E's rating is -10.

The way our program works is as follows.  It systematically sorts through all the results for the season (season-to-date results if we're dealing with an in-progress season).  It takes each result and compares it to what "should" have happened given the ratings of the teams.  It knows that if A played C, A should have handled them fairly easily.  If A lost that game, or even squeaked by with a narrow victory, its rating is hurt, while C's is helped.  The system keeps checking through all the results for every team.  Sticking with team A, though: let's say they also played D and won by 15 (that's about what they should have done--no real impact on either team's rating there), demolished team B by 22 (which definitely helps their rating), and beat E by 10 (not doing quite as well as could have been expected--another "ding" against their rating).  When all is said and done, the program takes the aggregate of how much better or worse they did than expected across all their games, divides that by the number of games played, and adjusts their rating accordingly.  For example, if they averaged performing two points worse than expected, their rating drops from a 10 to an 8.

(Please note: this is definitely over-simplified; it isn't this straightforwardly mathematical.  Points aren't everything by any means--the win or the loss is always the most important thing, even when margins are used.  There is a "diminishing returns" principle at play, so as to not fully credit a team for blowing out a weak opponent.  And in addition to the cutoff point past which margins are not counted, there is a "win minimum" as well--a number which no win is credited as being below...because, of course, a one-point win isn't just barely better than a one-point loss.  Far from it.)

All teams are adjusted similarly, and then we start over from the beginning with the new ratings--A is now an 8 and expected to perform accordingly, etc.  This is done repeatedly until there is no longer any movement in the ratings, and they settle in where they "should" be.
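If you think in code, here is a rough sketch in Python of what one of those adjustment passes looks like.  To be clear, this is just our over-simplified explanation translated into code, not the actual system--the margin cap of 28 and the "win minimum" of 7 are made-up numbers purely for illustration:

MARGIN_CAP = 28    # made-up cutoff past which extra margin isn't counted
WIN_MINIMUM = 7    # made-up floor: no win is credited as less than this

def credited_margin(margin):
    """Apply the cap and the win-minimum floor to a raw victory margin."""
    if margin == 0:
        return 0
    if margin > 0:
        return min(max(margin, WIN_MINIMUM), MARGIN_CAP)
    return max(min(margin, -WIN_MINIMUM), -MARGIN_CAP)

def adjust_once(ratings, games):
    """One pass: move each team by its average (actual - expected) margin."""
    surprise = {team: [] for team in ratings}
    for team, opponent, margin in games:   # margin = team score - opponent score
        expected = ratings[team] - ratings[opponent]
        actual = credited_margin(margin)
        surprise[team].append(actual - expected)
        surprise[opponent].append(expected - actual)
    # Assumes every team in the ratings has played at least one game.
    return {team: rating + sum(surprise[team]) / len(surprise[team])
            for team, rating in ratings.items()}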

Remember when we asked you not to worry about how the teams got to their starting ratings?  Actually, they didn't start there at all.  All teams start at 0.  There is no bias whatsoever--last year's stats and pre-season projections are not used as a starting point (again, see the one exception above).  Everybody starts at 0, and the ratings are run over and over until the movement stops.  It's just much harder to conceptualize that way (and you thought this way was hard!)--that's why we started the example off with the teams already having ratings.
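Continuing the sketch from above, the full run would look something like this: everybody starts at 0, and the passes repeat until the ratings stop moving.  (The re-centering step and the 0.001 tolerance are, again, our inventions for illustration--re-centering pins the league average at 0 without changing the differences between teams, which are what matter.)

def run_ratings(teams, games, tolerance=0.001, max_passes=10_000):
    """Start everyone at 0 and iterate until the movement stops."""
    ratings = {team: 0.0 for team in teams}
    for _ in range(max_passes):
        new = adjust_once(ratings, games)
        # Re-center so the league average stays at 0.
        mean = sum(new.values()) / len(new)
        new = {team: r - mean for team, r in new.items()}
        if max(abs(new[t] - ratings[t]) for t in ratings) < tolerance:
            break
        ratings = new
    return new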

When run without margins, the process is the same, but, of course, the margin of victory is not considered.  A win is a win, and all wins are counted at the same level.  Therefore, the examples of getting your rating "dinged" because of a closer-than-expected win do not apply.  All that matters is the win (and who you played).
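In terms of the sketch above, running without margins just means crediting every win identically.  One simple way to picture it (with a made-up "win value" of 12 points--again, not the actual system's number) is to hand the same machinery a fixed margin for every game:

WIN_VALUE = 12   # made-up fixed credit for any win, regardless of the score

def adjust_once_no_margins(ratings, games):
    """Same pass as before, but a win is a win: every victory counts as
    exactly WIN_VALUE points better than the opponent, no more, no less."""
    surprise = {team: [] for team in ratings}
    for winner, loser in games:            # only who beat whom matters now
        expected = ratings[winner] - ratings[loser]
        surprise[winner].append(WIN_VALUE - expected)
        surprise[loser].append(expected - WIN_VALUE)
    return {team: rating + sum(surprise[team]) / len(surprise[team])
            for team, rating in ratings.items()}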

For an example without margins, consider the following situation:



Team   W-L   Games List
A      3-0   wins: C,D,E / losses: none
B      2-1   wins: F,F / losses: E
C      2-1   wins: D,E / losses: A
D      1-2   wins: F / losses: A,C
E      1-2   wins: B / losses: A,C
F      0-3   wins: none / losses: B,B,D



Before continuing on, take a moment to analyze the data in the chart and think about which order you believe the teams should be placed in.

A couple of things are fairly clear.  Team A is obviously having the strongest season thus far, while Team F certainly belongs at the bottom of the list.  At a casual glance, you may not be able to decide which 2-1 team should be rated higher, and likewise with the 1-2 teams.  A closer look would likely convince you that, of the 2-1 teams, C should be rated above B.  B's wins came against the 0-3 team, while C's wins were against stronger competition.  Likewise, B's loss was to a weaker opponent than C's loss, as C lost to the top team (A).  Regarding the 1-2 teams, they have identical losses, but E's win came against a stronger opponent (D's only win was over the 0-3 team).  So, E should be rated slightly higher than D.

Not surprisingly, when our system runs through this exact data, the actual ratings do in fact place C above B and E above D.



Team   W-L   Rating   Games List
A      3-0     22.2   wins: C,D,E / losses: none
C      2-1     13.2   wins: D,E / losses: A
E      1-2      3.2   wins: B / losses: A,C
D      1-2     -0.9   wins: F / losses: A,C
B      2-1     -8.7   wins: F,F / losses: E
F      0-3    -22.1   wins: none / losses: B,B,D



One thing you may not have seen coming, however, is that B is actually rated below the 1-2 teams as well.  Again, a close inspection of Team B's early-season results reveals that their two wins were against the lowest rated team (and thus aren't all that impressive), and their loss was to one of the 1-2 teams.  In other words, their W-L record alone (2-1) greatly overstates how well they have done.  The 1-2 teams, particularly Team E, have had better seasons thus far.  Team E, after all, beat B head-to-head, and their losses were to highly rated teams (A and C).  This suggests that, at this early point in the season, without further data to go on, E should be placed above B despite E's inferior W-L record.
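In fact, even the simplified no-margin sketch from earlier reproduces this ordering when fed the schedule from the table.  (The numbers it produces are not our actual ratings--with a made-up win value of 12 the scale comes out different--but the order is the same: A, C, E, D, B, F.)

games = [   # (winner, loser), straight from the table above
    ("A", "C"), ("A", "D"), ("A", "E"),
    ("B", "F"), ("B", "F"),
    ("C", "D"), ("C", "E"),
    ("D", "F"),
    ("E", "B"),
]

ratings = {team: 0.0 for team in "ABCDEF"}
for _ in range(1000):   # plenty of passes for a schedule this tiny
    new = adjust_once_no_margins(ratings, games)
    mean = sum(new.values()) / len(new)
    ratings = {team: r - mean for team, r in new.items()}

for team in sorted(ratings, key=ratings.get, reverse=True):
    print(team, round(ratings[team], 1))
# Settles at roughly A 15.0, C 9.0, E 1.7, D -1.7, B -6.9, F -17.1:
# B lands below both 1-2 teams here too, for the reasons given above.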


Rest assured, the process is many times more accurate than the over-simplified points systems that many sections/states are trying as an alternative to the subjectivity of the human process.  Systems where, for example, 3 points are given for a win against a large school, 2 points for a win against a medium-sized school, etc. simply can't compare with what we're doing here.