The season ended with Austin taking home the championship trophy. As a Texas native, I finally got the chance to see a victory parade, and the experience did not disappoint. Throughout the season, I did a bit of tracking of expected wins using a pretty basic formula I totally stole from Wikipedia.
Now that things are over, I am writing one final article to inform you, based on data, who overperformed and who underperformed. This data is based entirely on regular season performance so that playing additional games in the playoffs does not distort the comparisons.
I used the Pythagorean Wins formula from Wikipedia to make my predictions. The formula is pretty simple and works entirely off points scored, points allowed, total games in the season, and some basic math. I didn’t think to keep a regularly updated spreadsheet, so unfortunately I can’t really track how teams’ expectations changed throughout the season. That’s something I’ll look into doing next season since there has been some interest in this process.
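For the curious, here’s a minimal sketch of the calculation in Python. Two caveats: the 2.37 exponent is the standard pro-football value from the Wikipedia article, and the 13-game season length is inferred from the final standings. Neither is confirmed NSFL math, so treat this as my reconstruction rather than the sim’s actual formula.

```python
def pythagorean_wins(points_for, points_against, games=13, exponent=2.37):
    """Pythagorean expected wins, per the Wikipedia formula.

    The 2.37 exponent is the commonly cited pro-football value and the
    13-game default matches this season's schedule; both are my
    assumptions, not confirmed NSFL parameters.
    """
    pf = points_for ** exponent
    pa = points_against ** exponent
    return games * pf / (pf + pa)

# Sanity check with Austin's totals from later in this article
# (350 points scored, 260 allowed):
print(round(pythagorean_wins(350, 260), 2))  # 8.7, matching the model's 8.70
```

With those two assumptions, the function reproduces the expected-win figures quoted throughout this article, which gives me some confidence in the reconstruction.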
We begin our analysis in the NSFC, where Yellowknife ended up atop the pile with nine wins to their name. They had the most points in the NSFC but were pretty close to the division average in terms of points allowed. Their just-better-than-average defense brought down their expected wins, but no one else had enough output on offense to topple them from the leading spot in expected wins. They were expected to win 7.84 games and won 9, for a difference of +1.16. Yellowknife was the only team in the NSFC to overperform, but what is basically a one-game difference between the expected value and the actual value should not surprise anyone. They were second overall in expected wins, so it should be no surprise that they were the league runners-up. All in all, a good season for the Wraiths, and some small improvements on defense could put them over the top if they can maintain their dominance on offense.
Colorado comes through in second place, winning seven games. Their defense was solid, allowing the second fewest points in the league. The offense was a bit slower, coming in third and barely outpacing fourth-place Baltimore. Colorado was expected to win 7.26 games and actually won 7, making the difference -.26. The model was very close, especially since teams can’t actually win quarter or half games. Colorado’s expected wins were good for third overall in the league, and they made it to the semifinals. Too bad we don’t get a third-place game to further test the model’s accuracy. Colorado should look to improve their offensive numbers a bit if they’re hoping to be more competitive next season.
The third spot in the NSFC is occupied by Baltimore. Despite scoring only three points fewer than Colorado, Baltimore had the second worst defense in the NSFC and a -11 point differential on the season. This dragged their expected wins down to 6.23. The team won 6, so the difference is once again within a quarter of a game at -.23. Baltimore’s offense wasn’t bad, but the defense definitely wasn’t good enough to get the job done. They should look to improve mostly on the defensive side of things and keep the offense stable if they want to be serious contenders.
The home team, Philly, comes next. Philly provides some evidence against the common assertion that defense wins championships. The defense dominated the NSFC, allowing the fewest points in the division and the second fewest league-wide at only 275. The offense, however, couldn’t find the points to stay relevant. They stumbled to a -33 point differential and were expected to win a meager 5.52 games. The team came in at 5 wins. The model was slightly less accurate here, missing by a half game. Missing by a half game isn’t bad; you really just have to look at it as a prediction of “five wins, plus or minus one.” Obviously taking the middle ground is the coward’s way out, but only missing by .5 isn’t terrible. Philly’s focus should be on finding more ways to put points on the board while bringing in enough young talent on defense to start phasing out some aging players.
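Philly’s paragraph conveniently gives us everything needed to verify the reconstruction from earlier: 275 points allowed with a -33 differential implies 242 points scored, and plugging those numbers in reproduces the model’s figure.

```python
# Philly: 275 allowed, -33 differential -> 242 scored (derived, not quoted).
# Same assumed exponent (2.37) and 13-game season as the sketch above.
pf, pa, games, exp = 242.0, 275.0, 13, 2.37
print(round(games * pf**exp / (pf**exp + pa**exp), 2))  # 5.52, matching the model
```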
The cellar dwellers of the NSFC this season were Chicago. Their offense found ways to score, putting up the second most points in the division. Their defense, though, was pretty terrible. They allowed the second most points against in the league, behind only San Jose. Chicago was expected to pick up 4.7 wins and actually won 4, making the difference -.7. They certainly underperformed this season, but that’s not saying much. Even with the extra win the model expected, they would have been tied with Philly for last in the division and still missed the playoffs.
Ultimately, the model correctly predicted the order of teams in the NSFC and was off by more than a game only in the case of the Wraiths, who overperformed by 1.16 games. If we round to the nearest whole number, though, Yellowknife would have been expected to win eight while Colorado would have been expected to win seven, so even that miss wouldn’t have upset the predicted standings at all. Now, we move on to the ASFC.
Austin, the league champions, won the division and were generally dominant on both sides of the ball. Their offense put up 350 points, good for second in the division and more than any NSFC team. Their defense was the best in the league, allowing only 260 points all season. They had the second best point differential and the best expected wins value, with the model putting them at 8.70 wins. They won 8, so the difference is -.70. It’s weird thinking that the league champions underperformed, but here we are…
New Orleans fell into the second spot despite their lackluster offense. Their points allowed were tied with Yellowknife’s, but the other defenses in the conference were generally stronger than those in the NSFC. As a result, their defense was middle of the pack for the ASFC, and the model predicted that they would pick up 7.03 wins. New Orleans finished as one of three teams with 7 wins, for a difference of -.03. This was one of the most accurately modeled teams. That difference is tiny! And New Orleans fell exactly where they were expected to in terms of wins. Their placement in the standings, though, was boosted by Orange County massively underperforming compared to the model’s expectation.
Orange County was expected to win 8.64 games. Their offense was amazing and led the league with 371 points. Their defense was better than NOLA’s. But they couldn’t find a way to get those extra wins, falling 1.64 short of their expected value by winning only 7 games. Austin’s expected win total was only .06 higher than Orange County’s, and both of these teams let the model down by underperforming their expected win totals. Still, the Otters had their chance in the playoffs and couldn’t come up with a needed win there either. At least they kept their playoff loss within two scores, unlike my future teammates in Philly. I expect Orange County will be stronger next season and will be watching their battles with Austin closely.
Arizona rings in at fourth in the ASFC. Their offense wasn’t bad, but the defense struggled, and they gave up 11 more points than they scored on the season. They were, however, the first team to overperform in the ASFC, as the model only gave them 6.24 wins. They scraped out 7, creating a +.76 difference and becoming the third team to win seven games. They actually gave Austin a run for their money in the first round of the playoffs but fell just short in a 7-point loss. If they can shore up their defense and maintain their offense, they could make things interesting next season in an already very tight conference.
San Jose was by far the worst team in the NSFL this season. The model expected a meager 3.44 wins out of them on account of both their offense and defense being terrible. The offense was only better than Philly’s, and the defense was rivaled only by Chicago’s, their fellow last-place team. Giving up 400 points in this league is not a recipe for success, and it’s nearly impossible to overcome a season-long deficit of almost 150 points. Still, they scraped out 5 wins, overperforming their expected tally by a full 1.56 games.
The sim was less accurate in the ASFC than it was in the NSFC, though it did get about as close as possible with New Orleans. The model was not correct in predicting the standings, as it expected Orange County to outperform New Orleans. One does have to wonder how flipping their ranks and, consequently, home-field advantage might have changed outcomes in the playoffs. Maybe someone who is less lazy than me can run some sim tests to see how this change might have shaken things up in the postseason; a rough back-of-the-envelope version follows below. A duel between Austin and Orange County might have made for an epic night in the semifinals.
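In lieu of real sim tests, here’s a quick and dirty estimate of that hypothetical semifinal. I’m feeding each team’s Pythagorean win percentage (expected wins divided by 13 games) into Bill James’s log5 head-to-head formula; that’s my own shortcut, not anything the league sim actually does, and it ignores home-field advantage entirely.

```python
# Hypothetical head-to-head estimate via Bill James's log5 formula,
# using Pythagorean win percentages (expected wins / 13 games).
# A back-of-the-envelope shortcut, not the league sim's actual method.
austin = 8.70 / 13         # Austin's expected wins, from the model
orange_county = 8.64 / 13  # Orange County's expected wins, from the model

p = (austin * (1 - orange_county)) / (
    austin * (1 - orange_county) + orange_county * (1 - austin)
)
print(f"Austin beats Orange County with p = {p:.3f}")  # ~0.505, a coin flip
```

In other words, the model sees these two as nearly indistinguishable, which makes the semifinal we didn’t get sting a little more.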
In conclusion, your underperformers are:
Orange County -1.64
Chicago -.70
Austin -.70
Your overperformers are:
San Jose +1.56
Yellowknife +1.16
Arizona +.76
Oddly enough, those are actually the only three teams that overperformed their expected numbers, so if anything the sim tends to overestimate teams slightly. The average absolute difference between expected and actual wins is .756, and the average signed difference is -.06. In other words, the model’s expected value was within .756 of a team’s actual win total on average, and on average it predicted teams would win .06 more games than they actually did. Obviously one season is a pretty small sample size, but these numbers seem pretty good to me. The model generally came close to each team’s actual performance and predicted the top two teams correctly.
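For anyone who wants to double-check the arithmetic, here’s the computation over the ten differences listed above (same sign convention: actual wins minus expected wins).

```python
# Differences (actual wins - expected wins) for all ten teams, as listed above.
diffs = {
    "Yellowknife": 1.16, "Colorado": -0.26, "Baltimore": -0.23,
    "Philly": -0.52, "Chicago": -0.70, "Austin": -0.70,
    "New Orleans": -0.03, "Orange County": -1.64,
    "Arizona": 0.76, "San Jose": 1.56,
}

mean_abs = sum(abs(d) for d in diffs.values()) / len(diffs)
mean_signed = sum(diffs.values()) / len(diffs)
print(f"mean absolute error: {mean_abs:.3f}")   # 0.756
print(f"mean signed error:   {mean_signed:.2f}")  # -0.06
```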