The Beginner’s Guide to Replacement Level
Like any good acronym, the letters in WAR each stand for something. The “W” stands for wins, which is something with which we’re all pretty familiar. The “A” stands for above, which is just an adjoining word, but the “R” stands for replacement which is a place where newcomers sometimes get lost. What is replacement level, why does it matter, and how do you calculate it? If WAR compares players to replacement level, to understand WAR we need to understand R.
Let’s start from the beginning. Replacement level is simply the level of production you could get from a player that would cost you nothing but the league minimum salary to acquire. Minor league free agents, quad-A players, you get the idea. The concept is pretty tidy. These are the players that are freely available and if five of your MLB level players came down with the flu, you could go out and acquire replacement level players without really giving up anything you value other than their union mandated payday.
In other words, if you had no one on your roster on April 1st and just needed to populate a team, you’re generally signing replacement level players.
This definition exists because we want to be able to compare the number of wins a player is worth compared to the player a team would have to acquire to fill their shoes. If a great player is making $20 million and a replacement player is making $500,000, that great player is providing you X number of wins for $19.5 million because you would have to allocate that half million to the roster spot no matter what.
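The marginal-cost logic above is simple enough to sketch in a few lines. This is just the article's illustrative numbers ($20 million star, $500,000 league minimum), not real contract figures:

```python
# Marginal cost of a player over a freely available replacement.
# The league minimum is owed to whoever fills the roster spot,
# so only the difference above it actually buys wins.
def marginal_cost(player_salary, league_minimum=500_000):
    return player_salary - league_minimum

print(marginal_cost(20_000_000))  # 19500000 -> $19.5M buys the wins above replacement
```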
This covers the basics of what a replacement player is, but there’s more to understanding them properly. Next you have to consider why we care about replacement level at all. Anyone can just come up with a concept, but why is replacement level useful?
It’s useful because we want to compare players to a common baseline, and this will turn out to be a pretty great one to use. Obviously there’s an internal logic to creating benchmarks. How good is a .340 wOBA? Well that depends on what the rest of the league does at the plate. So let’s just compare every player to league average and see what happens. It makes sense. The .340 wOBA hitter is 20 or 30 points better than average, so they are providing some positive value. But how much value?
Here’s where it gets tricky. Would you rather have a .340 wOBA hitter for 10 PA or a .320 wOBA hitter for 600 PA? If you’re a fan of using the average player as a baseline, then the first player looks great even though you know that doesn’t make any sense. Both your performance per PA and your number of PA matter when determining value, because those 590 PA that the first player isn’t covering have to go to someone.

Let’s imagine for a moment that you have an unlimited supply of batters who can play any position and hit .280 wOBA. They aren’t literally unlimited, but for practical purposes you’ll never run out. What you want to measure is how much better a player is than that .280 wOBA over the number of PA they have (we’re assuming the rest of their skills are the same for clarity).
You want to do that, because you want to distinguish between a player who was average for 1 PA and a player who was average for 700 PA because if you’re average for 1 PA, the other 699 PA go to a worse player, making the total value of the roster spot much worse. This is why we like to use replacement level instead of average as a baseline.
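To make the comparison concrete, here is a rough sketch of the two hitters from above measured against the freely available .280 wOBA baseline. The standard wOBA-to-runs conversion divides the wOBA gap by the wOBA scale; the 1.15 scale used here is an assumed, typical value (it varies a little by season):

```python
# Rough runs above the freely available .280 wOBA hitter.
# WOBA_SCALE = 1.15 is an assumed, season-typical constant.
WOBA_SCALE = 1.15
REPLACEMENT_WOBA = 0.280

def runs_above_baseline(woba, pa):
    return (woba - REPLACEMENT_WOBA) / WOBA_SCALE * pa

print(round(runs_above_baseline(0.340, 10), 1))   # ~0.5 runs for the 10 PA player
print(round(runs_above_baseline(0.320, 600), 1))  # ~20.9 runs for the 600 PA player
```

The weaker hitter with 60 times the playing time is worth about 40 times as many runs over the freely available alternative, which is exactly why playing time has to be part of the baseline.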
Let’s review for a second. Everyone can agree on the need for a baseline. That’s just a thing. So we just want to pick a baseline that helps us tell the most accurate story, and replacement level is the best option because it’s a level of player that teams never have to search beneath. By definition, there are always replacement level players.
So if you gather a player’s stat line, how do you figure out where they sit relative to replacement level? Let’s say they had a .350 wOBA in 600 PA while having +5 fielding runs at third base and being an average base runner. Those are their numbers, but what we want to know is how many wins this player provided to his team compared to what would have happened if the team had used a freely available player instead. A replacement player.
So first we need to compare this player to league average. We’ll just throw up some quick estimates and say they’re roughly 21 runs better than the average player over those 600 PA. But how does league average compare to replacement level? How do we go from comparing a player to the league to comparing them to replacement level?
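The "roughly 21 runs" estimate can be reconstructed with the same wOBA-to-runs conversion. The league wOBA of .320 and wOBA scale of 1.15 below are assumed, typical values, and for simplicity this sketch skips refinements like positional adjustment:

```python
# Reconstructing the "roughly 21 runs above average" estimate.
# LG_WOBA and WOBA_SCALE are assumed, season-typical constants.
LG_WOBA = 0.320
WOBA_SCALE = 1.15

batting = (0.350 - LG_WOBA) / WOBA_SCALE * 600  # ~15.7 runs at the plate
fielding = 5.0                                   # +5 runs at third base
baserunning = 0.0                                # an average base runner
total = batting + fielding + baserunning
print(round(total, 1))  # ~20.7, i.e. roughly 21 runs above average
```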
This is going to require a touch of math, but the mechanics are easy enough. Based on the quality of those freely available players, we believe that a team full of players making the MLB minimum would win about 29.7% of its games in a given year, or roughly 47-48 wins per team. Multiply that by 30 and you have something between 1,430 and 1,440, leaving about 1,000 wins up for grabs out of the 2,430 in a season. Those 1,000 available wins are the “wins above replacement.” Those get divided up with 57% going to position players and 43% going to pitchers, based on the role we believe each plays.
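The arithmetic behind that paragraph is straightforward to verify. The 29.7% replacement winning percentage is the article's figure; the rest follows from a 162-game, 30-team season:

```python
# The arithmetic behind the "about 1,000 wins above replacement" figure.
REPL_WIN_PCT = 0.297  # assumed winning percentage of a replacement team
GAMES = 162
TEAMS = 30

repl_wins_per_team = REPL_WIN_PCT * GAMES        # ~48.1 wins
repl_wins_league = repl_wins_per_team * TEAMS    # ~1,443 wins
total_wins = GAMES * TEAMS / 2                   # 2,430 wins league-wide
wins_above_repl = total_wins - repl_wins_league  # ~987 wins up for grabs

position_share = wins_above_repl * 0.57  # ~562; rounding the pool to 1,000 gives the 570
pitcher_share = wins_above_repl * 0.43
print(round(wins_above_repl), round(position_share))
```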
What this means for position players is that there are 570 WAR to go around per season, so if the season isn’t over you simply need to prorate that. Let’s say the season is over, so the full 570 is up for grabs. Our player had 600 PA, and we want to find out what an average PA was worth that year. So we need:
570 * RunsPerWin * (PA / lgPA)
If you let the numbers cancel out so that we’re left with that player’s share of the runs, you have the difference between a replacement level player and an average one over the course of those 600 PA. It works out to roughly 17 runs, and if you add that to the 21 runs we determined our player was above average, we wind up with about 38 runs above replacement, which converts to about 4.1 WAR.
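Putting the pieces together for our example player looks like this. The runs-per-win figure (~9.3) and league PA total (~184,000) are assumed, season-typical values; the exact constants vary by year:

```python
# Full WAR calculation for the article's example player.
# RUNS_PER_WIN and LG_PA are assumed, season-typical constants.
RUNS_PER_WIN = 9.3
LG_PA = 184_000
POS_PLAYER_WAR_POOL = 570  # full-season WAR available to position players

pa = 600
runs_above_avg = 21.0  # from the wOBA, fielding, and baserunning estimates

# Gap between replacement level and average over this player's PA:
repl_to_avg_runs = POS_PLAYER_WAR_POOL * RUNS_PER_WIN * pa / LG_PA  # ~17.3

runs_above_repl = runs_above_avg + repl_to_avg_runs  # ~38 runs
war = runs_above_repl / RUNS_PER_WIN                 # ~4.1 WAR
print(round(repl_to_avg_runs, 1), round(war, 1))
```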
To simplify it a bit: every average PA is worth more than a replacement PA by the same amount. If we say a replacement level PA is worth 0, then add in the average value of a PA, the player’s performance relative to average per PA, and the number of PA, you can tell how much value a player added compared to a freely available player.
And why would we want to know that? That’s the central question. I can roll out all kinds of formulas and pretend they are meaningful, but I think I can convince you that you inherently care about replacement level.
When you’re sitting down to construct a team, you have to make decisions about which players you want to acquire. You want to know if you should sign Player X for $8 million per season. Specifically, you want to know if it’s worth it to pay Player X $8 million rather than paying Player Y $500,000 to perform at replacement level. You care about how much better Player X is than Player Y because you cannot pay a player less than the minimum and you can always find a player at least as good as Player Y to fill the roster spot.
You have to spend $12.5 million or so on your roster and you essentially can’t do worse than 47 or 48 wins. The way you measure your team’s success is how well it takes its $100 million of additional payroll space and uses it to grab another 40+ wins during the season. But the key is that you don’t want to spend any additional money on a player who’s just a replacement level talent. There’s a direct connection to salary.
Does it really matter if you compare Andrew McCutchen and Mike Trout to the average player or a replacement one? If you only care about which one is better, not really. You’ll be able to work with their stat lines. But if you want a better handle on most of the league, a baseline that incorporates performance and playing time is vital. You can’t do that with average as your baseline. And it’s also important because we’re talking about the true bottom of the MLB talent pool.
It’s a fairly easy concept to see in practice. You can identify replacement type players without much effort. It gets trickier when you’re trying to actually parse out the specific calculation, so we can leave it at that. If your entire team gets hurt, you don’t get to use an army of average players or an army of players who would literally hit .000. Replacement level is the bookend, and having a baseline at the bottom is much better for determining a player’s importance than having one in the middle.
Neil Weinberg is the Site Educator at FanGraphs and can be found writing enthusiastically about the Detroit Tigers at New English D. Follow and interact with him on Twitter @NeilWeinberg44.
Very interesting, but still not sure about some of the assumptions used in the formula for WAR. How do you determine RunsPerWin and what is lgPA?
Gary
lgPA is simply the number of PA taken in a season by the whole league. Runs per win is based on the run environment and is an estimate of the number of extra runs a team needs to score in order to add one win in the standings. In theory a team that scores 700 runs and gives up 700 runs should be 81-81. If they score 709 and give up 700, they should be roughly 82-80. This is kind of an overall average based on the run environment, so obviously it’s not a perfect match for every team in every season.