Originally Posted by Nuxhall41
That's funny because the mathematical formulas you worship are flawed. Completely flawed. You cannot take a complex game and boil it down to OPS (there is no situational component), which is precisely why you must at times actually watch the baseball games. Baseball is too complex and nuanced to say total number of bases is the end-all. Dunn is the perfect example where these flawed metrics fail. If you actually watch him play over the course of several years and pay attention situationally speaking, you tend to form a completely different opinion.
OK, so I'm not a Stat Wanker. I don't even really like Stat Wankers (there is something off-putting about watching baseball, and trying to train myself to see a run as a by-product of the lone Truly Desirable Outcome, which is "Not Making An Out"; I don't know about you, but if "Not Making An Out" doesn't put a run on the board, it's hard for me to get up off my couch and appreciate the beauty and nuance of the formulas behind the game with a hearty whoop and cheer).
But at the same time, I am not intellectually incurious, and can't quite go along with the Colbert-esque "gut" argument, either. There must be some middle ground.
For instance: don't like the current formulae? Propose some new ones. To wit: here's an idea I had, and it relates specifically to Dunn...
A while back (I don't remember if it was in here or on the ORG), somebody did a cool little statistical analysis that showed that just because your team averages 4.5 runs scored per game doesn't mean you'll win the same number of games per year. The main thrust of the analysis was that you'll win more games the smaller the standard deviation of your run scoring.
So just because you score more runs than most teams over the course of 162 games doesn't mean you'll win more games. Your team is hurt if you "score in bunches" (e.g. you score 8, 1, 7, 0, 2, 8, 6, 11, 1, 1 over the course of 10 games for a total of 45 runs); your team is better off if you perform consistently (e.g. you score 4, 5, 6, 2, 5, 4, 5, 4, 4, 6, also for a total of 45 runs in 10 games).
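For anyone who'd rather run the numbers than argue: here's a quick Monte Carlo sketch of that claim, using the two made-up 10-game sequences above. Everything about the opponent is an assumption for illustration (runs allowed drawn from a Poisson distribution with a mean of 4.5; ties counted as half a win), not real data:

```python
import math
import random

random.seed(0)

# The two hypothetical 10-game sequences from above; both total 45 runs.
bunchy     = [8, 1, 7, 0, 2, 8, 6, 11, 1, 1]
consistent = [4, 5, 6, 2, 5, 4, 5, 4, 4, 6]

def poisson(lam):
    """Draw from a Poisson distribution (Knuth's method) -- a crude
    stand-in for a made-up opponent's runs scored, not real data."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def expected_wins(schedule, opp_mean=4.5, trials=10_000):
    """Average wins over the 10-game schedule vs. the hypothetical
    opponent; ties count as half a win."""
    wins = 0.0
    for _ in range(trials):
        for runs in schedule:
            allowed = poisson(opp_mean)
            if runs > allowed:
                wins += 1.0
            elif runs == allowed:
                wins += 0.5
    return wins / trials

print(f"consistent: {expected_wins(consistent):.2f} expected wins")
print(f"bunchy:     {expected_wins(bunchy):.2f} expected wins")
```

Under those assumptions, the consistent sequence comes out ahead by roughly half a win over the 10 games, mostly because the bunchy team wastes runs in blowouts and gets nearly nothing from its 0- and 1-run nights.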
I can't help but wonder if the same exact thing would hold true if you applied that concept to each individual at-bat, instead of to a game as a whole.
The Stat Wankers have their formulae for "Runs Created." From there, one should be able to calculate an individual batter's "Expected Runs Created per Plate Appearance." Each actual outcome also has a "Runs Created" value. Calculate the difference between the expected and the actual for every plate appearance, and keep track over the course of the year: the deviations sum to zero by construction, of course, but in doing this, you get each batter's standard deviation of RC per plate appearance, and you can start to determine how that player's THEORETICAL production actually ties into the whole line-up's EFFECTIVE production.
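As a concrete sketch of the bookkeeping: Bill James's basic Runs Created is (H + BB) × TB / (AB + BB), but a per-plate-appearance version needs a run value for every individual outcome. The per-outcome values below are rough, made-up, linear-weights-style numbers chosen purely for illustration, as are the two hypothetical batters; the point is the deviation calculation, not the weights:

```python
import statistics

# Rough per-outcome run values, made up for this sketch
# (linear-weights-ish; not from any published system).
RUN_VALUE = {"out": 0.0, "BB": 0.3, "1B": 0.45, "2B": 0.75, "3B": 1.0, "HR": 1.5}

def rc_profile(pa_outcomes):
    """For one batter's plate appearances (a list of outcome labels),
    return (expected RC per PA, std dev of per-PA deviation from it).
    The deviations themselves always sum to zero by construction."""
    values = [RUN_VALUE[o] for o in pa_outcomes]
    expected = sum(values) / len(values)          # season-average RC per PA
    deviations = [v - expected for v in values]   # actual minus expected
    return expected, statistics.pstdev(deviations)

# Two hypothetical batters with identical totals but different shapes:
steady = ["1B", "out", "1B", "out", "1B", "out", "1B", "out"]    # spread out
boom   = ["HR", "out", "out", "out", "out", "out", "out", "BB"]  # bunched

for name, pas in [("steady", steady), ("boom", boom)]:
    exp, sd = rc_profile(pas)
    print(f"{name}: expected RC/PA = {exp:.3f}, deviation SD = {sd:.3f}")
```

Both made-up batters come out with the exact same expected RC per plate appearance, but the boom-or-bust one has about twice the standard deviation — which is precisely the quantity the proposal says to track.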
I think that'd be an interesting line of study. Does the RC standard deviation mean anything? How about if you consider it not just for a single player, but amplified over the course of 9 batters? If it does matter, how much does it matter? My "gut" says that it's better to have consecutive hitters who are more likely to put up numbers like "2-4, w/ single, double, SAC" than to have consecutive hitters with lines like "1-4, w/ HR, 1 BB, 2 K's, 1 GiDP."
"Runs Created" might tell us the second player is more valuable. But when placed within a line-up, does the idea of consistency and standard deviation from expected values cause that value to degrade? Could there be a multiplicative effect on team production caused by lower standard deviations among batters, and/or a divisive one caused by higher deviations?
I dunno, I'm far too lazy to work up a spreadsheet. But the little analysis I'm remembering about consistency of runs scored is suggestive enough. Somebody feel free to get on this, chop chop.