Ok, I looked up the formula for wOBA:
The wOBA formula for the 2011 season was:
wOBA = (0.69×uBB + 0.72×HBP + 0.89×1B + 1.26×2B + 1.60×3B +
2.08×HR + 0.25×SB − 0.50×CS) / PA
These weights change on a yearly basis, so you can find the specific wOBA weights for every year from 1871 to 2010 here.
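To make the formula concrete, here is a minimal sketch of it in Python. The weights are taken directly from the 2011 formula quoted above; the function name, argument names, and the sample stat line are my own illustration, not from the article.

```python
def woba_2011(uBB, HBP, singles, doubles, triples, HR, SB, CS, PA):
    """wOBA using the 2011 weights quoted above (weights vary by season)."""
    return (0.69 * uBB + 0.72 * HBP + 0.89 * singles + 1.26 * doubles
            + 1.60 * triples + 2.08 * HR + 0.25 * SB - 0.50 * CS) / PA

# Hypothetical season line: 50 uBB, 5 HBP, 100 1B, 30 2B, 3 3B, 25 HR,
# 10 SB, 4 CS over 600 PA.
example = woba_2011(50, 5, 100, 30, 3, 25, 10, 4, 600)
print(round(example, 3))
```

For a different season you would swap in that year's weights; only the coefficients change, not the shape of the formula.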
1. If the weights for these constants change every year, doesn't that make year-to-year comparisons of volatility almost useless?
2. Also from the same fangraphs article:
Rules of Thumb
Above Average: 0.340
Below Average: 0.310
So Volatility = STD(daily_wOBA)/Yearly_wOBA^.52
For an excellent hitter, wOBA = .400 which means the denominator is .62
For an awful hitter, wOBA = .290 which means the denominator is .52
Thus the volatility formula rewards the better players with a lower volatility score and punishes the worse players with a higher one, even if they had identical standard deviations. Specifically, the "awful" hitter takes about a 19% penalty in his VOL score ("awful" multiplier: 1/.52 ≈ 1.92, "excellent" multiplier: 1/.62 ≈ 1.61).
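The arithmetic above can be checked directly; this snippet just reproduces the denominator calculation for the two hitters and the resulting multiplier gap (variable names are my own):

```python
# Denominator of the volatility formula: yearly wOBA raised to the 0.52 power.
excellent = 0.400 ** 0.52   # excellent hitter's denominator, ~0.62
awful = 0.290 ** 0.52       # awful hitter's denominator, ~0.52

# Ratio of the implied multipliers (1/denominator). With the rounded
# figures (.52 and .62) this comes out to ~1.19, i.e. the ~19% penalty;
# with unrounded denominators it is ~1.18.
penalty = (1 / awful) / (1 / excellent)
print(round(excellent, 3), round(awful, 3), round(penalty, 3))
```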
Overall, this is an interesting effort, but I would expect the better wOBA hitters to be more volatile. If Votto has a .400 wOBA, it's a lot easier for him to have a day that's worse than his average than it is for someone like Stubbs, whose average sits closer to the distribution peaks of .000 and .250 shown in the graph. The different multipliers hide that fact.
I can't think of a better method. I don't mean to come across as negative, but I question whether this has true value.