Considering that the following link is a Hardball Times article, I suspect you've read it:

http://www.hardballtimes.com/main/ar...un-estimation/
I knew this was heading in the direction of Linear Weights. Here's where a metric like Pete Palmer's Batting Runs (BR) holds an advantage over Runs Created (RC) from 2000 to 2005:

...<crickets>...

Here's where RC comes out ahead:

Correlation Coefficient: 0.9604 (BR: 0.9539)

Mean Error: 6.52 (BR: 6.96)

Mean Absolute Error: 19.2 (BR: 20.1)

Standard Deviation: 14.68 (BR: 16.71)

Root Mean Square: 24.14 (BR: 26.15)
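For anyone who wants to reproduce the comparison on their own data, the five accuracy measures above can be computed from paired lists of predicted and actual team Runs. This is a minimal sketch (the function name and sample numbers are mine, not from the study); note that with a population standard deviation, RMS error decomposes as RMS² = ME² + SD².

```python
import math

def error_metrics(predicted, actual):
    """Summarize how well a run estimator tracks actual Runs Scored.

    Returns the five stats quoted above: correlation coefficient,
    mean error (bias), mean absolute error, standard deviation of
    the errors (population form), and root mean square error.
    """
    n = len(predicted)
    errors = [p - a for p, a in zip(predicted, actual)]
    me = sum(errors) / n
    mae = sum(abs(e) for e in errors) / n
    sd = math.sqrt(sum((e - me) ** 2 for e in errors) / n)
    rms = math.sqrt(sum(e ** 2 for e in errors) / n)

    # Pearson correlation between predicted and actual runs
    mp = sum(predicted) / n
    ma = sum(actual) / n
    cov = sum((p - mp) * (a - ma) for p, a in zip(predicted, actual))
    var_p = sum((p - mp) ** 2 for p in predicted)
    var_a = sum((a - ma) ** 2 for a in actual)
    r = cov / math.sqrt(var_p * var_a)
    return {"r": r, "ME": me, "MAE": mae, "SD": sd, "RMS": rms}
```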

In fact, the only Linear Weights metric that outperforms Runs Created at any point is BaseRuns, and only at the ME level (1.78) and the MAE level (18.8).

Now, I'm no master mathematician (that's my sister), but when Runs Created produces a higher correlation, a lower standard deviation, a lower rate of error, and a lower generalized or "power" mean than accepted Linear Weights measurements, that's meaningful. The only Linear Weights metric I've found that comes close to Runs Created is BaseRuns (BsR), and the only place BaseRuns appears to be more accurate is at the low and high extremes of performance (i.e., where baseball isn't played).
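For readers who haven't seen the two formulas side by side: here's a sketch of the basic versions being compared. Runs Created is Bill James's original interactive form; the Batting Runs weights below are the commonly quoted Palmer-style values (Palmer actually re-derives the out coefficient per league-season so the league totals to zero, so treat these numbers as illustrative, not definitive).

```python
def runs_created_basic(h, bb, tb, ab):
    """Bill James's basic Runs Created: (H + BB) * TB / (AB + BB).
    Multiplicative: on-base and advancement interact."""
    return (h + bb) * tb / (ab + bb)

def batting_runs(singles, doubles, triples, hr, bb_hbp, outs,
                 out_weight=-0.25):
    """Linear-weights estimate in the spirit of Palmer's Batting Runs.
    Additive: each event carries a fixed run value, and the result is
    runs above (or below) average, not total runs."""
    return (0.47 * singles + 0.78 * doubles + 1.09 * triples
            + 1.40 * hr + 0.33 * bb_hbp + out_weight * outs)
```

The structural difference is the whole argument: RC multiplies baserunners by advancement, while Batting Runs sums fixed per-event values.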

Over the sample noted, BaseRuns produced the highest percentage of expected Run values within ten Runs of actual output, but it also produced the highest percentage of expected Runs that were off by 60 or more Runs. Double-edged sword there.
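That behavior at the extremes falls out of the structure of the formula. In one common formulation of David Smyth's BaseRuns (the 1.02 multiplier on the B term is a fitted constant that varies between published versions, so take it as an assumption here):

```python
def base_runs(h, bb, hr, tb, ab):
    """BaseRuns: BsR = A * B / (B + C) + D, where
    A = baserunners (H + BB - HR),
    B = advancement factor,
    C = outs (AB - H),
    D = home runs (always score themselves).
    The score rate B / (B + C) is bounded between 0 and 1, so the
    estimate stays sane at extreme offensive levels, where a
    multiplicative formula like RC can run away.
    """
    a = h + bb - hr
    b = (1.4 * tb - 0.6 * h - 3 * hr + 0.1 * bb) * 1.02
    c = ab - h
    d = hr
    return a * b / (b + c) + d
```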

The result is that we have a bunch of metrics that come very close to approximating actual Runs Scored. While I appreciate the work done by the Linear Weights movement, I remain entirely unconvinced that their methodology represents run value better than Runs Created does.

Different? Yes. Better? No.