82-79 (one of the games with the Phillies will be called off, hence the 161-game season).
In response to Cedric's question, I believe there is something to the idea that a wider standard deviation in runs scored/runs allowed drives teams to under- or over-perform their Pythagorean projection in unusual ways. As for the characteristics we can ascribe to a high-standard-deviation team, I offer: a team that hits (or gives up) an unusual number of home runs. I haven't tested the hypothesis, but the 2004 Yanks and Reds both fit the model: both had unusually high HR rates (the Yanks hit 242 HRs, the Reds surrendered 236). HRs lead to a higher standard deviation of runs scored per inning, and hence to lumpy scoring patterns from one game to the next. Lumpy scoring patterns, in turn, lead to wider-than-normal run differentials over a 162-game season.
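The variance effect can be illustrated with a quick Monte Carlo sketch (my own toy model, not from the post): two offenses with the same average runs scored, one steady and one "lumpy," facing the same opposition. Uniform spreads are a crude stand-in for HR-driven scoring variance, and all the numbers are invented for illustration.

```python
# Toy Monte Carlo: same average runs, different scoring variance,
# compared against the exponent-2 Pythagorean expectation.
import random

random.seed(42)

def simulate(mean_rs, spread_rs, mean_ra, spread_ra, games=162_000):
    """Return (actual win pct, Pythagorean win pct) for a team whose
    per-game runs are drawn uniformly from mean +/- spread."""
    wins = total_rs = total_ra = 0.0
    for _ in range(games):
        rs = max(0.0, mean_rs + random.uniform(-spread_rs, spread_rs))
        ra = max(0.0, mean_ra + random.uniform(-spread_ra, spread_ra))
        total_rs += rs
        total_ra += ra
        if rs > ra:
            wins += 1
        elif rs == ra:
            wins += 0.5  # split the (vanishingly rare) ties
    pythag = total_rs**2 / (total_rs**2 + total_ra**2)
    return wins / games, pythag

steady = simulate(5.0, 2.0, 4.5, 2.0)              # tight scoring pattern
lumpy  = simulate(5.0, 4.5, 4.5, 2.0)              # same averages, wider spread
print("steady: actual %.3f vs pythag %.3f" % steady)
print("lumpy:  actual %.3f vs pythag %.3f" % lumpy)
```

In this toy setup the steadier team converts the same run differential into more wins than the lumpy one, so the two sit at different distances from their Pythagorean projection, which is the structural effect being claimed.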
I recall that Bill James has studied the 1-run ballgame phenomenon, and he suggests the bullpen is the #1 factor in a team's success in 1-run games. As for the bullpen issue, I propose that a given team's spread in bullpen quality (from the #1 to the #6 slot) may also affect the Pythagorean run differential. In other words, you can materially affect your team's chances of winning by having your best pitcher pitch in the closest games. This is the Scott Williamson/Josias Manzanillo bullpen quagmire of 2003, or the story of the Reds' 2004 bullpen, where the tail end was awful.
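The leverage point can be made with some back-of-the-envelope arithmetic (all numbers here are invented for illustration): give the ace every 1-run lead instead of splitting those chances with the mop-up man, and the wins add up even though the pitchers' overall workloads don't change.

```python
# Hypothetical blown-lead rates for the best and worst relievers.
blown_lead_prob = {"ace": 0.10, "mop_up": 0.25}

close_games = 50  # assumed save situations with a 1-run lead over a season

# Strategy A: the ace protects every 1-run lead.
wins_a = close_games * (1 - blown_lead_prob["ace"])

# Strategy B: 1-run leads split evenly between ace and mop-up man.
wins_b = close_games * 0.5 * (1 - blown_lead_prob["ace"]) \
       + close_games * 0.5 * (1 - blown_lead_prob["mop_up"])

print(wins_a - wins_b)  # extra wins from leverage alone
```

With these made-up rates the difference is 3.75 wins a season, all of them in close games, which is exactly the kind of thing that shows up as out-performing your Pythagorean record.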
Over the long haul, the Pythagorean Formula shows that there is an incredibly strong relationship between runs scored and the events that create runs (hits, SBs, walks, HRs). Nevertheless, runs and Pythagorean runs don't always even out every season for every club, and I believe there are structural reasons why this occurs.
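For reference, the formula in question, in Bill James's original exponent-2 form, with illustrative (invented) run totals:

```python
def pythagorean_pct(runs_scored, runs_allowed, exponent=2):
    """Expected winning percentage from season run totals."""
    rs, ra = runs_scored**exponent, runs_allowed**exponent
    return rs / (rs + ra)

# Invented example: a team scoring 850 and allowing 750 projects to
# roughly a .562 winning percentage, about 91 wins over 162 games.
pct = pythagorean_pct(850, 750)
print(round(pct, 3), round(pct * 162, 1))
```

The deviations discussed above are precisely the gap between this projection and a team's actual winning percentage.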