Letters to the Editor

Fortnightly Magazine - September 2007

caused in part by a lack of dynamism in the EIA’s modeling. Such a conclusion would reflect a serious challenge for any long-term forecaster and would require significant further study. Second, and perhaps most important, any consumer of the EIA’s forecasts (or, indeed, any long-term forecast) should clearly be incorporating uncertainty into its modeling and decision-making processes. Whether or not the EIA’s forecasts are biased does not alter the fact that they are riddled with noise. To use the forecasts, which could be off by 50 percent or more during any particular period, without simultaneously illustrating the uncertainty in the results of one’s analysis is, in our judgment, grossly irresponsible.

David Rode, Managing Director, DAI Management Consultants Inc.,

Paul Fischbeck, Professor of Social and Decision Sciences and Professor of Engineering and Public Policy, Carnegie Mellon University,



1. Attanasi, E., "U.S. Gas Production: Can We Trust the Projections?" Public Utilities Fortnightly (August 2001): 12-17.

2. Considine, T., and F. Clemente, "Betting on Bad Numbers," Public Utilities Fortnightly (July 2007): 53-59.

3. Energy Information Administration, Annual Energy Outlook Retrospective Review, DOE/EIA-0640(2006), U.S. Department of Energy, Washington, DC, April 2007.


The Authors Respond:

As we demonstrate below, far from identifying problems with our analysis, Rode and Fischbeck’s letter actually reinforces our finding of systematic bias in the EIA forecasts of natural-gas markets [2].

In their Table 1, Rode and Fischbeck present percentage forecast errors for 1- to 10-year EIA forecasts of wellhead natural-gas prices from 1989 to 2004. They then calculate average forecast errors for each of the 10 forecast horizons. Next, they average those percentage errors over the entire sample and over two sub-samples, pre- and post-1997, and find that the errors averaged 3.0 percent, 15.7 percent, and -43.5 percent, respectively. Because the average percentage error (APE) for the entire sample is only 3 percent, they conclude, “This hardly seems like persuasive evidence of material systematic bias.”

But, as we stressed in our paper, APE is a deficient measure of forecasting accuracy because large positive errors cancel out large negative errors. A more accurate measure of forecasting accuracy is the root mean squared error (RMSE), which we use in our paper, and which we report below for their sample. Neither measure, however, addresses the question of bias.
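The distinction between the two measures is easy to demonstrate. The short Python sketch below uses made-up percentage errors (not the EIA figures): the large positive and negative misses cancel to a near-zero APE, while the RMSE exposes their true magnitude.

```python
import math

# Hypothetical percentage forecast errors (forecast minus actual, as a
# share of actual): large positive misses followed by large negative ones.
errors = [0.40, 0.30, 0.20, -0.25, -0.35, -0.30]

# Average percentage error (APE): positive and negative errors offset
# one another, so the average can sit near zero despite huge misses.
ape = sum(errors) / len(errors)

# Root mean squared error (RMSE): squaring keeps every error positive,
# so misses in either direction cannot cancel out.
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

print(f"APE  = {ape:+.3f}")   # near zero despite ~30-40% misses
print(f"RMSE = {rmse:.3f}")   # reveals the typical error magnitude
```

Here the APE is essentially zero while the RMSE is roughly 0.31, i.e., a typical miss of about 31 percent, which is why a small APE alone says nothing about forecast quality or bias.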

A layman’s definition of systematic bias is forecast errors that display a distinct pattern. Consider the 1- to 4-year-ahead EIA forecasts of wellhead natural-gas prices from 1991 to 2006 in Figure 1. There are three distinct patterns of systematic bias in these forecasts. First, the errors are often positive before 1998 and negative afterward. Second, there is a distinct downward trend in the forecast errors over time. Finally, both of these features are amplified as the forecast horizon lengthens from one to four years. Rode and Fischbeck’s own numbers tell a story of continuing pessimism followed by persistent optimism. This is precisely the definition of systematic bias.
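These two patterns, a sign flip around 1998 and a downward drift, can be checked mechanically. The Python sketch below runs such a diagnostic on a synthetic error series (illustrative numbers only, not the EIA data): it compares the average error in each sub-period and fits a least-squares trend line to the errors over time.

```python
# Illustrative only: a hypothetical error series with a downward trend,
# standing in for the 1- to 4-year-ahead errors plotted in Figure 1.
years = list(range(1991, 2007))
errors = [0.30 - 0.04 * (y - 1991) for y in years]

# Pattern 1: does the sign of the average error flip around 1998?
pre = [e for y, e in zip(years, errors) if y < 1998]
post = [e for y, e in zip(years, errors) if y >= 1998]
mean_pre = sum(pre) / len(pre)
mean_post = sum(post) / len(post)

# Pattern 2: least-squares slope of errors on time. A clearly nonzero
# slope signals a trending, systematic pattern rather than mean-zero noise.
n = len(years)
xbar, ybar = sum(years) / n, sum(errors) / n
slope = (sum((x - xbar) * (e - ybar) for x, e in zip(years, errors))
         / sum((x - xbar) ** 2 for x in years))

print(f"mean error pre-1998:  {mean_pre:+.2f}")
print(f"mean error post-1998: {mean_post:+.2f}")
print(f"trend (per year):     {slope:+.3f}")
```

With this synthetic series, the pre-1998 mean is positive, the post-1998 mean is negative, and the fitted trend is negative, exactly the signature of systematic bias that averaging the full sample to a single near-zero APE conceals.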

In our paper, we use the error decomposition analysis developed by Theil [4] and Maddala [3]