To the Editor:
We read with great interest Timothy J. Considine and Frank A. Clemente’s recent article on the evaluation of the Energy Information Administration’s natural-gas forecasts (see “Gas-Market Forecasts: Betting on Bad Numbers,” July 2007). We, too, recognize the influence that these forecasts have on both business and policy decision making and, like many others, have taken an interest in evaluating their long-term performance. We take issue, however, with one conclusion of the Considine and Clemente article. They assert that the EIA’s forecasts are subject to “systematic bias.” We are more cautious and present evidence suggesting that bias may be much harder to identify than it appears.
First, however, we should be clear about areas of commonality. We have not independently studied the EIA’s forecasts of production and import activity and therefore have no reason to disagree with Considine and Clemente’s findings (which others have also reported [1]). We limit our comments solely to the forecasts of wellhead prices. With respect to wellhead prices, we also agree that the EIA’s forecasting errors are significant. Natural-gas prices, however, exhibit very high volatility, and any discussion of forecasting “error” must be considered in that context.
We do not, however, believe that the EIA wellhead price forecasts are subject to systematic bias. We believe instead that the “continuing optimism” in the EIA forecasts is cyclical in nature, having been preceded by what Considine and Clemente might have termed “continuing pessimism,” had their analysis considered the period before 1998. Further, we believe that a more subtle force is at play here—one not easily accommodated by the econometric tools used in their analysis. We discuss each issue in turn.
Consider the data in Table 1, taken from the EIA’s own annual analysis of its forecasting performance [3]. (Despite being published in April 2007, the EIA’s report includes actual performance only through 2005; thus, for example, the limit for the 2004 Annual Energy Outlook [AEO] is a one-year forecast. We limit ourselves to the EIA’s data.) We have reconfigured the table so that each column reflects a specific year-ahead prediction (e.g., one year ahead, two years ahead), rather than a prediction for a specific calendar year. The EIA provides these data going back to the 1982 AEO, although we begin our collection in 1989 (post-deregulation).
We assert first that Considine and Clemente’s analysis selectively examines a very limited period of history in looking only at 1998-2006. As Table 1 illustrates, the annual forecasting errors for the post-1998 period are indeed substantially negative (-43.5 percent), indicating that the EIA significantly underpredicted wellhead prices. Had one examined the 1989-1997 period, however, the exact opposite conclusion would have been reached—a story of “continuing pessimism”—as the EIA persistently forecasted higher prices than actually materialized (average error of +15.7 percent).
Over the entire period, however, the average error is small: 3.0 percent. Moreover, the average error over all years and across the majority of forecast horizons is less than 10 percent. To us, this hardly seems like persuasive evidence of material systematic bias. If one’s conclusions are so sensitive to the sample period selected, the only unequivocal result is that extreme caution must be employed and further analysis performed.
In place of a “systematic bias” hypothesis, we would assert that long-term forecasting is a thankless endeavor precisely because good forecasters are reluctant to make predictions about “unpredictable” events. As Figure 1 illustrates, the more than two-decade history of the AEO forecasts covers a period during which major, market-changing events occurred that no forecaster would attempt to capture.
These events, which are only ever known retrospectively, are precisely the sort of shocks that would tend to cause prices to diverge from a previously forecast path. No forecaster, for example, would include an expectation that a Katrina-like event would occur in a particular year. Nevertheless, a forecast for low prices, interrupted by Katrina, could easily be set off course without anyone asserting that “bias” was at work.
If we instead examine the course of forecast errors across time, we see in Figure 2 that there is a clear cyclical trend at work. A several-year period of overestimation was followed by a several-year period of underestimation. More importantly, these cycles are not abrupt, but rather slow transitions between different forecasting states. This analysis brings us to our second assertion, which concerns a “rational expectations” effect likely common to many long-term forecasts. One of the primary reasons long-term forecasting is so difficult is that markets have time to react to the substance of the forecast.
We begin with a simplistic example. Suppose the prevailing belief in the market were that the EIA’s forecast was highly credible (note that we say “credible,” not necessarily “accurate”). Suppose that the forecast in question stated that natural-gas prices in five years would be very low because of excess supply. Market participants, believing in the credibility of such a forecast, would tend to act on it. This action would typically take the form of increasing future consumption of this gas, which they believed to be inexpensive. In other words: build a lot of gas-fired power plants.
In the face of this behavior, however, market participants would find themselves five years forward facing exactly the opposite of the world the EIA predicted: gas in high demand and, therefore, prices escalating rapidly. Here, the “accuracy” (or credibility or legitimacy) of the forecast led directly to its error. Because markets respond dynamically to forecasts made by a static model (such as NEMS), such forecasts risk always being a “step behind” the market. This result is not necessarily because the forecast is bad; indeed, this error is most prevalent when the forecast is good (or believed to be good)!
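This feedback loop can be made concrete with a deliberately toy calculation. Everything below is hypothetical—the prices, the amount of capacity built, and the linear demand response are illustrative numbers only, not a model of NEMS or of any actual gas market:

```python
# Toy sketch of the feedback loop: a credible low-price forecast induces
# new gas-fired capacity, and that new demand bids up the realized price.
# All numbers are invented for illustration.

def realized_price(base_price: float, new_capacity: float,
                   demand_sensitivity: float = 0.5) -> float:
    """Hypothetical linear demand response: each unit of new gas-fired
    capacity raises the realized price by `demand_sensitivity`."""
    return base_price + demand_sensitivity * new_capacity

forecast_price = 2.0   # credible five-year-ahead forecast: cheap gas
capacity_built = 6.0   # plants built *because* gas looks cheap
price = realized_price(forecast_price, capacity_built)

print(f"forecast: ${forecast_price:.2f}, realized: ${price:.2f}")
# The forecast's credibility, not its quality, produced the error.
```

The point of the sketch is only that the realized price exceeds the forecast precisely because the forecast was believed and acted upon.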
What, then, are we to take from these observations? First, that we believe the issue of bias is far from clear. We do not believe that there is a “malicious” bias at work. A bias in terms of a poorly specified model may be present, caused in part by a lack of dynamism in the EIA’s modeling. Such a conclusion would reflect a serious challenge for any long-term forecaster and would require significant further study. Second, and perhaps most important, any consumer of the EIA’s forecasts (or, indeed, any long-term forecast) should clearly be incorporating uncertainty into its modeling and decision-making processes. Whether or not the EIA’s forecasts are biased does not alter the fact that they are riddled with noise. To use the forecasts, which could be off by 50 percent or more during any particular period, without simultaneously illustrating the uncertainty in the results of one’s analysis is, in our judgment, grossly irresponsible.
David Rode, Managing Director, DAI Management Consultants Inc.,email@example.com.
Paul Fischbeck, Professor of Social and Decision Sciences and Professor of Engineering and Public Policy, Carnegie Mellon University, firstname.lastname@example.org.
1. Attanasi, E. “U.S. Gas Production: Can We Trust the Projections?” Public Utilities Fortnightly (August 2001): 12-17.
2. Considine, T., and F. Clemente. “Gas-Market Forecasts: Betting on Bad Numbers,” Public Utilities Fortnightly (July 2007): 53-59.
3. Energy Information Administration. Annual Energy Outlook Retrospective Review. U.S. Department of Energy, Washington, DC. DOE/EIA-0640(2006), published April 2007.
The Authors Respond:
As we demonstrate below, far from identifying problems with our analysis, Rode and Fischbeck’s letter actually reinforces our finding of systematic bias in the EIA forecasts of natural-gas markets [2].
In their Table 1, Rode and Fischbeck present percentage forecast errors for 1- to 10-year EIA forecasts of wellhead natural-gas prices from 1989 to 2004. They then calculate averages for the forecast errors for each of the 10 forecast horizons. Next, they take an average of those percentage errors for the entire sample, and two sub-samples, pre- and post-1997, and find that the errors averaged 3.0 percent, 15.7 percent, and -43.5 percent, respectively. Given that the average percentage error (APE) for the entire sample is only 3 percent, they conclude, “This hardly seems like persuasive evidence of material systematic bias.”
But, as we stressed in our paper, APE is a deficient measure of forecasting accuracy because large positive errors cancel out large negative errors. A better measure is the root mean squared error (RMSE), which we used in our paper and which we report below for their sample. Neither measure, however, addresses the question of bias.
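The cancellation problem can be shown with a small numerical sketch. The error series below is hypothetical, chosen only to mimic a pessimism-then-optimism cycle like the one at issue here:

```python
# Why averaging signed percentage errors (APE) can mask large misses,
# while RMSE cannot. The errors are hypothetical: positive early
# (overprediction), negative late (underprediction).
import math

errors = [15.0, 20.0, 10.0, -18.0, -12.0, -15.0]  # percent

ape = sum(errors) / len(errors)                            # signed errors cancel
rmse = math.sqrt(sum(e**2 for e in errors) / len(errors))  # they do not

print(f"APE:  {ape:.1f}%")   # exactly zero here, despite large misses
print(f"RMSE: {rmse:.1f}%")  # reflects the true error magnitude
```

A forecaster who missed by 10 to 20 percent every single year thus shows an average error of zero, while the RMSE of roughly 15 percent reports the typical size of the miss.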
A layman’s definition of systematic bias is forecast errors that display a distinct pattern. Consider the 1- to 4-year-ahead EIA forecasts of wellhead natural-gas prices from 1991 to 2006 in Figure 1. There are three distinct patterns of systematic bias in these forecasts. First, the errors are often positive before 1998 and negative afterward. Second, there is a distinct downward trend in the forecast errors over time. Finally, both these features are amplified as forecast horizons go from one to four years. Rode and Fischbeck choose the story of continuing pessimism followed by persistent optimism. This is precisely the definition of systematic bias.
In our paper, we use the error decomposition analysis developed by Theil [4] and Maddala [3] to measure systematic bias, which comprises two components: an intercept (linear) bias and a slope (model) bias. Is our finding of systematic bias an artifact of our sample selection from 1998 to 2006? To address this question, we re-ran our decomposition analysis from 1991 to 2006. (We could not go back to 1989 because our method does not allow discontinuities in the sample.) The results are reported in Table 1.
The average percentage errors range from 8 percent for the 1-year forecasts down to less than 5 percent for the 4-year forecasts (see Table 1). Notice, however, that the RMSEs are much larger: 29 percent for the 1-year-ahead forecast, rising to more than 43 percent for the 4-year forecast. Also notice that the two error components reflective of systematic bias, bias and model, remain substantial across all four forecast horizons. For the 1-year forecasts, 25 percent of the forecast errors arise from systematic bias, and this rises to 58 percent for the 4-year forecast. These results indicate that our previous finding of systematic bias is not an artifact of sample selection.
Our findings in Table 1 above support Rode and Fischbeck’s contention that a proportion of the errors over the entire sample could be attributed to these market shocks. For example, the random error component accounts for more than 73 percent of the 1-year-ahead forecast error but slightly more than 41 percent of the 4-year-ahead forecast error. Clearly, then, market shocks can explain part of the forecast errors, but certainly not all.
A more important issue for understanding forecasting errors in the gas market involves structural change. Rode and Fischbeck plot the history of natural-gas wellhead prices, various market shocks, and EIA forecasts. The EIA consistently predicted higher prices during the 1980s and early 1990s, was proved wrong and reversed course in the mid-1990s, only to be proved wrong again. The EIA and many others were predicting a “fly-up” in gas prices in 1985 due to deregulation. Instead, prices collapsed. The EIA partially reversed course and lowered its price forecasts but again failed to see the gas surplus of the late 1980s and early 1990s. As the market swung back into balance and excess capacity was eliminated after 1998, the EIA continued to forecast low prices even as the industry was clearly struggling to meet demand and prices were escalating.
As Rode and Fischbeck illustrate, there was a clear structural break in the gas market around 1998 to 1999, when the market went from a period of low and stable prices with excess capacity to a period of high and volatile prices at or near capacity limits. While forecasting errors and price volatility would be larger during these capacity-constrained periods, the errors should be random, resembling white noise; otherwise they are biased. The critical question is how the National Energy Modeling System (NEMS) performs during capacity-constrained periods. As it turns out, our selection of the 1998-2006 sample, originally dictated by the availability of data for the entire gas-market forecast, actually provides a good test of NEMS performance in such a world. Unfortunately, that performance is disappointing.
Rode and Fischbeck attempt to explain away these problems with a rational-expectations argument in which the EIA’s price forecasts are perceived to be highly credible and gas users act accordingly, building power plants that consume gas and bid up prices. Likewise, this argument presumes that gas producers would supply the gas, because supplies are abundant and inexpensive to deliver.
Rode and Fischbeck’s use of rational expectations to explain why the EIA is always one step behind the market is a demand-side argument that ignores the supply side. In a rational expectations model, consumers and producers knowledgeable of the laws of supply and demand use all available information to rationally anticipate market outcomes. There will always be a divergence between expectations and realizations, but these errors, if rational, will be zero on average. But, as Auffhammer [1] has already demonstrated, the EIA forecasts are not rational and display a significant asymmetrical bias.
By averaging percentage errors over long periods of time, Rode and Fischbeck make the EIA forecasts appear far better than they really are. Using proper measures of forecasting error, such as the root mean squared error, reveals that there is nothing subtle about the EIA’s errors in forecasting natural-gas markets. They have been large and biased toward optimism since 1998. The fact that the forecast errors reflected pessimism prior to 1998 is yet another illustration of bias. Indeed, both our paper and Rode and Fischbeck’s letter reveal that NEMS has not performed well either during capacity-constrained periods or during periods of surplus. Hence, rather than pointing to any flaws in our analysis, their letter actually supports our conclusion.
Timothy J. Considine, Professor of Natural Resource Economics, Penn State University; and Frank Clemente, Senior Professor of Social Science and Energy Policy; Penn State University.
1. Auffhammer, M. (2006) “The Rationality of EIA Forecasts under Symmetric and Asymmetric Loss,” Resource and Energy Economics, Volume 21, 102-121.
2. Considine, T.J. and F.A. Clemente. (2007) “Gas-Market Forecasts: Betting on Bad Numbers,” Public Utilities Fortnightly, July, p. 53-57.
3. Maddala, G.S. (1977) Econometrics (New York: McGraw Hill).
4. Theil, H. (1966) Applied Economic Forecasting (New York: Rand McNally & Co.).