Letters to the Editor
percent. To us, this hardly seems like persuasive evidence of material systematic bias. If one’s conclusions are so sensitive to the sample period selected, the only unequivocal result is that extreme caution must be employed and further analysis performed.
In place of a “systematic bias” hypothesis, we would assert that long-term forecasting is a thankless endeavor precisely because good forecasters are reluctant to make predictions about “unpredictable” events. As Figure 1 illustrates, the more than two-decade history of the AEO forecasts covers a period during which major, market-changing events occurred that no forecaster would attempt to capture.
These events, which are only ever known retrospectively, are precisely the sort of shocks that would tend to cause prices to diverge from a previously forecast path. No forecaster, for example, would include an expectation that a Katrina-like event would occur in a particular year. Nevertheless, a forecast for low prices, interrupted by Katrina, could easily be set off course without anyone asserting that “bias” was at work.
If we instead examine the course of forecast errors across time, we see in Figure 2 that there is a clear cyclical pattern at work. A several-year period of overestimation was followed by a several-year period of underestimation. More importantly, these cycles are not abrupt, but rather slow transitions between different forecasting states. This analysis brings us to our second assertion, which concerns a “rational expectations” effect likely common in many long-term forecasts. One of the primary reasons why long-term forecasting is so difficult is that markets have time to react to the substance of the forecast.
We begin with a simplistic example. Suppose the prevailing belief in the market was that the EIA’s forecast was highly credible (note that we say “credible,” and not necessarily “accurate,” here). Suppose that the forecast in question stated that natural-gas prices in five years would be very low because of excess supply. Market participants, believing in the credibility of such a forecast, would tend to act on it. This action would typically take the form of increasing future consumption of the gas they believed to be inexpensive. In other words: build a lot of gas-fired power plants.
In the face of this behavior, however, market participants would find themselves five years later facing exactly the opposite of the world that the EIA predicted: gas in high demand and, therefore, prices escalating rapidly. Here, the “accuracy” (or credibility or legitimacy) of the forecast led directly to its error. Because markets respond dynamically to forecasts produced by a static model (such as NEMS), such forecasts have the potential to always be a “step behind” the market. This result is not necessarily because the forecast is bad; indeed, this error is most prevalent when the forecast is good (or believed to be good)!
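The feedback loop described above can be sketched numerically. The following toy simulation is our illustration only, not the authors’ model or NEMS: all function names, parameters, and figures are hypothetical, chosen solely to show how a credible low-price forecast can induce demand growth that reverses the forecast outcome.

```python
# Toy sketch (hypothetical numbers): a forecast-feedback loop in which the
# market's response to a credible low-price forecast pushes the realized
# price above the forecast.

def market_outcome(forecast_price, base_demand=100.0, supply=120.0,
                   responsiveness=8.0):
    """Stylized realized price after participants act on a credible forecast.

    A forecast below a $10 reference level encourages building gas-fired
    capacity, raising future demand; the realized price then rises with
    demand pressure on a fixed supply.
    """
    # Lower forecast price -> more capacity built -> higher future demand.
    added_demand = responsiveness * max(0.0, 10.0 - forecast_price)
    demand = base_demand + added_demand
    # Stylized pricing rule: price scales with the squared demand/supply ratio.
    return forecast_price * (demand / supply) ** 2

low_forecast = 3.0  # credible "excess supply" forecast, $/MMBtu
realized = market_outcome(low_forecast)
print(f"forecast: ${low_forecast:.2f}, realized: ${realized:.2f}")
# The low forecast triggers a demand build-out, so the realized price
# exceeds the forecast: the forecast's credibility produced its own error.
```

Under these stylized assumptions, a $3 forecast yields a realized price above $3, while a high forecast (which triggers no build-out) yields a realized price below itself: the error runs in the opposite direction of the forecast, which is the cyclical over/underestimation pattern noted above.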
What, then, are we to take from these observations? First, that we believe the issue of bias is far from clear. We do not believe that there is a “malicious” bias at work. A bias in terms of a poorly specified model may be present,