The Utah Test
Defining a test period to overcome controversies and inaccuracies.
3) matching between the utility’s forecasts and independent forecasts; 4) energy demands and loads that are relatively close in variance reports; 5) forecasting assumptions that are valid and reliable; and 6) attention to used-and-useful considerations.
The PSC has identified several factors to consider in selecting a test period—that is, in choosing among a test period based on historical data adjusted for known and measurable changes, a fully forecasted test period, or a mixed historical-and-forecast test period. These factors include the general level of inflation; changes in the utility’s investment, revenues, or expenses; changes in utility services; and the availability and accuracy of data to the parties. Additional factors include the ability to synchronize the utility’s investment, revenues, and expenses; whether the utility is in a cost-increasing or cost-decreasing position; incentives for efficient management and operations; and the length of time the new rates are expected to be in effect.
Test Period Controversies
During the 21st century, controversial issues concerning the test period have appeared in rate proceedings before the PSC. These include forecast accuracy, accountability, and process problems, as well as overlapping test periods and overlapping rate cases.
Forecast accuracy encompasses several sub-issues. The first is the precision of the forecasts the utility offers. Intervenors have suggested that forecasts should come within as little as 1 percent of the utility’s actual results of operations. This may be posturing, but the Utah Division of Public Utilities (DPU) generally considers 3- to 5-percent accuracy sufficient for a year-ahead forecast, depending on the item being forecast. Beyond accuracy, the DPU expects forecasts to be unbiased; that is, over time, forecasts should miss on the high side about as often as they miss on the low side.

Also related to forecast accuracy is how far into the future the test period should extend. Utah statute allows a forecast test period to end up to 20 months from the rate-case filing date.4 Because forecast accuracy is generally assumed to decline the further out the test period is placed, many parties have argued for a test year that concludes much sooner than the statute allows. Companies initially sought the maximum forecast period, but given the resistance to a full 20-month forecast, test periods generally have run about six months shorter than the maximum. To track the accuracy of the utility’s forecasts, the PSC has ordered the utility to provide semi-annual variance reports tracking changes from the forecasts of the most recent completed rate case.5
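The two tests described above—a percent-accuracy tolerance and a no-bias expectation—can be sketched in a few lines. The forecast and actual figures, the 5-percent band, and the bias threshold below are all illustrative assumptions, not Utah data or DPU methodology.

```python
# Sketch of the accuracy and bias checks described in the text.
# All numbers here are hypothetical, for illustration only.

def percent_error(forecast, actual):
    """Signed percent error: positive means the forecast was high."""
    return 100.0 * (forecast - actual) / actual

# Hypothetical year-ahead forecasts vs. actual results for one item.
forecasts = [102.0, 98.5, 105.0, 99.0, 101.5]
actuals   = [100.0, 100.0, 100.0, 100.0, 100.0]

errors = [percent_error(f, a) for f, a in zip(forecasts, actuals)]

# Accuracy test: each forecast falls within a tolerance band
# (the DPU's 3-5 percent range; 5 percent used here).
tolerance = 5.0
within_tolerance = all(abs(e) <= tolerance for e in errors)

# Bias test: over time, high and low misses should roughly balance,
# so the mean signed error should be near zero. The 1-percent
# threshold is an assumption for the example.
mean_error = sum(errors) / len(errors)
unbiased = abs(mean_error) <= 1.0

print(f"errors (%): {[round(e, 2) for e in errors]}")
print(f"within {tolerance}% tolerance: {within_tolerance}")
print(f"mean signed error: {mean_error:.2f}% -> unbiased: {unbiased}")
```

In this made-up series every forecast passes the tolerance test, yet the forecasts skew high on average, so the bias test fails—illustrating why the DPU checks both properties rather than accuracy alone.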
The content and form of the variance reports are still being refined, and because the reports have been required for only a short time, it remains to be seen how the PSC ultimately will use their results. Ideally, these reports would be one input into developing forecasting standards for specific items; that is, different categories of expenses would have different forecasting tolerances. From these standards, penalties and perhaps benefits could be