The selection of a test period and an associated test year continues to generate controversy within the framework of rate-base regulation. For energy utilities in Utah, some of the controversy stems from the inherent uncertainty about the future and the need to rely on imperfect predictions for forecast test periods. Issues of accountability and process have arisen in part from problems with forecasts, specifically those related to updating them and to their accuracy.
A framework is required for selecting a test period based on the evidence that best reflects the conditions a public utility will encounter during the period when rates will be in effect.
The Utah Public Service Commission (PSC) has defined the test period as follows: A test period as used in traditional rate base, rate-of-return regulation is a 12-month period of utility operations used in setting rates that, when properly adjusted, will afford the utility a reasonable opportunity to earn its allowed rate of return.1
An additional useful explanation of the test period comes from Lowell Alt, former executive staff director of the PSC: “Since the revenue requirement is an annual figure, the data (e.g., costs, revenues, and usage) used in its determination is based on a 12-month period. This 12-month period is termed the test period for a rate case.”2
Once the test period has been selected, the test-year results are compiled by the utility. The test year is a measure of operations and investment from a specified historical 12-month period, which is then adjusted and projected to the forecast test period. The energy utility can propose a test period based on historical results with known and measurable adjustments, a fully forecasted test year, or a combination of the two approaches.
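The test-year mechanics described above follow the standard revenue-requirement arithmetic of rate-base regulation. The sketch below uses entirely hypothetical figures — the dollar amounts, the 3-percent inflation adjustment, and the 8-percent allowed return are illustrative assumptions, not values from any Utah case — to show how a historical test year might be adjusted to a forecast test period:

```python
# Illustrative sketch: annual revenue requirement computed from a 12-month
# test year, then adjusted to the forecast test period. All figures are
# hypothetical (millions of dollars).

def revenue_requirement(rate_base, allowed_return, operating_expenses,
                        depreciation, taxes):
    """Revenue requirement = return on rate base plus annual costs."""
    return rate_base * allowed_return + operating_expenses + depreciation + taxes

# Historical test-year data (hypothetical).
historical = {
    "rate_base": 1000.0,
    "operating_expenses": 250.0,
    "depreciation": 60.0,
    "taxes": 40.0,
}

# "Known and measurable" adjustments to reach the forecast period:
# escalate expenses by an assumed 3% inflation rate and add planned
# plant additions to rate base.
inflation = 0.03
plant_additions = 80.0

forecast_rr = revenue_requirement(
    rate_base=historical["rate_base"] + plant_additions,
    allowed_return=0.08,  # hypothetical allowed rate of return
    operating_expenses=historical["operating_expenses"] * (1 + inflation),
    depreciation=historical["depreciation"],
    taxes=historical["taxes"],
)
print(f"Forecast-period revenue requirement: ${forecast_rr:.1f} million")
```

A fully forecasted test year would replace each input with a projection; a mixed approach would forecast only some line items and carry the rest forward from the historical year.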
A proposed framework (see Figure 1) for test-period and test-year analysis includes the following important components: 1) principles; 2) criteria; and 3) factors and considerations.3
Principles are related to rules and standards and include: 1) comply with regulatory statutes and rules; 2) consider precedent; 3) maintain the utility’s financial health; 4) ensure rates and prices are just and reasonable; and 5) apply the matching principle concerning revenues and expenses. The test period should balance the utility’s investment, revenues, and expenses such that all aspects of the rate case match the same level of operations.
Criteria for judging a test period include: 1) accuracy and reliability of forecasted information; 2) variance reports showing actual data are reasonably close to forecasts; 3) consistency between the utility’s forecasts and independent forecasts; 4) energy demands and loads that are relatively close in variance reports; 5) valid and reliable forecasting assumptions by the utility; and 6) attention to used-and-useful considerations.
The PSC has identified several factors to consider in selecting a test period—that is, selecting among a test period based on historical data adjusted for known and measurable changes, a fully forecasted test period, or a mixed historical and forecast test period. These factors include the general level of inflation; changes in the utility’s investment, revenues, or expenses; changes in utility services; and the availability and accuracy of data to the parties. Additional factors include the ability to synchronize the utility’s investment, revenues, and expenses; consideration of whether the utility is in a cost-increasing or cost-decreasing status; incentives for efficient management and operations; and the length of time the new rates are expected to be in effect.
During the 21st century, controversial issues concerning the test period have appeared in rate proceedings before the PSC. Some of these issues include forecast accuracy, accountability and process problems. Other issues involve overlapping test periods and overlapping rate cases.
Forecast accuracy includes several sub-issues. The first relates to the precision of the forecasts the utility offers. Intervenors have suggested that forecast precision should be within as little as 1 percent of the utility’s actual results of operations. This might be posturing, but the Utah Division of Public Utilities (DPU) generally considers 3- to 5-percent accuracy to be sufficient for a year-ahead forecast, depending on the item being forecast. In addition to accuracy, the DPU expects the forecast to be unbiased; that is, over time forecasts should be wrong on the high side about as often as they are wrong on the low side. Also related to forecast accuracy is the issue of how far into the future the test period should go. Utah statute allows a forecast test period to end up to 20 months from the rate-case filing date.4 It’s generally assumed that forecast accuracy is reduced the further out the test period is placed, which has resulted in many parties arguing for a test year that concludes much sooner than allowed by statute. Companies initially wanted the maximum forecast period, but given the resistance to a full 20-month forecast, generally the test periods have been about six months shorter than the maximum. In order to track the accuracy of the utility’s forecasts, the PSC has ordered that the utility provide semi-annual variance reports tracking changes from the forecasts of the most recent completed rate case.5
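The accuracy and bias checks described above amount to simple percent-error arithmetic. The following sketch uses hypothetical forecast and actual figures — not data from any Utah docket — and applies the upper end of the DPU's rough 3- to 5-percent year-ahead tolerance as a single flag threshold:

```python
# Sketch of a forecast accuracy and bias check. The item names and dollar
# figures are hypothetical; the 5% threshold is the upper end of the
# 3-5 percent range the DPU generally considers sufficient.

forecasts = {  # item: (forecast, actual), in millions of dollars
    "retail revenue": (820.0, 801.0),
    "fuel expense": (310.0, 334.0),
    "O&M expense": (145.0, 149.0),
}

tolerance = 0.05

errors = {}
for item, (forecast, actual) in forecasts.items():
    pct_error = (forecast - actual) / actual  # positive = over-forecast
    errors[item] = pct_error
    flag = "outside tolerance" if abs(pct_error) > tolerance else "ok"
    print(f"{item}: {pct_error:+.1%} ({flag})")

# Crude bias check: over time, signed errors should average near zero,
# i.e., forecasts should miss high about as often as they miss low.
bias = sum(errors.values()) / len(errors)
print(f"mean signed error (bias): {bias:+.1%}")
```

A semi-annual variance report of the kind the PSC has ordered would, in effect, tabulate these percent errors item by item against the forecasts from the most recent completed rate case.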
The content and form of variance reports still are being refined. Because variance reports have been required for only a relatively short time, it remains to be seen how the PSC ultimately will use their results. Ideally, these reports would be one input into developing standards for forecasts of specific items. That is, different categories of expenses would have different forecasting tolerances. From these standards, penalties and perhaps benefits could be developed. The process of developing forecast standards likely will continue over several rate-case cycles covering several years before many of these issues become settled.
Accountability relates to what happens if the forecast in the previous rate case is materially different from what actually occurred. For example, suppose a regulated company predicted that it would have $100 million in capital expenditures, and this amount was placed into rate base in the forecast test year. What if it turned out that capital expenditures were only $80 million, but a year later the utility files a new rate case forecasting that it will need $150 million in capital expenditure recovery? Does the PSC discount the new request by 20 percent? What if the utility had forecast that in a given year it was going to issue $100 million in debt at 7-percent interest, but instead issues $50 million at 6-percent interest? In these two scenarios, should the utility automatically be required to file for a rate reduction? If so, how soon should it be required to file? What criteria will separate the need to file from situations that aren’t material enough to require filing?
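The two scenarios above can be put into numbers. The 20-percent discount below is only the mechanical reading of the question the text poses, not an adopted PSC policy, and all figures come straight from the hypothetical examples:

```python
# Scenario 1: capital expenditures. The utility forecast $100 million,
# spent $80 million, and now requests $150 million.
forecast_capex, actual_capex = 100.0, 80.0  # millions of dollars
miss = (forecast_capex - actual_capex) / forecast_capex  # 20% over-forecast
new_request = 150.0
discounted_request = new_request * (1 - miss)  # if the PSC discounted by the miss
print(f"capex over-forecast: {miss:.0%}; "
      f"discounted new request: ${discounted_request:.0f} million")

# Scenario 2: debt issuance. Annual interest expense embedded in rates
# versus what the utility actually incurs.
forecast_interest = 100.0 * 0.07  # $100M forecast at 7% interest
actual_interest = 50.0 * 0.06     # $50M actually issued at 6%
over_collection = forecast_interest - actual_interest
print(f"annual interest over-collection: ${over_collection:.1f} million")
```

The second scenario shows why materiality criteria matter: a $4 million annual gap between forecast and actual interest expense would persist in rates until the next case unless some filing obligation or adjustment mechanism intervened.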
What if the utility under-forecasts various items? That is, if as suggested above, the utility must file for rate reductions if it over-collects on significant items, should the utility be allowed to file for quick rate relief if its forecasts are significantly wrong in the other direction? Some intervenors would argue no, because the utility’s greater knowledge of its costs and markets already gives it an advantage over regulators and intervenors. This proposed asymmetry of treatment would require the utility to accept the risk of forecasting too low, but give up the benefit of forecasting too high.
Should updates be allowed during the rate-case proceeding, and if so, should a statutory time clock governing the length of a rate case restart?6 Should the PSC develop various adjustment mechanisms to account for missed forecasts, as often is done with fuel purchases? What would this do to company incentives to manage costs to be as low as prudently possible?
These are some of the questions that regulators, utilities and intervenors have grappled with. Some of these issues are a kind of mirror image of the regulatory lag issues raised with historical test periods. Instead of a lag, the regulators might accept forecasts of some, perhaps major, elements that are significantly too high. Alternatively, fearing that the forecasts are too high, regulators might order much lower-than-requested cost recovery that might leave the utility no better off, or perhaps worse off, under a forecast test period than it was with a historical test period.
Process problems typically are of three kinds. If there are delays in determining the forecast test period, then the utility and the intervenors may be disadvantaged by working from a test period different from the one finally decided. This can be a significant problem in Utah given that the PSC has 240 days to decide a rate case after it’s filed. If the first 60 or 90 days are spent determining the test period, then intervenors especially are at a disadvantage in trying to prepare their cases for the test period chosen.
The second process problem experienced in Utah that continues to cause consternation concerns forecast updates. This, of course, overlaps with forecast accuracy concerns as well. Generally, the PSC has allowed some revisions due to new information becoming available during the course of a rate case that affects the forecasts originally made by the utility. The utility would like to amend its forecast assumptions as late in the process as possible. Intervenors generally don’t want the utility updating forecasts, especially after the first round of testimony has been filed—usually arguing that the time left to evaluate the change is inadequate. (Of course, when intervenors want to include new information late in the process, they argue that the new information should be allowed. Their reasoning is that the utility chooses when to file and controls much of the information, resulting in an advantage, as mentioned above, that should be leveled by allowing late-filed intervenor information.) The PSC has dealt with these amended filings and forecasts on a case-by-case basis so far. However, without guiding rules in place, this continues to be a source of controversy in a forecasted rate case.7
An additional issue that is part process and part systemic problem with forecasts is that forecasts potentially lead to endless bickering about assumptions. One could argue that in Utah intervenors are taking advantage of the inherent ambiguity of the future by arguing that the utility has either under- or over-forecast a particular item, depending on which way suits their purpose. The utility has little defense against competing forecasts, except relying on the PSC to make a reasonable decision. The utility would like the issue to be simply one concerning the reasonableness of its forecasts, thereby shifting the burden to intervenors to show that the utility forecasts are unreasonable. This would differ from the current assumption that it is the utility’s burden to prove its case.
Recent trends indicate more frequent and overlapping rate cases in the future, due in part to increasing investments made by utilities and also as a means for regulated companies to temper regulatory lag. In Utah, one utility filed a new rate case prior to the decision in the previous general rate case—thus the overlapping rate-case problem. In addition, the latter case used a test period that overlapped by six months the test period selected in the prior rate case. Intervening parties claimed that this was merely a second attempt to recover costs, or a second bite at the apple. Others claimed the utility was selectively re-litigating certain issues, presenting legal problems and unnecessary burdens to re-audit the same test-period data. In an earlier time when pancaking rate cases had started to become a problem, the PSC expressed concern over the regulatory difficulties caused by overlapping test years and noted that it “will take steps to protect the regulatory process whenever overlapping test periods are proposed.”8
The PSC generally has attempted to decide issues on a case-by-case basis. This process is developing, perhaps slowly, a body of decisions that can provide some precedential value. For example, although the Utah statute allows a party to forecast out 20 months in a rate case, it currently appears that such a relatively lengthy forecast period is unlikely to be approved by the PSC. This has resulted in quickly arriving at a stipulation between the utility and intervenors regarding the test period for the most recent rate case.9 The other issues remain somewhat a work in progress, although one utility’s application to move to a comprehensive power cost adjustment mechanism, if implemented, might mitigate some of the forecasting issues as well as the frequency of rate cases and overlapping test-period problems.10
The regulatory treatment of the test period clearly involves risk sharing and risk shifting between the regulated energy utility and its ratepayers. If the Utah experience is any indicator, then settling these issues may involve a lengthy process of experimentation and give-and-take among utility companies, regulators and intervenors.
1. Order Approving Test Period Stipulation, Docket No. 04-035-042, Utah PSC, Oct. 20, 2004, p. 4.
2. Lowell E. Alt, Energy Utility Rate Setting, 2006, p. 25.
3. See Direct Testimony of Dr. Joni S. Zenger, Docket No. 07-035-93, Utah Division of Public Utilities, January 2008.
4. Utah Code Annotated 54-4-4(3).
5. Docket No. 07-035-93.
6. Utah Code Annotated 54-7-12(3) provides that the Utah Public Service Commission must issue an order on revenue requirement within 240 days following the date of an application for a rate change.
7. In a brief filed in a recent rate case, an intervenor stated the following: “As very eloquently explained [by the Division], updates have been and continue to be a problem. Updates have become such a problem as a result of the Utility’s insistence on filing forecasted test [periods] (which also aggravated the problems with the cost of service study). Updates will continue to be a serious point of contention in the future as long as forecasted test [periods] are filed unless a rulemaking is conducted to finalize their treatment in general and single-item rate cases.” Docket No. 09-035-23, brief filed by UIEC, Jan. 21, 2010.
8. Report and Order, Docket No. 84-035-01, Sept. 13, 1984, p. 6.
9. Report and Order on Test Period Stipulation, Docket No. 09-035-23, June 1, 2009.
10. Docket No. 09-035-15.