Building upon last month’s installment, this article reveals more about how, after 10 years of incentive regulation, reliability has declined in Ontario.
The Reliability Spending Conundrum
environment, rates are capped or expected to remain stable. In such circumstances, a company with low service-related spending and poor service may be pressured to explore ways to improve service without a rate increase. Part of any prudency review is determining whether the spending was efficient and effective in accomplishing what the customer wanted.
As with any benchmarking, arguments may be made that the peers are not truly comparable. In electric generation, however, where benchmarking has been used extensively, a given type of plant should be able to achieve a given efficiency no matter where it is located, after adjusting for how it is dispatched. For transmission and distribution, differences in geography, climate, and even customer preferences across service territories can be used to argue that cost and service indicators are not comparable, so peers often are chosen from similar territories.
Benchmarking can be particularly compelling when it is linked to a best practice. For example, if an electric company's peers are trimming trees on a 4-year cycle, trimming less than 25 percent of its miles per year would raise concerns, particularly if the company's tree-related customer interruptions are higher than its peers'.
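The cycle-length arithmetic behind that benchmark can be sketched in a few lines. This is a purely illustrative calculation (the function name and the 15 percent comparison figure are assumptions, not data from any utility):

```python
def implied_cycle_years(annual_trim_fraction: float) -> float:
    """Years needed to cover the whole system at a given annual trim rate.

    A trim cycle of N years implies trimming 1/N of line miles each year,
    so the implied cycle is simply the reciprocal of the annual fraction.
    """
    return 1.0 / annual_trim_fraction

# A 4-year cycle corresponds to trimming 25 percent of miles per year.
print(implied_cycle_years(0.25))            # 4.0

# A hypothetical company trimming only 15 percent of its miles per year
# is on an effective cycle of roughly 6.7 years.
print(round(implied_cycle_years(0.15), 1))  # 6.7
```

The same reciprocal relationship lets a reviewer translate any reported trim percentage into an effective cycle length for comparison against peers.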
For gas companies, the annual replacement rate for leak-prone cast iron and bare steel tends to be 1 to 2 percent of a company's inventory when that inventory exceeds 500 miles. A company replacing only half of 1 percent would raise regulatory concern about long-term system integrity. And as companies with smaller inventories move toward more rapid replacement (some even adopting 10-year replacement goals), pressure grows on others to consider accelerating their programs as well, even though acceleration drives up costs.
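The replacement rates above translate directly into a retirement horizon for the leak-prone inventory. A minimal sketch of that arithmetic (the function name is a hypothetical choice for illustration):

```python
def replacement_horizon_years(annual_replacement_rate: float) -> float:
    """Years to retire the entire leak-prone inventory at a constant pace.

    At a fixed fraction replaced per year, the horizon is the reciprocal
    of that fraction: 1 percent per year implies a 100-year program.
    """
    return 1.0 / annual_replacement_rate

print(replacement_horizon_years(0.01))   # 100.0 years at 1 percent per year
print(replacement_horizon_years(0.005))  # 200.0 years at half of 1 percent
print(replacement_horizon_years(0.10))   # 10.0 years, the aggressive goal
```

Framed this way, the regulatory concern is concrete: half of 1 percent per year leaves leak-prone pipe in the ground for two centuries, while a 10-year goal requires a tenfold faster pace.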
An analysis of prudency is incomplete without an examination of benchmark results. Even though the results may seem inconclusive or can be explained by differences in territory, the question has to be asked, "How does this compare with others?"
A good modeling approach that relates the spending level to the service level is probably the best test of prudency. The model should not replace the trending and benchmarking tests, but it should be consistent with the story told by those two tests.
A model allows the decision makers to ask "what if" questions, and it helps them see what can be done to fix a problem. Not only can a model raise an alarm that costs are decreasing and service problems are rising, but a good model can tell you what spending level is required to fix the trend and achieve the desired level of service.
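A toy "what if" model of this sort can be sketched as a simple difference equation that steps a service index forward year by year. Every parameter and functional form below is an illustrative assumption for exposition, not a calibrated utility model:

```python
def simulate(spending_per_year, s0=100.0, decay=0.04, effectiveness=0.08):
    """Project a service index forward, one step per year of spending.

    Difference equation (all values illustrative):
        s[t+1] = s[t] - decay * s[t] + effectiveness * spending[t]

    Higher index means better service. 'decay' stands in for asset
    deterioration; 'effectiveness' converts spending into improvement.
    Returns the full path, starting from the initial index s0.
    """
    s = s0
    path = [s]
    for x in spending_per_year:
        s = s - decay * s + effectiveness * x
        path.append(s)
    return path

# Flat spending of 50 holds the index steady at 100 in this toy
# parameterization: the decay of 4 points is offset by 0.08 * 50 = 4.
steady = simulate([50.0] * 5)

# Cutting spending to 30 lets the index drift downward year over year,
# which is exactly the alarm a dynamic model can raise.
declining = simulate([30.0] * 5)
```

Running "what if" scenarios is then just a matter of varying the spending path; a decision maker can search for the spending level that arrests a declining trend and restores the target index.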
An effective model requires an appropriate degree of complexity. For starters, it needs to be a dynamic model that can predict how spending today and tomorrow will affect the level and the trend of service in the future. So it will probably have at its heart a set of difference equations (the discrete equivalent of the differential equations some of us dealt with in calculus) that can exhibit dynamic behavior. In addition, it should have some details about which programs address which indicators. For