Ontario's Failed Experiment (Part 2)
Service quality suffers under PBR framework.
Benchmarking Report on the Quality of Electricity Supply (2005):
Price-cap regulation without any quality standards or incentive/penalty regimes for quality may provide unintended and misleading incentives to reduce quality levels. Incentive regulation for quality can ensure that cost cuts required by price-cap regimes are not achieved at the expense of quality….The increased attention to quality incentive regulation is rooted not only in the risk of deteriorating quality deriving from the pressure to reduce costs under price-cap, but also in the increasing demand for higher quality services on the part of consumers…. a growing number of European regulators have adopted some form of quality incentive regulation over the last few years. 6
The January 2008 letter from the OEB also states, “Until ... the sector gains experience with any new or modified service quality indicators or requirements, it is in the Board’s view premature to move to an incentive approach.”
But the OEB is now in its 10th year of collecting reliability data, which is more than sufficient time to gain experience. Indicators such as SAIDI (System Average Interruption Duration Index) and SAIFI (System Average Interruption Frequency Index) are standards used for monitoring and regulating service quality around the world. These indicators have been used by the Ontario distributors’ association for at least 15 years, and by individual LDCs for much longer.
The staff discussion paper offers a cursory analysis on reliability for 2004 through 2006. This analysis calculates sector, rural, and urban averages, as well as OEB peer-groups’ averages. It’s unclear whether these averages are simple arithmetic averages across reporting companies, or a weighted average calculated from actual customer-hours of interruption and total number of customer interruptions divided by number of customers served. 7
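The distinction matters because the two methods can diverge substantially: a simple average gives a tiny LDC the same weight as one serving a million customers. A minimal sketch of both calculations, using hypothetical figures rather than actual OEB filings:

```python
# Hypothetical reliability data for three LDCs (illustrative only).
# Tuples: (name, customers served, customer-hours of interruption,
#          number of customer interruptions)
ldcs = [
    ("Small LDC", 10_000, 40_000, 15_000),
    ("Mid LDC", 100_000, 250_000, 120_000),
    ("Large LDC", 1_000_000, 1_500_000, 900_000),
]

# Per-LDC SAIDI: customer-hours of interruption / customers served.
saidi_per_ldc = [hours / customers for _, customers, hours, _ in ldcs]

# Method 1: simple arithmetic average across reporting companies.
simple_saidi = sum(saidi_per_ldc) / len(saidi_per_ldc)

# Method 2: customer-weighted average -- total customer-hours of
# interruption divided by total customers served across the sector.
total_customers = sum(customers for _, customers, _, _ in ldcs)
total_hours = sum(hours for _, _, hours, _ in ldcs)
weighted_saidi = total_hours / total_customers

print(f"Simple average SAIDI:   {simple_saidi:.2f} hours")
print(f"Weighted average SAIDI: {weighted_saidi:.2f} hours")
```

With these illustrative numbers the simple average is pulled upward by the small, poorly performing distributor, while the customer-weighted figure reflects what the typical customer actually experiences. Which method the staff paper used determines how its "sector average" should be interpreted.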
The discussion paper does examine the reliability performance of LDCs relative to various proposed benchmarks, such as sector-average or peer-group-average performance over the last three years. It finds that anywhere from 25 to 50 percent of Ontario distributors fail these benchmarks; furthermore, LDCs that fail typically have a reliability performance that is 50 to 100 percent worse than the selected average. What is clear from the data is that a very wide variation in reliability performance exists among LDCs, even within the OEB’s peer groups. Yet this finding fails to elicit any apparent concern on the OEB’s part for the customers experiencing such degraded reliability. No explanation is offered for the fact that many customers of many LDCs are experiencing significantly lower reliability than customers of similar LDCs. What about performance over the whole period since the inception of incentive regulation (IR)?
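The benchmark test described above can be sketched in a few lines. The SAIDI figures below are illustrative assumptions, not OEB data; the comparison logic (flag any LDC whose three-year average is worse than the sector average, and measure how much worse) mirrors the staff paper's approach:

```python
# Hypothetical three-year-average SAIDI by LDC, in hours per customer
# per year (higher is worse). Figures are invented for illustration.
ldc_saidi = {"LDC A": 1.2, "LDC B": 2.0, "LDC C": 4.5, "LDC D": 1.0}

# Benchmark: the simple sector average across the four LDCs.
sector_avg = sum(ldc_saidi.values()) / len(ldc_saidi)

# Flag each LDC that fails the benchmark, recording how many percent
# worse than the sector average its SAIDI is.
failing = {
    name: (value / sector_avg - 1) * 100
    for name, value in ldc_saidi.items()
    if value > sector_avg
}

for name, pct_worse in failing.items():
    print(f"{name} fails: {pct_worse:.0f}% worse than sector average")
```

Even in this toy data set, one distributor's customers endure reliability roughly twice as bad as the sector average, which is exactly the kind of dispersion the discussion paper reports but does not investigate.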
The discussion paper sheds no light on whether LDCs are in compliance with the reliability guidelines established by the OEB in 2000. In fact, since introducing IR in 2000, the OEB has never confirmed that LDCs operating under this regime comply with the mandated service-quality standards, despite the fact that the OEB has repeatedly stated that a reliable supply of power is necessary for just and reasonable rates. Indeed, the cursory analysis reported by staff would be unable to address current or past compliance.
The staff analysis is based on reliability data