Ontario's Failed Experiment (Part 1)

Reliability declines after 10 years of incentive regulation.

Fortnightly Magazine - July 2009

…to monitor and report on all of the service quality indicators included in the plan. The performance of individual electricity distribution utilities will be made publicly available…

Initial standards were set as minimum required performance levels. For the LDCs that served the majority of customers and had historical data, the requirement was to keep their service reliability indices within the range of performance recorded over the prior three years. As soon as feasible, all LDCs would collect such data, and the board would investigate implementing more refined standards, along with financial penalties for failing to meet them.
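To make that standard concrete, the sketch below checks whether a current-year reliability index stays within the band set by a utility's own prior three years of performance. SAIDI is used only as an example index, and the values and comparison rule are illustrative assumptions, not figures or methods from the OEB plan.

# Hypothetical check of the 2000-style standard: keep a reliability index
# within the range of the utility's own performance over the prior three years.
# SAIDI (hours of interruption per customer per year) and all numbers below
# are illustrative only.

def within_prior_range(current: float, prior_three_years: list[float]) -> bool:
    """True if the current index is no worse than the worst of the prior
    three years (for indices like SAIDI and SAIFI, lower values are better)."""
    return current <= max(prior_three_years)

prior_saidi = [1.8, 2.1, 1.6]   # hypothetical prior three-year values
current_saidi = 2.4             # hypothetical current-year value

if not within_prior_range(current_saidi, prior_saidi):
    print("SAIDI exceeds the prior three-year range; standard not met.")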

Post-2000 Turmoil

The intentions laid out by the board in 2000 were not realized. The OEB did not broach the topic of LDC reliability performance again until 2003, and then only through a stakeholder process that produced a report on the regulatory principles underlying just and reasonable rates, not an empirical investigation to implement the 2000 decision.

However, in its pursuit of the holy grail of "operational efficiencies," the government continued its long crusade of forced and incented LDC M&As. It believed such unions among utilities would realize 20- to 30-percent operating efficiencies, but in subsequent research the authors found diseconomies from the mergers and increased scale.8 At the same time, the board subjected the LDCs to repeated changes in regulatory governance and rate setting, greatly heightening the utilities' operational and financial uncertainty. In particular, the OEB's shift from total-productivity and total-cost benchmarking in 1999 and 2000 to a narrow focus on benchmarking O&M expenditures, unadjusted for differing labor capitalization or reliability performance, greatly increased the possibility of unintended consequences.
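To see why O&M benchmarking unadjusted for labor capitalization can mislead, consider the hypothetical arithmetic below: two utilities doing identical work with identical labor costs report very different O&M solely because they capitalize different shares of labor. The dollar figures and capitalization shares are assumptions for illustration, not data from the article.

# Illustrative arithmetic (not from the article): identical utilities can report
# different O&M because of capitalization policy alone, so an O&M-only benchmark
# ranks them differently even though nothing real differs.

def reported_om(total_labor: float, other_om: float, capitalized_share: float) -> float:
    """O&M as reported: non-labor O&M plus the share of labor that is
    expensed rather than capitalized."""
    return other_om + total_labor * (1.0 - capitalized_share)

total_labor = 10.0e6   # $10M of labor, same for both utilities (hypothetical)
other_om = 5.0e6       # $5M of non-labor O&M, same for both (hypothetical)

utility_a = reported_om(total_labor, other_om, capitalized_share=0.20)  # expenses 80% of labor
utility_b = reported_om(total_labor, other_om, capitalized_share=0.50)  # expenses 50% of labor

print(f"Utility A reported O&M: ${utility_a/1e6:.1f}M")  # 13.0M
print(f"Utility B reported O&M: ${utility_b/1e6:.1f}M")  # 10.0M
# B looks roughly 23 percent "more efficient" on an O&M-only benchmark even
# though the underlying costs and work are identical.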

Recent research in the UK and Poland found that allocative inefficiency increased under IR, especially when LDCs faced more comprehensive controls, including controls on reliability and line losses. Utilities simply weren't reacting to the correct price signals; for example, they were under-valuing the loss of load to customers.9 Given that the government and OEB seemingly were unconcerned about, or unaware of, the extent of allocative inefficiency among some utilities and took no measures to reduce it, it wouldn't be surprising if the past decade's neglect has worsened the gold plating. Worse, the fixation on O&M costs encourages further perverse behavior by LDCs. First, O&M benchmarking leads to greater allocative inefficiency. Second, since lower O&M costs would raise benchmarking scores and revenues (even if the lower O&M costs are a figment of accounting differences), LDCs would be incented by IR to cut O&M. Third, absent SQR, LDCs could cut O&M enough to degrade reliability, even beyond the socially optimal level.
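The "socially optimal level" referenced here can be thought of as the point at which the O&M dollars saved by a cut no longer exceed the value customers place on the resulting outages, commonly expressed as the value of lost load (VOLL). The sketch below is a minimal illustration under assumed numbers; the VOLL figure and outage impact are hypothetical, not estimates from the cited research.

# Minimal sketch: an O&M cut is only socially efficient if the dollars saved
# exceed the customer cost of the extra outages it causes. All numbers are
# hypothetical.

VOLL_PER_KWH = 15.0   # assumed customer value of lost load, $/kWh (illustrative)

def socially_efficient_cut(om_savings: float, added_unserved_kwh: float,
                           voll: float = VOLL_PER_KWH) -> bool:
    """True if the O&M savings exceed the customer cost of the added outages."""
    return om_savings > added_unserved_kwh * voll

# Cutting $200k of vegetation-management O&M that causes ~50 MWh more unserved energy:
print(socially_efficient_cut(om_savings=200_000, added_unserved_kwh=50_000))
# -> False: customers lose $750k of value to save $200k, yet an O&M-only
#    benchmark rewards the cut.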

So, pre-PBR, utilities generally overcapitalized their networks but provided a very high level of reliability. Part 2 of this article, to appear in the August 2009 issue of Public Utilities Fortnightly, examines the nearly decade-long experience since the passage of the 1998 Act and the resulting restructuring of the MEUs. How have LDCs and customers fared in this altered regulatory environment? Answering that question requires examining LDC reliability performance and compliance with their 2000 standards.

 

ENDNOTES: