Regulatory Reform in Ontario

Successes, shortcomings and unfinished business.

Fortnightly Magazine - November 2009

effects of accommodating diversity and allowing firms to reveal their productivity potential as the menu approach. This option raises implementation issues at least as complex as those associated with a menu approach. For example, yardstick mechanisms often rely directly on frontier cost-level benchmarking exercises and attempt to move regulated firms toward the cost frontier by a specified date. These frontier benchmarking studies often are extremely controversial and can lead to very high, and unrealistic, X factors in practice. A salient example is the first application of frontier benchmarking to distributors, in the Netherlands, which led to proposed X factors as high as 8 percent. These studies also differ greatly from the benchmarking models that PEG developed and that were integrated into 3rdGenIRM; PEG’s models benchmarked distributors relative to mean performance in the industry, rather than relative to an estimated frontier.
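A stylized example shows how frontier targeting can generate such large X factors (the numbers here are hypothetical, not taken from the Dutch proceeding): if a benchmarking study estimates that a distributor’s costs sit 30 percent above the efficient frontier, and the regulator requires that gap to be closed over four years, the implied catch-up component of the X factor is roughly 30 ÷ 4 ≈ 7.5 percent per year, before any industry-wide productivity offset is added. Mean-based benchmarking of the kind PEG employed generates much smaller stretch factors, since distributors performing at or better than the industry mean face no catch-up requirement at all.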

In sum, the authors exaggerate both the problems of the PBR approach approved in Ontario and the practicality of their proposed alternatives. Menu approaches now have been proposed and rejected at least twice in Ontario, largely because their proponents didn’t explain how the menu was designed or how these mechanisms would benefit customers as well as companies. These issues are complex but need to be confronted, or the next menu proposal in Ontario will almost certainly meet the same fate.

PBR and Service Quality

In their two-part “Ontario’s Failed Experiment,” Cronin and Motluck concluded that PBR in Ontario is a “failure” because of reductions in service quality. This conclusion is based on a comparison of pre-PBR service-reliability levels for the 1991 through ’99 period with post-PBR service reliability in 2000 through ’07. In addition to indicting PBR generally, the authors blame the way in which PBR was implemented in Ontario after 2000. They write that “the OEB’s shift from total-productivity and total-cost benchmarking in the 1999 through 2000 period to a narrow focus on benchmarking O&M expenditures, unadjusted for differing labor capitalization or reliability performance, greatly increased the possibility of unintended (reliability) consequences.”13

Regarding Ontario’s reliability experience, comparing 1991 through ’99 figures with 2000 through ’07 figures is far less straightforward than the authors suggest. One factor complicating historical comparisons is that measured system average interruption frequency index (SAIFI) and system average interruption duration index (SAIDI) values are affected by a number of business conditions in a distributor’s service territory that are beyond managerial control. These variables include such weather conditions as strong winds, storms, lightning, and extreme heat and cold. Not only do these weather conditions have a substantial impact on measured reliability, but they can fluctuate wildly from year to year. Since measured reliability often is driven by volatile and unpredictable weather variables, caution always must be exercised when making simple reliability comparisons across two points in time.
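For readers unfamiliar with these indices, the standard IEEE 1366 definitions are, in essence:

SAIFI = (total number of customer interruptions during the period) ÷ (total number of customers served)

SAIDI = (total customer-minutes of interruption during the period) ÷ (total number of customers served)

Because the numerators count every weather-driven outage, a single severe storm season can move both indices substantially without any change in a distributor’s underlying performance.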

Even more important, there have been significant changes in the technology used to measure and collect reliability information. In 1991, few if any utilities used automated measurement systems to record reliability data. Now, automated systems are more widespread and becoming more
