[Editor’s note: Fortnightly published Part I of this article in the July 2009 issue. In that installment, the authors described how the Energy Competition Act of 1998 restructured Ontario’s utilities and charged the Ontario Energy Board with implementing performance-based ratemaking (PBR) to maintain service quality. The authors assert that despite the board’s intentions, however, service quality has declined in the province.]
The Ontario Energy Board’s (OEB) experience with service quality regulation (SQR) of electric distributors has its origins in the OEB’s 2000 Electricity Distribution Rate Handbook. In terms of SQR, this document was based largely on the recommendations of the Implementation Task Force Report.1
Survey work by the task force found that more than 60 large and medium-sized utilities, covering over 80 percent of distribution customers, had been collecting historical reliability data. However, a number of smaller utilities, some with only hundreds of customers, had no historical data. In the end, for a variety of reasons, the task force recommended that only minimum customer-service standards be applied to local distribution companies (LDCs) during the first generation, with the levels of those minimum standards determined through a survey of the LDCs. For reliability, the “standards” actually were weaker: each LDC with historical data was to keep its performance within the range of whatever it had been during the preceding three years. The task force noted: “The OEB will review the PBR submissions to ensure compliance with the established benchmarks.” LDCs without reliability data were to begin collecting it, with their benchmarks set using peer-group averages.
However, despite the implementation task force’s reluctant acceptance of this lowest common denominator for SQR, the general expectation was that the OEB would move quickly to set reliability-performance targets on a more reasoned and judicious rationale than “just keep doing whatever you were doing”: possibly early in the first generation, and no later than the beginning of the second generation following the initial three-year PBR term.
Indeed, the principles of just and reasonable rates would require that service-quality and reliability standards be explicitly formulated as part of the sale of access by distributors to customers. The OEB itself stated its intent to move expeditiously: “Upon review of the first year's results, the OEB will determine whether there is sufficient data to set thresholds to determine service degradation for years 2 and 3.”2 Unfortunately, it’s now 2009 and the same nominal standards that applied in 2000 still apply today; as interpreted by some LDCs, the standards actually are lower now than they were then.
In its initial PBR rate setting guidelines, the OEB spelled out the reasoning behind the standards:
…the Board’s approach to encourage the maintenance of service quality during the first generation PBR plan is to apply minimum standard guidelines for customer service indicators, and to apply a utility’s historic performance as its specific service reliability standards. Where a utility has not monitored service reliability in the past, it is required to initiate monitoring and reporting of the indices. (7-2)
Thus for the system average interruption duration index (SAIDI) and system average interruption frequency index (SAIFI), “All planned and unplanned interruptions of one minute or more should be used to calculate this index. Utilities that have at least 3 years of data on this index should, at minimum, remain within the range of their historic performance.” (7-6, 7-7)
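The arithmetic behind these indices, and behind the first-generation “range” standard, is simple enough to sketch. The Python fragment below is purely illustrative, with a hypothetical record layout (actual Ontario filings differ in detail) and SAIDI expressed in hours, the unit the article’s figures appear to use:

```python
from dataclasses import dataclass

@dataclass
class Interruption:
    """One planned or unplanned interruption of one minute or more."""
    customers_interrupted: int   # customers affected by the event
    duration_minutes: float      # time to restore those customers

def saidi(events: list[Interruption], customers_served: int) -> float:
    """SAIDI: customer-minutes of interruption per customer served, in hours."""
    customer_minutes = sum(e.customers_interrupted * e.duration_minutes
                           for e in events)
    return customer_minutes / customers_served / 60.0

def saifi(events: list[Interruption], customers_served: int) -> float:
    """SAIFI: customer interruptions per customer served."""
    return sum(e.customers_interrupted for e in events) / customers_served

def first_generation_standard(prior_three_years: list[float]) -> tuple[float, float]:
    """The 2000 standard: stay within the (min, max) range of the
    utility's own performance over the preceding three years."""
    return min(prior_three_years), max(prior_three_years)

# Example: an LDC serving 20,000 customers with two outages in a year.
events = [Interruption(5_000, 60), Interruption(2_000, 240)]
print(saidi(events, 20_000))  # 0.65 hours per customer
print(saifi(events, 20_000))  # 0.35 interruptions per customer
```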
With respect to service degradation and remedial action, the OEB noted:
In the absence of historical service quality data, it is not possible to identify service degradation during the first year of the PBR plan. However, upon review of the first year's results, the Board will determine whether there is sufficient data to set thresholds to determine service degradation for years 2 and 3. When established, the Board will issue these thresholds and any utility whose performance falls below these thresholds will be required to file a remedial action plan. (7-10)
It is anticipated that by the second generation PBR plan, there will be sufficient data collected to set industry service-quality performance standards. Once these standards have been established, PBR incentive mechanisms with economic consequences will be introduced around the service quality indicators. (7-10) However, it appears this work hasn’t been completed.
The OEB noted its responsibility with respect to service and reliability, as well as the necessity of evaluating prices hand-in-hand with the actual service and reliability delivered to customers. In August 2003, the OEB began a review of service-quality regulation. The OEB acknowledged: “Section 1 of the Ontario Energy Board Act, 1998 states … The Board, in carrying out its responsibilities under this or any other Act in relation to electricity, shall be guided by the following objectives: ... 3. To protect the interests of consumers with respect to prices and the reliability and quality of electricity service.”
Furthermore, the OEB noted that the issues of distribution prices and service quality are integrally linked: “… [A] determination of just and reasonable rates must take into account the adequacy and level of service quality …”
The August 2003 notice reviewed the OEB’s initial PBR decision and its specification of service-quality indicators (SQIs). Speaking of the standards in the 2000 Handbook, the notice said, “For most SQIs, the Board approved initial minimum standards. The Board determined that other aspects of service quality regulation, including remedial action and/or financial consequences of service degradation, should be considered, but that a proper assessment… required experience with the measurement and reporting of the SQIs.”
The notice discussed recent developments regarding second generation PBR:
…the Board advised stakeholders of the planned phased development of a second-generation PBR (“PBR II”) plan. A review of currently reported service quality indicators and associated standards, as well as consideration of other indicators and elements of service quality regulation, were identified as one of the components of PBR II plan development. … As electricity distributors have been reporting their service performance for three years now, the Board considered it timely to review the SQIs and to further develop service quality regulation applicable to electricity distributors…
The notice listed the issues for review: review of the existing service quality indicators; consideration of additional or replacement indicators; the frequency and periodicity of reported performance; defining degraded service and regulatory responses to service degradation (remedial action reports, possible financial consequences); urban and rural, large and small, and other distinctions in reporting or standards; and the form and purpose of service quality audits in a comprehensive SQR plan (remedial plans and financial rewards or penalties).
Subsequently, Ontario Energy Board staff released a paper, “Service Quality Regulation for Ontario Electricity Distribution Companies” (the 2003 staff report). Importantly, this discussion paper reaffirmed the link between quality and rates, i.e., that just and reasonable rates must consider the quality of the service provided: “Service quality regulation is integral to economic rate regulation, to setting ‘just and reasonable’ rates. From the perspective of the users or customers of the service, there must be a consideration of the ‘value’ of the product or service, where value is defined as the product or service meeting or exceeding the needs and expectations of customers relative to the price charged.”3
The 2003 staff report noted that under cost-of-service (CoS) regulation, firms’ incentives weren’t at odds with service quality: firms earned a return on investments, and prudent and necessary costs were passed along to customers. The report also noted that under CoS the review process, usually annual, embedded a review of service quality and reliability:
“…Such reviews occurred periodically—often annually. Service quality could be reviewed as part of the revenue requirement and rate application, with consideration of how existing operational expenses and planned capital investments would contribute to the maintenance or improvement of service quality. Poor service quality could also be a factor considered by the regulator in reducing the allowed revenue requirement (without exacerbating the situation by the utility cutting costs and services in response to reduced revenues)… Also, the ‘rate base’ concept of CoS regulation, some argue, provides an incentive for the firm to overinvest and provide ‘gold-plated’ service, and so service degradation is thus seen as less of a risk under CoS regulation.”
Commenting on PBR, the 2003 staff report observed that its different incentives might lead cost containment to degrade service, creating a greater need for ongoing monitoring of service performance:
“… PBR differs from CoS in that it provides incentives for a firm to improve its productivity … Another advantage to PBR is … less frequent detailed reviews … With less frequent detailed reviews, there is an increased need for ongoing monitoring of service performance, to ensure that any problems that do occur are addressed … Also, the incentives inherent in PBR … could result in … degraded service. Service quality monitoring serves as a counterbalance to ensure that adequate service is maintained … In some PBR plans … the service performance of the firm may be a parameter affecting rates … In other plans, aggregate penalties, or the existence of service guarantees and rebates, link the firm's financial performance to its service performance…”
However, after issuing the report, the OEB took no further action on service quality until 2008.
The OEB described the January 2008 staff paper4 as an initial step in a consultation process designed to assist the OEB in determining an appropriate set of electricity distributor service quality requirements (ESQR). However, prior to any consultation or regulatory process in this proceeding, the staff discussion paper stated on page 3: “The Board has concluded that it will implement a ‘standards approach’ to service quality regulation. Under the ‘standards approach,’ compliance with the performance standard is mandatory and can be enforced through the Board’s compliance process.”
Inexplicably, however, the paper doesn’t propose standards for service reliability. While “Board staff acknowledges that system reliability is critical for customers” (p. 30), “Board staff proposes that these Original SQIs [the reliability indicators] not become mandatory ESQRs at the present time but be retained in a modified form for monitoring and reporting purposes” (p. 23).
The report ignores the 2000 Rate Handbook and the OEB decision that established mandatory reliability standards. As quoted above, the Board’s stated approach in 2000 was “to apply a utility’s historic performance as its specific service reliability standards,” with SAIDI and SAIFI calculated from all planned and unplanned interruptions of one minute or more, and with utilities having at least three years of data required, at minimum, to remain within the range of their historic performance. (7-2, 7-6, 7-7) There’s nothing unclear about this order, and it was confirmed by the OEB’s August 2003 notice, which noted that in 2000 “the Board approved initial minimum standards.”
The 2008 report was, according to OEB staff, based on a review of other jurisdictions, which found a greater incidence of monitoring than of service-quality incentives and standards. No data or analysis was offered to support this statement, and it’s clear that many jurisdictions worldwide that have adopted incentive regulation also have adopted SQR.
In fact, Ofgem’s October 2007 report, Electricity Distribution Quality of Service, states: “Ofgem considers quality of service to be one of its key priorities in network regulation …2006/07 was the fifth year that the DNOs [Distribution Network Operators, the UK nomenclature for LDCs] faced financial incentives on their quality of service performance …”5
In addition to the U.K., incentive-based SQR exists in many other European jurisdictions, as well as in jurisdictions such as Australia. For example, the Council of European Energy Regulators (CEER) noted in its Third Benchmarking Report on the Quality of Electricity Supply (2005):
Price-cap regulation without any quality standards or incentive/penalty regimes for quality may provide unintended and misleading incentives to reduce quality levels. Incentive regulation for quality can ensure that cost cuts required by price-cap regimes are not achieved at the expense of quality. … The increased attention to quality incentive regulation is rooted not only in the risk of deteriorating quality deriving from the pressure to reduce costs under price-cap, but also in the increasing demand for higher quality services on the part of consumers. … [A] growing number of European regulators have adopted some form of quality incentive regulation over the last few years.6
The January 2008 letter from the OEB also states, “Until ... the sector gains experience with any new or modified service quality indicators or requirements, it is in the Board’s view premature to move to an incentive approach.”
But the OEB is now in its 10th year of collecting reliability data, more than sufficient time to gain experience. Indicators such as SAIDI and SAIFI are standard measures used for monitoring and regulating service quality around the world. These indicators have been used by the Ontario distributors’ association for at least 15 years, and by individual LDCs much longer.
The staff discussion paper offers a cursory analysis of reliability for 2004 through 2006. This analysis calculates sector, rural, and urban averages, as well as averages for the OEB’s peer groups. It’s unclear whether these are simple arithmetic averages across reporting companies, or weighted averages calculated from actual customer-hours of interruption and total customer interruptions divided by the number of customers served.7
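The distinction isn’t academic; the two calculations can diverge sharply when small utilities dominate the company count. A minimal sketch, with hypothetical numbers and field names:

```python
def simple_average_saidi(ldcs: list[dict]) -> float:
    """Unweighted mean across companies: a 500-customer utility counts
    as much as one serving a quarter-million."""
    return sum(l["saidi"] for l in ldcs) / len(ldcs)

def weighted_average_saidi(ldcs: list[dict]) -> float:
    """Customer-weighted mean: sector customer-hours of interruption
    divided by sector customers served."""
    customer_hours = sum(l["saidi"] * l["customers"] for l in ldcs)
    return customer_hours / sum(l["customers"] for l in ldcs)

ldcs = [
    {"saidi": 4.0, "customers": 800},      # small rural LDC
    {"saidi": 1.2, "customers": 250_000},  # large urban LDC
]
print(simple_average_saidi(ldcs))    # 2.60
print(weighted_average_saidi(ldcs))  # about 1.21
```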
The discussion paper does examine the reliability performance of LDCs relative to various proposed benchmarks such as sector average or peer group average performance over the last three years. It finds that anywhere from 25 to 50 percent of Ontario distributors fail these benchmarks; furthermore, LDCs that fail typically have a reliability performance that is 50 to 100 percent worse than the selected average. What is clear from the data is that a very wide variation in reliability performance exists among LDCs, even within the OEB’s peer groups. Yet, this finding fails to elicit any apparent concern on the OEB’s part for the customers experiencing such degraded reliability. No explanation is offered for the fact that many customers of many LDCs are experiencing significantly lower reliability than customers of similar LDCs. What about performance over the whole period since the inception of incentive regulation (IR)?
The discussion paper sheds no light on whether LDCs are in compliance with the reliability guidelines established by the OEB in 2000. In fact, since introducing IR in 2000, the OEB has never confirmed that LDCs operating under the regime comply with the mandated service-quality standards, despite repeatedly stating that a reliable supply of power is necessary for just and reasonable rates. Indeed, the cursory analysis reported by staff would be unable to address current or past compliance.
The staff analysis is based on reliability data for 2004 through 2006 only. The paper indicates, “The following information is based on the reliability data filed under the RRR for the three years 2004 - 2006. Because the data reported in the earlier years may not have been reported consistently or calculated properly, staff has removed any statistics that appeared to be unreliable. This approach may result in a slightly less than completely precise and comprehensive analysis, but staff believes that the analysis based on this more selective data represents a more accurate picture of general trends.” 8
Yet, this is data collected by these same utilities for at least 15 years and reported to the Implementation Task Force in 1999 and to the OEB in its required filings since 2000.9 However, in choosing to reject its own data prior to 2004, the OEB not only misses a significant degradation in 2004 through 2006 compared with 2000 through 2003, it misses an earlier and equally significant degradation in 2000 through 2003 compared with the pre-IR 1993 through 1997 period. Only by examining performance relative to the pre-IR period could the OEB determine compliance. The OEB sees no degradation in large part because it has chosen to eliminate the periods of higher reliability performance from its comparison.
The OEB doesn’t report what tests were performed to determine that the data reported in the earlier years hadn’t been reported consistently or calculated properly, and it’s unclear what methodology was used to remove statistics that “appeared to be unreliable.” The earlier data comes from the same population as the later data, and therefore can be used jointly to assess the 2000 to 2007 trend, as well as to assess performance relative to the pre-IR period used in 2000 to set standards.
What has been the performance of Ontario’s electricity distributors relative to the minimum standards established in 2000? This question is, unfortunately, addressed neither in the discussion paper nor in any public OEB analysis. Under the first-generation standards, each LDC must keep its reliability performance within the range of the three-year period preceding PBR. The OEB evidently has conducted no analysis of LDCs’ compliance with the standards.
What was the reliability of Ontario distributors in the mid-to-late 1990s, prior to the start of the OEB’s PBR? Two sources of data exist to examine this question: one set published by the industry from 1991 onwards, and a second set collected by the OEB’s Implementation Task Force in 1999.
Since 1991, the former Ontario Municipal Electric Association (MEA) collected and published performance metrics from its members, including reliability indices. This data included returns from almost all large and medium-sized utilities, serving 75 to 85 percent of customers in the province (see Figure 1).
During development of its first-generation PBR, the OEB’s Implementation Task Force undertook several surveys of the utilities, including reliability performance. Responses from more than 60 utilities serving 81 percent of customers provided annual data on reliability (see Figure 2).10
Figures 1 and 2 present these results for both municipal utilities, as well as for a composite index representing both municipal and non-municipal distributors. For municipal utilities the mean of SAIDI is 1.22 and the mean of SAIFI is 1.46—quite consistent with the results of non-municipal distributors. Looking at the PBR performance standard, for SAIDI the average three-year high value is 1.59; for SAIFI, the average three-year high is 1.84. For the industry composite index, the mean SAIDI figure is 2.07 with an upper bound of 2.53. For SAIFI, the mean is 1.36 with an upper bound of 1.75.
In January 2008, the OEB released its discussion paper on reliability, its only publicly released analysis of LDC performance since the 1999 task force report. The paper employs only data from 2004 through 2006 to examine LDC performance; no pre-PBR data, and no data for the first three years of PBR, are examined. As quoted above, staff justified this exclusion on the grounds that earlier data “may not have been reported consistently or calculated properly.”11 Note the qualifier “may not”: no tests are described that would establish the earlier data was deficient, the methodology used to remove statistics that “appeared to be unreliable” is never explained, and the adjustments made to the data by OEB staff need to be explored in more detail.
The OEB’s 2008 discussion paper doesn’t examine the reliability performance of LDCs relative to the mandate ordered in its 2000 PBR decision; what the paper does, instead, is casually examine performance against several external averages calculated over the latest three years. What about performance over the whole 2000 to 2007 IR period?
As noted above, within its service-quality proceeding the OEB questioned the robustness of the 2000 through 2003 data and refused to include it in its analysis. These data were filed by LDCs as part of their regulatory requirements: initially under PBR data-filing requirements, and then under the reporting and record-keeping requirements (RRR) that incorporated them. Such data had been published by the MEA for a decade and a half without any concerns being expressed by stakeholders, and individual municipal utilities had been collecting this data for 20 years or more. No evidence of any kind has been offered by the OEB to support its contention, and no explanation has been provided as to why the pre-2004 reliability data isn’t acceptable.
In fact, the OEB has employed the 2002 to 2006 RRR data, of which reliability is a small part, as the foundation of its entire electric distribution regulatory framework across multiple proceedings and years: the 2005 through 2006 cohort analysis,12 the 2006 through 2009 cost comparison and benchmarking,13 and the 2007 through 2009 third-generation IR rate setting.14 According to the OEB consultant’s cost-benchmarking report:15
The econometric model that we developed was based on the largest sample of data available. This, as we have seen, is in keeping with good econometric practice since a larger sample reduces the variance of parameter estimates and thereby helps us develop models with more variables and more flexible forms. The full sample period available was 2002-2006. We included in the sample data for all companies for which requisite data of good quality were available for at least two of the four years.
If the 2002 and 2003 RRR data is acceptable to the OEB in these applications (including all the problems aired in those proceedings on capital costs, capitalization, outsourcing, leasing, and embedded distributors), then straightforward data on interruptions, whose definitions have been constant for decades, should be easily amenable to analysis of reliability trends and compliance. Furthermore, the OEB’s consultant has used the 2002 and 2003 reliability data rejected in the 2008 staff report in sophisticated econometric model estimation. And the OEB has used this same reliability data extensively in its cost-benchmarking proceeding:16 “Extensive data are available today on the operations of Ontario power distributors which are potentially useful in benchmarking their performance. The OEB is the primary source of such information… At the time of our updated study, OEB operating data from 2002 to 2006 were available.”
Commenting on their estimation with the reliability data, the OEB’s consultant stated (p.67): “We should also note that some of the results from the first stage econometric models for the reliability variables were sensible. In the research using SAIDI as the dependent variable…we found that SAIDI was generally higher (suggesting low reliability) for companies that had more rural and less undergrounded systems and used less capital.”
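To make the shape of such a “first stage” model concrete, a regression of this general form, SAIDI on rural share, undergrounding, and capital, can be sketched as follows. This is purely illustrative: synthetic data, hypothetical variable names, and a simple OLS rather than the consultant’s actual specification.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic cross-section of LDCs, constructed so the coefficients carry
# the signs the consultant reported: SAIDI rises with rural share and
# falls with undergrounding and capital.
rng = np.random.default_rng(42)
n = 400
rural = rng.uniform(0.0, 1.0, n)        # share of rural circuit-km
underground = rng.uniform(0.0, 0.6, n)  # share of undergrounded lines
log_capital = rng.normal(10.0, 1.0, n)  # log of capital input
saidi = (1.0 + 1.5 * rural - 2.0 * underground
         - 0.3 * (log_capital - 10.0) + rng.normal(0.0, 0.5, n))

X = sm.add_constant(np.column_stack([rural, underground, log_capital]))
fit = sm.OLS(saidi, X).fit()
print(fit.params)  # expect: + on rural, - on underground, - on capital
```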
In fact, the report noted the benefit of additional data (p. 66): “Additional years of data for the estimation of the cost and quality models would also be helpful.” Why not 2000 and 2001 data?
Moreover, in the OEB’s most recent release of reliability data (June 24, 2008), data for the largest distributor is missing for 2002 through 2006.17
The OEB is willing to employ the 2002 and 2003 reliability data in its cost benchmarking that would determine each LDC’s future annual revenue. Yet, the OEB reports that it will not use this same data for its reliability-trend analysis since this data “may not have been reported consistently or calculated properly.” If the data is good enough for rate setting, it should be sufficient for trend analysis. If accepted, the OEB’s position would mean that no compliance test could be conducted and no historical analysis prior to 2004 could be performed.
Starting in 2000, the OEB collected this reliability data annually from LDCs (with interruptions reported on a monthly basis), as stipulated in the OEB’s PBR rate guidelines. OEB staff have questioned the accuracy of the data collected in 2000 and 2001, as well as in 2002 and 2003. A detailed examination of this data yielded no systematic deficiency, just the usual data-cleanup issues: duplicate records, missing data, and occasional entries that appear inconsistent, such as monthly data reported at annual rates. These cleanup items occur more frequently for some of the very smallest LDCs (such as those subsequently acquired by Hydro One), but all of them are easily resolved.18
The authors examined the reliability data filed by the 80 to 100 LDCs over the 2000 to 2007 period to judge whether the 2000 through 2003 data is consistent with the 2004 through 2006 data. First, they performed an informal comparison of reported values for each LDC. Second, four tests were conducted to gauge whether the distributions were normal; all four found the annual distributions normal. Additional tests included t tests, F tests, sign tests, analysis of variance, and Tukey’s HSD (honestly significant difference) post-hoc analysis. The clear conclusion supports the hypothesis that all years of data from 2000 to 2007 come from the same population.19 Therefore, if the OEB is willing to use 2004, 2005, or 2006 data, it must also use 2000, 2001, 2002, or 2003. What do data from this period show?
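The same battery of tests can be reproduced with standard statistical tools. The sketch below substitutes synthetic data for the filed LDC values and uses SciPy’s implementations of the named tests (the sign test is omitted for brevity):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the filed data: one array of LDC-level SAIDI
# values per year, 2000 through 2007.
rng = np.random.default_rng(0)
saidi_by_year = {year: rng.normal(1.5, 0.4, 85) for year in range(2000, 2008)}

# 1. Normality of each annual distribution (Shapiro-Wilk).
for year, values in saidi_by_year.items():
    stat, p = stats.shapiro(values)
    print(year, "normal" if p > 0.05 else "non-normal")

# 2. Pooled 2000-2003 vs. 2004-2006: equal means (t test) and
#    equal variances (Levene's test, a robust F-type test).
early = np.concatenate([saidi_by_year[y] for y in range(2000, 2004)])
late = np.concatenate([saidi_by_year[y] for y in range(2004, 2007)])
print(stats.ttest_ind(early, late))
print(stats.levene(early, late))

# 3. One-way ANOVA plus Tukey HSD across all years: failure to reject
#    is consistent with one common population for 2000-2007.
samples = list(saidi_by_year.values())
print(stats.f_oneway(*samples))
print(stats.tukey_hsd(*samples))
```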
For municipal LDCs, the post-PBR SAIDI average for each year exceeds the pre-PBR average of 1.22, except in the first two years of PBR (see Figure 3). By the end of the period, the final three-year average is 1.79, 46 percent higher than the pre-PBR average. In three separate years the weighted average exceeds the upper-bound standard of 1.59, and in one year it equals it. In 2002, the result exceeds the upper-bound standard by 49.7 percent, and the final two years exceed the standard by a wide margin. The composite post-PBR results significantly exceed the pre-PBR average of 2.07 in each year; the final three-year average is 6.01, 190 percent higher than the pre-PBR average. Results in each year also exceed the upper-bound standard by a wide margin.
The post-PBR municipal SAIFI average for each year except one exceeds the pre-PBR average of 1.46 (see Figure 4). For municipal LDCs, the final three-year average is 1.86, 27.4 percent higher than before PBR. In four years, the weighted average exceeds the upper-bound standard of 1.84. The composite post-PBR results significantly exceed the pre-PBR average of 1.36 in each year: the post-PBR average of 2.31 exceeds it by 70 percent, and the final three-year average of 2.52 by 85 percent. Results in each year exceed the upper-bound standard by a wide margin, in some cases by more than 50 percent.
These reliability indexes indicate significant service degradation across the province’s electricity distribution sector over the past eight years. It’s critically important to examine this degradation and the reasons behind it. Have LDCs stopped being concerned about reliability, given the laissez-faire regulatory attitude displayed by the OEB? Have LDCs been forced to make budgetary cuts because of insufficient revenues under IR and unrealistic expectations on the part of shareholders regarding dividend payments to provincial and municipal coffers?
Have LDCs been distracted by a policy environment that is constantly changing as government and regulator bounce from one idea to the next? Recently, the provincial government, through its regulator the OEB and through legislative changes, initiated the eighth set of sweeping regulatory changes in 10 years affecting the electric distribution sector.20
The evidence confirms that LDCs have suffered operationally over this period: sector-wide distribution productivity growth under the OEB’s decade-long restructuring and IR has been significantly negative, unlike the positive, broad-based productivity growth from 1988 to 1997. And, perhaps not surprisingly given the focus on O&M, allocative efficiency has declined as well.
More troubling have been the incentives embedded in these frameworks. The 2004 staff report detailed the OEB’s thoughts on achieving further efficiencies in distribution: “The Board’s objective is to consider if further efficiencies are available, and if so, how to achieve them. … the paper identifies approaches available to the Board to drive further efficiencies in the electricity distribution sector.” Consistent with the government’s theme over the last decade, the paper and later OEB policies place heavy reliance on O&M savings from policy-directed consolidation (i.e., forced and incented mergers) but offer little to substantiate the savings expectations. Proponents of the new shareholder-initiated amalgamations publicly have touted the benefits of the government’s consolidation policy, claiming as the primary benefit a significant reduction in operational (i.e., O&M) costs. A Toronto Star article, “Wave of Hydro Mergers Forecast,” discussed some recent merger experiences and noted the accepted wisdom that previous mergers had produced “tangible cost savings.”21 The authors’ research, by contrast, finds diseconomies of scale, though it also shows significant scope economies for outputs and inputs.
The 2004 staff report and subsequent reports (e.g., the 2006 Christensen report, and the 2007 and 2008 Pacific Economics reports) focus on O&M-based operational efficiencies associated with technical efficiency (i.e., achieving the maximum output-to-input ratio) while ignoring capital (about half of total costs) and associated allocative inefficiency. Consistent with the OEB’s focus on O&M, research on the post-PBR efficiency of the LDCs finds that allocative inefficiency has increased since 1997. But, whereas the gold-plated networks of the 1990s had robust reliability performance, the new, more inefficient networks have degraded reliability due to non-optimal O&M expenditures.
Most troubling is the fact that the OEB now has formalized its IR based on O&M benchmarking without considering inter-utility differences in labor-capitalization policies or reliability performance. Ignoring differing capitalization policies will distort O&M comparisons and create differences in benchmark outcomes that are figments of accounting alone. Ignoring reliability in the O&M benchmarking will incent LDCs to cut O&M even if doing so degrades reliability and harms customers. Indeed, the authors’ econometric research over the past decade finds that O&M reductions were significantly related to reduced reliability.22
But what about individual LDCs and their performance relative to the standards set by the 2000 Electricity Distribution Rate Handbook? Some LDCs aren’t compliant with the performance standards established in 2000, and on average Ontario LDCs have experienced deteriorating reliability over the 2000 to 2007 period. Performance in 2005 through 2007 deteriorated relative to 2000 through 2002; for some LDCs, even their 2000 through 2002 reliability performance had already degraded from their pre-PBR performance in the 1995 through 1998 period. Unfortunately, and in contravention of the OEB’s 2000 reliability mandate, some LDCs are using the post-PBR degradation to establish new, lower standards based on their most recent three-year performance; as reliability degrades, these LDCs use the worsening results to implement ever-lower “rolling standards.” The OEB’s decision in 2000, however, was to establish a minimum floor for reliability. The intent wasn’t a rolling three-year moving average under which the reliability standard itself would degrade.23
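The difference between the two readings of the mandate is easy to state precisely. A minimal sketch with hypothetical SAIDI values (higher is worse):

```python
def fixed_standard(pre_pbr_years: list[float]) -> float:
    """The 2000 mandate as written: a ceiling fixed once, from the
    three years preceding PBR."""
    return max(pre_pbr_years)

def rolling_standard(history: list[float]) -> float:
    """The contested reading: re-base the ceiling on the most recent
    three years, so past degradation loosens the standard itself."""
    return max(history[-3:])

history = [1.5, 1.6, 1.4]          # three pre-PBR years
ceiling = fixed_standard(history)  # 1.6, set once in 2000
for observed in [1.7, 1.9, 2.2]:   # worsening post-PBR SAIDI
    print(f"observed={observed}  fixed={ceiling}  "
          f"rolling={rolling_standard(history)}")
    history.append(observed)
# The rolling ceiling climbs 1.6 -> 1.7 -> 1.9 while the fixed one stays 1.6.
```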
There’s clear evidence of reliability degradation in the OEB’s data, enough to call into question the staff discussion paper’s assertion that there are no reliability concerns in the province. With the existing data, however, it’s impossible to attribute a cause for the degradation: as indicated in the staff discussion paper, and as mandated by the OEB’s decision in 2000, all service interruptions, regardless of cause, are used to calculate the interruption indexes.
The OEB and the Ontario government have played a role in this degradation. Implicitly, the laissez-faire regulatory attitude displayed by the OEB since 2000 has abetted the deterioration. Explicitly, the OEB’s growing fixation on partial cost benchmarking, as opposed to the total benchmarking advocated in the first-generation PBR, has directly incented LDCs to curtail O&M expenditures so as to improve their benchmarking scores; the authors’ own research on costs, reliability, and investment found such curtailments degraded reliability. The OEB in 2003 reminded stakeholders that its legislative mandate requires it “[t]o protect the interests of consumers with respect to prices and the reliability and quality of electricity service.” If the OEB has failed to protect consumers with respect to reliability, can rates be presumed just and reasonable?
Reliability may have been affected by causes beyond an individual LDC’s control, for example, loss of supply from the transmission system. Indeed, the Implementation Task Force argued that LDCs should be held accountable only for failures under their control (p. 36): “One other factor that needs to be considered when calculating the indices is the effect of external causes. These causes include outages and interruptions on the transmission system, and on feeders used jointly with another utility... [T]he reliability indices reported by a utility should be adjusted so that they truly represent situations under its control.”
Accordingly, as part of the original reliability-indicator reporting requirements established in the first Distribution Rate Handbook, LDCs are required to record the reason for each supply interruption, though not to report it to the OEB. This requirement was continued in the 2006 Electricity Distribution Rate Handbook. The OEB should require LDCs to provide this data retroactively to 2000, so that the historical record can be used to determine how much of the degradation originated outside the network and how much reflects interruptions within the LDCs’ ability to control. Both operational and regulatory-governance remedies then need to be implemented, and examined to see whether they bring subsequent performance within mandated limits.
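Once cause codes are reported, the adjustment the task force described is mechanical: recompute the indices over only the interruptions within the LDC’s control. A sketch, with hypothetical cause labels:

```python
# Hypothetical cause labels; Ontario LDCs record a cause for each
# interruption but, to date, haven't had to report it.
EXTERNAL_CAUSES = {"loss_of_supply", "shared_feeder"}

def adjusted_saidi(events: list[dict], customers_served: int) -> float:
    """SAIDI computed only over interruptions within the LDC's control."""
    internal = [e for e in events if e["cause"] not in EXTERNAL_CAUSES]
    minutes = sum(e["customers"] * e["duration_min"] for e in internal)
    return minutes / customers_served / 60.0

events = [
    {"cause": "equipment_failure", "customers": 3_000, "duration_min": 90},
    {"cause": "loss_of_supply",    "customers": 9_000, "duration_min": 120},
]
print(adjusted_saidi(events, customers_served=10_000))  # 0.45 hours
```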
Standards and penalties have been shown to blunt the perverse incentives under IR. In the United States, Ter-Martirosyan found that utilities under IR without standards reduced their expenditures by 37 percent over the period of the analysis, and experienced a 64-percent increase in SAIDI and a 13-percent increase in SAIFI. Utilities under IR with standards and penalties, by contrast, increased their expenditures by 17 percent in every year, and saw a 26-percent decrease in SAIDI and a 23-percent decrease in SAIFI.
In the long run, the authors’ preference for Ontario is to develop an incentive approach that internalizes the cost of supply interruptions, so that LDCs supply a socially optimal level of reliability. Such regimes have been implemented successfully by a number of European regulators, and these efforts are well documented; CEER is due shortly to release its fourth benchmarking report on member countries’ efforts. In the short run, and in the absence of such an incentive regime, Ontario’s distributors should face financial penalties for noncompliance with mandated minimum reliability standards. After all, the OEB itself stated that by 2003 it would be in a position “to set industry service quality performance standards. Once these standards have been established, PBR incentive mechanisms with economic consequences will be introduced around the service quality indicators” (2000 Handbook, p. 7-10). Now, in 2009, the OEB should follow through on its decade-old promise. Hopefully, it’s not too late.
1. Report of the OEB Performance Based Regulation Implementation Task Force, May 18, 1999.
2. OEB, “Service Quality,” 2000 Electricity Distribution Rate Handbook, March 9, 2000, p. 7-10.
3. Service Quality Regulation for Ontario Electricity Distribution Companies: A Discussion Paper, Ontario Energy Board staff, Sept. 15, 2003 (downloaded from http://www.oeb.gov.on.ca).
4. Staff Discussion Paper, Regulation of Electricity Distributor Service Quality (Board File EB-2008-0001), Jan. 4, 2008.
5. Ofgem, 2006/07 Electricity Distribution Quality of Service Report, Oct. 31, 2007, p.1.
6. CEER, Third Benchmarking Report on the Quality of Electricity Supply, 2005, p. 31.
7. Another appropriate metric would be a weighted average by customer numbers.
8. Staff Discussion Paper, fn 4, p. 25.
9. Reliability data spanning 2000 to 2007 have been assembled from the Board’s annual PBR filings for 2000 and 2001, and from the RRR data for 2002 to 2007, for each utility. We conducted time-series statistical tests to examine whether the pre-2004 reliability data differ from the 2004 to 2006 data used by the Board. We were unable to reject the null hypothesis of no difference; i.e., for statistical purposes, the data appear to come from the same universe.
10. According to the Board, “Utilities that have at least 3 years of data…should, at minimum, remain within the range of their historic performance.” (7-6, 7-7) In this instance, the average for municipal utilities during PBR should be no higher than 1.59 for SAIDI and 1.84 for SAIFI. These standards are based on a customer weighted mean of upper boundary performances during the prior three years.
11. Staff Discussion Paper, fn 4, p. 25.
12. Christensen Associates, Methods and Study Findings: Comparators and Cohorts Study for 2006 EDR, October 2005.
13. Pacific Economics Group, Benchmarking the Costs of Ontario Power Distributors, April 2007.
14. Calibrating Rate Indexing Mechanisms for Third Generation Incentive Regulation in Ontario, February 2008.
15. Pacific Economics Group, Benchmarking the Costs of Ontario Power Distributors, March 2008, p. 43.
16. Id at p.36.
17. Fortunately, we do have filings from prior years for this LDC. It is clear that the post-PBR period, and in particular the last few years, has seen a very significant deterioration in its reliability. This LDC had a relatively good reliability record pre-PBR. In recent OEB proceedings this LDC (and others) voiced concern that budget constraints prevented replacing substantial assets deployed decades ago. Our statistical model of reliability, O&M, and additions finds such underinvestment degrades reliability.
18. Missing data, etc., also occur in the 2002-2006 data the Board used for its cost comparison and benchmarking. This is not enough in itself to judge the data unusable. In addition, data identified as inconsistent for a particular LDC can easily be verified or corrected with the LDC.
19. See Cronin, F. and Motluk, S., “An Analytical Look at Service Reliability Degradation.”
20. Unfortunately, the Government’s pronouncements, proposals and policies often have been inconsistent, misguided, and counterproductive. These include: Bill 35, the 1998 Energy Competition Act; the 2000 OEB PBR Decision (OEB, 2000a); Bill 100, the Minister’s Directive to the OEB, and the OEB Decision in the Proceedings on the Minister’s Directive in 2000 (OEB, 2000b); 2002’s Action Plan and Bill 210; the February 2004 OEB Discussion Paper on Further Efficiencies (OEB, 2004); Ontario Ministry of Energy, Electricity Transmission and Distribution in Ontario — A Look Ahead, Dec. 21, 2004. (EDTO); Christensen Associates, Methods and Study Findings: Comparators and Cohorts Study for 2006 EDR, October 2005; Pacific Economics Group, Benchmarking the Costs of Ontario Power Distributors, April 2007, and finally, Calibrating Rate Indexing Mechanisms for Third Generation Incentive Regulation in Ontario, February 2008.
21. Tyler Hamilton, “Wave of Hydro Mergers Forecast,” Toronto Star, Oct. 21, 2006.
22. Cronin, F. J. and S. Motluk, Modeling Electric Distributor Costs, Investment, and Reliability, (in process). We used Ontario LDCs’ data to estimate a three-equation model. We find that LDCs with higher O&M expenditures also have higher reliability (lower SAIDI, etc.). Older networks, networks with lower shares of underground lines, and networks with less capital tend to have lower reliability.
23. See Hydro Ottawa Holding Inc., 2006 Annual Report, p. 22, for a discussion of a rolling average to set reliability standards. We might note that in this case the LDC’s reliability performance is good.