Electric distributors produce and sell a multi-dimensional output to their customers. Customer service, reliability, and voltage quality, among other attributes, can vary substantially, producing different products depending on the mix of characteristics delivered. These different bundles of characteristics likely carry different costs and thus different prices. Evaluating the reasonableness of a distributor's price must therefore consider the whole package (or packages) being delivered to customers.
Regulators usually have responsibility to ensure that regulated prices, such as those for electric distribution, are just and reasonable. But most energy regulators also have an associated, dual responsibility toward consumers: in addition to ensuring prices are just and reasonable, they must ensure that appropriate levels of service and reliability are delivered. Without the latter, there can be no assurance that the prices being paid are, in fact, just and reasonable.
Sometimes, these fundamental responsibilities can be overlooked in the pursuit of more glamorous issues, or just through the passage of time. This point was noted by Elizabeth A. Noël, Esq., the District of Columbia’s people’s counsel, in her April 2009 Fortnightly letter to the editor, “Regulate First, Innovate Second,” as quoted in part:
Commissioner Rick Morgan of the Public Service Commission for the District of Columbia based his article, “Rethinking ‘Dumb’ Rates” (March 2009, p.34), on the faulty premise that there is a consensus, either in the regulatory community or electric industry, or both, trending toward the immediate adoption of smart meters and dynamic rates…. The change to AMI and dynamic rates, etc., is not mandated by law. Rather, District of Columbia law requires public utilities to provide safe, adequate and reliable service to consumers at rates that are just, reasonable and nondiscriminatory. As a sitting commissioner on the Public Service Commission, surely Commissioner Morgan knows his first responsibility is to address why there have been more than 2,700 sustained electric outages in 2008 (179 in January 2009). With multiple open dockets investigating Pepco’s quality of service, rates and infrastructure, etc., his touting new rate designs and dazzling technologies, all the while ignoring the basics of requiring the local monopoly distributor of electricity to provide safe, adequate and reliable service, defies logic. It goes without explanation, fundamentals come first!...While AMI and a new rate design are sexy and way more interesting than downed wires, outages, exploding manholes and aged infrastructure, AMI and new rate design will not fix the problems this city is experiencing! The Commission first needs to address the aging and broken electric infrastructure plaguing D.C…
Firms can optimize only those costs internal to their cost structure, typically capital and operations, maintenance and administration (OM&A). Costs borne by customers due to the utility's interruptions generally aren't considered when a utility sets its capital and OM&A budgets. In general, failure to recognize such customer interruption costs leads to too little spending on reliability.
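The logic can be sketched numerically. In this hypothetical Python illustration (the dollar figures and the interruption-cost curve are assumed purely for the example), a utility minimizing only its internal costs spends nothing on reliability, while accounting for customers' interruption costs justifies positive spending:

```python
# Candidate reliability spending levels, $M/yr, from 0 to 10 in 0.5 steps.
spend = [s * 0.5 for s in range(0, 21)]

def interruption_cost(s):
    # Assumed curve: customers' outage costs fall as reliability spending rises.
    return 20.0 / (1.0 + s)

# The utility's internal view sees only its own spending.
internal = spend
# The social view adds customers' interruption costs to the utility's spending.
social = [s + interruption_cost(s) for s in spend]

best_internal = spend[min(range(len(spend)), key=lambda i: internal[i])]
best_social = spend[min(range(len(social)), key=lambda i: social[i])]

# Minimizing internal cost alone picks zero spending; minimizing total
# social cost picks a positive level of reliability spending.
assert best_internal == 0.0 and best_social > 0.0
```

The point is not the particular numbers but the structure: as long as interruption costs fall with spending, ignoring them biases the chosen spending level toward zero.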
Over the past decade or so, regulators have moved to implement incentive regulation (IR). The shift to IR can put OM&A costs directly in conflict with the pursuit of profit during the plan’s term. Cost reductions experienced earlier in a plan’s term are worth more to a utility than cost reductions achieved in later years. Since capital might not be subject to significant changes within the earliest years of a plan’s term, the utility could be incented to cut OM&A expenses beyond what is prudent for the quality and reliability of the network. Injudicious curtailments in OM&A have been shown to significantly lower local distribution company (LDC) reliability.
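A simple present-value sketch shows why an identical annual cost cut is worth more to the utility when achieved early in the plan term. The plan length and discount rate here are assumed for illustration:

```python
def value_of_cut(annual_cut, year_achieved, plan_years=5, discount_rate=0.08):
    """Present value to the utility of an annual cost cut first achieved in
    `year_achieved` (1-indexed) and retained through the end of the plan."""
    return sum(annual_cut / (1 + discount_rate) ** t
               for t in range(year_achieved, plan_years + 1))

early = value_of_cut(1_000_000, year_achieved=1)  # cut made in year 1
late = value_of_cut(1_000_000, year_achieved=4)   # same cut, made in year 4

# The early cut is retained for five years, the late cut for only two,
# so the early cut is worth well over twice as much to the utility.
assert early > 2 * late
```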
In the United States, few studies have examined distribution reliability. Six years ago, A. Ter-Martirosyan of George Washington University examined the effects of IR on electricity distributors' OM&A and quality of service.1 The study uses 1993 through 1999 data from 78 major U.S. electric utilities in 23 states. Ter-Martirosyan finds that IR is associated with a reduction in OM&A expenditures, and that these reduced OM&A activities are in turn associated with an increase in outage duration. Importantly, the analysis concludes that incorporating strict reliability standards with financial penalties into IR can offset the tendency of plans without such standards and penalties to imprudently cut critical OM&A activities.
Possibly due to these perverse service-quality results, it's not uncommon for utilities under IR to face explicit and strict service-quality standards, often with penalties for violations. Indeed, Ter-Martirosyan finds that over half of the utilities in the sample under IR had such penalties.
Regulators in both North America and Europe have responded to profit-driven OM&A cuts with new regulatory initiatives. In North America, following a series of significant outages often caused by imprudent reductions in OM&A expenses, some regulators have imposed mandates on utilities covering inspection and maintenance, and sometimes investment, specifying the nature, timing and, in some cases, the money and staffing necessary to fulfill the regulations. In Europe, bodies such as the Council of European Energy Regulators (CEER) have documented and encouraged the adoption of service and reliability quality regulation (SQR) among its two dozen member jurisdictions. CEER's SQR combines system-wide standards with incentive and penalty schemes, as well as single-customer guarantees with monetary payments for nonperformance. Some regulators have used willingness-to-pay (WTP) studies to gauge the value customers place on reliability and the amount they would pay for service improvements or interruption avoidance.2
In North America, however, publicly available research on reliability performance is scarce. Whether for a single LDC over a long period, multiple LDCs at one point in time, or, most difficult of all, a number of LDCs over a long period, little data exists to gauge the state of, trends in, and compliance with reliability standards. Ter-Martirosyan's U.S. study was based on data that is now more than a decade old. However, one North American jurisdiction, Ontario, has a network tied into U.S. grids, measures reliability performance similarly to U.S. LDCs, and has extensive reliability data for many LDCs over a long period (from the mid-1990s to 2007). Furthermore, in 2000 the Province of Ontario started IR.
In 1998, the government of Ontario and regulator, the Ontario Energy Board (OEB), began what arguably was the most complex electric restructuring in North America. Prior to restructuring, these distributors were acknowledged to be technically efficient and providing highly reliable power. The OEB’s implementation task force concluded that these distributors would face notably increased profit motives under IR. The task force noted further that it would be reasonable to expect the Ontario utilities to react to these increased incentives. The task force and other stakeholders maintained that robust standards would be necessary to ensure the continued supply of reliable power.
The OEB opted to require LDCs to continue supplying power within the levels of reliability observed over the preceding three years. However, despite stated intentions to review these standards by 2003 and to set financial penalties for noncompliance, the OEB didn't review LDCs' reliability until 2008, and then only for the 2004 to 2006 period, based on sector averages. The OEB hasn't conducted a public review of LDC reliability performance from 2000 to 2003, nor has it ever reviewed post-IR performance over the full 2000 to 2007 period to determine whether LDCs complied with the standards imposed in 2000.
Were the OEB’s IR standards, monitoring, and reporting requirements sufficiently robust to mitigate the utilities’ potentially imprudent cost reductions, and the likely consequences for lowered reliability?
Prior to 1998, when the government of the Province of Ontario introduced comprehensive restructuring of the electricity sector, more than 300 municipal electric distribution utilities (MEUs) varying in size from several hundred customers to hundreds of thousands of customers operated in the province. These MEUs operated alongside a vertically integrated, provincially owned utility, Ontario Hydro, which controlled most of the generation and transmission capacity in the province and also distributed electricity in rural areas of the province where there was no municipality. Ontario Hydro sold power at cost to the MEUs, which also distributed electricity essentially as ratepayer cooperatives, earning very low rates of return and operating with little or no debt. Due to their essentially non-profit, public status, the MEUs weren’t subject to stringent regulation by the provincial energy regulator, the OEB, but had rates set by Ontario Hydro in a light-handed, cost pass-through approach.
Some critics maintained that mergers among the publicly owned MEUs would create efficiencies through privatization and increased scale. In 1998, Bill 35, the Energy Competition Act, 1998, was enacted. The act affected the electric sector broadly: it restructured Ontario Hydro, enabled the IESO (the Ontario ISO and power pool), transferred regulatory authority to the Ontario Energy Board, and charged the board with examining performance-based regulation (PBR). It also undertook a fundamental restructuring of the MEUs.
Under the Energy Competition Act, MEUs were to be corporatized and recapitalized, placed under municipal shareholder control for possible sale, and placed under the regulatory oversight of the OEB and its yet-to-be-determined PBR. Not only was ownership and capital structure up in the air, but MEUs were subject to a new regulator and unknown regulations. The restructuring of the MEU sector alone was arguably one of the most complex regulatory restructurings in the world.3
Faced with the recent transfer of 300 electricity distributors to its authority, the OEB instituted a process to structure a suitable regulatory framework. These LDCs were highly diverse, ranging in size from a few hundred customers to hundreds of thousands. Some stakeholders, including utilities, held that initial levels of efficiency varied significantly, due primarily to overcapitalized rate bases among some utilities, particularly those making use of third-party financing (i.e., contributed capital). These stakeholders contended that MEUs relying heavily on contributed capital had distorted their input mix, using too much capital and gold-plating their networks.
Research data show that only about 20 percent of firms were on both the technical and allocative efficiency frontiers; 80 percent of utilities were interior performers. The average MEU was about 13-percent less efficient technically than the best-practice MEUs, but about 30-percent less efficient in terms of allocative efficiency, i.e., having the right mix of inputs given relative prices. Among the worst utilities, the extent of "gold-plating" was even more notable.4
The OEB’s stakeholder implementation task force noted the dilemma involved in moving to PBR.5 While a utility would face greater incentives to eliminate embedded inefficiencies likely accumulated under cost-of-service regulation, the regulator couldn’t easily quantify the potential level of inefficiency. Some participants pointed to the “yardstick competitions” being implemented in the United Kingdom, Europe, and Australia and argued that Ontario should adopt such models.6 Due to the government’s tight deadline (with the market scheduled to open in November 2000, although subsequently this was delayed to May 2002), these critical issues couldn’t be analyzed within the time permitted.
Unlike efficiency levels, a general consensus prevailed that the gold plating, if real, had produced a nearly ubiquitous, highly reliable system. Two common industry standards for measuring network performance are the System Average Interruption Duration Index (SAIDI) and the System Average Interruption Frequency Index (SAIFI).7 Industry survey data indicated that the SAIDI and SAIFI for municipal utilities ranged from about 1.0 to about 1.5. The OEB's Task Force on PBR Implementation also collected reliability data from Ontario MEUs. Its findings, covering utilities with more than 80 percent of the distribution customers, were similar. In fact, this performance was significantly better than most European and North American peers.
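For concreteness, the two indices (defined in note 7) can be computed directly from outage records. The outage data and customer count in this short Python sketch are hypothetical:

```python
# Each outage record: (customers interrupted, outage duration in hours).
outages = [(1200, 0.5), (300, 2.0), (5000, 0.25)]
customers_served = 10_000

# SAIDI: total customer-hours of interruption per customer served.
saidi = sum(n * hours for n, hours in outages) / customers_served

# SAIFI: total customer interruptions per customer served.
saifi = sum(n for n, _ in outages) / customers_served

print(saidi)  # 0.245 hours per customer
print(saifi)  # 0.65 interruptions per customer
```

A lower value is better on both indices; a year with few, short, small outages drives both toward zero.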
Ultimately, the OEB ordered the implementation of PBR. In the 2000 Rate Handbook, the board spelled out the reasons for regulating service/reliability performance:
PBR provides the electricity distribution utilities with incentives for economic efficiency gains. To discourage utilities from sacrificing service quality in pursuing these economic incentives, service quality performance measures are included in the PBR plan. Utilities will be expected to monitor and report on all of the service quality indicators included in the plan. The performance of individual electricity distribution utilities will be made publicly available…
Initial standards were minimum required performance levels. LDCs serving the majority of customers, and having historical data, were to keep their service-reliability indices within the range of performance observed over the prior three years. As soon as feasible, all LDCs would collect such data, and the board would investigate implementing more refined standards along with financial penalties for noncompliance.
The intentions laid out by the board in 2000 were not realized. The topic of reliability performance by LDCs wasn’t broached again by the OEB until 2003, and then only as a stakeholder process which produced a report on regulatory principles underlying just and reasonable rates, not an empirical investigation to implement the 2000 decision.
However, in its pursuit of the holy grail of "operational efficiencies," the government continued its long crusade of forced and incented LDC M&As. It believed that such unions would realize 20- to 30-percent operating efficiencies, but in subsequent research the authors found diseconomies from the mergers and increased scale.8 At the same time, the board subjected the LDCs to multiple and repeated changes in regulatory governance and rate setting, greatly heightening the utilities' operational and financial uncertainty. In particular, the OEB's shift from total-productivity and total-cost benchmarking in the 1999 through 2000 period to a narrow focus on benchmarking O&M expenditures, unadjusted for differing labor capitalization or reliability performance, greatly increased the possibility of unintended consequences.
Recent research in the U.K. and Poland found that allocative inefficiency increased under IR, especially when LDCs faced more comprehensive controls, including reliability and line losses. Utilities simply weren't reacting to the correct price signals; for example, they were undervaluing the loss of load to customers.9 Given that the government and OEB seemingly were unconcerned with, or unaware of, the extent of allocative inefficiency among some utilities and took no measures to reduce it, it wouldn't be surprising if the past decade's neglect has worsened the gold plating. Worse, the fixation on O&M costs encourages further perverse behavior by LDCs. First, O&M benchmarking leads to greater allocative inefficiency. Second, since lower O&M costs would raise benchmarking scores and revenues (even if the lower O&M costs are a figment of accounting differences), the IR incents LDCs to cut O&M. Third, absent SQR, LDCs could cut O&M enough to degrade reliability, even beyond the socially optimal level.
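The accounting distortion described above can be made concrete with a hypothetical example: two utilities with identical total costs, where the second capitalizes part of its labor, shifting cost from reported O&M into rate base. All figures are invented for illustration:

```python
labor = 40.0      # $M/yr of labor cost at each utility
other_om = 20.0   # $M/yr of non-labor O&M at each utility
capital = 40.0    # $M/yr of capital-related cost; total cost is 100 for both

def reported_om(capitalized_labor_share):
    """O&M as reported when a share of labor is booked to capital
    rather than expensed."""
    return other_om + labor * (1 - capitalized_labor_share)

om_a = reported_om(0.0)    # utility A expenses all of its labor
om_b = reported_om(0.25)   # utility B capitalizes a quarter of its labor

# Total costs are identical, yet an O&M-only benchmark scores B as roughly
# 17 percent "more efficient" than A -- a pure accounting artifact.
assert om_a == 60.0 and om_b == 50.0
```

Total-cost benchmarking is immune to this particular distortion, since the capitalized labor reappears in the capital term.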
So, pre-PBR, utilities generally overcapitalized their network but provided a very high level of reliability. Part 2 of this article, to appear in the August 2009 issue of Public Utilities Fortnightly, examines the near decade of experience since the passage of the 1998 Act and the resulting restructuring of the MEUs. How have LDCs and customers fared in this altered regulatory environment? Answering that question requires examining LDC reliability performance and compliance with their 2000 standards.
1. Ter-Martirosyan, A., “The Effects of Incentive Regulation on Quality of Service in Electricity Markets,” George Washington University Dept. of Economics Working Paper, Presented at International Industrial Organization Conference, Northeastern University, Boston, 2003.
2. Indeed, some regulators have taken this WTP information and explicitly incorporated the customer interruption values into their distribution price regulation. One regulator has specified a goal of achieving a socially optimal level of reliability by recognizing that customer interruption costs must be considered equally with a utility’s capital and OM&A costs in utility planning and regulatory benchmarking.
3. Unfortunately, later inconsistent and ever-changing policies undermined LDCs’ performance for the next decade. Productivity losses wiped out the widespread gains over the decade from 1988 to 1997.
4. F. Cronin, S. Motluk, “Agency Costs of Third-Party Financing and the Effects of Regulatory Change on Utility Costs and Factor Choices,” Annals of Public and Cooperative Economics, 78, No.4, 2007.
5. See Report Of the OEB, PBR Implementation Task Force, May 1999, at: http://www.oeb.gov.on.ca/documents/cases/RP-1999-0034/implemnt.pdf
6. Subsequently, we examined PBRs implemented in the U.K., Australia and Europe. These PBRs generally benchmark on partial costs and examine only a minority of inefficiency. They create sizeable distortions in efficiency rankings: individual utilities could experience errors in rankings of 20, 30 or even 40 percent. See F.J. Cronin & S.A. Motluk, “Flawed Competition Policies: Designing Markets with Biased Costs and Efficiency Benchmarks,” Review of Industrial Organization, Vol.31, No. 1, Aug 2007.
7. The two reliability indicators used universally are the System Average Interruption Duration Index (SAIDI) and the System Average Interruption Frequency Index (SAIFI). SAIDI is the average duration of system outages, calculated by summing the customer-hours of interruption and dividing by the number of customers served. SAIFI is the average frequency of outages, calculated by summing the number of customer interruptions and dividing by the number of customers served.
8. Cronin, F.J., and Motluk, S., “How Effective are M&As in Distribution? Evaluating the Government’s Policy of Using Mergers and Amalgamations to Drive Efficiencies into Ontario’s LDCs,” Electricity Journal, April 2007.
9. Cullmann, A., and C.V. Hirschhausen, "From Transition to Competition: Dynamic Efficiency Analysis of Polish Electricity Distribution Companies," working paper, Dept. of International Economics, DIW Berlin, May 24, 2006; Yu, W., et al., "Incorporating the Price of Quality in Efficiency Analysis: The Case of Electricity Distribution Regulation in the UK," July 2007.