Ontario's Failed Experiment (Part 1)
Reliability declines after 10 years of incentive regulation.
Under the Energy Competition Act, MEUs were to be corporatized and recapitalized, placed under municipal shareholder control with an eye to possible sale, and brought under the regulatory oversight of the OEB and its yet-to-be-determined PBR regime. Not only were ownership and capital structure up in the air, but MEUs were also subject to a new regulator and as-yet-unknown rules. The restructuring of the MEU sector alone was arguably one of the most complex regulatory restructurings in the world. 3
Faced with the recent transfer of some 300 electricity distributors to its authority, the OEB instituted a process to structure a suitable regulatory framework. These LDCs were highly diverse, ranging in size from a few hundred customers to hundreds of thousands. Some stakeholders, including utilities, held that initial efficiency levels varied significantly, due primarily to overcapitalized rate bases among some utilities, particularly those making use of third-party financing (i.e., contributed capital). These stakeholders contended that MEUs relying heavily on contributed capital had distorted their input mix, using too much capital and gold-plating their networks.
Research data show that only about 20 percent of firms were on both the technical and allocative efficiency frontiers; 80 percent of utilities were interior performers. The average MEU was about 13 percent less technically efficient than the best-practice MEUs, but about 30 percent less efficient allocatively, i.e., in having the right mix of inputs given relative prices. Among the worst utilities, the extent of “gold-plating” was even more notable. 4
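The distinction between technical and allocative efficiency follows the standard Farrell decomposition: technical efficiency measures how far inputs could be shrunk proportionally while still producing the same output, and allocative efficiency measures how far costs exceed the minimum because the input *mix* is wrong at prevailing prices. A minimal sketch with purely illustrative numbers (not Ontario data; the benchmark points are assumed, not estimated):

```python
# Farrell decomposition of cost efficiency into technical and allocative
# components. All inputs/prices below are hypothetical, for illustration only.

def farrell_decomposition(x_obs, x_tech, x_min, prices):
    """x_obs:  observed input bundle
    x_tech: radial contraction of x_obs onto the production frontier
            (same input mix, scaled down by theta)
    x_min:  cost-minimizing input bundle for the same output
    prices: input prices"""
    cost = lambda v: sum(p * q for p, q in zip(prices, v))
    te = cost(x_tech) / cost(x_obs)  # equals theta: mix is unchanged
    ae = cost(x_min) / cost(x_tech)  # cost penalty of the wrong mix
    return te, ae, te * ae           # cost efficiency = TE x AE

prices = (2.0, 1.0)      # (capital, labor) prices, assumed
x_obs  = (10.0, 4.0)     # observed: cost = 24
x_tech = (8.0, 3.2)      # frontier point at theta = 0.8: cost = 19.2
x_min  = (4.0, 8.0)      # cheapest feasible mix: cost = 16
te, ae, ce = farrell_decomposition(x_obs, x_tech, x_min, prices)
# te = 0.80, ae ~= 0.83, ce ~= 0.67
```

In this toy case the firm could cut all inputs by 20 percent (technical inefficiency) and would save a further 17 percent by shifting from capital toward labor (allocative inefficiency), mirroring the pattern the research found: the allocative shortfall, driven by excess capital, exceeds the technical one.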
The OEB’s stakeholder implementation task force noted the dilemma involved in moving to PBR. 5 While a utility would face stronger incentives to eliminate inefficiencies likely accumulated under cost-of-service regulation, the regulator couldn’t easily quantify the extent of that inefficiency. Some participants pointed to the “yardstick competition” schemes being implemented in the United Kingdom, continental Europe, and Australia and argued that Ontario should adopt such models. 6 But given the government’s tight deadline (the market was scheduled to open in November 2000, though this was later delayed to May 2002), these critical issues couldn’t be analyzed in the time permitted.
Unlike efficiency levels, on which views differed, a general consensus prevailed that the gold plating, if real, had produced a nearly ubiquitous, highly reliable system. Two common industry standards for measuring network performance are the System Average Interruption Duration Index (SAIDI) and the System Average Interruption Frequency Index (SAIFI). 7 Industry survey data indicated that SAIDI and SAIFI for municipal utilities ranged from about 1.0 to about 1.5. The OEB’s Task Force on PBR Implementation also collected reliability data from Ontario MEUs. Its findings, covering utilities serving more than 80 percent of distribution customers, were similar. In fact, this performance was significantly better than that of most European and North American peers.
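Both indices are simple customer-weighted averages over a reporting period: SAIDI is total customer-hours (or minutes) of interruption divided by customers served, and SAIFI is total customer interruptions divided by customers served. A minimal sketch with hypothetical outage records (the formal definitions are standardized in IEEE Std 1366):

```python
# Minimal sketch of SAIDI/SAIFI from a list of outage events.
# Outage numbers below are hypothetical, for illustration only.

def saidi_saifi(outages, customers_served):
    """outages: list of (customers_interrupted, duration_hours) per event."""
    customer_hours = sum(n * d for n, d in outages)        # total customer-hours out
    customer_interruptions = sum(n for n, _ in outages)    # total customer interruptions
    saidi = customer_hours / customers_served          # avg outage hours per customer
    saifi = customer_interruptions / customers_served  # avg interruptions per customer
    return saidi, saifi

# Three events over a year for a utility serving 100,000 customers (assumed):
outages = [(5_000, 3.0), (20_000, 0.5), (1_000, 6.0)]
saidi, saifi = saidi_saifi(outages, 100_000)
# saidi = 0.31 hours/customer, saifi = 0.26 interruptions/customer
```

Note that the two indices move independently: many short outages raise SAIFI without much affecting SAIDI, while a few long ones do the reverse, which is why regulators typically track both.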
Ultimately, the OEB ordered the implementation of PBR. In the 2000 Rate Handbook, the board spelled out its reasons for regulating service and reliability performance:
PBR provides the electricity distribution utilities with incentives for economic efficiency gains. To discourage utilities from sacrificing service quality in pursuing these economic incentives, service quality performance measures are included in the PBR plan. Utilities will be expected