Successes, shortcomings and unfinished business.
Larry Kaufmann is a senior advisor to Pacific Economics Group and Navigant Consulting. He advised the Ontario Energy Board (OEB) in 2007 and 2008 on the establishment of third-generation incentive regulation for electricity distributors in Ontario. The views expressed in this article are those of Dr. Kaufmann and not necessarily those of OEB board members or OEB staff.
In the past several months, Francis Cronin and Stephen Motluck have published three largely critical articles in Fortnightly on the regulation of the electricity distribution industry in Ontario. In “Dealing with Asymmetric Risk” (May 2009), the authors criticized the performance-based regulation (PBR) plan established for the industry in 2008 and proposed an alternate mechanism that they claimed would “minimize data requirements and allow firms to reveal productivity potential.”1 This was followed by the two-part “Ontario’s Failed Experiment – Part 1 and Part 2” (July 2009 and August 2009), which argued that service quality has declined in Ontario because of the PBR framework.
Although regulatory reform in Ontario is a work in progress, there is no basis for concluding that current reform efforts are a “failure.” Indeed, the PBR plan approved in 2008 is almost universally seen as a significant step forward, which will provide a foundation for sustainable incentive regulation in the future. The authors’ evidence on service reliability also is far more ambiguous than they indicate. It is certain, however, that if there has in fact been a decline in service quality in Ontario over the last decade, it can’t be due to the influence of PBR.
The current experience in Ontario can be understood only by first considering its earlier PBR initiatives. Of particular importance is the “third generation incentive regulation mechanism” (3rdGenIRM) approved for electricity distributors in 2008.
PBR in Ontario
Ontario first implemented comprehensive PBR (i.e., PBR that applies to overall regulated rates, which therefore reflects both capital and operating costs) in 2000 for electricity distributors. The PBR proceedings produced a “Rate Handbook,” which presented the recommendations of OEB staff and its advisors on a preferred PBR plan.2
The Rate Handbook recommended a PBR plan where each distributor’s rates were adjusted by an inflation measure minus an X factor. Distributors also were allowed to select their X factor using a menu approach that included six alternative combinations of X factor and allowed return on equity (ROE), with higher values for X associated with higher-allowed ROE levels and vice versa. Companies then would be allowed to select the X factor-ROE combination that most appealed to their risk-incentive preferences.
In January 2000, however, the OEB rejected this approach as too complex for a first-generation PBR plan. It also didn’t believe there was a well-developed analytical foundation supporting the specific menu of X factor and ROE combinations. Instead of this menu approach, the OEB opted for a more conventional PBR plan where the inflation minus X formula used a single X factor. This indexing mechanism then was used to adjust the rates of all electricity distributors.
The X factor had two separate components. The first was a productivity factor of 1.25 percent, based on the estimated total factor productivity (TFP) trend for 48 distributors in the province.3 There’s a well-established theoretical and mathematical basis for linking X factors in PBR indexing plans to the TFP trend of the regulated industry, and this approach has been utilized in many approved plans.4 The X factor also included a 0.25 percent productivity “stretch factor,” designed to reflect the expected acceleration in TFP after companies became subject to the stronger cost-cutting incentives of PBR. The value of this stretch factor was based on judgment rather than any explicit empirical evidence. The final X factor in this plan therefore was equal to 1.5 percent.
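The mechanics of such an indexing plan can be sketched in a few lines. The following is a minimal illustration, not an OEB implementation: the function name and the 2 percent inflation figure are hypothetical, while the 1.25 percent productivity factor and 0.25 percent stretch factor are the first-generation values described above.

```python
# Illustrative sketch of a price-cap ("inflation minus X") rate adjustment,
# using the first-generation Ontario X factor components.

PRODUCTIVITY_FACTOR = 0.0125  # estimated industry TFP trend (1.25 percent)
STRETCH_FACTOR = 0.0025       # judgment-based stretch factor (0.25 percent)
X_FACTOR = PRODUCTIVITY_FACTOR + STRETCH_FACTOR  # 1.5 percent

def adjust_rate(current_rate: float, inflation: float,
                x_factor: float = X_FACTOR) -> float:
    """Apply one year of the inflation-minus-X formula to a rate."""
    return current_rate * (1 + inflation - x_factor)

# Example: with 2 percent measured inflation (hypothetical), a $100 rate
# rises by inflation minus X = 0.5 percent, to $100.50.
new_rate = adjust_rate(100.0, 0.02)
```

The key point the sketch makes concrete is that nothing in the formula depends on the individual company’s own costs; only the external inflation measure and the industry-based X factor enter the adjustment.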
The electricity PBR plan had an intended term of three years, from 2000 to 2002. However, before the plan could run its course, the provincial government imposed a cap on overall retail electric prices in 2001. This cap effectively eliminated any further formula-based price adjustments for distribution services, thereby ending the plan. The industry returned to traditional cost-of-service regulation after the government lifted the rate freeze in 2006.
PBR for electricity distributors next was implemented in a “second generation incentive regulation mechanism” (2ndGenIRM) in an OEB report issued on Dec. 20, 2006. Distribution rates once again would be indexed by an inflation-minus X mechanism. Inflation was measured by the change in an index of economy-wide prices. The X factor was set at 1 percent. This value wasn’t derived from any explicit study, but was considered to be generally consistent with the X factor precedents for energy utility PBR plans in North America.5
The 2nd Generation IRM first took effect in 2007 and essentially was designed as a transitional mechanism until the third-generation rate plan could be established. Formula-based rate adjustments were to be applied between 2007 and 2010 until cost-of-service based rates (called rate “rebasings”) could be set for every distributor in the province.6 These cost-based applications were staggered over a three-year period because there are more than 80 distributors in Ontario and undertaking more than 80 cost-of-service reviews annually isn’t operationally feasible. The 2ndGenIRM remains in effect, and its final application will be in the 2010 rate year.
The design of the third-generation IRM began in October 2007, and the final mechanism was approved by the OEB on Sept. 17, 2008. This process began with a series of working group discussions, comprised of six company representatives, six customer representatives, the Power Workers’ Union (PWU), several OEB staff and this author, working as the main advisor to OEB staff.7 The working group was designed to educate stakeholders and give them an opportunity to participate “in real time” (rather than through formal hearings) in the process of evaluating and crafting regulatory proposals. The working group considered a variety of PBR options, including the menu approach that was proposed and ultimately rejected by the OEB in 2000. The working group again decided not to pursue this option, for reasons to be discussed.
However, while the working group didn’t accept the menu approach, it did agree there should be a degree of flexibility in how PBR is implemented in Ontario. Flexibility is important due to the large number of distributors in the province. These companies vary in a number of ways, including differences in their cost efficiencies, economic activities and customer growth, and expected capital-replacement expenditures. In order to accommodate these diverse circumstances, the OEB-approved PBR framework has several modules added to a core PBR plan. These modules are optional regulatory mechanisms that companies may access according to pre-established rules. In contrast, the core plan is designed to be a stable and rigorous PBR mechanism that applies to all distributors, although this plan is tailored to certain individual company conditions.
The core PBR plan has the following key features:
• A term of four years (one “rebasing” year and three years of index-based rate adjustments);
• No earnings-sharing mechanism;
• An inflation minus X rate-adjustment formula, where inflation is measured by the growth in an economy-wide inflation index;
• An X factor with two components: 1) a productivity factor based on industry TFP trends, which is common for all distributors in Ontario; and 2) a differentiated productivity stretch factor, where companies are assigned one of three possible stretch factor values based on benchmarking evaluations of their operations and maintenance (O&M) cost efficiency;
• The approved value for the industry TFP trend and productivity factor is 0.72 percent; and
• The stretch factors are determined through benchmarking studies. The initial studies, undertaken by Pacific Economics Group (PEG), identified three efficiency cohorts based on two benchmarking models: a unit cost model, which directly compared each distributor’s unit cost performance with the unit costs of a selected peer group; and an econometric cost model, which generated cost predictions for each company based on a variety of business conditions beyond its control. Each company’s actual costs then were compared to its predicted costs, and statistical tests were performed on the difference. If a company’s actual costs were significantly below its predicted costs, it was identified as a significantly superior performer; if its actual costs were significantly above its predicted costs, it was identified as a significantly inferior performer; and if the difference between actual and predicted costs was not statistically significant, it was an average cost performer. Both benchmarking models applied to companies’ operation, maintenance and administration expenses, since it was not possible to obtain reliable capital cost measures for all companies in the sample in the available time.
The first cohort consisted of all distributors that were in the top third on the unit cost benchmarking study and were statistically superior cost performers on the econometric benchmarking model. The third cohort consisted of all distributors in the bottom third on the unit cost benchmarking study that were statistically inferior cost performers on the econometric benchmarking model. All other distributors fell into the second cohort.
A different stretch factor is assigned to each cohort, with relatively more efficient cohorts having a lower stretch factor. The final approved stretch factors are 0.2 percent for the most efficient cohort; 0.4 percent for the intermediate cohort; and 0.6 percent for the least efficient cohort.
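The cohort assignment just described can be sketched as a simple decision rule. This is an illustrative paraphrase of the logic, not PEG’s code: the function and the string labels summarizing each study’s outcome are hypothetical, while the cohort definitions and stretch factor values come from the approved plan.

```python
# Sketch of the stretch factor cohort assignment. Inputs summarize the two
# benchmarking studies: the distributor's tercile on the unit cost study,
# and the statistical-test outcome from the econometric cost model.

STRETCH_FACTORS = {1: 0.002, 2: 0.004, 3: 0.006}  # cohorts 1-3 (0.2/0.4/0.6 percent)

def assign_cohort(unit_cost_tercile: str, econometric_result: str) -> int:
    """unit_cost_tercile: 'top', 'middle' or 'bottom' third on unit costs.
    econometric_result: 'superior', 'average' or 'inferior', i.e., whether
    actual costs differ from predicted costs in a statistically significant way."""
    if unit_cost_tercile == "top" and econometric_result == "superior":
        return 1  # most efficient cohort
    if unit_cost_tercile == "bottom" and econometric_result == "inferior":
        return 3  # least efficient cohort
    return 2      # everyone else

# A top-third performer that is also statistically superior draws the
# lowest stretch factor (0.2 percent).
lowest = STRETCH_FACTORS[assign_cohort("top", "superior")]
```

Note that a company must satisfy both criteria to land in the first or third cohort; a distributor in the top third on unit costs but only average on the econometric test receives the intermediate 0.4 percent stretch factor.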
The PBR framework also creates three modules that distributors may access. The most important is the incremental capital module, which companies can petition to use if their investment requirements during the term of the PBR plan can’t be financed by the revenues generated under the indexing mechanism. The incremental capital module allows distributors to petition for additional rate relief to recover the costs of identified, non-discretionary capital investments.8 The second module is a PBR “off ramp,” designed to protect ratepayers and shareholders from exceptionally high or low earnings outcomes under the PBR plan. The off-ramp module can be accessed by either company managers or the OEB, and triggers a review of the terms of the PBR plan if a company’s earnings are more than 300 basis points above or below its allowed ROE. The third module is an exogenous cost factor. This is a standard feature of most PBR plans, and it may be accessed by either distributors or the OEB to recover the costs of unexpected tax, policy or analogous changes that have impacted distributors’ unit cost but aren’t otherwise reflected in the indexing mechanism.
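The off-ramp module’s trigger condition is simple enough to state precisely. The sketch below is illustrative only (the function name is hypothetical); the 300-basis-point deadband is the approved threshold described above.

```python
# Sketch of the off-ramp trigger: a review of the PBR plan's terms is
# triggered when actual ROE deviates from the allowed ROE by more than
# 300 basis points in either direction.

DEADBAND_BPS = 300

def off_ramp_triggered(actual_roe: float, allowed_roe: float) -> bool:
    """ROE values in percent; True if the earnings deadband is breached."""
    deviation_bps = abs(actual_roe - allowed_roe) * 100  # percent -> basis points
    return deviation_bps > DEADBAND_BPS
```

Because the band is symmetric, the module protects ratepayers from persistent overearning and shareholders from persistent underearning in the same way.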
Far from being viewed as a failure, the approved 3rdGenIRM is widely seen as a success by stakeholders, OEB and its staff. Both the working group process and technical conferences led to a great deal of support for the PBR model. The core-module framework also is viewed as a practical, innovative means of balancing a stable regulatory mechanism with the desire for flexibility. These are significant steps forward, since circumstances such as the rate freeze prevented either the first- or second-generation IRM applications from providing a sustainable basis for PBR. While there were inevitably differences between customers and companies on some issues (especially on the values of the productivity and productivity stretch factors, and the design of the incremental capital module), these were less pronounced than the ratemaking disputes in most North American regulatory proceedings.9
In addition, stakeholders and the OEB agreed that the most significant shortcoming in 3rdGenIRM was the lack of a reliable time series of capital data for Ontario distributors. These data are necessary to develop a longer and more robust set of industry TFP measures. Better capital data also can facilitate more comprehensive benchmarking models that can be used to set future stretch factors. The proceeding ended with a clear understanding of these data limitations, the value of remedying them, and the efforts needed to address the problem. The 3rdGenIRM therefore was effective not only in establishing an appropriate PBR model, but also for pointing the way forward on how best to enhance and refine the PBR framework.
A Menu Approach
In their May 2009 article, Cronin and Motluck criticize the standard approach for developing the terms of index-based PBR plans, which they refer to as “exogenously determined PBR plans,” as well as a number of more specific points regarding the 3rdGenIRM. Their main general concerns bear repeating:
“Due to the principal-agent problem between regulators and the monopolies they regulate, regulators don’t have the information necessary to correctly set parameters in exogenously determined PBR plans. In such an asymmetric environment, determining the appropriate inflation escalator and productivity offset can be complicated, confusing, time consuming and divisive. Often, the necessary data is as difficult, or more difficult, to obtain than the process of determining the firm’s cost of service. Thus, exogenously determined PBR plans often suffer from the same shortcomings as cost of service rate of return plans …” (see “Dealing With Asymmetric Risk”).
There’s no analysis or evidence supporting any of these assertions, which either are incorrect or greatly exaggerated. The first sentence above is inaccurate. The “principal-agent problem” in regulation stems from the difficulty of obtaining accurate cost information from regulated firms when they are subject to cost-based regulation. PBR indexing plans, by contrast, use industry TFP and inflation measures rather than company-specific cost information to calibrate rate-adjustment mechanisms. These industry and economy-wide metrics are external to the regulated utility in question, meaning that no regulated firm can influence the values of these variables through its own actions. Because no regulated firm is able to influence the terms of its PBR rate-adjustment formula, this formula is not affected by the textbook “principal-agent problem.”
The other claims are exaggerated. There’s a well-established and accepted paradigm for setting the terms of PBR-indexing formulas in North America. Implementing this paradigm requires measures of industry TFP and input price trends. While estimating these measures isn’t trivial, they can be developed using techniques that are generally accepted and far less controversial than the typical estimate of a utility’s revenue requirement. The idea that estimating productivity and inflation measures is “more complicated, confusing, time consuming and divisive” than a cost-of-service rate case easily is disproved by examining any proceeding that includes both a cost-based rate rebasing and a PBR plan that adjusts those initial rates for a known term. For example, there were more than 2,000 data requests related to establishing the cost-based cast-off rates that took effect for Boston Gas in late 2003. In the same proceeding, there were fewer than 200 data requests related to the PBR-adjustment formula that adjusted those rates for the following 10 years. The controversies and regulatory burdens associated with setting cost-of-service rates for a single year therefore were approximately 10 times greater than the analogous costs incurred for setting updated rates for 10 years. On a per-annum basis, the cost-of-service portion of the Boston Gas rate case therefore was about 100 times more contested and costly than Boston Gas’s approved PBR plan.10
The authors also don’t accurately convey the PBR experience in Ontario. Most important, they present a detailed description of the menu approach that was proposed for first-generation PBR (pp. 50-51), but omit the critical fact that the OEB rejected this proposal. Knowing whether this proposal ultimately was accepted is material for understanding how a subsequent menu proposal may be received. The authors also wrongly assert that a menu approach was put forward “to address the Staff request” for such a proposal (p. 52). On the contrary, menu approaches were discussed during the working group sessions and at the first technical conference, but there was little appetite for such a mechanism among customer groups, companies or OEB staff. The overwhelming, but not unanimous, preference was to pursue a core-module framework instead, built around a standard inflation-minus-X indexing core plan.11
It is true that, during the consultation process, PWU did advocate the menu approach (p. 52). However, this proposal was rejected during the review process for two sound reasons. First, the proposal assumed that the PBR plan would include an earnings sharing mechanism (ESM). While there are some potential advantages to ESMs, there also are clear disadvantages. For example, ESMs require annual earnings computations, which raise regulatory costs and may lead to contentious mini cost-of-service reviews of how earnings were calculated. ESMs also weaken firms’ incentives to cut costs, because they will immediately share a portion of these cost reductions with ratepayers. For these and other reasons, the OEB didn’t approve an ESM as part of the 3rdGenIRM. This decision essentially eliminates the viability of the authors’ proposed menu alternative.
More fundamentally, during the working group consultations, there were significant unanswered questions regarding the design of the authors’ proposed menu. One basic concern is that it never was explained how permitting companies to choose from a menu would necessarily benefit customers. If companies are presented with a variety of regulatory options, they clearly will select the alternative that is expected to be most profitable. However, this isn’t sufficient for an appropriately-designed PBR plan, which should lead to win-win outcomes for companies and customers. It was far from clear that the proposed menu would lead to win-win outcomes, and the authors never presented any persuasive analysis or evidence to support this claim.
On a related matter, it wasn’t clear how the specific options on the menu were designed, or whether the X factor-ROE tradeoff was reasonable. This is a critical issue. In any menu approach, the menu options must be calibrated so that the party selecting from the menu will be induced to select an alternative that benefits customers and shareholders alike. Whether these incentives are created depends on the linkages between the variables that are paired for each option, as well as the linkages among the different items on the menu. Evaluating these relationships and their implications for customer and shareholder welfare likely will be complex, and these issues were not addressed in the proposal. Design considerations like these illustrate why menu approaches toward regulation are appealing in theory but much less common in practice.12
The Cronin and Motluck article also discusses “yardstick competition” as another option for regulators, one the authors claim would share the menu approach’s salutary effects of accommodating diversity and allowing firms to reveal their productivity potential. This option raises implementation issues at least as complex as those associated with a menu approach. For example, yardstick mechanisms often rely directly on frontier cost level benchmarking exercises and attempt to move regulated firms towards the cost frontier by a specified date. These cost frontier benchmarking studies often are extremely controversial and can lead to very high, and unrealistic, X factors in practice. A salient example is the first such application of a frontier benchmarking study that was applied to distributors in the Netherlands, which led to proposed X factors as high as 8 percent. These benchmarking studies also differ greatly from the benchmarking models used by PEG and that were integrated into 3rdGenIRM; PEG’s models benchmarked distributors relative to the mean performance in the industry, rather than relative to an estimated frontier.
In sum, the authors exaggerate both the problems of the PBR approach approved in Ontario and the practicality of their proposed alternatives. Menu approaches now have been proposed and rejected at least twice in Ontario, largely because their proponents didn’t explain how the menu was designed or how these mechanisms would benefit customers as well as companies. These issues are complex but need to be confronted, or the next menu proposal in Ontario will almost certainly meet the same fate.
PBR and Service Quality
In their two-part “Ontario’s Failed Experiment,” Cronin and Motluck concluded that PBR in Ontario is a “failure” because of reductions in service quality. This conclusion is based on a comparison of pre-PBR service-reliability levels for the 1991 through ’99 period with post-PBR service reliability in 2000 through 2007. In addition to indicting PBR generally, the authors blame the way in which PBR was implemented in Ontario after 2000. They write that “the OEB’s shift from total-productivity and total-cost benchmarking in the 1999 through 2000 period to a narrow focus on benchmarking O&M expenditures, unadjusted for differing labor capitalization or reliability performance, greatly increased the possibility of unintended (reliability) consequences.”13
Regarding Ontario’s reliability experience, comparing 1991 through ’99 with 2000 through ’07 figures is far less straightforward than the authors suggest. One factor complicating historical comparisons is that measured system average interruption frequency index (SAIFI) and system average interruption duration index (SAIDI) values are affected by a number of business conditions in a distributor’s service territory that are beyond managerial control. These variables include such weather conditions as strong winds, storms, lightning, and extreme heat and cold. Not only do these weather conditions have a substantial impact on measured reliability, but they can fluctuate wildly from year to year. Since measured reliability often is impacted by volatile and unpredictable weather variables, caution always must be exercised when making simple reliability comparisons across two points in time.
Even more important, there have been significant changes in the technology used to measure and collect reliability information. In 1991, few if any utilities used automated measurement systems to record reliability data. Now, automated systems are more widespread and becoming more common every year. Inevitably, when distributors switch from manual to automated systems, they find their measured frequency and duration of outages increase. This implies that manual systems for measuring interruption data tend to miss or undercount the frequency and duration of outages. The fact that distributors increasingly rely on automated outage management systems probably accounts for a significant share of the measured increase in SAIFI and SAIDI over the 1991 through 2007 period. To the extent this is true, these increases are evidence only that reliability is being measured more accurately rather than declining. This can’t be definitively established one way or another, but the fact that the technology used to measure reliability has been changing over time does indicate that the methods used by OEB staff to evaluate service quality (i.e., comparing average reliability measures over a three-year period relative to the preceding three years) generally will yield more accurate inferences on underlying performance than the authors’ approach of comparing reliability between more distant points in time.
But while there is ambiguity regarding reliability trends, there’s no doubt that the authors are incorrect when they attribute reliability declines between 2000 and 2007 to the impact of PBR. This is true for a simple reason: The industry wasn’t subject to PBR between 2000 and 2007. First-generation incentive regulation lasted only from 2000 to 2001. Second generation PBR did not begin until 2007. The industry therefore wasn’t subject to PBR for most of the 2000 through 2007 period.14
The claims of declining reliability due to a “shift from total-productivity and total-cost benchmarking in the 1999 through 2000 period to a narrow focus on benchmarking O&M expenditures” also are untrue. First, total cost benchmarking wasn’t applied at all in 1999 through 2000. Second, while TFP trends were used to calibrate the X factor in the first-generation plan, TFP was used in the same way in the third-generation plan. Finally, “benchmarking O&M expenditures” played no role whatsoever in Ontario regulation until the establishment of the 3rdGenIRM. This plan wasn’t approved until September 2008 and therefore clearly is irrelevant to the 2000 through 2007 period the authors examine.15, 16
Cronin and Motluck are correct in saying that service quality is important and should at some point be integrated more formally into Ontario’s PBR framework. It was understood, however, that service quality wouldn’t be part of 3rdGenIRM for a number of reasons, including data issues and the prioritization of resources and projects within the OEB. Cronin and Motluck correctly emphasize the importance of maintaining quality, but this objective can be better achieved through appropriate refinements of the regulatory framework, not by making insupportable allegations about Ontario’s PBR experience.
1. Cronin, F., S. Motluck and J. Kwik, “Dealing With Asymmetric Risk,” Public Utilities Fortnightly, May 2009, p. 47.
2. During this consultation, Dr. Cronin advised the OEB Staff and Mr. Motluck was a key OEB staff member. Judy Kwik (who co-authored the May 2009 article, but not the July or August articles) was the key OEB staff member during the development of first generation incentive regulation.
3. The TFP trend over the most recent 10-year period was estimated to be 0.86 percent. The TFP trend over the most recent five-year period was estimated to be 2.05 percent. The OEB believed that some recognition of the industry’s most recent productivity experience should be reflected in the X factor. It therefore applied a two-thirds weight to the ten-year TFP trend, and a one-third weight to the five-year TFP trend. This weighted average of industry TFP trends led to a productivity factor of 1.25 percent.
4. For further details on the relationship between X factors and TFP trends, see L. Kaufmann et al., “Calibrating Rate Indexing Mechanisms for Third Generation Incentive Regulation in Ontario: Report to the Ontario Energy Board,” February 2008.
5. Some distributors also proposed that there be an additional component of the indexing mechanism to recover the costs of incremental capital spending. In its report, “the Board concludes that there is no need for a capital investment factor in this 2nd Generation IRM plan. Those distributors with an inordinate capital spending program can be accommodated through rebasing” (Report of the Board on Cost of Capital and 2nd Generation Incentive Regulation for Ontario’s Electricity Distributors, Dec. 20, 2006, p. 37).
6. The rate adjustments under the indexing mechanism apply to all distributors for the 2007 rate year. For 2008, index-based rate adjustments apply to those distributors that have not applied for rate rebasing. For the 2009 rate year, the mechanism applies to the remaining distributors that have not yet applied for, or been subject to, rebasing.
7. Throughout the regulatory processes for 2nd and 3rdGenIRM, Dr. Cronin advised the Power Workers’ Union.
8. Filings to utilize the capital investment module only could be submitted under certain conditions that were specified in the OEB reports. These conditions included a materiality threshold, which was designed to prevent “double counting” of capital expenditures through the core PBR rate-adjustment mechanism and the funds allowed under the module. This threshold was company-specific and depended on a formula presented in the Sept. 17, 2008 OEB report.
9. For example, I originally proposed a value of 0.88 percent for the productivity factor; the companies proposed a value of 0.55 percent for the productivity factor.
10. I was the witness testifying in support of Boston Gas’s PBR plan in 2003.
11. The article also contains other statements that are either factually inaccurate or presented in an unclear and confusing manner. For example, in the section headed, “Third Generation PBR” on p. 52, the authors mention a “second term” PBR plan, and in the next sentence say the OEB did not have appropriate capital data and so “opted for an O&M based efficiency comparison,” and in the following sentence say, “because of these shortcomings, the OEB used a literature-reviewed X factor.” As noted above, it is true that the OEB did use an X factor consistent with precedent rather than relying on any independent empirical analysis in 2ndGenIRM, but it is not true that O&M cost benchmarking played any role in 2ndGenIRM. In addition, on p. 52 the authors assert that “subsequent research which included both O&M and monetized values of capital similar to that employed in the OEB’s first generation, found a widespread negative rate of productivity growth over the 2000 to 2006 period.” In fact, in the TFP study that was approved by the OEB, there was a small but positive rate of TFP growth for the Ontario and U.S. industries over the most recently available period, from 2002 to 2006; no study was in fact published that estimated TFP for Ontario over the 2000-2006 period using monetary capital values, since monetary-based capital input data for 2000-2001 are currently unavailable.
12. The authors note that the Federal Communications Commission (FCC) did implement a menu approach for local exchange carriers subject to their jurisdiction, but this experiment was not continued when that PBR plan ended and a new plan was established.
13. Cronin, F. and S. Motluck, “Ontario’s Failed Experiment (Part 1),” Public Utilities Fortnightly, July 2009, p. 42.
14. It is true that the industry was subject to a government-imposed rate freeze for much of this period, which may be viewed as an alternative to traditional cost-of-service regulation. However, this rate freeze clearly differs from the “performance-based” or “incentive regulation” plans that the OEB has explicitly approved and which Cronin and Motluck refer to repeatedly in their articles. The 2000-2007 period was not characterized by a single regulatory approach, let alone a PBR approach like that approved in 3rdGenIRM.
15. The authors make other factually incorrect statements in their July and August articles. For example, they write that “(t)he OEB is willing to employ the 2002 and 2003 data in its (O&M) cost benchmarking that would determine each LDC’s future annual revenue. Yet, the OEB reports that it will not use this same data for its reliability-trend analysis since this data ‘may not have been reported consistently or calculated properly.’ If the data is good enough for rate setting, it should be sufficient for trend analysis” (August 2009, p. 56). In fact, the O&M benchmarking models that were used to set stretch factors in 3rdGenIRM did not use the reliability metrics as independent variables in the econometric model; these reliability data therefore play no role in the O&M cost benchmarking used for rate setting.
16. At other points in the July and August articles, the authors also imply that a policy encouraging mergers in the province was designed to reduce O&M costs. It is true that mergers were encouraged, but it does not follow that this policy was only designed to cut O&M costs. Mergers were considered to be in the public interest because they lead to the realization of economies of scale and therefore lower unit costs. These unit cost reductions can be achieved by O&M cost-cutting, but they can also be realized by economies related to capital costs, such as more efficient planning, procurement and installation of capital goods and construction services. These capital-related economies are likely to be particularly important in mergers between very small distributors, which are still prevalent in Ontario. Thus it should not be assumed that mergers are exclusively devoted to realizing O&M efficiencies.