Ontario's Failed Experiment (Part 2)
Service quality suffers under PBR framework.
be weighted-averaged by customer numbers.
8. Staff Discussion Paper, fn 7, p.25.
9. Reliability data spanning 2000 to 2007 have been assembled for each utility from the Board’s annual PBR filings for 2000 and 2001, and from the RRR data for 2002 to 2007. We conducted time-series statistical tests to examine whether the pre-2004 reliability data differ from the 2004-to-2006 data used by the Board. We were unable to reject the null hypothesis of no difference; i.e., for statistical purposes, the data appear to come from the same universe.
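The test described in footnote 9 can be illustrated with a minimal sketch. The function below computes a Welch two-sample t statistic by hand; the SAIDI values are invented for illustration only and are not the utilities' actual data.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (unequal variances)."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / na + var_b / nb)

# Hypothetical annual SAIDI values (hours per customer), pre- and post-2004.
pre_2004 = [1.4, 1.7, 1.5, 1.6]
post_2004 = [1.5, 1.6, 1.8]
t = welch_t(pre_2004, post_2004)
# A |t| well below the critical value (~2 at conventional significance
# levels) fails to reject the null hypothesis of no difference.
```

A failure to reject, as the footnote reports, means the pre- and post-2004 observations can legitimately be pooled as one sample.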
10. According to the Board, “Utilities that have at least 3 years of data…should, at minimum, remain within the range of their historic performance.” (7-6, 7-7) In this instance, the average for municipal utilities during PBR should be no higher than 1.59 for SAIDI and 1.84 for SAIFI. These standards are based on a customer-weighted mean of upper-boundary performances during the prior three years.
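The customer-weighted mean underlying those standards is simple arithmetic, sketched below. The upper-boundary SAIDI values and customer counts are hypothetical placeholders, not the figures behind the Board's 1.59/1.84 thresholds.

```python
def customer_weighted_mean(values, customers):
    """Average per-utility index values, weighted by customer count."""
    total_customers = sum(customers)
    return sum(v * c for v, c in zip(values, customers)) / total_customers

# Hypothetical upper-boundary SAIDI values for three utilities,
# and their (made-up) customer counts.
saidi_upper = [2.0, 1.5, 1.2]
customers = [10_000, 50_000, 40_000]
benchmark = customer_weighted_mean(saidi_upper, customers)  # 1.43
```

Weighting by customers rather than averaging per-utility values means large distributors dominate the standard, which is why a few big LDCs can move the threshold.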
11. Staff Discussion Paper, fn 7, p.25.
12. Christensen Associates, Methods and Study Findings: Comparators and Cohorts Study for 2006 EDR, October 2005.
13. Pacific Economics Group, Benchmarking the Costs of Ontario Power Distributors, April 2007.
14. Calibrating Rate Indexing Mechanisms for Third Generation Incentive Regulation in Ontario, February 2008.
15. Benchmarking the Costs of Ontario Power Distributors, March 2008, p.43.
16. Id. at p.36.
17. Fortunately, we do have filings from prior years for this LDC. It is clear that the post-PBR period, and in particular the last few years, has seen a very significant deterioration in its reliability. This LDC had a relatively good reliability record pre-PBR. In recent OEB proceedings, this LDC (and others) voiced concern that budget constraints prevented replacing substantial assets deployed decades ago. Our statistical model of reliability, O&M, and additions finds that such underinvestment degrades reliability.
18. Missing data and similar gaps also occur in the 2002-2006 data the Board used for its cost comparison and benchmarking. This is not, in itself, enough to judge the data unusable. Moreover, data identified as inconsistent for a particular LDC can be easily verified or corrected with that LDC.
19. See Cronin, F. and Motluk, S., “An Analytical Look at Service Reliability Degradation.”
20. Unfortunately, the Government’s pronouncements, proposals, and policies often have been inconsistent, misguided, and counterproductive. These include: Bill 35, the 1998 Energy Competition Act; the 2000 OEB PBR Decision (OEB, 2000a); Bill 100, the Minister’s Directive to the OEB, and the OEB Decision in the Proceedings on the Minister’s Directive in 2000 (OEB, 2000b); 2002’s Action Plan and Bill 210; the February 2004 OEB Discussion Paper on Further Efficiencies (OEB, 2004); Ontario Ministry of Energy, Electricity Transmission and Distribution in Ontario — A Look Ahead, Dec. 21, 2004 (EDTO); Christensen Associates, Methods and Study Findings: Comparators and Cohorts Study for 2006 EDR, October 2005; Pacific Economics Group, Benchmarking the Costs of Ontario Power Distributors, April 2007; and finally, Calibrating Rate Indexing Mechanisms for Third Generation Incentive Regulation in Ontario, February 2008.