Has the one-day-in-10-years criterion outlived its usefulness?
James F. Wilson is an economist and principal of Wilson Energy Economics, and also is an affiliate of LECG, LLC. Email him at firstname.lastname@example.org. This article expresses the author’s views and not necessarily those of any client.
Electric utilities and regional transmission organizations (RTOs) in the United States aim to have enough electric generating capacity to meet anticipated peak loads with a reserve margin for reliability. The reserve margins usually are set to meet the widely accepted “one day in 10 years” (1-in-10) resource adequacy criterion, under which the expected frequency of having to curtail firm load due to inadequate capacity should be no greater than once every 10 years.
The 1-in-10 criterion always has been highly conservative—perhaps an order of magnitude more stringent than the marginal benefits of incremental capacity can justify—and capacity planning has been even more conservative in practice. Indeed, economists have questioned the 1-in-10 criterion for many decades.1
Resource adequacy practices based on the 1-in-10 criterion perhaps make more sense for utility planners and regulatory authorities, who would have to answer for any curtailments that occur, than for the consumers who are directly affected if reliability isn’t maintained, but who also bear the cost of the additional capacity.
Marginal Costs and Benefits
The 1-in-10 resource adequacy criterion is economically efficient if it calls for an amount of capacity that reasonably balances the incremental costs and benefits of additional capacity. Under this principle, more capacity should be built as long as its incremental cost is exceeded by the anticipated incremental benefit.
The cost of incremental capacity is the annualized cost to build and maintain the most economical type of capacity, less the amount of those costs the plant can be expected to offset through sales of energy and ancillary services in the wholesale markets. It’s generally considered that gas-fired combustion turbines represent the cheapest type of capacity, and the type that would be built to meet an incremental need for capacity for reliability.
The incremental benefit of holding more capacity for reliability results from reducing curtailment due to shortages. This potential benefit depends upon the anticipated frequency of such outages (the loss-of-load expectation, or LOLE) and the cost of outages to the electricity consumers who are curtailed (often called the value of lost load or VOLL). Additional capacity also can contribute to lower market prices for energy and ancillary services, potentially an added benefit from the consumer’s perspective. However, the last increments of capacity built to satisfy the 1-in-10 criterion (or any criterion leading to a low frequency of outages) will run very infrequently and have little, if any, impact on these prices.
When comparing incremental costs and benefits in this manner, the 1-in-10 criterion appears to be extremely conservative, calling for a much higher level of capacity than is justified by the economics. With an LOLE of only 0.1 outages per year (as implied by the 1-in-10 criterion), the incremental cost of capacity exceeds the incremental benefits by a wide margin across a range of reasonable assumptions. Estimates of the cost of capacity and value of service to customers suggest that a balancing of marginal cost and marginal benefit would require an outage frequency substantially greater than 1-in-10.
The benefits of incremental capacity depend on VOLL calculations. The impacts of outages on business and residential customers include loss of productivity, potential damage to electrical devices, inconvenience or discomfort due to loss of lighting, cooling or heating, spoilage of refrigerated goods, waste due to interruption of a manufacturing process, traffic snarls or accidents due to inoperable traffic lights, or even missing a favorite TV show, to name just a few. Outages also can be contributing factors to injuries or deaths. The average impact of outages on electricity customers often is quantified as VOLL, expressed in dollars per megawatt hour (MWh) of curtailed load.
Of course, the cost of an outage will vary for each customer and depends upon the customer’s uses for electricity, the circumstances under which the outage occurred, the duration of the outage, whether there is any advance warning, and other factors. For some customers, a very high level of reliability is desired, reflecting the nature of the facility or uses of electricity. Many customers desiring higher electric service reliability self-provide it, by installing on-site backup generation, uninterruptible power supply (UPS) systems, or other approaches. To the extent such customers are protected from the impacts of electric service outages, their VOLLs for utility system reliability will be much lower, as suggested by a recent review of outage costs by Lawrence Berkeley National Laboratory (LBL).2
In addition, utilities identify essential-use customers who (along with other customers fortunate to share the same circuits) are exempt from rotating outages. Essential-use customers typically include hospitals and nursing homes, prisons, police and fire stations, radio and TV stations, some water and sewage facilities, telephone switching stations, and emergency management and 911 systems.3 In estimating a VOLL pertinent to rotating outages, the value of lost load for essential-use customers isn’t relevant.
It also can be assumed that customers adapt to some extent to the level of reliability they’re accustomed to receiving, and these adaptations reduce the exposure to the impacts of outages. In addition to adaptations such as self-provision of reliability, if they suffer frequent service disruptions, customers will be more likely to add battery backup systems to their computers, for example, or to at least set their software applications to auto-save documents.
A survey of the literature for ISO New England concluded that a wide range of values for VOLL, from $2,400/MWh to $20,000/MWh, can be “deemed justified by some source in the literature.”4 The Midwest Independent Transmission System Operator, the RTO for a large region of the Midwest, uses VOLL as a parameter in its ancillary services, and recently has set it to $3,500/MWh. Estimates of VOLL for the New York Independent System Operator in 2004 identified a lower range of $1,000 to $2,500/MWh and a higher range of $3,000 to $5,000/MWh.5 Other surveys, including the LBL outage cost review, also suggested a very wide range, with lower values for residential customers and the highest values for small commercial and industrial customers. However, outage cost surveys generally don’t account for whether customers with the highest VOLL have installed back-up power. A U.S. Department of Energy report in 2006 stated that a VOLL representing an average value in the range of $2,000 to $5,000/MWh is the “accepted industry practice.”6 Based on these and other reviews, values can range from $2,000 to $20,000/MWh, but values in the $3,000 to $5,000/MWh range are considered most appropriate for analysis.
Capacity reserve costs, including the net cost of new capacity (net CONE), also factor into the 1-in-10 criterion’s economic analysis. The marginal cost of additional reserve capacity can be represented by the annualized cost of building and maintaining the least expensive form of reliable peaking capacity, usually considered to be a gas-fired combustion turbine. Values recently developed by PJM for use in its reliability pricing model (RPM) capacity mechanism can be used. While marginal capacity resources cannot expect much in the way of energy and ancillary services earnings, estimates of such net earnings reduce the cost of capacity and therefore factor into the analysis. The values used by PJM during the past year have ranged from roughly $70,000/MW-year to $130,000/MW-year on a levelized basis, with the range primarily reflecting the timing of the cost estimate (with lower estimates from 2005 and 2006, and higher estimates from 2008) and also location.7 Energy and ancillary services earnings are estimated to range from approximately $10,000/MW-year to $50,000/MW-year in various locations in recent years. This suggests a net capacity cost (net CONE) range from about $40,000/MW-year to $120,000/MW-year. While the upper end of this range reflects the more recent data (but before the impacts of the 2009 recession on these costs), PJM’s capacity auction in 2008 cleared at a price close to the low end of this range, and its 2010 auction cleared at an even lower price.
Benefit vs. Cost
If the 1-in-10 criterion is being met (i.e., the LOLE totals 0.1 events or less per year), the last MW of reserve capacity has a 10-percent chance of being needed to avoid or reduce an outage in any year. To determine the potential lost load precluded by an incremental MW (i.e., to determine its marginal benefit), the appropriate geographic scope of the analysis must be identified. An incremental MW helps avoid or reduce load loss in all areas to which it is likely to be incrementally deliverable in the peak hours when the load loss might occur; so an incremental MW located within a transmission-constrained subarea helps avoid curtailment both within and outside the constrained area, while a MW located outside the constrained area cannot be counted on to help reduce load loss within the constrained area.
An assumption regarding the average duration of an outage also is required. A review of hourly load shapes in a few areas of the country suggests roughly five hours as the typical duration of a rotating outage due to capacity shortages: on the hot summer weekdays most likely to experience extreme loads, load levels tend to rise during the morning and afternoon and fall during the evening. With these assumptions, if a region is at 1-in-10, an incremental MW saves, in $/MW/year:
0.1 (expected events/year) x 5 (hours/event) x VOLL ($/MWh curtailed).
In general, the benefit of the last megawatt of capacity will equal the LOLE times the expected hours of operation during the typical outage times the VOLL. If an optimal level of resource adequacy is being provided, marginal cost equals marginal benefit, so:
Net CONE ($/MW-year) = LOLE x hours/event x VOLL.
Therefore, the optimal LOLE equals Net CONE divided by (VOLL x 5), assuming five hours per event. Table 1 shows the optimal LOLE based on various VOLL and capital cost assumptions.
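The balancing condition above can be sketched in a few lines of Python. The Net CONE and VOLL values below are the illustrative ranges quoted in the text, not necessarily the exact entries in Table 1:

```python
def optimal_lole(net_cone, voll, hours_per_event=5):
    """Outage frequency (events/year) at which the marginal cost of
    capacity equals its marginal benefit: Net CONE = LOLE x hours x VOLL."""
    return net_cone / (voll * hours_per_event)

# Net CONE in $/MW-year and VOLL in $/MWh, per the ranges in the text.
for net_cone in (40_000, 80_000, 120_000):
    for voll in (3_000, 5_000, 20_000):
        lole = optimal_lole(net_cone, voll)
        print(f"Net CONE ${net_cone:>7,}/MW-yr, VOLL ${voll:>6,}/MWh "
              f"-> optimal LOLE {lole:5.2f}/yr")
```

Even the most favorable combination, the lowest Net CONE paired with the extreme $20,000/MWh VOLL, yields an optimal LOLE of 0.4 events per year, four times the 1-in-10 frequency.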
Most of the combinations of assumptions suggest an optimal LOLE in excess of one event per year. Only under the lowest Net CONE assumption, and the extreme value for VOLL ($20,000/MWh), is the implied LOLE value 0.4/year, which is still four times more frequent than one day in 10 years. Assuming a typical outage duration of less than five hours also would raise the optimal LOLE. In terms of the “nines” reliability measure, frequently used in such industries as telecom and information technology (e.g., five nines being 99.999 percent), the range of estimated optimal values is 2.2 to 3.6 nines, compared to 4.2 nines for the 1-in-10 criterion. This analysis suggests that 1-in-10 is roughly an order of magnitude more stringent than the criterion that would provide the optimal level of resource adequacy. Put another way, taking into account both the impact of outages and the cost of capacity, it would be more economical for electricity consumers if capacity were planned for approximately one resource-shortage outage per system per year, rather than one per decade.
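The “nines” conversion can be checked with a short sketch, assuming (as in the text) five hours of curtailment per event and 8,760 hours in a year. The 0.4 and 12 events/year inputs are assumed here as the most and least stringent end-points implied by the Net CONE and VOLL ranges discussed above:

```python
import math

HOURS_PER_YEAR = 8760

def nines(lole, hours_per_event=5):
    """Reliability in 'nines' implied by an LOLE of `lole` events/year,
    each event assumed to curtail load for hours_per_event hours."""
    return -math.log10(lole * hours_per_event / HOURS_PER_YEAR)

print(f"1-in-10 criterion (LOLE 0.1/yr): {nines(0.1):.1f} nines")
print(f"optimal LOLE of 0.4/yr:          {nines(0.4):.1f} nines")
print(f"optimal LOLE of 12/yr:           {nines(12):.1f} nines")
```

The results, about 4.2, 3.6, and 2.2 nines respectively, match the figures cited above.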
1-in-10: The Customer’s Perspective
While “(not more than) one day in 10 years” could be interpreted as a reliability pledge to each and every customer, the 1-in-10 criterion generally is interpreted as pertaining to the frequency of curtailment or load loss on an electrical system. However, when outages due to insufficient resources occur, typically only a small fraction of load must be curtailed to bring the system into balance. Consequently, only a small subset of customers is affected each time an outage occurs, and the frequency with which any individual customer is curtailed will be much lower than the system-wide outage frequency.
The frequency of curtailment for the average customer can be roughly estimated. If the typical outage lasts five hours, during this time an average of 2 percent of a system’s firm load is curtailed in each hour, and the curtailment is rotated hourly, then in total 10 percent of the customer load is curtailed during each outage. These estimates are conservative and roughly based on an examination of hourly load shapes; more likely, one rotating outage event would affect less than 10 percent of a system’s customers. If it’s further assumed that 50 percent of the customers are, or share circuits with, essential-use customers,8 and are therefore exempt from curtailment, the curtailment must be imposed on the remaining 50 percent of customers. With these assumptions, the exposed customers would be curtailed once every five outage events on average. Thus, 1-in-10 for a system translates into roughly one hour of outage every 50 years for the average customer exposed to such outages.
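The arithmetic behind this estimate can be traced step by step; every parameter value is one of the rough assumptions stated above:

```python
# Rough assumptions from the text:
system_lole = 0.1       # outage events per year under 1-in-10
hours_per_event = 5     # typical rotating-outage duration, in hours
shed_per_hour = 0.02    # 2 percent of firm load curtailed each hour
exempt_share = 0.5      # load exempt as (or sharing circuits with)
                        # essential-use customers

# Hourly rotation over five hours curtails 10 percent of total load...
shed_per_event = hours_per_event * shed_per_hour

# ...all of it borne by the non-exempt half, so 20 percent of exposed
# customers are curtailed (for one hour each) per event.
exposed_hit_per_event = shed_per_event / (1 - exempt_share)

# One curtailment every 1/0.2 = 5 events; events arrive every 10 years.
years_between_curtailments = (1 / exposed_hit_per_event) / system_lole
print(years_between_curtailments)  # about 50 years between curtailments
```

This reproduces the one-hour-of-outage-every-50-years figure for the average exposed customer.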
While the 1-in-10 criterion might result in a risk of curtailment to the average customer of once every several decades, most electricity customers experience a much higher frequency of outages due to disturbances in the electric distribution systems that serve them—roughly two orders of magnitude (100x) higher. The comparison of the 1-in-10 criterion to distribution system outage rates also suggests that the 1-in-10 standard is extremely conservative.
Utilities summarize the number of minutes of interruption the average customer experiences with the System Average Interruption Duration Index, or SAIDI, usually expressed in minutes of outage per year. Two values usually are provided, one including all events, and a somewhat lower value excluding major events, with the latter measuring more localized events that originate in utility distribution systems. A recent LBL report summarized utility-reported SAIDI values by census division, with major events excluded, showing a range from 107 to 212 minutes a year and a national average of 146 minutes a year.9 The SAIDI values will of course vary for each portion of each electric distribution company’s service area.
The 1-in-10 resource adequacy criterion can be expressed in minutes per year for comparison to SAIDI values, based on the rough estimates regarding curtailment quantity and duration used above. Those assumptions suggested that under 1-in-10 the average customer would be curtailed for one hour every 50 years, or 1.2 minutes per year on average. Thus, distribution system outages appear to impose roughly two orders of magnitude more minutes of outage on customers than does resource adequacy under the 1-in-10 criterion—i.e., 146 compared to 1.2 minutes a year.
Distribution system reliability, excluding major events, has averaged about 3.6 nines (see Table 2). By comparison, the 1-in-10 criterion corresponds to 4.2 nines at the system level, or more than five nines for the customer, under the above assumptions.
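These comparisons can be reproduced with a short sketch; the 146-minute SAIDI figure and the one-hour-per-50-years customer estimate come from the preceding paragraphs:

```python
import math

MINUTES_PER_YEAR = 8760 * 60

def nines_from_minutes(outage_minutes):
    """Reliability in 'nines' for a given number of outage minutes per year."""
    return -math.log10(outage_minutes / MINUTES_PER_YEAR)

customer_minutes = 60 / 50  # 1-in-10 at the customer level: 1.2 min/year
saidi = 146                 # national average SAIDI, major events excluded

print(round(saidi / customer_minutes))                 # ratio of about 122
print(f"{nines_from_minutes(saidi):.1f}")              # distribution: ~3.6 nines
print(f"{nines_from_minutes(customer_minutes):.1f}")   # customer level: ~5.6 nines
```

The roughly 122-to-1 ratio is the “two orders of magnitude” cited above, and the nines values match the comparison in the text.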
Is 1-in-10 Justified?
In practice, capacity planning approaches result in resource adequacy that usually exceeds the 1-in-10 criterion. While 1-in-10 has been accepted in principle, planners and regulators understandably have as a goal that curtailments never occur, and there might be thumbs on the scale as resource adequacy is implemented.
There are a number of ways resource adequacy in practice often is more conservative than the 1-in-10 criterion requires. Because the criterion is probabilistic, probabilistic modeling is required to determine the reserve margin required to satisfy it. Such models rely on assumptions about future load growth and its variability; capacity resources and their outage rates and availability during peak periods; the amount of interruptible load available; the assistance that may be available from neighboring systems during peak periods; and the impact of actions that can be taken when reserves are low to avoid having to curtail firm customers, such as appeals to the public and voltage reductions.
The tendency is often to adopt conservative assumptions for many of these values, to make the overall result of the analysis conservative (i.e., erring on the side of too much rather than too little capacity and reliability, identifying too large rather than too small a reserve margin). As a result, the calculated reserve margin may correspond to an LOLE less than, and perhaps much less than, 0.1. In addition, peak load forecasts might rely upon conservative assumptions and err on the side of over- rather than under-forecasting future peak loads.
These assumptions don’t always account for the full range of actions that utilities can take to avoid firm curtailments, including maximizing generation; recalling exports and calling upon assistance from neighboring power systems; tapping interruptible and emergency demand-response resources; reducing voltage levels; and appealing to the public to curtail consumption.
That resource adequacy planning has been very conservative, resulting in a lower frequency of outages than the 1-in-10 criterion suggests, is reflected in the Standard EOP-002-2 reports of capacity and energy emergencies that utilities submit to the North American Electric Reliability Corporation (NERC). These reports—filed by 134 entities—show that the vast majority of incidents either didn’t result in lost load or were caused by T&D system equipment failures or extreme weather such as wind, snow, or hurricanes. Perhaps a dozen incidents over the past decade involved loss of load due to capacity shortages. This is a very small number: 134 utilities just meeting the 1-in-10 criterion would be expected to suffer roughly 134 outages due to inadequate resources in 10 years. So in practice, resource adequacy is much more conservative than 1-in-10—by roughly one order of magnitude.
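The order-of-magnitude gap can be made explicit; the “perhaps a dozen” observed incidents is treated here as 12, an assumption for illustration:

```python
n_entities = 134    # utilities filing EOP-002-2 reports with NERC
lole_target = 0.1   # events per year under the 1-in-10 criterion
years = 10

# If every entity just met 1-in-10, about 134 outages would be expected
# over the decade; roughly a dozen were observed.
expected = n_entities * lole_target * years
observed = 12

print(expected, observed, round(expected / observed, 1))
```

The expected-to-observed ratio of about 11 is the “roughly one order of magnitude” noted above.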
Highly conservative capacity planning might reflect concerns that if load forecasts or new capacity projections prove inaccurate, a region might suffer frequent, costly outages. That is, the main concern might be not one or a few outages in ten years, but the risk of unforeseen circumstances leading to multiple outages in a single year, for instance, on many hot days in a single summer. Roughly speaking, “many hot days” could be quantified as 10 days in one year, or two orders of magnitude greater frequency than 1-in-10.
However, the extreme peak loads targeted by the 1-in-10 criterion occur rarely, and peak load levels on nearly all other days are considerably lower. Consequently, for there to be shortages on many hot days, rather than just the rare extreme peak day, capacity would have to fall far short of the amount targeted under the 1-in-10 criterion. This suggests that concern about the possibility of circumstances that could lead to outages on many days doesn’t help to justify the 1-in-10 criterion.
Take for example the highest daily peak load levels attained in recent years on the PJM, ISO-New England, and California ISO systems, respectively (see Figures 1-3). For PJM, peak loads within 7.6 percent of the highest peak (that is, within roughly 11,000 MW of it) occurred only seven times during 2005 through 2008.10
Peaks occurred with similar rarity in the ISO New England and California ISO systems, and with few exceptions the 10th highest peak in any year was typically 10 percent or more below the highest peak of the year. To put these differences in perspective, 7 percent of peak load equals five or six years of forecast growth in peak load based on 2009 projections.11
Models used to determine the required reserve margins to satisfy the 1-in-10 criterion provide further support for the conclusion that only a level of capacity far short of the 1-in-10 level would risk frequent outages. This can be seen in the relationship between the installed reserve margin and the LOLE for the PJM system, according to data provided in PJM’s most recent reserve margin analysis12 and a probabilistic model developed by the author that approximates the assumptions, structure and results of PJM’s analysis (see Figure 4).
This model estimates that if the installed reserve margin is approximately 8 percent, far below the target of 15.3 percent, the outage frequency is one per year. Only with an installed reserve margin of less than zero (i.e., total installed capacity is roughly equal to the forecast median annual net peak load) would the LOLE rise to approximately 10 events per year. (That it takes such an extremely low reserve margin to anticipate 10 outages per year reflects the fact that daily peak loads close to the median annual peak level are rare, and, in addition, there is some help available from neighboring systems not reflected in the reserve margin.) PJM’s model, which exhibits similar sensitivity to the reserve margin, likely would calculate a similar LOLE corresponding to lower reserve margins.
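A toy probabilistic model can illustrate how steeply LOLE rises as the reserve margin falls. This sketch is illustrative only: the unit count, forced outage rate, and load-shape distribution are invented assumptions, not taken from PJM’s study or the author’s model, and its numerical results aren’t calibrated to the figures quoted above:

```python
import random

def simulate_lole(reserve_margin, years=50, seed=7):
    """Toy Monte Carlo estimate of LOLE, in shortfall days per year.

    Invented assumptions: 100 identical units with a 7 percent forced
    outage rate, and a daily peak-load shape under which loads near the
    annual peak are rare, as the load-duration discussion suggests.
    """
    rng = random.Random(seed)
    n_units, efor = 100, 0.07
    forecast_peak = 100.0  # arbitrary load units
    unit_size = forecast_peak * (1 + reserve_margin) / n_units
    shortfall_days = 0
    for _ in range(years * 365):
        # Daily peak as a fraction of the annual peak: most days sit
        # around 65-80 percent; only a rare tail approaches 100 percent.
        shape = 0.65 + 0.12 * rng.random() + 0.28 * rng.random() ** 8
        load = forecast_peak * min(shape, 1.05)
        online = sum(1 for _ in range(n_units) if rng.random() > efor)
        if online * unit_size < load:
            shortfall_days += 1
    return shortfall_days / years

# LOLE falls steeply as the reserve margin rises.
for rm in (0.0, 0.08, 0.153):
    print(f"reserve margin {rm:6.1%}: LOLE = {simulate_lole(rm):.2f} days/year")
```

With these made-up parameters the sketch reproduces the qualitative behavior described above—LOLE rising by orders of magnitude as the reserve margin falls toward zero—though its specific values shouldn’t be compared to PJM’s.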
To risk frequent outages, installed capacity would have to be far below the level that satisfies the 1-in-10 target; it isn’t necessary to aim for 1-in-10 to ensure a very small risk of outages on many hot days due to capacity shortages. However, on much smaller systems the relationship would be different, and the anticipated LOLE would rise faster with lower levels of capacity.
EDITOR’S NOTE: This article is the first of two excerpts from the author’s paper, “One Day in 10 Years: Resource Adequacy for the Smart Grid,” of which an earlier draft was delivered to the 28th Annual Eastern Conference of Rutgers University’s Center for Research in Regulated Industries. The full paper will be published on Fortnightly.com in May 2010. The second excerpt, explaining how resource adequacy should be adapted for the smart grid, is in Fortnightly’s May 2010 issue.–MTB
1. See, for instance, Telson, Michael E., “The economics of alternative levels of reliability for electric power generation systems,” Bell Journal of Economics Vol. 6 No. 2 (Autumn 1975) p. 679; Cramton, Peter and Steven Stoft, “The Convergence of Market Designs for Adequate Generating Capacity,” April 25, 2006, p. 32; Joskow, Paul L., “Competitive Electricity Markets and Investment in New Generating Capacity,” June 12, 2006, p. 48-49; and Hogan, William W., Regulation and Electricity Markets: Smart Pricing for Smart Grids, presentation to the Energy Bar Association Electricity Committee Meeting, Oct. 16, 2009, pp. 21-23.
2. Ernest Orlando Lawrence Berkeley National Laboratory, A Framework and Review of Customer Outage Costs: Integration and Analysis of Electric Utility Outage Cost Surveys, November 2003, p. 14 (showing outage costs for large commercial and industrial customers 3 or 4 times lower if they have back-up systems).
3. See, for instance, Duke Energy Indiana – Essential Use Customers.
4. Cramton, Peter and Jeffrey Lien, Value of Lost Load, February 2000, p. 4.
5. Breidenbaugh, Aaron, New York Independent System Operator, The Market Value of Demand Response, presented at the PLMA Fall 2004 Conference, Sept. 30, 2004.
6. U.S. Department of Energy, Benefits of Demand Response in Electricity Markets and Recommendations for Achieving Them, February 2006, p. 83.
7. See the RPM Planning Period Parameters for the 2011/12 and 2012/13 Base Residual Auctions. The values are based on cost studies that are also available on the PJM Web site.
8. One survey found that approximately 50 percent of customer load in California is exempt from rotating outages due to protection of “essential use” customers. Lawrence Berkeley National Laboratory, Rates and technologies for mass-market demand response, Paper LBNL 50626, 2002, p. 4. Available at: http://escholarship.org/uc/item/6k22v5kq.
9. Joseph H. Eto and Kristina Hamachi LaCommare, Tracking the Reliability of the U.S. Electric Power System: An Assessment of Publicly Available Information Reported to State Public Utility Commissions, for the Ernest Orlando Lawrence Berkeley National Laboratory, October 2008, p. 15, Table 4.
10. While peak load data for 2009 is available, it is not shown in these figures as the peaks in 2009 were much lower due to the recession.
11. ISO New England, 2009-2018 Forecast Report of Capacity, Energy, Loads and Transmission, April 15, 2009; PJM, 2009 PJM Load Forecast Report.
12. PJM, 2009 PJM Reserve Requirement Study, September 2009.