The Value of Resource Adequacy


Why reserve margins aren’t just about keeping the lights on.

Fortnightly Magazine - March 2011

Setting target reserve margins within the context of resource adequacy planning has historically been based strictly on the “1 day of firm load shed in 10 years” reliability standard. In other words, under the 1-in-10 standard, reserve margins are determined solely based on the probability of physical load-loss events. This approach doesn’t explicitly determine whether a particular target reserve margin is reasonably cost-effective or otherwise economically justified. In fact, the economic benefit of avoiding one firm load-shed event in 10 years is small relative to the cost of carrying incremental capacity. However, the economic benefits of reserve capacity go beyond avoiding load-shed events: they include reducing high-cost emergency purchases, the dispatch of energy-limited (e.g., intermittent or storage) and high-cost resources, and the interruption of expensive demand-response resources.

A case study of an economic simulation of reliability events and their costs and benefits shows that this type of analysis can provide a dramatically improved understanding of resource adequacy risks. It also can help identify more cost-effective solutions to meet given resource adequacy standards, document the link between economically efficient target reserve margins and physical reliability standards such as the 1-in-10 standard, and inform stakeholders about the value customers are receiving from paying for reserve capacity. As the analysis shows, sole reliance on physical reliability metrics, such as the 1-in-10 standard, easily results in setting target reserve margins that—depending on system size and characteristics—are either too low or too high to be cost effective and economically efficient.

The Origins of Resource Adequacy

For decades, the utility industry has used the 1-in-10 standard for setting target reserve margins. While the origin of the 1-in-10 metric is somewhat obscure, there are multiple references to it in papers starting with articles by Calabrese from the 1940s.1 In the literature surveyed, no justification was given for the reasonableness of the standard other than that it’s approximately the level of reliability customers were accustomed to. Because customers rarely complain about the level of reliability they receive under the 1-in-10 standard, few question the 1-in-10 metric as an appropriate standard. While the standard has been questioned recently in regions with capacity markets, such as PJM,2 little empirical work has been undertaken to quantify the full economic value provided by reserve margin targets. Nor has much work confirmed that sole reliance on such physical reliability standards produces a reserve margin that reasonably, if not optimally, balances the economic value of reliability against the cost of carrying the planning reserves needed to maintain target reserve margins.

Structural changes in energy and capacity markets, increased penetration of renewable and demand-side resources, and legislative changes raise the question of whether target reserve margins set solely on the basis of the 1-in-10 standard are too low or too high to be reasonably cost-effective and efficient today. Arguably, customers and policy makers need a way to understand the full economic value that additional capacity (i.e., higher reserve margins) provides beyond physical reliability. An economically efficient resource adequacy standard should:

• Provide a level of reliability that is meaningful to all customer classes;

• Reasonably balance the economic value, including price-risk mitigation, that customers receive from reliability with the cost of supplying that level of reliability;

• Demonstrate to customers what economic and other benefits reserve margins provide beyond the physical reliability benefit;

• Provide adequate investment incentives for suppliers of capacity-only products;

• Result in a reasonably optimal mix of peaking resources that supply energy during the highest-load periods; and

• Consider the ability of a system to absorb energy limited, non-dispatchable, and demand-side resources.

A comprehensive approach to economic reliability analysis attempts to address and balance these goals.

Limitations of the 1-in-10 Standard

Relying solely on the 1-in-10 standard to determine resource adequacy targets won’t reliably result in economically efficient and cost-effective reserve margin targets, because the 1-in-10 standard has a number of important limitations. These include the absence of a standard definition and a failure to consider the full customer cost of reliability-related events.

As recognized in the recent effort by NERC and Reliability First Corp., the 1-in-10 reliability standard has different interpretations.3 Most resource adequacy planners define it as one event in 10 years and measure this by calculating loss of load expectation (LOLE) in events per year, which equates to an LOLE of 0.1 events per year. However, others define the 1-in-10 metric as one day (24 hours) of load loss during a 10-year period, which equates to an LOLE of 2.4 hours per year. As shown in the results of this study, these different interpretations alone can result in target reserve margins that differ by more than 4 percentage points. While planners recognize that the 2.4-hours-per-year interpretation provides different physical reliability than the 1-event-in-10-years interpretation, the question remains which metric provides an adequate level of reliability. In addition, the 1-in-10 standard generally doesn’t define the magnitude or duration of the firm load shed, as measured by “expected unserved energy” (EUE). Based on a small number of samples, even when the same 1-in-10 definition is applied, the average magnitude of EUE as a percentage of total load varies from 1 percent for large systems to around 5 percent for relatively small systems. This is one reason why normalized EUE (EUE divided by net energy for load) was adopted as a physical reliability metric in the NERC effort under the Generation and Transmission Reliability Planning Models Task Force (GTRPMTF).4
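The practical gap between these two interpretations can be illustrated with a toy simulation. The sketch below is purely illustrative: the per-hour shortfall probability is a made-up placeholder, not the output of a real reliability model, but it shows how both metrics are computed from the same set of simulated shortfall hours:

```python
import random

random.seed(1)

YEARS = 200  # simulated years (illustrative; a real study would use far more)

def simulate_year():
    """Return hourly shortfall flags for one year (True = firm load shed that hour).
    Hypothetical: a real model would simulate load, weather, and unit outages."""
    return [random.random() < 0.0002 for _ in range(8760)]

total_events = 0
total_hours = 0
for _ in range(YEARS):
    flags = simulate_year()
    total_hours += sum(flags)
    # Count each contiguous run of shortfall hours as one load-shed event
    total_events += sum(
        1 for i, f in enumerate(flags) if f and (i == 0 or not flags[i - 1])
    )

lole = total_events / YEARS   # events per year (0.1 meets "1 event in 10 years")
lolh = total_hours / YEARS    # hours per year (2.4 meets "1 day in 10 years")
print(f"LOLE = {lole:.2f} events/yr, LOLH = {lolh:.2f} hours/yr")
```

Because most shortfall events in this toy setup last only an hour, hours and events nearly coincide, so a system just meeting the 2.4-hours-per-year reading would experience roughly 24 times as many events as one meeting the 0.1-events-per-year reading.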

Like any solely physical reliability standard, the 1-in-10 standard assumes that a reliability event occurs only if firm load is shed. However, reliability-related costs realistically also include costs associated with events such as calling on interruptible loads, dispatching high-cost emergency resources, and making unanticipated expensive market purchases. In the California energy crisis, for example, only approximately 8,000 MWh of firm load was shed during a total of 6 days.5 Even if these load drops are priced at $10,000/MWh, the economic cost of the curtailments is only $80 million, which is a small fraction of the estimated $50 billion in total costs attributed to the crisis.6 A resource adequacy assessment consequently should consider the full range of reliability-related events, not just firm load-shed events.

A typical criticism of the 1-in-10 standard is that it provides greater reliability than customers are willing to pay for. The argument is that even if the value of lost load is $20,000/MWh, the “last CT” [combustion turbine] would need to displace 5 hours of lost load per year to be economically justifiable, assuming the carrying cost of a CT is $100/kW-yr. However, analysis shows that the majority of customer-side reliability costs might not be incurred in the form of lost load, which means the last CT actually provides substantially more value than just offsetting the cost of firm load-shed events. When the full range of reliability-related impacts and costs is quantified, the 1-in-10 standard can result in target reserve margins that are either too low or too high from an economic efficiency and overall cost-effectiveness perspective, depending on system size, resource mix, and interconnections with neighboring systems. A smaller system with weak interconnections to neighboring systems tends to have much less cost exposure, at exactly the same level of physical reliability, than a system with significant interconnections. For a larger system with a substantial amount of energy-limited resources and significant tie-line assistance, the 1-in-10 standard yields a cost exposure that is much higher than for the other systems.
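The break-even arithmetic behind that criticism, and behind the counterargument, can be reproduced in a few lines. The purchase-displacement figures in the second half are hypothetical, included only to show how non-lost-load value changes the calculus:

```python
# Annual carrying cost of the marginal CT, per MW of capacity
ct_carrying_cost = 100 * 1000      # $100/kW-yr -> $/MW-yr

voll = 20_000                      # value of lost load, $/MWh

# Hours of lost load (per MW-year) the CT must displace to pay for itself
breakeven_hours = ct_carrying_cost / voll
print(breakeven_hours)             # 5.0 hours/yr

# Hypothetical: if the same CT also displaces 40 hours/yr of purchases priced
# $2,000/MWh above its own dispatch cost, the remaining gap shrinks sharply
purchase_savings = 40 * 2_000      # $/MW-yr
remaining = ct_carrying_cost - purchase_savings
print(remaining / voll)            # now only 1.0 hour/yr of lost load justifies the CT
```

The second calculation is the article's point in miniature: once displaced high-cost purchases are counted, far fewer lost-load hours are needed to justify the last unit of capacity.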

An economically efficient and cost-effective target reserve margin can differ significantly from the target reserve margins derived with the 1-in-10 standard—it can either be below or above current reserve margin targets based solely on physical reliability. Consequently, physical reliability standards should be supplemented and target reserve margins should be validated with an analysis of economic value and cost effectiveness, taking into account the full range and uncertainty of possible outcomes. Setting target planning reserves to include economic considerations achieves the goals of economic reliability planning. Consumers will enjoy a level of reliability they are willing to pay for while also taking cost uncertainty into account. They will be protected not only from excessive firm load shedding, but also from the high energy costs frequently associated with reliability-driven extreme market conditions.

Economic Reliability Modeling

Economic reliability modeling differs significantly from typical production cost modeling. Production cost modeling is designed to determine average costs with a handful of sensitivities, which makes it well-suited to fuel budget studies, RFP evaluations, and resource mix studies. However, the “average” nature of production cost simulations can’t realistically capture reliability events, because such events occur only infrequently, driven by combinations of extreme weather, under-forecasts of load, and poor unit performance. The recent outages in Texas and Arizona are examples of reliability events that would only be captured by modeling a full (e.g., 30-year) distribution of weather and its impact on load, resources, and fuel availability. To capture these costs, a model with an hourly economic dispatch is needed, with the ability to run a sufficiently wide range of scenarios, including the extreme combinations that create physical reliability problems.

Economic reliability analysis combines production cost and reliability simulation techniques. In particular, it captures all production and scarcity costs of power above the variable cost of the marginal capacity resource, which is typically a new combustion turbine. In addition, such analyses require a realistic distribution and a sufficiently large number of scenarios of weather, unit performance, and economic growth to capture the extreme conditions during which reliability is a concern. Finally, transmission capabilities and neighboring systems must also be analyzed in detail, because reliability support from neighboring systems might be limited by both transmission and resource availability constraints.

To illustrate the application of economic considerations to reliability analysis, Astrape Consulting performed a case study using an actual (herein generalized) power system that includes approximately 40,000 MW of capacity with a weather normalized peak load of approximately 35,000 MW and 10,000 MW of inter-ties with multiple neighboring systems.

The Strategic Energy and Risk Valuation Model (SERVM) was used in this case study to perform economic reliability modeling.7

SERVM commits and dispatches generation economically to meet load plus operating reserves during all 8,760 hours of a year and then calculates reliability costs and other reliability metrics, such as LOLE and loss-of-load hours (LOLH). SERVM is a multi-area model that represents directly interconnected neighboring regions to simulate out-of-region purchases over tie lines when necessary for reliability. To gain an accurate picture of the system’s physical and economic reliability-related costs, the analysis involved 112,000 full-year simulations8 (each covering 8,760 hours) for each reserve margin level analyzed. The simulations included 40 historical weather years in which load, resources, and fuel availability were dependent on historic hourly weather data. The results from these simulations were then used to determine the average and the distribution of reliability-related costs at different reserve margin levels. Simulating a sufficiently large range of reserve margins thus allows for both 1) identifying the reserve margins that yield the lowest average costs and 2) assessing the cost uncertainty, including the risk (probability) that actual outcomes significantly exceed these average costs.
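Per footnote 8, the 112,000 simulations per reserve margin come from crossing three scenario dimensions. A structural sketch of that design, with the hourly commitment-and-dispatch run reduced to a placeholder function:

```python
import itertools

WEATHER_YEARS = range(40)        # 40 historical weather years
LFE_POINTS = range(7)            # 7 load-forecast-error points
OUTAGE_DRAWS = range(400)        # 400 random unit-outage iterations

def simulate_8760(weather_year, lfe, outage_draw):
    """Placeholder for one full-year (8,760-hour) commitment-and-dispatch run.
    A real run would return hourly costs, shortfalls, and purchase volumes."""
    return {"reliability_cost": 0.0, "shortfall_mwh": 0.0}

results = [
    simulate_8760(w, f, o)
    for w, f, o in itertools.product(WEATHER_YEARS, LFE_POINTS, OUTAGE_DRAWS)
]
print(len(results))   # 40 * 7 * 400 = 112,000 full-year simulations per reserve margin
```

This entire set of runs is then repeated for each candidate reserve margin, which is what makes the distribution of outcomes (not just the average) available at every margin level.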

Defining Reliability-Related Costs

Setting target reserve margins based on economic reliability simulations requires balancing the costs of adding new capacity against the benefit of adding that capacity. For the case study, new capacity was assumed to be a combustion turbine. In other regions, that might not be the appropriate marginal new resource. It’s also possible to evaluate a supply curve of new capacity that stretches from lower-cost demand response resources to higher-cost additions of new physical generation.

As the level of installed capacity resources changes, the total benefit of the additional capacity must be captured, as well as the costs of that capacity. This means the analysis must keep track of all production and purchase costs above the marginal cost of the new capacity resource, as well as the fixed costs of the added new capacity. The analysis breaks these customer reliability costs into four categories: production-related reliability costs, reliability and emergency purchase costs, unserved energy costs, and capacity resource carrying costs.

First, production-related reliability costs are defined as any costs of the system’s physical generation above the dispatch cost of the new capacity resource. This includes the dispatch of higher-cost generators such as oil-fired turbines and old natural gas turbine units.

Second, reliability and emergency purchase costs are defined as the costs of any purchases at prices higher than the cost of the marginal capacity resource. The model distinguishes between “emergency purchases,” associated with events when emergency assistance is requested from neighboring systems, and “reliability purchases,” which include all other purchases at prices higher than the cost of a marginal capacity resource. In simulations, these reliability and emergency purchase costs, including purchases associated with demand-side resources, can range from $1/MWh above the dispatch cost of a CT to the cost of unserved energy (e.g., well in excess of $1,000/MWh) under extreme conditions.9

Third, unserved energy costs represent the value of lost load to customers. This value typically is derived from customer surveys.

Finally, capacity resource carrying costs are the costs of adding additional capacity in $/kW-yr.
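Taken together, the first three categories amount to tallying, hour by hour, every cost above the marginal resource’s dispatch cost. The sketch below is schematic only: the prices, volumes, and `ct_dispatch_cost` value are hypothetical placeholders, not case-study data:

```python
def categorize_hourly_costs(generation, purchases, unserved_mwh,
                            ct_dispatch_cost=80.0, voll=20_000.0):
    """Split one hour's reliability-related costs into the first three categories.
    generation/purchases: lists of (mwh, price_per_mwh). All numbers hypothetical."""
    costs = {"production": 0.0, "purchase": 0.0, "unserved": 0.0}
    for mwh, price in generation:
        if price > ct_dispatch_cost:           # own units costlier than the new CT
            costs["production"] += mwh * (price - ct_dispatch_cost)
    for mwh, price in purchases:
        if price > ct_dispatch_cost:           # reliability and emergency purchases
            costs["purchase"] += mwh * (price - ct_dispatch_cost)
    costs["unserved"] = unserved_mwh * voll    # value of lost load
    return costs

# One illustrative tight hour: an old oil unit at $300/MWh, an emergency
# purchase at $1,500/MWh, and 2 MWh of unserved energy
hour = categorize_hourly_costs(
    generation=[(500, 45.0), (100, 300.0)],
    purchases=[(50, 1_500.0)],
    unserved_mwh=2.0,
)
print(hour)
```

The fourth category, the CT carrying cost, is an annual fixed cost in $/kW-yr and would be added outside any hourly loop.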

The unserved energy costs are easily calculated in most reliability models. However, the production costs of expensive units and the portion of reliability-related costs associated with power purchases during reliability and emergency events must also be considered. SERVM utilizes a scarcity pricing model to simulate purchase costs during capacity shortages. For this case study, 10 years of actual historical prices from bilateral reliability and emergency purchases in the region were analyzed to estimate scarcity pricing curves that vary with the reserve margin and the amount of capacity needed. SERVM was then calibrated to an actual historical year to ensure that the model accurately projects the cost of reliability purchases.

Lowest-Average-Cost Reserve Margin

Figure 1 shows one set of results from this case study. The figure shows the probability-weighted average cost of various reliability-related cost elements as a function of planning reserve margins. The lowest-average-cost reserve margin can be determined, for example, based on the point at which total reliability-related costs plus the cost of carrying additional reserves is the lowest, ignoring the uncertainty of costs around the weighted average costs shown in the chart. In the case study, this lowest-average-cost reserve margin is 12 percent. But this result will vary significantly across regions based on their size, load shape, resource mix, and many other factors.

The analysis also shows that, for the system studied here, the primary driver of reliability costs is expensive market purchases. In contrast, and contrary to usual reliability study assumptions, the value of lost load isn’t a highly significant factor in determining optimal reserve margins. Even if the value of lost load is changed by $5,000/MWh, the lowest-average-cost or risk-neutral optimal reserve margin shifts by only approximately 0.5 percentage points.

Importantly, as Figure 1 illustrates, because the cost of reliability events (in particular emergency and reliability purchases) increases quickly as reserve margins decline, omitting some of these costs in reserve margin evaluations can lead to greatly understated estimates of optimal reserve margins. If one considered only the installed cost of peaking capacity and the value of lost load, the reserve margin that yields the lowest average costs would appear to be only 9 percent, while it’s 12 percent when all reliability-related costs are considered—and before even attributing any insurance value to risk mitigation.
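The selection logic behind Figure 1 amounts to minimizing a total-cost curve over candidate reserve margins. The cost curves below are invented placeholders, shaped only to echo the case-study result (an apparent optimum near 9 percent when purchase costs are ignored, 12 percent when they are included):

```python
# Hypothetical probability-weighted average annual costs ($M) by reserve margin
reserve_margins = [8, 9, 10, 11, 12, 13, 14, 15]
lost_load_cost  = [120, 70, 45, 30, 20, 14, 10, 7]          # unserved energy only
purchase_cost   = [900, 650, 470, 330, 220, 200, 175, 150]  # reliability/emergency purchases
carrying_cost   = [0, 35, 70, 105, 150, 185, 220, 250]      # incremental CT carrying cost

def argmin_margin(cost_curves):
    """Return the reserve margin whose summed cost curves are lowest."""
    totals = [sum(c[i] for c in cost_curves) for i in range(len(reserve_margins))]
    return reserve_margins[totals.index(min(totals))]

# Counting only lost load vs. counting all reliability-related costs:
print(argmin_margin([lost_load_cost, carrying_cost]))                 # 9
print(argmin_margin([lost_load_cost, purchase_cost, carrying_cost]))  # 12
```

Because the purchase-cost curve rises steeply as reserves fall, dropping it from the sum systematically shifts the apparent optimum toward lower reserve margins, which is the understatement the text describes.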

Finally, Figure 1 also shows the strikingly different reserve margins that would result from applying the 1-in-10 standard interpreted as 1) 2.4 hours of lost load per year and 2) 1 event in 10 years. These different interpretations of the 1-in-10 standard yield a difference in the target reserve margin of 4.5 percentage points.

Risk-Adjusted Reserve Margins

In the presence of risk aversion, the value of higher reserve margins also includes the insurance value of avoiding infrequent high-cost outcomes. While Figure 1 is informative, it over-simplifies the problem by only comparing fixed capacity costs with the long-term averages of very uncertain market exposures. To perform a more informed comparison, the uncertainty of market exposure needs to be considered as well.

The probability distributions of the total annual costs (excluding the more certain CT carrying costs) are shown in Figure 2. The figure shows that substantial annual cost uncertainty exists at any given level of reserve margin. Most of this cost uncertainty is associated with the risk of very infrequent high-cost outcomes.

As Figure 2 shows, for 90 percent of possible annual outcomes, the reliability-related cost exposure is quite low for reserve margins in the 11 percent to 18 percent range. Only in the remaining 10 percent of possible annual outcomes does a combination of factors occur that causes substantial reliability-related costs. For example, while the expected average of annual reliability-related costs at a 12 percent reserve margin is only $240 million, there is a very small chance that total annual reliability-related costs could be as high as $8.3 billion. Assuming total retail rates are 10 cents/kWh, this maximum cost exposure would raise consumers’ annual costs by 50 percent. These numbers are not unrealistic considering that the California energy crisis would have doubled retail rates if all costs had been passed through to customers.

Considering that customers, regulators, and policy makers want to avoid high-cost outcomes, the “optimal” target reserve margin consequently shouldn’t be based solely on the lowest-average cost reserve margin, shown as 12 percent in Figure 1. While a 12 percent reserve margin would offer the cheapest option for customers in terms of long-run average costs, the highest-cost outcomes that load serving entities and customers would be exposed to might be unacceptable.

In the insurance industry, premiums are frequently set using a 95 percent confidence level that the insurance company will be covered in the long term. A similar calculation for determining the appropriate risk adjustment can be used for setting the target reserve margin. Assuming that substituting the 95th percentile cost for the weighted average cost is a proper risk adjustment, the target reserve margin increases from 12 percent to 15 percent—which, in the case of this system, happens to be close to the target reserve margin based on the 1-in-10 standard.
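A minimal version of that risk adjustment is to rank each reserve margin by the 95th percentile of its annual cost distribution rather than by the mean. The distributions below are synthetic placeholders; the one property they borrow from the case study is a thin tail of extreme years that shrinks as reserves rise:

```python
import random

random.seed(7)

def annual_cost_draws(rm, n=20_000):
    """Synthetic annual reliability-cost draws ($M) for a given reserve margin.
    Shapes and parameters are invented for illustration only."""
    tail_prob = 0.08 * (18 - rm) / 6   # chance of an extreme year, falling with RM
    tail_mean = 400.0 * (18 - rm)      # mean severity of extreme years, $M
    draws = []
    for _ in range(n):
        cost = random.uniform(50, 200)              # ordinary-year cost
        if random.random() < tail_prob:
            cost += random.expovariate(1 / tail_mean)  # rare extreme year
        draws.append(cost)
    return draws

def percentile(xs, p):
    xs = sorted(xs)
    return xs[min(len(xs) - 1, int(p * len(xs)))]

for rm in (12, 15):
    xs = annual_cost_draws(rm)
    mean = sum(xs) / len(xs)
    p95 = percentile(xs, 0.95)
    print(f"RM {rm}%: mean ${mean:,.0f}M, 95th percentile ${p95:,.0f}M")
```

Because the tail shrinks much faster than the mean as reserves are added, ranking margins by the 95th percentile favors a higher target than the risk-neutral (mean-cost) ranking does.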

As Figure 2 also shows, increasing the target reserve margin from 12 percent to 15 percent by installing additional CT capacity decreases the maximum cost exposure from $8.3 billion to $4.0 billion. This increases the incremental carrying costs of CTs from approximately $150 million per year to about $250 million per year (as shown in Figure 1), which increases average retail rates by less than 1 percent. However, the maximum possible reliability-cost-related annual retail rate impact is reduced from 50 percent to only approximately 25 percent.
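The rate arithmetic in this comparison can be checked directly. All inputs are the round figures quoted above; annual retail revenue is backed out from the statement that an $8.3 billion exposure raises annual costs by 50 percent at 10 cents/kWh:

```python
# Round figures from the case study, as quoted in the text
max_exposure_12 = 8.3e9     # worst-case annual reliability cost at 12% RM, $
max_exposure_15 = 4.0e9     # worst-case at 15% RM, $

# Annual retail revenue implied by "an $8.3B exposure raises costs by 50%"
annual_revenue = max_exposure_12 / 0.50    # ~$16.6B

print(f"rate impact at 12% RM: {max_exposure_12 / annual_revenue:.0%}")  # 50%
print(f"rate impact at 15% RM: {max_exposure_15 / annual_revenue:.0%}")  # 24%

# Incremental insurance cost of moving from 12% to 15% RM (from Figure 1)
carrying_12, carrying_15 = 150e6, 250e6    # $/yr
premium = (carrying_15 - carrying_12) / annual_revenue
print(f"added carrying cost as a share of rates: {premium:.2%}")         # 0.60%
```

The comparison is the insurance logic in numbers: a premium well under 1 percent of rates buys roughly a halving of the worst-case rate impact.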

Capacity Value of Different Resources

Economic reliability modeling also quantifies the annual capacity value of different types of resources, such as CTs with and without environmental dispatch limits, storage devices, demand response resources with different dispatch costs and restrictions, or intermittent renewable resources. SERVM simulations show that the value of these resources differs significantly across power systems and depends greatly on a system’s resource mix. For example, the value of dispatch-limited demand-response resources will decline as their share increases and the dispatch limits bind more often. The simulations also documented that the resource adequacy value of intermittent resources, such as wind, is higher in a power system with hydro storage or other energy-limited resources. This is because even if intermittent resources don’t reliably generate during system peak, their generation during near-peak hours allows storage and other energy-limited resources to be conserved for peak periods.

Quantifying Tradeoffs

An economic simulation of system reliability offers substantial benefits and allows stakeholders to understand the costs, risks, and tradeoffs of resource adequacy policy options. Economic reliability analysis can provide a dramatically improved understanding of resource adequacy risks, and can help determine more cost-effective solutions that consider the tradeoff between the expected level and uncertainty of reliability-related costs. Additionally, this analysis can help decision makers understand the link between economically efficient target reserve margins and physical reliability standards such as the 1-in-10 standard, and quantify the resource adequacy value of different types of technologies, such as demand-response, storage, energy-limited, and renewable resources. Finally, it can inform stakeholders about the value customers are receiving from paying for reserve capacity. As the analysis shows, the value of additional reserves includes both the reduction in expected average reliability-related costs, such as the high cost of emergency purchases or the cost of curtailments; and the insurance value associated with the reduction of infrequent but extremely high-cost outcomes.



1. Calabrese, G., “Determination of Reserve Capacity by the Probability Method,” American Institute of Electrical Engineers, vol. 69, no. 2, pp. 1681-1689, January 1950.

2. For example, see Wilson, James F., “Reconsidering Resource Adequacy, Part 1: Has the one-day-in-10-years criterion outlived its usefulness?” Public Utilities Fortnightly, April 2010 and “Reconsidering Resource Adequacy, Part 2: Capacity planning for the smart grid,” Public Utilities Fortnightly, May 2010.

3. See FERC Notice of Proposed Rulemaking on Planning Resource Adequacy Assessment Reliability Standard, Docket No. RM10-10-000, Oct. 21, 2010 (responding to NERC’s filing of the regional reliability standard BAL-502-RFC-02).

4. See the NERC Generation and Transmission Reliability Planning Models Task Force (GTRPMTF) “Final Report on Methodologies and Metrics - September and December, 2010 with Approvals and Revisions.”

5. Sweeney, James (2002) The California Electricity Crisis, Hoover Institution press, ISBN 978-0817929121, p. 171.

6. Weare, Christopher (2003). The California Electricity Crisis: Causes and Policy Options. San Francisco: Public Policy Institute of California. ISBN 1-58213-064-7, pp. 3-4.

7. SERVM has been used extensively by large utilities in the southeastern U.S. In contrast to several other reliability modeling tools (such as GE-MARS), SERVM allows for the explicit consideration of economic factors such as the cost of emergency purchases, the cost of integrating intermittent or energy-limited resources, the cost of demand side resource dispatch, and the economic and reliability value of tie line capacity to neighboring power systems.

8. 40 weather years x 7 load forecast error points x 400 unit outage iterations = 112,000 simulations.

9. Purchase prices during reliability or emergency events may also include premiums associated with high opportunity costs of energy-limited resources, emergency assistance available from high-dispatch-cost demand-side resources in neighboring systems, or markups related to the exercise of market power by suppliers during scarcity events.