The Value of Resource Adequacy
Why reserve margins aren’t just about keeping the lights on.
markets, increased penetration of renewable and demand-side resources, and legislative changes raise the question of whether target reserve margins set solely on the basis of the 1-in-10 standard are too low or too high to be reasonably cost-effective and efficient today. Customers and policy makers arguably need a means to understand the full economic value that additional capacity (i.e., higher reserve margins) provides beyond physical reliability. An economically efficient resource adequacy standard should:
• Provide a level of reliability that is meaningful to all customer classes;
• Reasonably balance the economic value, including price-risk mitigation, that customers receive from reliability with the cost of supplying that level of reliability;
• Demonstrate to customers what economic and other benefits reserve margins provide beyond the physical reliability benefit;
• Provide adequate investment incentives for suppliers of capacity-only products;
• Result in a reasonably optimal mix of peaking resources that supply energy during the highest-load periods; and
• Consider the ability of a system to absorb energy-limited, non-dispatchable, and demand-side resources.
A comprehensive approach to economic reliability analysis attempts to address and balance these goals.
Limitations of the 1-in-10 Standard
Relying solely on the 1-in-10 standard to determine resource adequacy targets won’t reliably produce economically efficient, cost-effective reserve margin targets, because the standard has several important limitations. These include the absence of a standard definition and a failure to consider the full customer cost of reliability-related events.
As recognized in the recent effort by NERC and ReliabilityFirst Corp., the 1-in-10 reliability standard has different interpretations.3 Most resource adequacy planners define it as one event in 10 years, measured as a loss-of-load expectation (LOLE) of 0.1 events per year. However, others define the 1-in-10 metric as one day (24 hours) of load loss during a 10-year period, which equates to an LOLE of 2.4 hours per year. As the results of this study show, these different interpretations alone can result in target reserve margins that differ by more than 4 percentage points. While planners recognize that the 2.4-hours-per-year interpretation provides different physical reliability than the 1-event-in-10-years interpretation, the question remains which metric provides an adequate level of reliability.

In addition, the 1-in-10 standard generally doesn’t define the magnitude or duration of the firm load shed, as measured by expected unserved energy (EUE). Based on a small number of samples, even when the same 1-in-10 definition is applied, the average magnitude of EUE as a percentage of total load varies from 1 percent for large systems to around 5 percent for relatively small systems. This is one reason why normalized EUE (EUE divided by “Net Energy for Load”) was adopted as a physical reliability metric for the NERC effort under the Generation and Transmission Reliability Planning Models Task Force (GTRPMTF).4
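The distinction between the two interpretations can be made concrete with a small calculation. The sketch below is illustrative only; the function name and the simplification that an “event” is any simulation year containing at least one shortfall hour are assumptions for exposition (planners often count contiguous shortfall episodes or days instead), and the data are hypothetical, not drawn from the study.

```python
def reliability_metrics(hourly_shortfalls_by_year, net_energy_for_load_mwh):
    """Compute three resource adequacy metrics from simulated hourly data.

    hourly_shortfalls_by_year: one list per simulated year, each containing
    hourly unserved energy in MWh (0 means no shortfall that hour).
    net_energy_for_load_mwh: annual net energy for load, used to normalize EUE.
    """
    n_years = len(hourly_shortfalls_by_year)

    # "Events per year" interpretation: here, a year counts as one event
    # if it has any shortfall hour (a simplification for illustration).
    years_with_event = sum(
        1 for year in hourly_shortfalls_by_year if any(h > 0 for h in year)
    )

    # "Hours per year" interpretation: average count of shortfall hours.
    hours_short = sum(
        sum(1 for h in year if h > 0) for year in hourly_shortfalls_by_year
    )

    # Expected unserved energy (MWh/year), then normalized by net energy
    # for load, as in the GTRPMTF metric described above.
    eue = sum(sum(year) for year in hourly_shortfalls_by_year) / n_years

    return {
        "LOLE_events_per_year": years_with_event / n_years,
        "LOLE_hours_per_year": hours_short / n_years,
        "normalized_EUE": eue / net_energy_for_load_mwh,
    }


# Hypothetical 10-year simulation: one year has two shortfall hours
# (5 MWh and 2 MWh); the other nine years have no loss of load.
years = [[5, 0, 2]] + [[0, 0, 0]] * 9
print(reliability_metrics(years, net_energy_for_load_mwh=1000))
```

Under these assumptions the sample system meets the 0.1-events-per-year reading of 1-in-10 while sitting far below the 2.4-hours-per-year reading, which is exactly why the two interpretations can justify very different reserve margins.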
Like any solely physical reliability standard, the 1-in-10 standard assumes that a reliability event occurs only if firm load is shed. However, reliability-related costs