The Value of Resource Adequacy
Why reserve margins aren’t just about keeping the lights on.
Setting target reserve margins within the context of resource adequacy planning has historically been based strictly on the “1 day of firm load shed in 10 years” reliability standard. In other words, under the 1-in-10 standard, reserve margins are determined solely by the probability of physical load-loss events. This approach doesn’t explicitly consider whether a particular target reserve margin is reasonably cost-effective or otherwise economically justified. In fact, the economic benefit of avoiding one firm load-shed event in 10 years is small relative to the cost of carrying incremental capacity. However, the economic benefits of reserve capacity go beyond avoiding load-shed events: reserves also reduce high-cost emergency purchases, the dispatch of energy-limited (e.g., intermittent and storage) and high-cost resources, and the interruption of expensive demand-response resources.
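The claim that avoided load-shed cost alone is small relative to capacity cost can be illustrated with a back-of-envelope comparison. All figures below (value of lost load, capacity cost, event frequency and depth) are illustrative assumptions for the sketch, not numbers from the article:

```python
# Back-of-envelope sketch; every figure here is an assumption for illustration.
VOLL = 20_000   # $/MWh, assumed value of lost load
CONE = 90_000   # $/MW-year, assumed annualized cost of incremental capacity

# Suppose an incremental MW of reserves avoids, in expectation, one firm
# load-shed event per decade (0.1 events/year) lasting 4 hours at full depth.
expected_unserved_mwh_avoided = 0.1 * 4            # MWh avoided per MW-year

avoided_outage_cost = VOLL * expected_unserved_mwh_avoided  # $/MW-year
print(f"Avoided load-shed cost: ${avoided_outage_cost:,.0f} per MW-year")
print(f"Cost of carrying the MW: ${CONE:,.0f} per MW-year")
# Under these assumptions the avoided-outage benefit covers only a small
# fraction of the capacity cost, which is why the additional benefits
# (avoided emergency purchases, dispatch savings, etc.) matter.
```

Under these assumed numbers the avoided-outage benefit is roughly a tenth of the carrying cost; the point of the sketch is the order-of-magnitude gap, not the specific values.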
A case study of an economic simulation of reliability events and their costs and benefits shows that this type of analysis can provide a dramatically improved understanding of resource adequacy risks. It also can help identify more cost-effective solutions to meet given resource adequacy standards, document the link between economically efficient target reserve margins and physical reliability standards such as the 1-in-10 standard, and inform stakeholders about the value customers are receiving from paying for reserve capacity. As the analysis shows, sole reliance on physical reliability metrics, such as the 1-in-10 standard, easily results in target reserve margins that, depending on system size and characteristics, are either too low or too high to be cost-effective and economically efficient.
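A highly stylized version of such a reliability-and-cost simulation can be sketched as follows. It sweeps the reserve margin, draws daily peak loads and unit forced outages, and tallies loss-of-load days and unserved-energy cost. All parameters (system size, unit size, outage rate, value of lost load, event duration) are assumed for illustration and do not come from the case study:

```python
import random

random.seed(42)

PEAK_LOAD = 10_000   # MW, assumed annual system peak
LOAD_SIGMA = 0.05    # assumed daily-peak variability (fraction of peak)
UNIT_SIZE = 200      # MW per generating unit (assumed)
FOR_RATE = 0.06      # assumed forced-outage probability per unit-day
VOLL = 20_000        # $/MWh, assumed value of lost load

def simulate_year(reserve_margin, days=365):
    """Return (loss-of-load days, unserved-energy cost) for one simulated year."""
    capacity = PEAK_LOAD * (1 + reserve_margin)
    n_units = int(capacity // UNIT_SIZE)
    lol_days, cost = 0, 0.0
    for _ in range(days):
        # Typical daily peak sits below the annual peak; both are assumptions.
        load = random.gauss(PEAK_LOAD * 0.85, PEAK_LOAD * LOAD_SIGMA)
        available = sum(UNIT_SIZE for _ in range(n_units)
                        if random.random() > FOR_RATE)
        shortfall = load - available
        if shortfall > 0:
            lol_days += 1
            cost += shortfall * 4 * VOLL  # assume a 4-hour shed at shortfall depth
    return lol_days, cost

results = {}
for rm in (0.05, 0.10, 0.15, 0.20):
    years = 100
    draws = [simulate_year(rm) for _ in range(years)]
    lole = sum(d for d, _ in draws) / years       # expected loss-of-load days/yr
    avg_cost = sum(c for _, c in draws) / years   # expected shed cost, $/yr
    results[rm] = (lole, avg_cost)
    print(f"RM {rm:.0%}: LOLE {lole:.2f} days/yr, shed cost ${avg_cost:,.0f}/yr")
```

A full study would layer onto this the emergency purchases, energy-limited dispatch, and demand-response interruptions discussed above, but even this toy sweep shows how loss-of-load frequency and shed cost fall as the reserve margin rises, which is the tradeoff the economic analysis quantifies.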
The Origins of Resource Adequacy
For decades, the utility industry has used the 1-in-10 standard for setting target reserve margins. While the origin of the 1-in-10 metric is somewhat obscure, there are multiple references to it in papers starting with articles by Calabrese from the 1940s.1 In the literature surveyed, no justification was given for the reasonableness of the standard other than that it approximates the level of reliability customers were accustomed to. Because customers rarely complain about the level of reliability they receive under the 1-in-10 standard, few question it as an appropriate standard. The standard has been questioned recently in regions with capacity markets, such as PJM,2 but little empirical work has been undertaken to quantify the full economic value provided by reserve margin targets, or to confirm that sole reliance on such physical reliability standards produces a reserve margin that reasonably, if not optimally, balances the economic value of reliability against the cost of carrying the planning reserves needed to maintain target reserve margins.