What Price, Resiliency?

Evaluating the cost effectiveness of grid-hardening investments.

Fortnightly Magazine - October 2013

Recent experience has demonstrated that the nation’s electric infrastructure can be highly susceptible to widespread power outages caused by severe weather and related events (e.g., hurricanes, winter storms, tornadoes, earthquakes, climate-induced changes in sea level, etc.). To these could be added manmade impacts such as physical or cyber attack. For instance, in 2008 Hurricane Ike caused 2.15 million customer outages in the territory of CenterPoint Energy, while Hurricane Irene caused more than 4.0 million homes and businesses across the Eastern United States to lose power. In June last year a major storm system known as a derecho moved through 11 states and the District of Columbia, traveling around 600 miles in 10 hours; roughly 4.2 million utility customers lost power, and in many cases it took seven to 10 days to restore service. More than 2 million customers lost power in New York due to Superstorm Sandy, and a week after the event more than 200,000 customers were still without power. Figure 1 shows the timeline of power restoration by utilities in New York City following Sandy.

As our economy and well-being are heavily dependent upon electricity, there’s been increasing attention to making our electric system more resilient. This resiliency takes the form of hardening critical infrastructure so it’s less vulnerable to failure, as well as improving response time and decreasing the time it takes to restore electric power if it does go down.

However, as with all such issues, accomplishing these things has costs. A key question being faced by energy company executives, shareholders, regulators, and other stakeholders is how much cost is appropriate. What is an appropriate level of investment, both in capital and operations and maintenance (O&M) expense, to achieve a given level of resiliency?

Cost-Benefit Benchmarks

Stakeholders involved in making these decisions would benefit from a reasonably straightforward cost-benefit structure to inform them. An example construct is the California Standard Practice Manual,1 adopted decades ago by California and many other states to guide decisions about investments in utility energy efficiency programs. To summarize simply, a series of cost-benefit tests was developed against a defined benchmark (the long-run avoided cost of new generation). If energy efficiency programs could meet demand growth at a cost lower than constructing new generation, the efficiency programs should be pursued; otherwise, utilities should build new generation. These cost-benefit tests were run from several perspectives, the most commonly used being the Total Resource Cost (TRC) test. Regulators could feel comfortable reviewing these analyses and making decisions based on a reasonably sound analytical construct.
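For illustration, the TRC screen reduces to comparing discounted benefits against discounted costs. Below is a minimal sketch of a TRC-style calculation; the discount rate, program life, and dollar figures are hypothetical assumptions, and the test as actually administered involves considerably more detail than this:

# Minimal sketch of a Total Resource Cost (TRC) style screen, using
# hypothetical numbers (the discount rate, lifetimes, and dollar figures
# below are illustrative assumptions, not values from the article).

def npv(cash_flows, rate):
    """Net present value of a list of annual cash flows (years 1..n)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

rate = 0.07                                  # assumed discount rate
avoided_costs = [120_000] * 10               # annual avoided generation cost, $
program_costs = [60_000] * 10                # utility program costs, $
participant_costs = [30_000] * 10            # participant costs, $

trc_ratio = npv(avoided_costs, rate) / npv(
    [p + q for p, q in zip(program_costs, participant_costs)], rate
)
print(f"TRC benefit-cost ratio: {trc_ratio:.2f}")  # > 1.0 passes the screen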

Resiliency programs require a similar construct. Note that we define resiliency differently from standard day-to-day reliability: resiliency is typically associated with high-impact but low-frequency events, such as a major storm. Consider the hypothetical curve shown in Figure 2 (also see Test Data Table at www.fortnightly.com/appendix-what-price-resiliency).

Figure 1 - Outage Restoration in NYC After Sandy

The X axis shows some measure of reliability, one example of which could be the hours of outage experienced after a major disruption (this could be based on a CAIDI2-type index reflecting a combination of the number of customers affected and the duration of the outage). The Y axis shows the cost to hold the level of post-event outage to a defined level. Decreasing hours of post-event outage requires increasing costs, and at some level there are diminishing returns; additional investments have a positive but decreasing effect on the resiliency of the grid. The curve is in some sense asymptotic in that it never truly intercepts the Y axis; we probably can never attain 100 percent reliability at any cost. The actual numbers on the axes are hypothetical but serve to illustrate the concept. In general the curve follows the formulation xy = k, the basic form of the diminishing-returns construct, where k is a constant. Rearranged as y = k/x, the constant k measures the rate of diminishing returns to improved grid resilience (e.g., it costs roughly an additional $50 million to reduce estimated post-event outage hours from 100 to 50 on this example curve; this $50 million could be some combination of annualized capital investment and operations and maintenance expense). Presumably, utility engineers would work up the x and y values unique to their service territory. We use the mathematical structure of xy = k here because an actual utility curve would likely follow a similar functional relationship of reliability versus cost.
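A few lines of Python reproduce the arithmetic behind this illustrative curve. The assumption that y is denominated in thousands of dollars per year, so that a constant k of 5 million yields $50 million at 100 hours, is our reading of the figure's axis units:

# Sketch of the illustrative diminishing-returns curve y = k / x, using the
# article's constant k = 5 million; for the arithmetic to reproduce the
# "$50 million at 100 hours" example, we assume y is expressed in
# thousands of dollars per year (an assumption about the figure's axis units).

K = 5_000_000  # constant, in (hours x thousands of dollars)

def annual_cost_millions(target_outage_hours: float) -> float:
    """Annual hardening + O&M cost ($ millions) to hold post-event outage
    to the given number of hours, per the illustrative xy = k curve."""
    return (K / target_outage_hours) / 1_000  # thousands -> millions

for hours in (100, 50, 25):
    print(f"{hours:>3} hours -> ${annual_cost_millions(hours):.0f}M per year")

# Marginal cost of halving outage hours from 100 to 50:
print(f"Incremental cost, 100 -> 50 hours: "
      f"${annual_cost_millions(50) - annual_cost_millions(100):.0f}M")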

Customer Restoration-90

To refine the discussion somewhat, we introduce the concept of “Customer Restoration-90” (CR-90) as another X-axis metric. In most major high-impact events, power is lost to a large number of electric customers: residential customers, small commercial customers, and medium and large commercial and industrial customers (including agricultural, government, etc.). Utilities generally are well versed in customer restoration, with a long history of responding to hurricanes, ice storms, tornadoes, and the like. They typically implement restoration plans in a prescribed order, with critical loads, such as hospitals, safety, and national security loads, being restored first, followed by a sequence of “biggest bang for the buck” steps in which major blocks of customers are restored in descending population order, such that often the last customers restored are single or small groups of customers at the end of a circuit.

Customer Restoration-90 is defined as the number of hours it takes from the start of the outage event to restore power to 90 percent of the customers of a given utility. Obviously, stakeholders would like this number to be as low as possible. For the curve in Figure 3, we model CR-90 values out to two weeks (336 hours), expressed in 12-hour increments. In the real world, either of these parameters could be varied: we could model more or fewer hours until recovery, or we could change the 90 percent threshold to 80 or 98 percent, and so on.
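In practice, a CR-90 value could be computed directly from per-customer restoration records. The sketch below uses randomly generated restoration times as a placeholder for actual outage-management-system data:

# Minimal sketch of computing a CR-90 value from per-customer restoration
# times. The restoration data here are randomly generated placeholders; a
# utility would use actual outage-management-system records.
import random

random.seed(42)
# Hours after event start at which each of 10,000 customers was restored
# (hypothetical: most customers back quickly, a long tail at circuit ends).
restore_hours = sorted(random.expovariate(1 / 40) for _ in range(10_000))

def cr_metric(hours_sorted, fraction=0.90):
    """Hours until the given fraction of customers has been restored."""
    index = int(fraction * len(hours_sorted)) - 1
    return hours_sorted[index]

print(f"CR-90: {cr_metric(restore_hours):.1f} hours")
print(f"CR-98: {cr_metric(restore_hours, 0.98):.1f} hours")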

The Y axis of Figure 3 is again expressed in dollars. It could most easily be represented as an annual cost to the utility associated with targeting a given CR-90 level. The cost would comprise the amortized capital costs of certain hardening and related capital investments, as well as the extra O&M costs associated with extra tree trimming, more trucks, more crews, etc., to maintain the given CR-90 level. The extra O&M is above and beyond the standard expenses for these items that a utility incurs.
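As a rough sketch of how such an annual figure might be assembled, the following combines an amortized capital charge (via the standard capital-recovery factor) with incremental O&M; the investment size, 30-year life, 7 percent rate, and O&M amount are all illustrative assumptions:

# Sketch of how the annual Y-axis cost might be assembled: amortized capital
# via a standard capital-recovery factor plus incremental O&M. The capital
# amount, 30-year life, 7% rate, and O&M figure are illustrative assumptions.

def capital_recovery_factor(rate: float, years: int) -> float:
    """Fraction of an up-front investment owed each year to amortize it."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

hardening_capex = 400e6       # up-front hardening investment, $ (assumed)
crf = capital_recovery_factor(0.07, 30)
incremental_om = 15e6         # extra tree trimming, trucks, crews, $/yr (assumed)

annual_cost = hardening_capex * crf + incremental_om
print(f"CRF: {crf:.4f}")                        # ~0.0806 at 7%, 30 years
print(f"Annual cost: ${annual_cost/1e6:.1f}M")  # amortized capex + extra O&M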

Figure 2 - Costs Associated with Increasing Resiliency of the System

For purposes of the illustrative curve, we derived these hypothetical costs, y, by solving the equation xy = k, where k, a constant, equals 5 million (with y expressed in thousands of dollars per year). The constant could be manipulated as a proxy for the size or number of customers of a utility, or the data could be replaced by actual engineering-based estimates, which would vary by utility. The purpose here is simply to illustrate the concepts. So, for example, targeting the CR-90 at 100 hours would require this hypothetical utility to spend an additional $50 million per year in amortized hardening and increased O&M. Stated another way, it would cost ratepayers an additional $50 million per year to achieve a post-major-event restoration time whereby 90 percent of customers were restored in 100 hours or less.

This curve has a point of diminishing returns: beyond it, the marginal costs exceed the marginal reduction in the CR-90. This point can be calculated precisely. A cursory examination of the curve shows that it's somewhere around a CR-90 of 60 hours at a cost of approximately $80 million per year; mathematically, this would be the point where a 45-degree tangent line touches the curve.
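For the idealized y = k/x curve, that tangency point can be found in closed form: with both axes read in the plotted units (hours and millions of dollars, so k = 5,000), a slope of -1 occurs at x equal to the square root of k. The sketch below works this out; it lands near 71 hours, in the same general neighborhood as the visual estimate above, with the difference reflecting how approximately the curve is read by eye:

# Sketch of locating the diminishing-returns point analytically. With the
# curve y = k/x plotted in hours (x) and millions of dollars (y), a
# 45-degree tangent means dy/dx = -1, which gives x = sqrt(k). The plotted
# constant k = 5,000 (hours x $ millions) is our reading of the axis units.
import math

k_plot = 5_000                 # k in (hours x $ millions), assumed axis units
x_tangent = math.sqrt(k_plot)  # dy/dx = -k/x**2 = -1  =>  x = sqrt(k)
y_tangent = k_plot / x_tangent

print(f"Tangency: CR-90 ~= {x_tangent:.0f} hours at ~${y_tangent:.0f}M/yr")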

This isn’t to say that stakeholders and regulators must pick this point on the curve. It’s merely the point of diminishing returns. The question stakeholders must address is where on the curve they wish to be. Some jurisdictions might wish to spend less money with the understanding that the CR-90 could be longer. Others might wish to spend more money, even in the diminished returns portion of the curve, to further reduce the CR-90, perhaps because of a range of external factors.

Outage Cost Analysis

So far the discussion has focused on investment costs to reduce the CR-90 after a major disruption event. But the outage hours themselves also have a cost. These are the direct costs of lost revenue to the utility, and also the indirect costs to businesses and residences of being without power. Such costs must be factored into the analysis to take a more comprehensive and perhaps alternative view of resiliency.

Figure 3 - Customer Restoration-90

For this analysis, we relied on data developed by Lawrence Berkeley National Laboratory (LBL),3 but made greatly simplifying assumptions. LBL computed the average cost per kilowatt (kW) of an eight-hour outage at $7.10 for residential customers, $2,174.80 for small commercial and industrial customers, and $115.20 for large commercial and industrial customers; the per-kW cost for larger customers is lower because they're more likely to have backup generation.

If we assume a hypothetical utility with a 15-MW peak, divided equally among the above three customer classes, the average cost is $765.70 per kW. Multiplying by 15,000 kW gives a cost of $11,485,500 for an eight-hour outage. Dividing by 8 gives a crude estimate (the relationship isn't truly linear) of $1,435,687.50 per hour for this hypothetical utility. Figure 4 shows these costs accumulating for each hour of outage. (This is a highly simplified analysis because it doesn't reflect the fact that at any given CR-90, some customers will already have been restored.) Figure 4 also shows the total cost curve, which is the sum of the investment costs and outage costs. The precise minimum-cost point can be solved for mathematically, but a cursory examination of the total cost curve suggests that the minimum total cost occurs at approximately a 60-hour CR-90, at a total cost of roughly $169.5 million. (Note that the outage cost is for a one-time event, whereas the hardening costs are a recurring annual investment; this example analysis therefore assumes one significant outage per year with the associated costs. The analysis could easily be adjusted if planners wanted to assume two or more significant outages per year, or it could be updated each year to reflect updated annual capital and O&M costs.)
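The arithmetic above, and the total-cost curve it feeds, can be verified in a few lines; the per-hour figure and the illustrative xy = k investment curve are combined exactly as described:

# Reproducing the article's outage-cost arithmetic for the hypothetical
# 15-MW utility, with peak load split equally across the three classes.
lbl_cost_per_kw_8hr = {         # LBL average cost per kW, 8-hour outage
    "residential": 7.10,
    "small_c_and_i": 2_174.80,
    "large_c_and_i": 115.20,
}

avg_cost_per_kw = sum(lbl_cost_per_kw_8hr.values()) / 3   # $765.70
peak_kw = 15_000
cost_8hr = avg_cost_per_kw * peak_kw                      # $11,485,500
cost_per_hour = cost_8hr / 8                              # $1,435,687.50 (crude)

print(f"Average cost per kW (8-hr outage): ${avg_cost_per_kw:,.2f}")
print(f"Eight-hour outage cost:            ${cost_8hr:,.2f}")
print(f"Crude per-hour outage cost:        ${cost_per_hour:,.2f}")

# Total cost at a given CR-90 target: y = k/x investment plus cumulative
# outage cost (one major event per year assumed, as in the article).
K = 5_000_000  # thousands of dollars x hours, per the illustrative curve

def total_cost_millions(cr90_hours: float) -> float:
    investment = (K / cr90_hours) / 1_000          # $ millions
    outage = cost_per_hour * cr90_hours / 1e6      # $ millions
    return investment + outage

print(f"Total cost at CR-90 = 60 hours: ${total_cost_millions(60):.1f}M")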

Mathematically, the resultant total-cost curve can be fit with a fifth-order polynomial of the form Y = ax⁵ + bx⁴ + cx³ + dx² + ex + f.

Figure 4 - Total Costs: Outage Costs Plus Investment Costs

Based on the equation above, we solve for the lowest point on the curve at x = 65 hours, which corresponds to a total cost of $168.7 million. That is, the economic optimum in this example is for this utility to invest a total of up to $168.7 million per year in order to restore service to 90 percent of its customers within 65 hours of the outage event.
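A sketch of that fit-and-minimize step follows, using numpy; because the polynomial is fit to data we generate from the illustrative curves, the resulting coefficients and minimum are approximations of the article's figures, not the exact values:

# Sketch of the fit-and-minimize step: build total-cost points from the
# illustrative curves, fit a fifth-order polynomial, and find its minimum.
# The coefficients come from our generated data, not from the article.
import numpy as np

COST_PER_HOUR = 1_435_687.50      # crude per-hour outage cost from above
K = 5_000                         # investment curve constant, hours x $M

hours = np.arange(12, 348, 12, dtype=float)          # 12-hour increments
total_m = K / hours + COST_PER_HOUR * hours / 1e6    # total cost, $ millions

coeffs = np.polyfit(hours, total_m, 5)               # a..f of the form above
poly = np.poly1d(coeffs)

# Minimize by finding real roots of the derivative inside the data range.
crit = [r.real for r in poly.deriv().roots
        if abs(r.imag) < 1e-9 and hours[0] <= r.real <= hours[-1]]
x_min = min(crit, key=poly)
print(f"Fitted minimum: CR-90 ~= {x_min:.0f} hours, ~${poly(x_min):.1f}M")

# For comparison, the exact minimum of k/x + c*x sits at x = sqrt(k/c):
x_exact = np.sqrt(K / (COST_PER_HOUR / 1e6))
print(f"Analytic minimum: {x_exact:.0f} hours")      # roughly 59 hours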

Prioritizing Resiliency Investments

With several competing alternatives available for increasing resiliency, careful attention must be paid to prioritization, with an emphasis on managing overall risk. Factors to consider will likely include both qualitative and quantitative metrics: criticality; benefit vs. cost; customer effect; and ease of implementation.

To compare a select set of projects, it's useful to evaluate each criterion on a common scale and assign points to the value the project provides. For example, the benefit-cost determination might be the most significant element of the analysis and be assigned 80 of 100 points. As with any infrastructure investment, traditional methods of project justification must be considered. Critically, the cost-benefit determination and overall rate impact are necessary to determine whether the investment is warranted. This determination would include: cost assessment; cost-benefit analysis (i.e., what direct benefits to the electric market would the assets provide versus the investment cost, and how would these offset rate increases); rate effect and cost allocation; comparison of the same across alternatives; and assessment of the exposure of the investment to forward market changes.
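A minimal sketch of such a point-based prioritization follows; the weights and per-project scores are hypothetical placeholders, with the benefit-cost criterion dominating at 80 of 100 points as suggested above:

# Minimal sketch of a weighted prioritization score across the four factors
# named above. The weights (benefit-cost dominating at 80 of 100 points) and
# the project scores are hypothetical placeholders, not article data.
WEIGHTS = {                     # points out of 100
    "benefit_cost": 80,
    "criticality": 10,
    "customer_effect": 6,
    "ease_of_implementation": 4,
}

projects = {  # each criterion scored 0.0-1.0 by engineering judgment (assumed)
    "substation flood walls": {"benefit_cost": 0.9, "criticality": 0.8,
                               "customer_effect": 0.6, "ease_of_implementation": 0.4},
    "feeder undergrounding":  {"benefit_cost": 0.4, "criticality": 0.6,
                               "customer_effect": 0.9, "ease_of_implementation": 0.2},
    "extra vegetation mgmt":  {"benefit_cost": 0.7, "criticality": 0.3,
                               "customer_effect": 0.5, "ease_of_implementation": 0.9},
}

def priority_score(scores):
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

for name, scores in sorted(projects.items(), key=lambda kv: -priority_score(kv[1])):
    print(f"{name:<24} {priority_score(scores):5.1f} / 100")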

As discussed, some less traditional evaluation metrics must be considered to fully capture the reasoning and rationale for investment decisions related to hardening and resiliency. Specifically, hardening the system addresses the costs of recovering from events driven by extreme circumstances, and its evaluation includes a review of the likelihood of extreme events; the recovery cost associated with such events; the costs of lost load, including the impact on consumer income and business revenues; and the direct costs to businesses of recovering from loss of operations. It further includes an assessment of the resulting impact on the overall economy of increased rates due to the investment. Typically, one would evaluate these costs and benefits on an expected lifetime (NPV) basis rather than an annual-impact basis, given the probabilistic nature of the costs of individual events.
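The sketch below illustrates that expected-NPV framing: an assumed annual probability of a major event converts a per-event avoided cost into an expected annual benefit, which is then discounted over the asset life and compared against the hardening investment. Every input is an illustrative assumption:

# Sketch of the expected-lifetime (NPV) framing: expected avoided outage
# costs, weighted by an assumed annual probability of a major event,
# discounted over the asset life, versus the hardening investment. Every
# number here is an illustrative assumption.
event_probability = 0.2        # assumed chance of a major event in any year
avoided_cost_per_event = 85e6  # assumed outage cost avoided if hardened, $
discount_rate = 0.07
asset_life_years = 30
hardening_investment = 400e6   # up-front cost, $ (assumed)

expected_annual_benefit = event_probability * avoided_cost_per_event
npv_benefits = sum(expected_annual_benefit / (1 + discount_rate) ** t
                   for t in range(1, asset_life_years + 1))

print(f"Expected annual benefit: ${expected_annual_benefit/1e6:.1f}M")
print(f"NPV of benefits:         ${npv_benefits/1e6:.1f}M")
print(f"Net (NPV - investment):  ${(npv_benefits - hardening_investment)/1e6:.1f}M")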

The analytical framework provided here for considering the cost effectiveness of grid resiliency investments is based on approximate analysis with hypothetical numbers. However, it offers a theoretical construct under which to evaluate the efficacy of additional investments in electric grid resiliency. Individual utilities will have to develop their own specific data and, along with customers, regulators, and other stakeholders, engage in discussions about how resilient we want the grid to be – and at what cost.

Endnotes:

1. California Standard Practice Manual, Economic Analysis of Demand-Side Programs and Projects, October 2001.

2. CAIDI is Customer Average Interruption Duration Index and provides the average outage duration that any given customer would experience. It can also be viewed as the average restoration time.

3. “Estimated Value of Service Reliability for Electric Utility Customers in the United States,” Ernest Orlando Lawrence Berkeley National Laboratory, June 2009.