Several years ago, engineers at American Electric Power measured the transfer capability, or transmission capacity (in this article we will use the terms interchangeably), between AEP and Commonwealth Edison. Using traditional methods, they found that the winter transmission capacity that year was 3,500 megawatts.
Then they performed a more exhaustive and nonstandard analysis. It showed that during the month of January, transmission capacity actually varied from a low of 1,600 MW (less than half the nominal amount) to a high of 6,000 MW (70 percent higher than nominal).
Why is transmission capacity random? How is the probability structure of transmission capacity computed? Why doesn't anybody use random transmission capacity today? This article will try to answer these questions.
But first, it is important to understand why transmission capacity must be modeled correctly - including its random characteristics. Recognizing electric power transmission system capacity as a random variable will reduce risk and transmission costs and will allow increased use of the transmission system. It will improve both planning decisions and energy contracting in the evolving power markets.
Reducing Risk, Increasing Use
It seems a most ingenious paradox that modeling transmission capacity as a random variable can reduce risk. After all, isn't risk a function of uncertainty? And isn't a random variable a kind of uncertainty?
Pretending that transmission capacity is a nice, solid, constant, deterministic number doesn't make it so. Facing the uncertainty head on is inherently less risky than assuming away reality.
For example, AEP engineers worried that 28 percent of the time the system transmission capacity was less than what their traditional modeling showed. (See Figure 1, which shows transmission capacity on the horizontal axis and the probability of not reaching a particular value of transmission capacity on the vertical axis.) They were concerned that at any particular moment the system would be called upon to transfer 3,500 MW when it might be capable of only 1,600 MW. If the transmission provider and user assumed that 3,500 MW was available 100 percent of the time, AEP might be exposed to operating problems or expensive penalties for the equivalent of more than one week out of the month.
How might they hedge this risk? The obvious approach is to reduce the reported transmission capacity still further. But doing this - or even using the original 3,500 MW number - is wasteful. Most of the time the transmission capacity is actually greater than 3,500 MW. It is silly to throw this away.
In fact, pretending that transmission capacity was a constant 3,500 MW during just one month could result in trying to use an average of 500 MW of transmission capacity that wasn't there (28 percent of the time), while letting an average of 1,250 MW of available transmission capacity go unused (72 percent of the time)!
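To make the arithmetic concrete, here is a minimal Python sketch of the same trade-off. The capacity distribution below is hypothetical - loosely shaped to echo the AEP anecdote (a 1,600 MW low, a 6,000 MW high, 28 percent of hours below the 3,500 MW rating) - and is not actual AEP data.

```python
# Hypothetical distribution of hourly transmission capacity (MW -> probability).
# Illustrative only; shaped to echo the AEP anecdote, not actual AEP data.
capacity_dist = {
    1600: 0.02,
    2800: 0.06,
    3100: 0.20,
    3500: 0.10,
    4500: 0.40,
    6000: 0.22,
}
RATED = 3500  # the deterministic rating that traditional practice assumes constant

# Probability that real capacity falls short of the rating
p_short = sum(p for mw, p in capacity_dist.items() if mw < RATED)

# Average shortfall during the short hours: capacity "sold" but absent
shortfall = sum((RATED - mw) * p
                for mw, p in capacity_dist.items() if mw < RATED) / p_short

# Average surplus during the remaining hours: capacity present but unsold
p_spare = 1.0 - p_short
surplus = sum((mw - RATED) * p
              for mw, p in capacity_dist.items() if mw > RATED) / p_spare

print(f"P(capacity < {RATED} MW) = {p_short:.2f}")
print(f"mean shortfall when short = {shortfall:.0f} MW")
print(f"mean surplus otherwise    = {surplus:.0f} MW")
```

With this made-up distribution the shortfall and surplus averages land in the same ballpark as the article's 500 MW and 1,250 MW figures; the point is the calculation, not the particular numbers.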
If all parties recognized that transmission capacity was not constant, they could design contracts for transmission services with different levels of firmness. Users who could tolerate more frequent interruptions, perhaps through backup contracts elsewhere, would buy less-expensive, interruptible service. Those who didn't have this option could pay more but with a greater assurance than with present practices that service would not be curtailed.
Since the transmission system could accommodate more users, the fixed costs would be spread over more MWh, making transmission cheaper, on the average, for everyone.
The transmission services provider would reduce the risk of being sued for interrupting contracts, and also the risk of regulatory censure for leaving transmission capacity idle. People are becoming aware that there is unused transmission capacity out there - a lot of it - and some more-suspicious citizens may contend that utilities are withholding it from the market for reasons of their own. The absence of actual nefarious intent will not keep the lawyers at bay.
Finally, measuring and selling transmission capacity probabilistically can keep us from wasting money building transmission facilities that we don't truly need. For example, consider an industrial installation with a need for a lot of low-grade process heat - a textbook opportunity for a cogenerator of, say, 150 MW. Suppose that he is on the wrong side of a transmission interface that is committed to prior users up to its (traditional) limit of (to pull a number out of the air) 3,500 MW.
Getting 150 MW more transmission capacity might require half of the capacity of a new 230-kV line, 100 miles long, costing $250,000 per mile. If the IPP had to pay for this, it could make the cogeneration uneconomical.
The IPP might be quite comfortable selling nonfirm energy across an interface that was available 60 percent of the time, though - particularly if he got a price break for taking interruptible transmission. And a power marketer at the other side might be willing to pay him almost as high a price as he would pay for firm power, perhaps covering the uncertainty with other contracts, or perhaps simply diversifying it away. Recognizing the random character of transmission capacity, and designing contracts consistent with it, could avoid a $25-million investment in a new line.
Transmission is a force-at-a-distance function of an integrated energy-conversion machine. It shares with transportation the notion of distance, but very little else. An electric transmission system consists of three elements:
1. current-carrying hardware;
2. control and protection devices; and
3. planning and operating practices and procedures.
It is remarkable that, in spite of the thousands of miles of transmission circuits in service, and our century of experience with transmission, we do not have a way of measuring transmission capacity directly.
Probably the best definition of transmission capacity - and one consistent with the Federal Energy Regulatory Commission's Available Transfer Capability - was developed by the North American Electric Reliability Council:
FCITC [first contingency incremental transfer capability] is the amount of electric power, incremental above normal base power transfers, that can be transferred over the interconnected transmission systems in a reliable manner based on all of the following conditions:
(1) [W]ith normal (pre-contingency) operating procedures ... all facility loadings are within normal ratings and all voltages are within normal limits;
(2) The electric systems are capable of ... remaining stable, following [any single contingency] ... and
(3) After the dynamic power swings subside following a [single contingency] ... all transmission facility loadings are within emergency ratings and all voltages are within emergency limits.
One weakness of this definition is that FCITC cannot be measured directly or simply, requiring instead laborious computer studies by a skilled specialist. A second weakness is that the definition requires subjective interpretation since it reflects planning and operating practices and procedures. A third weakness is that transmission capacity varies with time and with operating conditions, some of which are random.
These variations can prove significant. For instance, in recent years the nominal transmission capacity between British Columbia and Washington was 2,300 MW in the summer and 600 MW in the winter. The difference was due to the seasonal changes in generation and load patterns in British Columbia and the northwestern United States. The variation in actual hour-by-hour transmission capacity is probably much greater.
Figure 2 illustrates the highly variable nature of wheeling flows. The month-to-month variations are huge, with wheeling ranging from less than 40,000 MWh to nearly 300,000 MWh. The peak hourly wheeling was 1,998 MW, while the average was only 227 MW, giving an annual load factor of 11 percent. If another transaction, with lower priority, were to contend for transmission capacity, the existing wheeling flows would contribute a significant measure of randomness to the ATC.
Figure 3 makes the same point in a different way. This figure is striking because the wheeling charges for these transactions were based on the capacity reserved. The marginal wheeling cost as seen by the purchaser of the service, up to the capacity reserved, was zero - yet 50 percent of the transactions had load factors of 70 percent or less.
Utilities have not used random models of transmission capacity for several reasons. One important reason is that, until recently, a practical way of computing random transmission capacity was not available.
Transmission system usage is variable because the flows are functions of a number of uncertainties, combined with operating policies and procedures. Key uncertainties include forced outages (contingencies) of generating units and hour-to-hour fluctuations in load, and these in turn affect dispatch, unit commitment, maintenance scheduling, etc., which in their turn affect flows.
Recently developed production simulation programs that include electrical models of the transmission system can compute random flows and transmission capacity. These programs simulate the dispatch and operation of the power system for combinations of the random variables. One type of production simulation program uses Monte Carlo sampling to develop a statistically significant set of outcomes of these variables. Another type combines these variables mathematically to capture the effects of all possible combinations, without sampling.
Either type can produce data like that shown in Figure 4. These are flow-duration curves, similar to the familiar load-duration curves, for several limiting lines and interfaces in a large region with about 65,000 MW of peak load. A point (x, y) on one of the curves gives the probability (x) that the flow is greater than or equal to y MW. For example, there is a 50 percent probability that the flow on line B will exceed 1,200 MW in the negative direction in any given hour. Figure 4 covers one year.
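Curves like those in Figure 4 are straightforward to construct once a production simulation (or operating history) has supplied hourly flows. Here is a sketch in Python; the flows are synthetic random numbers standing in for a year of simulated hourly MW on one line, used purely to exercise the calculation.

```python
import random

random.seed(1)
# Synthetic stand-in for a year of simulated hourly MW flows on one line.
hourly_flows = [random.gauss(800, 400) for _ in range(8760)]

def exceedance_curve(flows):
    """Return (probability, flow) pairs: the empirical probability that
    the hourly flow is greater than or equal to the given MW level."""
    ordered = sorted(flows, reverse=True)
    n = len(ordered)
    return [((i + 1) / n, f) for i, f in enumerate(ordered)]

curve = exceedance_curve(hourly_flows)

def flow_at_probability(curve, p):
    """Flow level exceeded with probability p (the duration-curve lookup)."""
    for prob, flow in curve:
        if prob >= p:
            return flow
    return curve[-1][1]

# e.g. the flow exceeded half the time (the median hourly flow)
median_flow = flow_at_probability(curve, 0.5)
print(f"flow exceeded 50% of hours: {median_flow:.0f} MW")
```

In practice the input would be the Monte Carlo or analytic output of the production simulation rather than a convenient textbook distribution.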
Notice that at least some of these curves do not represent a Gaussian random variable. Furthermore, the flows on the various lines and interfaces are not necessarily independent, nor are they perfectly correlated. Table 1 gives the correlation coefficients for the various line/interface pairs for another system. The fact that the distributions are not Gaussian, and that the correlations are complex, means that classical system theory is not very useful in analyzing available transmission capacity, or ATC. The analysis needs to be based on a production simulation program that can model how the system is really operated.
A first cut at the probabilistic ATC for one of these lines or interfaces is the difference between the total thermal transmission capacity and the scheduled transfers.
Subtracting a reliability margin, reflecting the flows that would result from the worst possible contingency, can refine this first cut. This reliability margin is random, since which contingency is the worst depends on the state of the system. The fact that the flow on the outaged element is itself uncertain compounds the randomness. But these effects can be modeled in the production simulation program.
Current production-simulation programs cannot compute limits due to voltage or stability problems. Where such limits exist, they can be calculated using other techniques and combined with the limits found by the production-simulation program.
The resultant probabilistic ATC can be expressed as a table or represented graphically as a curve like the one in Figure 1.
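The first-cut computation described above can be sketched as a Monte Carlo loop. Everything here is hypothetical - the thermal limit, the schedule, and the contingency margins are made-up stand-ins for what the production simulation and contingency analysis would actually supply.

```python
import random

random.seed(2)
THERMAL_MW = 5000  # hypothetical total thermal transmission capacity

def simulate_hour():
    """One Monte Carlo hour: ATC = thermal capacity minus scheduled
    transfers minus the reliability margin for that hour's worst
    contingency (both random stand-ins here)."""
    scheduled = random.uniform(1000, 4000)
    worst_contingency_margin = random.uniform(200, 900)
    return max(0.0, THERMAL_MW - scheduled - worst_contingency_margin)

hours = [simulate_hour() for _ in range(8760)]

# Tabulate: probability that ATC meets or exceeds each level -- the
# same information Figure 1 presents as a curve.
levels = (500, 1000, 1500, 2000, 2500, 3000)
exceed = {lv: sum(1 for atc in hours if atc >= lv) / len(hours) for lv in levels}
for lv in levels:
    print(f"P(ATC >= {lv:4d} MW) = {exceed[lv]:.2f}")
```

A real study would replace the two `random.uniform` calls with the joint outcomes of outages, load, and dispatch from the production simulation, but the tabulation step is the same.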
How can using probabilistic transmission capacity reduce risks, increase transmission system usage and revenues, and aid in transmission system planning? For simplicity and clarity, let's look at a sample system instead of a real one: the interface whose probabilistic ATC is given in Table 2. Three wheeling transactions (A, B, and C) contend for this transfer capability. In any given hour, each transaction will need either 150 MW of transfer capability, with probability 0.7, or none, with probability 0.3. The transactions are statistically independent.
CONVENTIONAL DETERMINISTIC MODELING. Suppose the transmission service provider (TSP) decides that the deterministic transmission capacity is 300 MW. As in Figure 1, this means that only about 30 percent of the time will the actual transmission capacity be lower than this amount. Suppose also that the wheeling tariff is $1.67/kW per month, or $20/kW per year. The transmission provider will accommodate only transactions A and B and will charge $6 million per year.
How reliable is the service? Assume that when a transaction must be curtailed, TSP in real time randomly selects either A or B to curtail (probability = 0.5). A transaction will need to be curtailed when two conditions are met: 1) The transmission capacity is reduced, and 2) the transaction is active. Table 3 shows that transaction A will be curtailed with probability 0.0963 - almost 10 percent of the time. Since the two transactions are treated identically, transaction B will be curtailed with the same probability, though usually not at the same time as A.
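Table 3's figure can be reproduced by enumerating the joint outcomes. The sketch below assumes the ATC distribution used throughout this example: probability 0.05 that ATC is below 150 MW, 0.25 that it is between 150 and 300 MW, and 0.70 that it is at least 300 MW.

```python
from itertools import product

# ATC expressed in usable 150 MW blocks, with probabilities from the
# example: P(ATC < 150) = 0.05, P(150 <= ATC < 300) = 0.25, else 0.70.
atc_levels = [(0, 0.05), (150, 0.25), (300, 0.70)]  # (usable MW, probability)
P_ACTIVE = 0.7  # each transaction needs 150 MW with this probability

p_curtail_A = 0.0
for (atc, p_atc), a_on, b_on in product(atc_levels, (0, 1), (0, 1)):
    p = (p_atc
         * (P_ACTIVE if a_on else 1 - P_ACTIVE)
         * (P_ACTIVE if b_on else 1 - P_ACTIVE))
    demand = 150 * (a_on + b_on)
    excess = max(0, (demand - atc) // 150)  # 150 MW blocks that must be cut
    if a_on and excess > 0:
        if a_on + b_on == 2 and excess == 1:
            p_curtail_A += p * 0.5  # TSP's coin flip between A and B
        else:
            p_curtail_A += p        # A is necessarily among those cut
print(f"P(A curtailed) = {p_curtail_A:.5f}")  # 0.09625, the 0.0963 of Table 3
```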
PROBABILISTIC MODELING. On the other hand, suppose that the TSP offers service to all three contenders, accepting bids for priority of service. Suppose that A outbids B and C for highest priority service, paying $2.50/kW per month, and that B outbids C for the second highest, paying $0.83/kW per month. C pays $0.50/kW per month for the most interruptible service. A and B pay a total of $6 million per year, while C pays $900,000 annually, increasing transmission revenues by 15 percent.
How reliable is the service? Transaction A is interrupted only if ATC drops below 150 MW (probability 0.05) while A is active (probability 0.7), so A will be interrupted only 3.5 percent of the time - a big improvement from the deterministic 9.63 percent.
Transaction B, on the other hand, is interrupted if it is active when ATC drops below 150 MW (probability 0.035) or if both A and B are active when ATC is between 150 MW and 300 MW (probability 0.1225, or 0.25 × 0.49), for a total probability of interruption of 0.1575. This is considerably worse than the deterministic 0.0963 - but B is paying less, too.
Transaction C is interrupted with probability 0.4. This seems counterintuitive, as there is only a 10 percent probability that transmission capability is as much as 3 × 150 MW = 450 MW. But with each of the transactions active only 70 percent of the time, C often can use capacity earmarked for A or B, even when transmission capability is less than 450 MW.
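All three probabilities can be checked by enumerating every combination of ATC level and active transactions. Three of the ATC probabilities below come straight from the example (0.05 below 150 MW, 0.25 between 150 and 300 MW, 0.10 at 450 MW or more); the 0.60 for the 300-450 MW band is inferred so the distribution sums to one.

```python
from itertools import product

# ATC in 150 MW service "units": (units, probability)
atc_units = [(0, 0.05), (1, 0.25), (2, 0.60), (3, 0.10)]
P_ACTIVE = 0.7  # each transaction is active with this probability

p_int = {"A": 0.0, "B": 0.0, "C": 0.0}
for (units, p_atc), a, b, c in product(atc_units, (0, 1), (0, 1), (0, 1)):
    p = p_atc
    for on in (a, b, c):
        p *= P_ACTIVE if on else 1 - P_ACTIVE
    remaining = units
    for name, on in (("A", a), ("B", b), ("C", c)):  # strict priority A > B > C
        if on:
            if remaining >= 1:
                remaining -= 1           # served
            else:
                p_int[name] += p         # active but unserved: interrupted
for name in "ABC":
    print(f"P({name} interrupted) = {p_int[name]:.5f}")
```

The enumeration reproduces the article's 0.035 for A, 0.1575 for B, and (to within rounding) 0.4 for C.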
In summary, A gets more reliable service, B gets less reliable service (but pays less), and C gets service where he would have got none. TSP makes more money than before; the mechanics of what happens to this money, in the context of a regulated monopoly, is beyond the scope of this paper. Presumably it will be returned in the form of lower rates.
REDISPATCH DURING CONGESTION. In reality, all three may be able to get more reliable service than computed above. When the TSP discovers it has insufficient transmission capability to accommodate everyone, it may have other options than simply curtailing C, B, and A. It may be able to redispatch (if it owns its own plants) or to negotiate a redispatch by third parties. The cost for doing this would be borne by the transactions (C, B, or A) benefiting from the situation. If the redispatch costs are too high for C, B, and A, then curtailing rather than redispatching is clearly the right thing to do.
Of course, A, B, and C can make their own projections of the probability of congestion, and of the redispatch costs should congestion occur. This will affect the amount they are willing to bid for transmission service.
TRANSMISSION SYSTEM PLANNING. This process gives useful information to the transmission planner. In the example above, it is worth slightly less than $20/kW per year ($3 million per year) to transaction B to reduce its probability of interruption from 15.75 percent to 3.5 percent. If a transmission reinforcement that would accomplish this had a carrying charge of less than $3 million per year, then B likely would be willing to pay for it. Similarly, C would likely be willing to contribute up to $4/kW per year ($600,000 per year) for a comparable improvement in its service.
These ideas should find their way into the marketplace. To make this possible, transmission service providers should: 1) post probabilistic instead of deterministic ATC for key lines and interfaces; and 2) allow customers to pay for different levels of reliability.
Yes, these actions will make a bit more work for transmission companies. But this ability should lead to increased use of the transmission system, reduced risk, lower rates and better planning decisions.
Hyde M. Merrill is the proprietor of Merrill Energy LLC, an electric power consulting firm in Schenectady, N.Y.
1. R. M. Maliszewski and M. Chau, Application of Probabilistic Transfer Capability Analysis in Transmission System Performance Studies, CIGRÉ paper 38-01, Aug. 28-Sept. 3, 1988.
2. Transmission Transfer Capability, North American Electric Reliability Council, Princeton, N.J., May 1995.
3. Engineering and Reliability Effects of Increased Wheeling and Transmission Access, Edison Electric Institute, Washington, D.C., November 1988.
4. H. M. Merrill, Probabilistic Available Transfer Capability, presented at panel session on risk analysis of available transfer capability, IEEE PES Winter Meeting, Tampa, Fla., Feb. 4, 1998.