Data gathering and controllability offer the quickest path to reliability.
Managing power grids in North America has become much more complicated in recent years, and that complexity grows with each passing day.
For example, wholesale power trading has caused grid operators to re-think the way they monitor system functions and manage reliability. Now they must account for power flows in and out of control territories and reliability regions. Increased demands for service-quality guarantees and enhanced services are stretching utilities' capabilities to the limits. And in the post-9/11 world, grid operators must coordinate and communicate better than ever before.
The August 2003 blackout taught the industry some valuable lessons about what can go wrong when a system loses stability, and it pointed engineers toward solutions that would strengthen the system's survivability in the future. At the same time, these solutions promise to increase efficiencies and bring additional benefits.
Nevertheless, coping with all the added demands and complexities poses a unique set of challenges and opportunities for utilities. The companies that seize the opportunities and make them work in favor of their stakeholders will prove to be the industry's leaders in the 21st century.
To explore the direction the industry is heading, we arranged interviews with three executives who offer very different, yet complementary, perspectives on the current state and future of grid-management and information technology (IT) systems in the utility industry.
GridWise: Teaching the Network to Dance
Efforts to improve the utility industry's reliability standards are proceeding on multiple tracks at the same time. One of these tracks is being followed by the Department of Energy's Pacific Northwest National Laboratory (PNNL) in consortium with a group of utilities and industry vendors. To learn about this effort, we spoke with Rob Pratt, PNNL's program manager for the GridWise Alliance.
Fortnightly: What is the GridWise Alliance?
Pratt: To summarize, it is a vision for the future of the power grid, shared by two groups of people. One is a new Department of Energy (DOE) program, and the other is an industry alliance. Both efforts share a vision in which information technology utterly transforms the way in which we plan and operate the power grid, and the groups are dedicated to achieving it sooner rather than later, and with more public benefit.
The alliance includes a variety of different companies and organizations, like BPA, PJM, IBM, AEP, and Con Edison. There are also a lot of high-tech startups involved.
Fortnightly: What is the Alliance doing?
Pratt: Alliance members gather together to try to identify regulatory barriers and help knock them down. We also focus on consensus building for communications protocols and architecture.
One of the basic core directions of GridWise is a focus on the responsibility for maintaining stability and serving peak demand. The responsibility for keeping the lights on will evolve over time to be a shared responsibility between consumers and other parties, including energy service companies that will aggregate demand and perhaps install distributed generation sources.
The opportunity is to increase asset utilization. If, on the margin, we can reduce the need for new infrastructure by squeezing more through existing infrastructure, everyone will be better off, including utilities.
Fortnightly: How will GridWise accomplish that?
Pratt: We are trying to spur investment in technologies that connect demand-response technology to distribution systems, such as substations and feeders. We can displace distribution infrastructure in very localized areas if we can communicate values about power flow among all levels of the system.
An example that we talk about a lot is the grid-friendly appliance. White-goods appliances represent about 20 percent of the peak load at any time. Instead of being a burden on the grid when times get tough, a grid-friendly appliance actually helps the grid. It has a chip in it that can measure power frequency very accurately. When the grid is having a disturbance, the frequency drops as all those rotating generators slow down to make up the power deficit.
We can teach appliances to recognize this frequency shift on their own and turn off for a few minutes. They can do this much faster than you can bring power plants online to make up the deficit. This gives the grid a soft landing when it hits these deficits. It is also a cheaper resource than having plants on standby, and it takes appliances out of the problem and makes them part of the solution.
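The underfrequency response Pratt describes can be sketched in a few lines of code. This is a hedged illustration only: the trip threshold, hold-off time, and function names are assumptions for the sketch, not a published grid-friendly appliance specification.

```python
# Illustrative grid-friendly appliance logic. The trip threshold and
# hold-off time are assumptions for this sketch, not a real standard.
NOMINAL_HZ = 60.0
TRIP_HZ = 59.95          # assumed underfrequency trigger
HOLD_OFF_SECONDS = 120   # shed the load for about two minutes

def compressor_state(measured_hz, seconds_since_trip=None):
    """Decide whether the compressor (not the light) should run."""
    if seconds_since_trip is not None and seconds_since_trip < HOLD_OFF_SECONDS:
        return "off"     # still riding out the disturbance
    if measured_hz < TRIP_HZ:
        return "off"     # frequency sag signals a power deficit
    return "on"
```

Because the chip reads frequency locally, no communication network is required: every appliance reacts on its own, far faster than a standby plant can be brought online.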
Fortnightly: How could that work? I don't imagine consumers would want their refrigerators turning off without their control.
Pratt: We're trying to make this a no-harm, no-foul technology. Appliances like dishwashers, refrigerators, clothes dryers, air conditioners, and electric heaters go on and off all the time. If you are clever about it, and you turn it off for two or three minutes, no one will notice. The chips will be built into the appliance, so the light wouldn't go off in the refrigerator, just the compressor.
Fortnightly: How would this technology work in a blackout?
Pratt: When there is a blackout, one of the hardest parts to manage is the fact that every appliance is thirsty for power. The grid operator is trying to get the grid back up and not quite making it because of the huge demand from all the loads, which effectively are trying to pull it back down again. If we create a ladder from the least-important to most-important appliance, then we can kick off appliances according to their end-use category. This could allow the grid operator to intentionally leave the frequency just below 60 Hz to signal appliances to turn off, allowing the grid to stabilize.
In other words, we can create a poor man's end-use grey-out that will replace the need for rolling blackouts.
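The "ladder" idea can be made concrete with a small sketch. The end-use categories and per-category thresholds below are invented for illustration; the point is only that holding frequency slightly below 60 Hz sheds the least-important loads first.

```python
# Illustrative grey-out ladder: each end-use category gets its own
# underfrequency threshold, so loads shed in order of importance.
# Categories and thresholds are assumptions for this sketch.
LADDER = [
    ("water_heater",  59.98),  # least important: sheds first
    ("clothes_dryer", 59.96),
    ("ac_compressor", 59.94),
    ("refrigerator",  59.90),  # most important: sheds last
]

def shed_now(measured_hz):
    """End uses that should switch off at the measured frequency."""
    return [name for name, trip_hz in LADDER if measured_hz < trip_hz]
```

Holding the system at, say, 59.95 Hz would then grey out only the top rungs of the ladder instead of rolling whole feeders off.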
But this technology isn't just useful for reliability management. If appliances have brains inside them, we can begin to manage loads better than we've ever done before. For example, right now your refrigerator is as dumb as a stone. It will go through its defrost cycle, which is when it uses the most electricity, at a random time, even during peak hours on hot afternoons. At any point in time, 7 percent of the aggregate residential refrigerator load is being created by defrost cycles. There's no reason to do that on peak, but the refrigerator is just too dumb to know that. It doesn't even have a clock.
You don't even need connectivity to reduce the peak refrigerator load by 7 percent. All you need is a battery-backup clock that tells refrigerators not to defrost during peak periods.
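A minimal sketch of that clock-only fix, assuming an illustrative 2 p.m. to 7 p.m. peak window and a safety interval so defrost is never deferred indefinitely:

```python
# Clock-only defrost deferral: no connectivity required.
# The peak window and maximum interval are assumptions for this sketch.
PEAK_HOURS = range(14, 19)   # assumed peak window: 2 p.m. to 7 p.m.

def may_defrost(hour, hours_since_last_defrost, max_interval=24):
    """Allow defrost outside the peak window, or whenever overdue."""
    if hours_since_last_defrost >= max_interval:
        return True          # safety valve: never defer forever
    return hour not in PEAK_HOURS
```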
Fortnightly: This sounds like a good idea that will be difficult to implement. How are manufacturers and utilities responding to it?
Pratt: It's an idea that is foreign to both groups. Bringing them to the table has been a challenge, but we've been engaged for a couple of years and have made quite a bit of progress. We're working with one manufacturer quite closely, and we are starting a demonstration of the technology in the Pacific Northwest. An appliance manufacturer will build integral control capabilities into appliances and deploy them in places on the grid where we can measure the response of the appliances.

Manufacturers are happy to put them in, but they need someone to pay them to do it. We don't want consumers making that decision, because they don't care about energy efficiency. You can appeal to patriotic goodwill, but it's far more effective to just get it built into every appliance sold in America. One way to accomplish that would be brute force: pass a law and make it part of efficiency standards. But that's not the way the world works anymore. So the real trick is getting grid operators to pay for having it installed in appliances.
The grid operators-the ISOs and utilities-are the ones who benefit. They gain by having a grid that is easier to control and not needing to have as many power plants on standby.
Utilities will go along if we can get regulators to cooperate. Utilities need to get credit for making these kinds of investments just as if it were a power plant. If it is equally prudent, they deserve to earn a return on it. If not, we take away the benefits of innovation.
Fortnightly: What else are you working on in the GridWise Alliance?
Pratt: The grid-friendly appliance is just one example of how an information technology can flip something completely on its head. There's a tipping point where transformative technologies and regulatory policies will change the paradigm of how we operate the power grid.
One example we are looking at is transactive control. Process control and energy management systems in large commercial or industrial facilities don't take the cost of energy into account. They have closed-loop control algorithms. In a commercial building where you are cooling certain spaces, the thermostat never bothers to explain to the energy management system how badly it needs cooling. It just whines until it gets it. If you can teach those processes to bid for services, you can change the way the control system behaves, and price signals will go right through the premises boundary and into the control system. You just need new software, not a whole new control system, to optimize demand against costs.
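A hedged sketch of what "bidding for services" might look like in software. The linear bid curve, zone names, and prices are all illustrative assumptions, not PNNL's actual design:

```python
# Transactive-control sketch: each zone converts its cooling need into
# a price bid, and only zones bidding at or above the current energy
# price are served. The bid curve and numbers are illustrative.
def cooling_bid(zone_temp, setpoint, comfort_band, max_price):
    """Bid rises linearly as the zone drifts above its setpoint."""
    if zone_temp <= setpoint:
        return 0.0
    urgency = min((zone_temp - setpoint) / comfort_band, 1.0)
    return urgency * max_price

def zones_to_cool(bids, price):
    """Serve only the zones willing to pay the going price."""
    return [zone for zone, bid in bids.items() if bid >= price]
```

Instead of "whining until it gets it," each thermostat now states how much its comfort is worth, and the energy management system trades that off against the current price.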
Another example that will change the power grid in an important way is creating the means to differentiate power reliability and quality between one customer and his next-door neighbor.
There is a lot of buzz in the industry about differentiating service quality for customers. It sounds good, but a lot of reliability and quality problems are very localized. When you have a radial distribution system, there is only one way for power to flow: namely, downhill. If there's a power pole knocked out, the whole area downstream is affected.
Not every customer must have higher reliability, but manufacturers are demanding it. We can't afford to gold-plate the power system with completely redundant capacity, so we need another way to get power online for customers inside an outage area. With distribution automation, the utility can back-feed power up a feeder and get power to a priority customer from a different substation. Based on geography and load, there may not be enough power available. But if you have some customers with interruptible contracts, you can borrow their capacity. Alternatively, you can use distributed generation to pick up that load, or you can grey-out only a portion of the load-with grid-friendly appliances-to make room for those paying for top-quality service.
If we can deploy clever software, microchips, and control systems, we can save 10 percent of the need for new power plants and T&D infrastructure. Over the course of 20 years, that totals $80 billion, including the benefit of having a system that is more responsive and easier to control.
That, in a nutshell, is the opportunity before us.
Fortnightly: What regulatory structures are needed to make this work?
Pratt: The struggle becomes a regulatory one. If there's $100 billion on the table, who gets it? That becomes a critical issue for state regulators. We will work to make sure utilities are at least made whole, and preferably more than whole.
We're talking with NARUC (the National Association of Regulatory Utility Commissioners) and individual PUCs. There is a great deal of excitement in some circles, but some caution in others.
But fundamentally, information technology has changed the way we do business. It's inevitable that the same wave is coming to the power grid. The value proposition of GridWise says, let's go surf on that wave.
IntelliGrid: Capturing A Flood in a Teacup
The Electric Power Research Institute (EPRI) stands at the front line of the utility industry's grid-management and reliability developments. To learn how EPRI envisions the industry's future, and how that vision might be realized, we interviewed Dejan Sobajic, director of grid reliability and power markets for EPRI in Palo Alto, Calif.
Fortnightly: What is EPRI's IntelliGrid program all about?
Sobajic: IntelliGrid is the word that stands for the power system of the future.
When we look back 30 years, we see the beginning of computers being used in control centers. In the late 1960s and early '70s they were used for the first time to help grid operators with daily tasks. At that time, information technology (IT), generally speaking, was way behind the concepts that people were proposing about how a smart system should operate. They were looking from a high-level point of view about what would be the right thing to do, conceptually, to develop the system.
They introduced the state-based management concept, which categorized the system as being in a normal, emergency, or restorative state. Each state defined what the operator had to do; maybe 80 percent of the time the system was in a normal state, and 20 percent or less of the time it was having problems. The idea was very powerful and is still valid today, but the IT was way behind it. Most of it couldn't be implemented, and things advanced only gradually over the next 30 years.
Today we find ourselves in the inverse situation. The concepts I described are more or less unchanged, but IT has grown enormously, and we now have computers capable of handling many things amazingly fast. Data gathering from substations, generating stations, and the grid in general is becoming possible on a massive scale. A digital relay has internal buffers that can store data about the last five minutes of operation, and it can sample conditions at a millisecond rate. Extremely high-resolution data is now available for any quantity you want. So more things can be done, but they aren't being done. That is the motivation for the work we are doing right now: to use everything IT has to offer.
Fortnightly: How will the IntelliGrid project accomplish that?
Sobajic: With all this data becoming available, we have to re-think how we gather it and use it to assist functions. EPRI took the first step and conducted a study, and IntelliGrid is the result of that.
The IntelliGrid architecture is the foundation. It has to pick up all the data available, and this data might evolve into something that is not just electrical quantities, but other things like video or sound. Thirty years from now we don't want to say, "If we'd only thought of this…." It's a far-out system design that carries you as far as the imagination can go.
That part is behind us, though it will be refined as we go forward.
Next we will focus on system models. These are simulated environments where we can model different operations and controls. The big challenge is to make these models run closer to real time, and to calibrate them so they look more like the real system. For example, one model is a load model. It could be one factory or an entire region. To simulate the load of the city of San Francisco, for example, the program represents it as a single impedance value. But one number cannot capture a load of that complexity. It has dynamic components, and we need to reflect that with greater fidelity.
The third area of IntelliGrid is wide-area operation, monitoring, and control.
Historically, the power system was operated on the basis of control areas. These were 138 individual geographic territories, each with certain power lines coming in and out. The operator sits in the middle, metering what goes in and out at the boundaries and trying to keep the system in a normal state at all times, if possible.
When deregulation started to happen, we saw traders doing transactions that didn't always start and end in one control area. A transaction could cut across dozens of control areas, and the IT wasn't capable of keeping up with these transactions. The overhead was enormous, and operators realized they didn't know what traffic was taking place on their systems. This started operators thinking that we needed to elevate coordination of reliability to a higher level than the control-area operator. The control areas still have their functions, but someone should be looking at it from a higher level.
So now we have 18 reliability coordinators in the United States and Canada, covering the entire system. They can see the impact of trades on a wider area. This could go across an entire interconnection, but we don't have the ability to do that yet.
[After 9/11] the Department of Homeland Security (DHS) called us and said they want an overall big-picture view of the health of the U.S. power system. Translating that into the "state" diagram from 30 years ago, they want to know if the system is normal. If not, is there something DHS should do about it? That is one of the goals-to have a higher level view, a big map that shows the health of the interconnected system, in the next 10 to 20 years.
Fortnightly: What new technologies are needed to accomplish that?
Sobajic: We are trying to build a whole domain of applications inspired by biological systems. If you look at the grid as a whole, you can design a system that will absorb and process information about conditions in the grid and conclude whether some hypothetical deficiency will affect the system. A system like this exists in most animal species as a common brain function. Fear responses are innate processes, often generated in a gland, that operate on the same principle, like an early warning system. With today's designs and technologies, we can develop a rough copy of that.
What it requires is the ability to take an overwhelming amount of operational data and compress it into simple English and visualizations so the operator knows what must be done. During an emergency, it's even more important that the information is simplified in this way, because an operator under stress is even more likely to make mistakes.
We'll be working on this area over the next 10 years.
Fortnightly: This sounds simple, but I'd guess it's not. Is the technical know-how adequate for this task?
Sobajic: From an R&D perspective, it is not a small task. And if you go to the universities and see what areas Ph.D. scientists are specializing in, they are not specializing in visualization and modeling as much as they are advanced mathematics, chemistry, and physics. There are a few, but not enough by far. We need help in that regard.
Fortnightly: What other challenges are you facing?
Sobajic: We are observing a disconnect today between business models of energy trading and the physical reality of the system, specifically with regard to reactive power. Most market models we have today are compensating providers for generating active power, but not reactive power.
The system cannot work if you shut down the reactive power flows. But the way we account for power flows in the system leaves us with a gap. Reactive power is not just being overlooked, but the importance of what it brings to the system is being underestimated.
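The gap Sobajic describes has a simple physical basis: apparent power, active power, and reactive power are related by the standard power triangle, S² = P² + Q². A quick numeric illustration (the ratings below are made up for the example):

```python
# Power-triangle arithmetic (S^2 = P^2 + Q^2): every megavar of
# reactive power a generator supplies consumes apparent-power capacity
# that could otherwise carry the active power markets pay for.
import math

def max_active_power(rating_mva, reactive_mvar):
    """Active-power headroom left after a reactive commitment."""
    return math.sqrt(rating_mva**2 - reactive_mvar**2)
```

A 100-MVA machine asked for 60 Mvar of reactive support can sell only 80 MW of active power, yet under most market designs it is paid for the 80 MW and nothing for the 60 Mvar.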
Some work needs to be done that will change the way the markets are conceptualized. It may not be technically challenging, because we already have the technical solutions, but it requires aligning the business models with the physical realities of the system. Markets were designed to manage active power and ancillary services, which are perceived as being a local phenomenon. Markets are presently preoccupied with flows of active power, because that's what pays. More megawatts mean more money. But a much more effective way to operate the system would be to coordinate reactive power flows as well as active power flows.
As the system is operated now, at times when we are having problems, we can't switch on reactive power as fast as we need. We saw that happen during the August 2003 blackout, and one conclusion of the joint U.S.-Canadian report on the blackout was that coordination of reactive power should be put on the front burner. We strongly support these findings.
Reactive power will have to play a bigger role in planning and daily management, and we need to take a fresh look at how reactive power and voltage are related, from an overall system perspective. We need to elevate business thinking so that reactive power is acknowledged as a player, companies are compensated for putting reactive power into the system, and we can coordinate active and reactive power flows. This will improve reliability and make the system more robust. Toward that end, EPRI has started a global initiative around controlling reactive power. We'll leave the power market design and compensation to others, but if we want a robust system, active and reactive power should be managed together.
NRTC: Co-ops, Connectivity, and the Wild Blue Yonder
A large share of the distribution lines in the United States are owned and operated by rural electric cooperatives. To learn about the grid IT and communications priorities of co-ops, we spoke with Steven E. Collier, vice president of emerging technologies for the National Rural Telecommunications Cooperative (NRTC), based in Herndon, Va. The NRTC helps telecom and electric cooperatives to develop and implement IT and telecom services.
Fortnightly: What are electric cooperatives doing, vis-à-vis grid management, communications, and information technologies?
Collier: I don't see a lot of brand-new stuff going on. I see our members continuing to expand AMR (automated meter reading), primarily using power-line carrier (PLC) technologies. We see more distribution systems putting in some level of SCADA (supervisory control and data acquisition). Typically, it's not the big centralized system polling a lot of endpoints, but a smart SCADA that is event-driven.
Some co-ops are combining AMR and SCADA, using AMR technologies from Turtle, DCSI, Hunt, and Cannon that can pull voltage data and get more distributed information.
We also see increasing use of the Internet for various things. We are working on an Internet-based software-defined radio for dispatch. It uses the Internet as a switch to allow radios on three different frequencies to communicate to each other. It's useful for work orders, vehicle location, and those kinds of things. Also a few members are doing Internet-based workforce management and automatic vehicle location, but it is a small minority. Everyone is interested, but few have implemented it to any extent.
We're doing a joint project with NRECA (the National Rural Electric Cooperatives Association). We're co-sponsors with them on the Cooperative Research Network, which this year is doing a project to determine what co-ops could do with Internet connectivity if it were available throughout their distribution areas. We're looking at what options are available, in terms of automation and control.
Fortnightly: How are co-ops justifying telecom and IT investments to their members?
Collier: Primarily, it's about having better information to improve reliability and service quality. Utilities can better understand what's going on in their distribution system, respond quicker to outages, and get better information for billing. It's about operating the system more efficiently, economically, and reliably.
We have a broad diversity among co-ops. A couple hundred of them are doing nothing about automation. Some are very small and have no money. But others, like New Enterprise (Rural Electric Association) in Pennsylvania, with 14 employees and 3,300 meters, are tiny but have decided that if you are going to operate in the 21st century, there is a minimum amount of automation capabilities you must have.
Utilities are still using 100-year-old technology to generate, transmit and even switch and meter electricity. This is consistent with the story that [former EPRI Chairman] Kurt Yeager has been telling around the country since the [August 2003] blackout. Reliability and returns on investment are declining, while costs are rising. Something has to be done, and he thinks automation is the answer. We need 21st century monitoring and control of our T&D system. That's the only way to avoid reliability issues.
Fortnightly: After 9/11, communications have become a big deal as part of security and disaster-recovery plans. How are co-ops responding to that?
Collier: Our members are spending a lot more time and attention on how they would restore service in a post-blackout situation, or in an emergency or terrorist scare. How would they operate in disaster-recovery mode? What are their system-monitoring and control protocols?
The RUS (Rural Utilities Service) has issued a rule that requires a co-op that comes in for a federal loan guarantee to have a documented disaster-recovery plan before they can draw funds. That is occupying a lot of time for our members.
Fortnightly: Are co-ops focusing on regional coordination and reliability management?
Collier: Not much, in terms of getting together and asking how we can centrally organize. They see themselves as the tail at the end of the dog. They are out on the edge of the system. What happens in New York City doesn't affect them much, and vice versa.
The only exceptions are in regions like North Carolina, where you have a single wholesale supplier that serves all of the co-ops. There's a level of coordination that can occur there, and the same in Michigan, Georgia, and Basin's territory (from Montana and the Dakotas to New Mexico). But even there, co-ops take their power on the edge of the system.
Fortnightly: There's a lot of talk about broadband-over-power-line (BPL) services. Are co-ops considering that, as a way to offer more services to customers?
Collier: They are all super interested in BPL, because they realize rural consumers aren't getting broadband and there is a need for it. Some co-ops are comfortable being the entity that provides it. The other 800 are interested, but they aren't sure how they'd deal with it. They don't have experience with communications deployment, and co-ops generally aren't the first to do anything. They want to use stuff that they know works.
The other issue is that BPL is not ready for prime time, and it is not well suited to sparsely populated areas.
As a general point, utilities have not done well in diversified business ventures. It's funny: they aren't good at running things like ISPs, yet they are very interested in BPL. They seem to think BPL will run itself.
Fortnightly: I thought co-ops were famous for providing diversified services.
Collier: Yes, a small group of co-ops, about 150, are providing non-core services, like propane, HVAC and DirecTV. Some do well at it. Also, we have several members who have built out broadband across their service areas because they believe rural areas need Internet access just like they did electricity. Columbia [Rural Electric Association, in Dayton, Wash.] did it with wireless, because they had a vision of the future. But that's pretty unusual.
The great majority of electric co-ops have not had good luck with non-core services. Most are in the same mode that IOUs are in. Their banks and national trade association are saying, "Stick to your knitting. We beat back the deregulatory demon and we don't have to deal with it, so there's no pressure to do new stuff. Most of those who did are in trouble, so let's avoid telecom altogether."
But there are two camps. One is comfortable with offering non-core services, and the other is allergic to it.
Fortnightly: Some co-ops have been selling satellite TV and broadband, and that's largely a modular type of service they can re-sell. Do you expect more of that?
Collier: Yes. In fact, we have a project called Wild Blue, in partnership with Intelsat and others, that would bring entry-level broadband out to rural areas where you don't see DSL or cable because the population density is too low. NRTC made a $30 million investment over a year ago, along with Liberty and Intelsat, to take control of Wild Blue. That allowed us to give our members the opportunity to participate as retail distributors. About 250 members have signed up.
Fortnightly: Why would that be more successful for co-ops than other broadband service businesses?
Collier: Wild Blue is similar to DirecTV, in that it is a marketing and customer-premise installation and support play. They don't have to build any infrastructure. Someone else does the heavy lifting. But that's also true in co-ops' core business; co-ops generally aren't generating and transmitting power, just distributing it.
Also, one thing that is true about rural electric and telephone cooperatives is they have a very strong local community connection. A lot of telephone co-ops are getting into video over DSL. Where telcos offer this kind of service, they are getting better than 50 percent market penetration, and that's in competition with cable TV and satellite. They are getting this success rate because they are the trusted local provider, and people often hate their local cable providers and don't get the same kind of service they get from Joe at the co-op.
Finally, the main place that co-ops have been making money is satellite TV. If members offer both satellite and Internet on the same dish, take rates double. If they have a 4 percent take rate on satellite TV, then the offer with Wild Blue goes to 8 percent.
This is a market advantage if co-ops are willing to seize it.