Going Smart at Scale

Your smart grid rollout should go live everywhere, right from the start.

Fortnightly Magazine - January 2016

Most utilities will conduct a cost-benefit analysis before making any major investment in the grid. For a commitment of this magnitude - a smart grid project, for example - that analysis should recognize two key strategies.

First, any smart grid rollout will gain the greatest benefits if applied at scale right from the start, to the maximum number of feeders, if not all of them. Second, each smart grid application delivers a different benefit, such as cost reduction, improved reliability, better power quality, or enhanced renewable deployment. The whole exceeds the sum of the parts. Thus, the accumulated benefits of layering multiple smart grid technologies will only strengthen the justification analysis. And the sooner these applications can be implemented, the greater the benefits to be gained.

Yet utilities typically fail to exploit either of these two strategies. Instead, they usually implement smart grid automation according to a very different, and faulty, set of guidelines - much to their disadvantage.

First, utilities typically will seek to minimize complexity, treating a primary smart grid technology as a solo deployment aimed at meeting a single business objective, such as improved reliability through self-healing, cost reduction through loss minimization, or greater renewable deployment. Unfortunately, the payback doesn't keep pace with the investment. Nor do the engineers gain experience with how the technologies interact.

Second, to minimize the cost of the evaluation, utilities often will select a limited pilot area, meant to represent all feeders, over which the anticipated results can be assessed. However, given the myriad feeder configurations and load types, it may not be possible to find a pilot area diverse enough to evaluate all the conditions that could be encountered across the network.

Third, a small core staff often is trained to support and maintain the new automation. Their job is to "own" its implementation and to perform the evaluation, but at the cost of project delay. The support staff's engagement may be lengthy, since the pilot is deployed on operational feeders. It could take years before an extreme fault or weather condition generates the meaningful data needed to evaluate the technology's performance under stress.

Fourth, utilities typically will enable a new automation function only one feeder at a time, such that the benefits will likely fall short of promised effectiveness.

The drawbacks to such a strategy are considerable, not the least of which involve cost and delay. The ability to layer smart grid applications with multiple operational objectives is not just an issue of economic justification, it is critical to delivering quality power.

A better approach would run as follows:

  • Maximize automation across technologies and networks.
  • Expand the scope of deployment beyond a narrow pilot area.
  • Use simulators to minimize involvement of personnel.
  • Go live now, and everywhere.

For example, fully automated, self-healing solutions solve one problem but can sometimes create others. The sudden transfer of unfaulted loads from one feeder to another can result in voltage problems on both feeders. Self-healing applications layered with Volt/VAR control can more effectively manage the problems created by network reconfiguration, whether that reconfiguration is fault-driven or planned.

If the layered applications do not coordinate effectively with one another, the system isn't very smart. The applications must coordinate to compensate for each other, while each achieves its prime objective.
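As a rough illustration of that layering, the sketch below (written in Python, with purely illustrative feeder data, voltage limits, and device behavior, none of it drawn from an actual deployment) shows a self-healing transfer handing off to a Volt/VAR routine that pulls the backup feeder's voltage back into its service band.

    # Hypothetical sketch only: feeder states, limits, and device effects are illustrative.
    ANSI_LOW, ANSI_HIGH = 0.95, 1.05      # per-unit service voltage band

    feeders = {
        "F1": {"voltage_pu": 1.02, "capacitor_on": False},
        "F2": {"voltage_pu": 1.01, "capacitor_on": False},
    }

    def self_heal_transfer(faulted, backup, transferred_load_pu):
        """Isolate the fault and pick up the unfaulted load from the backup feeder."""
        # Picking up extra load depresses voltage on the backup feeder (crude proxy).
        feeders[backup]["voltage_pu"] -= 0.04 * transferred_load_pu
        print(f"Transferred {transferred_load_pu:.1f} pu of load from {faulted} to {backup}")

    def volt_var_correct(feeder_id):
        """Re-dispatch reactive support so the post-transfer voltage stays in band."""
        f = feeders[feeder_id]
        if f["voltage_pu"] < ANSI_LOW and not f["capacitor_on"]:
            f["capacitor_on"] = True       # switch in a capacitor bank
            f["voltage_pu"] += 0.03        # illustrative voltage recovery
        elif f["voltage_pu"] > ANSI_HIGH and f["capacitor_on"]:
            f["capacitor_on"] = False      # switch the bank back out
            f["voltage_pu"] -= 0.03
        print(f"{feeder_id}: voltage {f['voltage_pu']:.3f} pu")

    # Layered operation: reconfigure first, then let Volt/VAR compensate on both feeders.
    self_heal_transfer("F1", "F2", transferred_load_pu=2.0)
    for fid in feeders:
        volt_var_correct(fid)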

Of all the considerations, one is often left unanswered: "Is it practical?" We know it will work. Guided by engineering focus, supported by state-of-the-art control and communications infrastructure, and optimized by advanced closed-loop control algorithms, an investment in a pilot implementation will perform as expected. But does that success mean the pilot project is sustainable and affordable for the entire network? Likely, the cost of the needed infrastructure will not prove justifiable for all circuits.

Furthermore, does the time required to upgrade the network fit within the short horizon of the economic payback? These questions must be considered.

A Better Rollout

For utilities with very large feeder networks, deployments of an ADMS (an advanced distribution management system that integrates distribution management and outage management) present the unique problem of adaptation to scale. Outage management systems historically have not faced this problem, since there is no infrastructure to install - just a connectivity model to maintain. But an ADMS adds a new level of complexity and cost, beginning with the need to respond to network changes in real time.

In short, ADMS requires a better plan for implementation - one that overcomes the deficiencies of the typical approach. A better process might run as follows:

Maximize Automation. Rather than employing a single-technology approach, utilities should focus their resources on understanding the synergistic interaction among a suite of advanced applications and the combined potential of those applications to improve network operations. Of course, simplification is fine if it eliminates the time and distraction of building out the supporting infrastructure. But the infrastructure should not define the technology that can be deployed. Rather, the technology to be implemented should define the infrastructure.

Moreover, a simulator for the power system network will greatly reduce the cost, time, and complexity of building out network infrastructure during the pilot phase. The simulated network infrastructure lets the new suite of automation applications run unhindered, and it gives the utility an in-depth understanding of the interaction between the integrated technologies with minimal investment.
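One way to picture this is an abstract telemetry layer that the automation applications talk to, with the simulator standing in for field devices during evaluation. The Python sketch below is a minimal illustration of that idea; the class, node, and switch names are hypothetical, not any vendor's actual interface.

    # Hypothetical sketch: the automation suite reads and controls through an abstract
    # telemetry interface, so a simulator can stand in for field RTUs during evaluation.
    from abc import ABC, abstractmethod

    class TelemetrySource(ABC):
        @abstractmethod
        def read_voltage(self, node: str) -> float: ...

        @abstractmethod
        def operate_switch(self, switch: str, close: bool) -> None: ...

    class SimulatedNetwork(TelemetrySource):
        """Drives the applications from a network model instead of field devices."""
        def __init__(self, voltages):
            self.voltages = voltages               # node name -> per-unit voltage
        def read_voltage(self, node):
            return self.voltages[node]
        def operate_switch(self, switch, close):
            print(f"[SIM] switch {switch} {'closed' if close else 'opened'}")

    def self_healing_app(telemetry: TelemetrySource):
        """The application logic is identical whether its telemetry is real or simulated."""
        if telemetry.read_voltage("N12") < 0.90:   # crude loss-of-voltage indication
            telemetry.operate_switch("TIE-7", close=True)

    self_healing_app(SimulatedNetwork({"N12": 0.0}))   # de-energized node -> tie closes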

Expand Scope. Rather than implementing applications over a narrow pilot area, the utility can use the simulator to model the full variety of feeder types under all network conditions. And because the simulator's network model is the same engineering model used in production, the simulation also serves to validate the proposed model.

The load data that drives the model is archived data, collected from the substations over each seasonal period, so every day type can be simulated at a fraction of the time and cost. The results of the 'before' and 'after' simulations offer indisputable justification of the business case, under normal and abnormal operating conditions, before the utility commits to an investment in costly field work.
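A 'before and after' study of this kind might look, in spirit, something like the sketch below, which replays illustrative seasonal load profiles through a crude loss model. The profiles and loss factors are placeholders, not figures from any actual network or from this article.

    # Hypothetical sketch: replay archived (here, fabricated) seasonal load profiles
    # and compare losses with and without Volt/VAR optimization enabled.
    archived_load_mw = {                   # one representative day per season, hourly MW
        "summer_peak": [40 + 15 * (1 - abs(12 - h) / 12) for h in range(24)],
        "winter_peak": [35 + 10 * (1 - abs(12 - h) / 12) for h in range(24)],
    }

    def daily_losses_mwh(load_profile, volt_var_enabled):
        """Very rough proxy: losses scale with load squared; Volt/VAR trims the loss factor."""
        factor = 0.015 if volt_var_enabled else 0.018   # illustrative loss factors
        return sum(factor * mw * mw / 50 for mw in load_profile)

    for season, profile in archived_load_mw.items():
        before = daily_losses_mwh(profile, volt_var_enabled=False)
        after = daily_losses_mwh(profile, volt_var_enabled=True)
        print(f"{season}: {before:.1f} MWh -> {after:.1f} MWh "
              f"({100 * (before - after) / before:.0f}% reduction)")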

Minimize Personnel. A simulator offers several advantages with respect to personnel. First, the simulator provides a powerful training tool, able to create different scenarios for operator and engineer training. Second, unlike an operational pilot system, a simulation-based evaluation and discovery phase does not require a fully trained 24x7 maintenance staff. And third, engineers can evaluate the impact of the new automation technology on the network under various scenarios, with a simulator that can digest months of historical data in mere hours.

Go Live, Everywhere. The biggest problem with implementing advanced grid automation is the cost and time required to install the supporting infrastructure. This can take years, particularly if a significant portion of the network must be extended with automation in order to realize the benefits needed for cost justification.

The goal of engaging the maximum number of feeders with smart grid technology remains the key to achieving payback. The new technologies can deliver their full benefit to all feeders immediately if traditional telemetry and control are supplemented with information gained by integrating crews into the applications' solutions. This is the Human Grid. Crews are equipped with hand-held mobile devices that are directly integrated with the smart grid solution algorithms. The only difference between this approach and full traditional supervisory control and telemetry is speed. The quality of the solution is the same.

The automation applications provide a toolset that enables the control center to engage the crews with integrated commands at minimal overhead. The real-time network map, network analysis, and tags are available to the crews, overlaid on a Google map in a common browser. Crew input from the mobile device is uploaded to the control center operator, who can view it on a geographical user interface just as he would telemetry. Crews are automatically assigned switching plans from self-healing fault isolation and restoration automation, along with outage ticket assignments, work order plans, and network survey assessments, as the control center follows their movements and progress. Outage codes, failure codes, repair codes, and estimated times of restoration are no longer communicated verbally to the operator for manual entry.
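What that data flow might look like underneath is suggested, hypothetically, by the sketch below: a switching plan pushed to a crew's device, with completion and outage codes flowing straight back into the operator's view. The field names and codes are illustrative, not an actual ADMS or mobile interface.

    # Hypothetical sketch: a crew assignment pushed to a mobile device, with step
    # confirmations and outage codes returned automatically instead of verbally.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SwitchStep:
        device: str            # e.g. "SW-221"
        action: str            # "open" or "close"
        done: bool = False

    @dataclass
    class CrewAssignment:
        crew_id: str
        ticket: str
        steps: List[SwitchStep]
        outage_code: str = ""
        est_restoration: str = ""

    def crew_confirms(assignment, device, outage_code=""):
        """A confirmation from the mobile device updates the operator's view directly."""
        for step in assignment.steps:
            if step.device == device:
                step.done = True
        if outage_code:
            assignment.outage_code = outage_code    # no phone call, no manual entry

    plan = CrewAssignment(
        crew_id="CREW-07",
        ticket="OUT-114",
        steps=[SwitchStep("SW-221", "open"), SwitchStep("TIE-7", "close")],
    )
    crew_confirms(plan, "SW-221", outage_code="EQUIP-FAIL")
    print(plan)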

Furthermore, the public is likewise engaged: reporting outages and street lights out, complying with demand response and load control, even submitting meter readings. The public gains an unprecedented value proposition in being able to manage usage, budgets, and everything else related to their interaction with the utility.