Historically, grid operators tapped into voluntary load reduction as a last resort for keeping the lights on. But now, smart grid technologies and dynamic pricing mechanisms bring vastly greater...
The changing architecture of demand response in America.
in good measure (see Figure 6). The results are much sharper than in any of the previous figures. Unfortunately, this amount of variation is still much too large for policy work.
One way to resolve the variation in the data is to use a consistent model that combines the information across all pilots. Such a model doesn’t currently exist. It might be something that DOE, as the issuer of stimulus grants, should consider pursuing in the future, especially as data becomes available from the next generation of pilots being funded by its grants. In the near term, analysts are relying on the Pricing Impact Simulation Model (PRISM) that was developed in California’s statewide pricing pilot.14
PRISM was the major analytical tool used in the FERC project. But before it could be used, a validation exercise was carried out. A key input into PRISM is the saturation of central air conditioning systems. Other inputs include the existing rate, the new rate and weather conditions. When these data were fed into PRISM for a subset of the pilots and the results were compared with the pilots’ own estimates, PRISM was found to over-estimate DR in hot and humid climates by about 20 percent. Thus, a decision was made in the FERC project to use PRISM as-is for states west of the Rockies and to apply a 20-percent downward adjustment to PRISM’s estimates for states east of the Rockies.
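The regional adjustment rule can be sketched in a few lines. This is a hypothetical illustration of the logic described above, not code from PRISM or the FERC project; the function name and inputs are invented for the example.

```python
# Illustrative sketch of the FERC project's regional adjustment rule:
# PRISM's demand-response estimate is used as-is for states west of the
# Rockies and scaled down by 20 percent for states east of the Rockies.
# The function and its inputs are hypothetical, not part of PRISM itself.

EASTERN_ADJUSTMENT = 0.80  # 20-percent downward adjustment

def adjusted_dr_estimate(prism_estimate_pct: float, east_of_rockies: bool) -> float:
    """Return the DR impact estimate after the regional adjustment."""
    if east_of_rockies:
        return prism_estimate_pct * EASTERN_ADJUSTMENT
    return prism_estimate_pct

# A raw PRISM estimate of a 10-percent peak reduction becomes 8 percent
# in a hot, humid eastern state, but is left unchanged in a western state:
print(adjusted_dr_estimate(10.0, east_of_rockies=True))   # 8.0
print(adjusted_dr_estimate(10.0, east_of_rockies=False))  # 10.0
```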
Rating the Experiments
Individual utilities and states that have carried out their own pilots are likely to use their own results for estimating the costs and benefits of full-scale deployment of AMI and dynamic pricing. The caveat is that the pilots shouldn’t suffer from intrinsic design problems that compromise their internal or external validity.
For example, pilots that didn’t feature random selection of participants likely will suffer from self-selection bias. If they feature valid control groups, they may be able to establish a valid cause-and-effect relationship within the pilot, saying that a particular dynamic pricing rate lowered the peak demand of treatment group customers by X percent relative to control group customers. But this result might not be applicable to the population outside of the sample.
Pilots with too few participants can’t yield statistically valid results. For example, the widely cited Olympic Peninsula pilot in the Pacific Northwest featured just over 100 customers. It was a great demonstration of new technology in the form of price-sensitive smart appliances, and three different rate designs were tested. But can the savings estimates attributed to these technology-enabled rate designs be used outside the pilot? Most econometricians would demur. To draw valid inferences, a study should include about 100 to 150 customers in a treatment cell comprising a single rate-and-technology combination, and another 100 to 150 customers in a control group.
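A standard two-sample power calculation shows why sample sizes in this range are needed. The sketch below uses the normal approximation with conventional choices (5-percent two-sided significance, 80-percent power); the standardized effect size is an assumption picked for illustration, not a figure from any of the pilots.

```python
import math

# Illustrative two-sample power calculation (normal approximation) showing
# why roughly 100 to 150 customers per treatment cell and per control
# group are needed. The significance level, power, and effect size are
# conventional assumptions chosen for illustration, not pilot data.

Z_ALPHA = 1.96  # critical value for a two-sided 5% significance level
Z_BETA = 0.84   # critical value for 80% statistical power

def customers_per_group(effect_size: float) -> int:
    """Minimum customers per group to detect a standardized effect size
    (difference in peak-demand reduction divided by its std deviation)."""
    n = 2.0 * (Z_ALPHA + Z_BETA) ** 2 / effect_size ** 2
    return math.ceil(n)

# Detecting a moderate standardized effect of 0.4 requires on the order
# of 100 customers in each of the treatment and control groups:
print(customers_per_group(0.4))
```

Smaller effects drive the requirement up quickly, which is why a pilot of barely 100 customers spread across three rate designs cannot support population-level inferences.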
Some pilots have featured insufficient price variation. The classic example is Puget Sound Energy’s pilot, which ran during the 2000 to ’01 period. The peak-period price was only 15 percent higher than the standard rate, and the