Frontiers of Efficiency
What conservation potential assessments tell us about ‘achievable’ efficiency.
economic potential. The range widens to between 3 percent and 31 percent for achievable potential.
The spread of these estimates points to significant uncertainties involved in estimating conservation potential, particularly achievable potential. The variability shows up in the relationship between the average value for each class of potential and its variance. The coefficient of variation (CV), the ratio of the standard deviation to the mean, is a useful statistic for comparing the degree of variation from one data series to another, even if their means differ drastically from each other.5 Estimates of technical potential show a CV of 21 percent, rising to 32 percent for economic potential and more than doubling to 43 percent for achievable potential.
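As a quick illustration of the statistic, the coefficient of variation can be computed directly from a sample of estimates. The figures below are hypothetical placeholders, not the CPA data summarized in this article:

```python
# Coefficient of variation (CV) = standard deviation / mean.
# The sample below is hypothetical, for illustration only; it is not
# the underlying set of CPA estimates discussed in the text.
from statistics import mean, stdev

def cv(series):
    """Return the coefficient of variation as a fraction of the mean."""
    return stdev(series) / mean(series)

technical_pct = [18.0, 22.0, 25.0, 20.0, 27.0, 23.0]  # hypothetical % savings estimates

print(f"CV = {cv(technical_pct):.0%}")
```

Because the CV is dimensionless, it allows the spread of technical, economic, and achievable estimates to be compared even though their average levels differ.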
The variation in estimates of technical potential can largely be explained by differences in technical assumptions, geography (climate), and the characteristics of the energy markets where the studies are performed. Economic potential also is expected to vary across markets because of differences in avoided costs, the costs of deploying conservation measures, and economic assumptions, such as discount rates, which are usually mandated by local regulation. Achievable potential isn’t independent of technical potential. However, the data show large variability in achievable potential even when it is normalized to technical potential, and variations in technical potential explain less than 50 percent of the variation in achievable potential. The remaining variability in achievable potential is more difficult to explain, mostly because the methods for deriving the estimates aren’t always spelled out, at least not satisfactorily.
Methodology or Ideology
Stark differences in potential estimates have prompted attempts to explain them, including a recent article in The Electricity Journal. That article described a study investigating whether a study’s sponsors (or performers) can influence its conclusions about efficiency potential. Specifically, the study asked whether NGOs and advocacy groups, whose mission is to advance energy efficiency, tend to find higher potential than utilities more interested in selling electricity than conserving it. Based on sector-level data from 23 CPAs, with a focus on southern states, the study found that NGO-sponsored studies reported higher achievable potentials, while utilities tended to find the highest technical and economic potentials and the lowest achievable savings. “Despite the pattern,” the study concluded, “differences between these estimates by sponsor type aren’t statistically different from zero;” thus, sponsorship didn’t matter.6
Results from the larger sample of studies summarized here suggest a different conclusion. Much of the observed variability in the data can be explained if results are grouped by study objectives and orientation (Figure 3). At the aggregate level, the data indicate that policy-oriented studies show achievable potentials of 25.5 percent on average, nearly twice the 13.4 percent reported in utility-sponsored studies. The difference is also statistically significant. Moreover, statistical tests suggest that the results not only differ on average, but appear to be drawn from fundamentally different populations.
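The kind of two-group mean comparison described above can be sketched with Welch’s t-test, a common choice when the groups may have unequal variances. The article does not state which test was used, and the group values below are hypothetical illustrations, not the actual study results:

```python
# Sketch of a two-sample (Welch's) t-test for comparing group means.
# The numbers below are hypothetical placeholders, not the actual
# policy-study and utility-study estimates discussed in the text.
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic for two samples with unequal variances."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

policy_pct = [21.0, 24.0, 26.0, 28.0, 30.0, 24.0]       # hypothetical policy-study estimates
utility_pct = [10.0, 12.0, 14.0, 15.0, 13.0, 16.0, 14.0]  # hypothetical utility-study estimates

t = welch_t(policy_pct, utility_pct)
print(f"t-statistic = {t:.2f}")  # |t| well above ~2 suggests the means differ significantly
```

A t-statistic far above the conventional critical value (roughly 2 for small samples at the 5 percent level) is what would support the claim that the two groups of estimates differ by more than chance.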
These differences, however, don’t necessarily signal bias. Regardless of methods used, derivation of long-term potential requires making a large number of assumptions about technology, economics, and consumer behaviors, and how these factors interact. Certainly, an element of judgment is involved in any research of the scope and complexity as