The Trouble with Freeriders
The debate about freeridership in energy efficiency isn’t wrong, but it is wrongheaded.
One aspect of response bias is the tendency of respondents to offer what they think is the right answer, which tends to result in an overstatement of freeridership. Also, as some evaluation experts have noted, people have internal reasons, as explained by social psychology's attribution theory, that motivate them to make certain decisions and to follow a cognitive process for justifying those decisions. 17
Survey design practices have improved, and sophisticated questionnaire designs promise a more nuanced and reliable way of eliciting information. Instead of simply asking what participants would have done in the absence of the program, multiple questions probe respondents about timing (would they have adopted the measure at the same time), amount (would they have adopted the measures in the same quantity), and level (would they have adopted the measures at the same level of efficiency).
What questions to ask, what kind of scale to use for recording responses, what weights to consider appropriate, and how to apply the final scores are decisions that expose the analysis to subjective judgment. 18 These choices can make the analysis a subjective exercise, open to constant dispute. Different evaluations of similar programs, conducted by analysts using seemingly similar methods, have produced drastically different results. The use of surveys to determine spillover effects, for participants or non-participants, is especially sensitive to variance in spillover scores: small fractions multiplied by very large numbers of customers can dramatically boost the claimed savings.
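The sensitivity described above can be made concrete with a small sketch. The weights, scales, and scores below are purely illustrative assumptions, not drawn from any actual evaluation protocol; the point is only that the same survey answers yield different freeridership estimates under different (equally defensible) weightings, and that a tiny spillover fraction scales dramatically across a large customer base.

```python
# Hypothetical freeridership scoring: weighted average of three
# counterfactual questions (timing, quantity, efficiency level),
# each answered on a 0-10 scale. All weights are illustrative.
def freeridership_score(timing, quantity, level,
                        weights=(0.4, 0.3, 0.3)):
    """Return a score between 0 and 1; 1.0 means a pure freerider."""
    responses = (timing, quantity, level)
    return sum(w * r / 10 for w, r in zip(weights, responses))

# Identical answers, two weighting schemes, two different results --
# the subjectivity the text describes.
answers = (8, 5, 3)
print(round(freeridership_score(*answers), 3))                           # 0.56
print(round(freeridership_score(*answers, weights=(0.2, 0.2, 0.6)), 3))  # 0.44

# A small per-participant spillover fraction, applied program-wide,
# can dominate claimed savings (numbers are invented for illustration).
customers = 500_000
gross_kwh_per_customer = 100
spillover = 0.02  # a 2% spillover score
print(round(customers * gross_kwh_per_customer * spillover))  # 1000000 kWh
```

Nothing here argues for a particular scheme; it simply shows how much of the final number rests on analyst-chosen parameters rather than on the survey responses themselves.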
Another, and less tractable, aspect of response bias is construct validity, which raises questions about what the survey results actually measure. The problem stems from the fact that survey respondents are naturally predisposed to conservation: after all, they are program participants. Thus, it remains far from clear whether their responses are conditioned by the effects of the conservation program itself.
The survey results would overstate freeridership because the survey may be asking the question of the wrong people: those identified as freeriders are, in fact, exactly the type of participants program administrators would want for a program. 19 What is being measured, it appears, are the effects of the program, not what would have been expected in its absence. 20 In areas with long histories of conservation programs and activities, it is no longer possible to parse out who is a freerider and who was influenced by the program.
Could it be that, in the case of such transformed markets, what’s being measured in freeridership surveys is in fact the opposite: spillover?
Practical matters also limit the usefulness of self-report as a means of eliciting information about freeridership in upstream, mass-market programs. There, it might not be possible to identify participants, let alone freeriders, because consumers might not be aware that the price they pay for a product includes a utility discount. This happens routinely in programs that offer point-of-sale incentives for products such as compact fluorescent light bulbs.
The use of self-report is even more problematic in the large commercial, industrial, and new-construction sectors, where investment decision-making processes are complex and finding the right people to survey is rarely easy. Using the method is even more problematic in upstream programs deployed through retailers,