Usage of utility services is rarely uniform across the day, month, or year. Dramatic increases in loads often appear at particular times of the day or in particular seasons of the year. Telephone utilities may choose not to meet extreme peak demands, but electric, natural gas, sewer, and water utilities usually do not enjoy that option. Failure to meet peak demands can lead to catastrophic consequences for both the customer and the utility, and can draw the attention of regulators. For that reason, utilities adopt design criteria for their production, transmission, and distribution facilities to ensure that peak loads are met.
When it comes to cost allocation, the common wisdom is to assign costs in proportion to class contributions to peak loads. The justification is simple: since the equipment had to be sized to meet peak day loads, those costs should be allocated on the same basis. Many different peak allocators have been developed from this assumption: single coincident peak contribution, sum of coincident peaks, noncoincident peak, average and excess demand, peak and average demand, base and extra capacity, and so on. Such pure peak-load allocators may not be politically acceptable, but conceptually, at least, they appear to offer the only defensible approach.
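The simplest of these methods, the single coincident peak (1-CP) allocator, can be sketched in a few lines. The class names, loads, and cost figure below are purely hypothetical illustrations, not data from the text:

```python
# Single coincident peak (1-CP) allocation sketch with hypothetical numbers.

# Each class's load (MW) at the hour of the system coincident peak (assumed).
coincident_peak_mw = {
    "residential": 600.0,
    "commercial": 300.0,
    "industrial": 100.0,
}

# Annual demand-related revenue requirement ($), assumed for illustration.
demand_related_cost = 50_000_000.0

system_peak = sum(coincident_peak_mw.values())

# Allocate cost to each class in proportion to its share of the system peak.
allocation = {
    cls: demand_related_cost * mw / system_peak
    for cls, mw in coincident_peak_mw.items()
}

for cls, dollars in sorted(allocation.items()):
    print(f"{cls:12s} ${dollars:>13,.0f}")
```

The other allocators listed above differ in which demand measure drives the shares (class noncoincident peaks, averages blended with peaks, and so on), but all follow this same proportional pattern.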
Nevertheless, where capacity can be added with significant economies of scale, allocating costs in proportion to peak loads violates well-known relationships between economics and engineering. What is missing is any tracing of the way in which the peak-load design criteria actually influence the costs incurred.
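A stylized numeric sketch can show why economies of scale matter here. The cost function C(K) = c·K^0.7 and all the figures below are assumptions for illustration only, not drawn from the text:

```python
# Hypothetical illustration: under economies of scale, the incremental cost
# of the capacity added to serve the peak is below the system average cost,
# so a peak-proportional allocation charges peak loads more than the cost
# their peak actually causes.

c = 1_000.0  # scale constant ($ per MW**0.7), assumed


def capacity_cost(k_mw: float) -> float:
    """Total cost of k_mw of capacity under an assumed economies-of-scale curve."""
    return c * k_mw ** 0.7


base_mw = 800.0           # capacity needed for off-peak loads (assumed)
peak_increment_mw = 200.0  # extra capacity built only to meet the peak (assumed)

total = capacity_cost(base_mw + peak_increment_mw)
average_cost_per_mw = total / (base_mw + peak_increment_mw)
incremental_cost_per_mw = (total - capacity_cost(base_mw)) / peak_increment_mw

print(f"average cost:     {average_cost_per_mw:,.2f} $/MW")
print(f"incremental cost: {incremental_cost_per_mw:,.2f} $/MW")
assert incremental_cost_per_mw < average_cost_per_mw
```

With any exponent below 1.0 the assertion holds: the peak-serving increment is cheaper per MW than the average, which is exactly the gap between engineering cost behavior and peak-proportional allocation.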
The Logical Flaw in Peak Allocators