Ahmad Faruqui and Robert Earle are economists with CRA International located in Oakland and Los Angeles. They would like to acknowledge many helpful discussions with their colleague, Stephen S. George. Contact Faruqui at firstname.lastname@example.org.
A new wave of rate cases is sweeping through the electric industry, as rate freezes of the mid- to late-1990s come to an end, and as utilities sense the need to modernize their electric grid. In addition, the Energy Policy Act of 2005 calls for an evaluation of time-based tariffs.
Staff turnover means that, since the last big wave of rate cases in the mid- to late-1970s, much of the organizational capability for ratemaking and rate design has disappeared in both utilities and commissions.
To remedy this gap in knowledge, this article provides an overview of ratemaking and rate-design principles to ease the myriad tasks awaiting new rate analysts and attorneys. It assumes no prior knowledge of ratemaking and is written primarily for the novice. It may trigger nostalgia in some of the “old salts” who are still manning the ratemaking stations, but it is our hope that even they may pick up a few new pointers.
Ratemaking Through the Years
For decades, the electric industry around the globe was characterized as a natural monopoly with declining average cost curves. Economic theory suggests that the best way to maximize economic efficiency under such conditions is to have a single, regulated provider. In competitive markets, the most efficient pricing rule is to base prices on marginal costs. However, in a declining-cost industry, marginal-cost prices would not recover the full costs of production and would create financial difficulties for the sole provider. In such situations, the first-best rule has to give way to the second-best rule, which is to set prices at average costs. To illustrate the choices and tradeoffs implicit in these pricing rules, it is useful to briefly review the economic theory of pricing regulated services.
We can illustrate the main tenets of this theory by using a demand-supply diagram (see Figure 1). Prices and marginal costs are shown on the vertical axis and quantity consumed is shown on the horizontal axis. The declining cost curves show that the industry is a natural monopoly. The marginal cost (MC) curve lies below the average cost (AC) curve, causing the average cost curve to decline continuously. The best solution is to set price equal to marginal cost, as that would be the outcome in a perfectly competitive market. This case corresponds to point C on the demand curve (labeled AR for average revenue), with a price of P1 and quantity of Q1. While this position would yield the highest level of consumer surplus, it would result in losses to the utility, since average costs at Q1 are higher than P1. So the first-best option is not feasible.
The second-best option, shown by point B, involves average cost pricing. The utility earns zero economic profits, although its allowed costs include a normal return on capital. Prices are set at P2, as they would be under cost-of-service regulation (COSR). Consumption takes place at Q2, a lesser amount than would be consumed under marginal cost pricing. Compared to the first-best case, consumer surplus is lower, and so is economic efficiency in resource allocation.
The third-best option is shown by point A. This lets the utility maximize its profits without any regulatory constraint. The firm will choose a price and quantity combination such that its marginal revenue (shown by the MR curve) is equal to its marginal costs. This is shown as point A on the demand curve, with a price of P3 and a quantity of Q3. Customers use a lot less than they would under a COSR or perfectly competitive market design and pay a lot more for it.
By pricing above marginal cost, the monopolist creates a large deadweight loss in economic efficiency and earns super-normal profits, labeled as monopoly rents in the figure and shown by the colored rectangle. This represents a redistribution of gains from consumers to the monopolist. The purpose of regulation has been to eliminate these excessive profits, most often by setting the price at P2. We have assumed in the discussion thus far that electric rates are single-part rates that are the same for everybody. There are indeed better ways to get from inefficient point B (where the single price equals average cost) to “better second-best” points closer to efficient point C (where the marginal price equals marginal cost). The challenge in rate design is to find some acceptable way to do this (i.e., some combination of price discrimination and/or multiple-part tariffs).
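A small numerical example can make the three points concrete. Assuming a linear demand curve and a constant marginal cost with fixed costs (so that average cost declines throughout), the sketch below computes the three price-quantity pairs; all parameter values are hypothetical:

```python
import math

# Hypothetical linear demand P = a - b*Q, constant marginal cost c, and
# fixed cost F, so AC = c + F/Q declines continuously (natural monopoly).
a, b, c, F = 100.0, 1.0, 20.0, 1000.0

# First-best (point C): price at marginal cost.
Q1 = (a - c) / b
P1 = c

# Second-best (point B): price at average cost, i.e. a - b*Q = c + F/Q.
# Solve b*Q^2 - (a - c)*Q + F = 0 and take the larger root (more output).
disc = (a - c) ** 2 - 4 * b * F
Q2 = ((a - c) + math.sqrt(disc)) / (2 * b)
P2 = a - b * Q2

# Unregulated monopoly (point A): set MR = a - 2*b*Q equal to MC.
Q3 = (a - c) / (2 * b)
P3 = a - b * Q3

print(f"first-best   P1={P1:.2f} Q1={Q1:.2f} (loss equals F = {F:.0f})")
print(f"second-best  P2={P2:.2f} Q2={Q2:.2f} (zero economic profit)")
print(f"monopoly     P3={P3:.2f} Q3={Q3:.2f} "
      f"(rent = {(P3 - c) * Q3 - F:.0f})")
```

With these numbers the ordering matches the figure: P3 > P2 > P1 and Q1 > Q2 > Q3, and pricing at marginal cost leaves the utility short by exactly its fixed costs.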
In practice, the second-best output level is not Q2 at which average cost (AC) equals price, but is something substantially larger, obtained by selling additional units of output at lower prices. In principle, perfect price discrimination could increase output to the first-best optimal level Q1 at which the (marginal) price equals marginal cost. In practice, that is not possible, but in any real utility situation, it is certainly possible to do better than Q2. One of the major issues in designing electricity tariffs is deciding how much of what kind of price discrimination should be allowed, and for whom.
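A two-part tariff is the simplest illustration of doing better than point B: charging every customer the marginal cost per kWh plus a fixed access charge supports the first-best output while still recovering fixed costs. A minimal sketch with hypothetical numbers:

```python
# A two-part tariff: each customer pays a fixed access charge plus the
# marginal cost per kWh. All figures are illustrative.
mc = 0.05                   # marginal cost, $/kWh
fixed_cost = 1_000_000.0    # utility fixed costs, $/yr
customers = 50_000

access_charge = fixed_cost / customers   # $/yr per customer
print(f"access charge ${access_charge:.2f}/yr, energy price ${mc:.3f}/kWh")

# Revenue check for an arbitrary usage pattern: fixed costs are recovered
# through the access charge regardless of consumption, while the marginal
# price faced by every customer stays at MC.
usage_kwh = [6_000] * customers          # assume 6,000 kWh per customer
revenue = customers * access_charge + mc * sum(usage_kwh)
cost = fixed_cost + mc * sum(usage_kwh)
print(f"revenue minus cost = ${revenue - cost:.2f}")
```

Because the marginal price equals marginal cost, consumption decisions are efficient; the access charge acts as the (non-distorting) instrument that keeps the utility whole.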
Prior to the 1980s, optimal regulatory outcomes were realized by COSR in North America and by public ownership of the electric utility in Europe and much of the rest of the world. COSR required regulators to perform a detailed review of utility fixed and variable costs of electricity generation, transmission, and distribution. This detailed approach to regulation is sometimes called “heavy-handed” regulation. It suffered from some notable weaknesses.
One problem was that it was not apparent how costs should be allocated across customer classes, since the same utility assets were used to serve all customers simultaneously. To deal with this problem, a variety of cost-allocation methods were developed that allocated costs across classes based, for example, on the share of each class in the system peak load.
A bigger challenge was the little incentive for the firm to lower its costs of production. The higher its costs under COSR, the more money it made. The commission did not have its own independent information about the utility’s costs, so it could not verify if the AC and MC curves were genuine or inflated. To deal with this informational asymmetry between the regulator and the utility, economists created the principal/agent model of decision making in the 1980s.1
In this model, regulatory mechanisms had to be designed that would induce the utility to produce the optimal amount of electricity (Q2 in the figure) using the optimal amounts of inputs such as capital, fuel, and labor without requiring the regulator to know the quantities of outputs and inputs beforehand. In the end, this process became very time-consuming and expensive. Utilities and commissions had to hire staffs and give them enormous budgets to prepare and review “rate cases.”
Because extensive and contested public hearings on rate cases often ran for several years, a phenomenon known as “regulatory lag” came into being. During times of declining average costs, regulatory lag worked in the utility’s favor. In addition, cost-of-service regulation gave the electric utility an incentive to artificially inflate the rate base, since that was the key to higher earnings. However, during times of rising costs, such as the decades of the 1970s and beyond, regulatory lag worked against the utility. As electric prices rose to unprecedented levels in much of the globe, driven by higher inflation and rising oil prices, regulatory commissions adopted the practice of disallowing large portions of utility costs in what were termed “prudence reviews.” This caused further problems for the utilities and did little to slow the rate of growth in electric rates. Customers continued to complain about rate hikes, causing a problem known as “rate shock.”
Over time, it became evident that the generation of electricity was no longer characterized by declining average costs. New generation technologies increasingly were cost-effective, and the economies of scale that had long characterized power generation were no longer visible. So, to lower electric rates, governments in most countries decided to stop regulating the generation of electricity on a cost-of-service basis and to allow competitive suppliers to enter the business. If enough new suppliers entered the market, they would create competitive market conditions and ensure that prices are based on marginal costs, the first-best solution in any market.
In such a market, the key issue becomes one of monitoring market power and ensuring that suppliers are not colluding to charge high prices. In other words, the emphasis shifts from cost-based pricing to market-based pricing with regulatory oversight to prevent collusion between suppliers.
A key feature of well-designed competitive-power markets is a provision for customers to reduce their demand for electricity in response to higher prices. This demand response is a crucial ingredient of success. It simultaneously helps mitigate the market power of generators by lowering prices in wholesale markets and helps reduce peak load, diminishing the need for expensive peaking capacity and lowering revenue requirements.
In the transmission and distribution segments of the business, the single supplier model still was judged to be the best one, since both segments were characterized by declining average cost curves. However, in these segments there were calls for light-handed regulation based on simpler concepts tied to the notion of incentive regulation.
In practice, incentive regulation has not proven to be as light-handed as originally conceived. Since the X (productivity offset) and Z (exogenous cost) factors did not stay constant over time, they had to be adjusted periodically based on whether the utility was accruing much larger gains than anticipated (or losing much more money than anticipated). In the original price-cap formulation, the utility was allowed to keep any gains beyond the specified level. However, that became politically difficult to sustain.
Complex formulas were developed for deciding how to “split” the gains (or losses) if the utility exceeded (or fell short of) its targets on the various performance indexes. Some argued that all gains should be given to the customers. But if that were done, it would eliminate any incentive for the utility to make those improvements in the future.2 In some situations, commissions agreed to share the gains between the utility and its customers on a sliding-scale basis, but in practice forced the utility to give all the gains to the customers. This has created the well-known problem of reneging on “regulatory commitment.”3
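To make the mechanics concrete, here is a minimal sketch assuming the common CPI minus X plus Z price-cap form and a hypothetical sliding-scale sharing band; the deadband width and 50/50 split are illustrative assumptions, not any particular commission’s design:

```python
# Illustrative price-cap escalation and sliding-scale earnings sharing.
def capped_rate(old_rate, inflation, x_factor, z_factor):
    """Escalate last year's rate by inflation less the productivity
    offset X, adjusted for exogenous cost changes Z."""
    return old_rate * (1 + inflation - x_factor + z_factor)

def shared_roe(achieved_roe, allowed_roe, deadband=0.01, utility_share=0.5):
    """Sliding-scale sharing: the utility keeps all earnings inside the
    deadband and splits anything beyond it with customers."""
    excess = achieved_roe - allowed_roe
    if abs(excess) <= deadband:
        return achieved_roe
    kept = deadband if excess > 0 else -deadband
    return allowed_roe + kept + (excess - kept) * utility_share

# Rate escalates from $0.10/kWh with 3% inflation, X = 2%, Z = 0.5%.
print(f"{capped_rate(0.10, 0.03, 0.02, 0.005):.4f} $/kWh")
# A 14% achieved ROE against a 10% allowed ROE is shared back to 12.5%.
print(f"{shared_roe(0.14, 0.10):.3f}")
```

The political difficulty described above shows up here as pressure to set `utility_share` to zero after the fact, which is precisely the reneging problem.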
While no perfect solutions have been found, it would be fair to say that the international trend in regulation has been to move away from heavy-handed regulation to light-handed regulation. This generally has meant moving away from cost-based tariffs to performance-based tariffs and ultimately to market-based tariffs.
Cost-Of-Service, Regulation-Based Tariffs
In a nutshell, the goal of utility ratemaking is to set future rates that allow a utility to collect enough revenue in the period when the rates are in effect to cover the utility’s costs and an adequate, but not excessive, return on investment. The process of setting tariffs consists of two major steps. The first step is called ratemaking and involves a determination of revenue requirements. The second step is called rate design and involves the allocation of revenue requirements into functions (generation, transmission, and distribution), class of service (residential, commercial, government, agricultural, and industrial), voltage level (primary, secondary, and tertiary), category (demand, energy, and customer) and time-of-use (seasonal, time-of-day).
With the exception of fuel surcharges, rates are not intended to recover specific expenses incurred in the past. Instead, an approved revenue requirement estimates a utility’s expenses and required return for the period during which the rates will be in effect. Rates then are set to cover the revenue requirement. The approved rates must allow the utility an opportunity to collect enough revenue to cover its expenses and return on investment but the regulatory process does not provide any guarantees of financial success to the utility.
The utility ratemaking process starts when a utility files a revenue-requirement study based on costs and revenue recorded in the utility’s books in a historical or forecast period called the test year. The revenue-requirement study also includes various proposed adjustments to normalize the historical costs. Normalizing adjustments modify booked costs or revenues to make them representative of the costs the utility is likely to incur in the future period when the rates will be in effect. Disputes in rate cases typically focus on issues such as whether specific booked costs were incurred prudently and for the benefit of ratepayers, whether the utility’s adjustments are proper, and whether additional normalizing adjustments should be made.
After considering the evidence and allowing, rejecting or adjusting costs accordingly, the regulatory commission will approve the utility’s revenue requirement (see “Fundamental Equation for Ratemaking” above).
The utility’s approved revenue requirement is compared with its normalized test-year revenue. The test year may be a historical year, a forecast year, or a hybrid of the two. If the revenue is less than the revenue requirement, rates must be increased to make up the deficiency. Rates may be reduced if the revenue exceeds the revenue requirement.
The utility’s capital may be borrowed (through bonds or other debt obligations), or obtained as equity from investors. The utility computes a rate base consisting of the original cost of the utility’s plant in service minus accumulated depreciation plus a working capital allowance. The commission reviews and may adjust the rate-base computation. The approved rate base is multiplied by the cost of capital the commission approves. The cost of capital is established by weighting the costs of borrowed funds at their actual interest rates, and the cost of equity at a rate the commission sets. A regulatory commission must allow equity returns that are sufficient to maintain the utility’s financial integrity and attract capital consistent with its market-risk profile.
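The steps above can be sketched numerically. All dollar figures and capital-structure percentages below are hypothetical, chosen only to trace the arithmetic from rate base and weighted cost of capital to the revenue requirement:

```python
# A stylized revenue-requirement computation; every input is illustrative.
plant_in_service = 500.0      # $M, original cost of plant
accum_depreciation = 150.0    # $M, accumulated depreciation
working_capital = 20.0        # $M, working capital allowance
rate_base = plant_in_service - accum_depreciation + working_capital

# Weighted cost of capital: debt at its actual interest rate, equity at
# the commission-approved return.
debt_share, debt_cost = 0.55, 0.06
equity_share, equity_cost = 0.45, 0.10
wacc = debt_share * debt_cost + equity_share * equity_cost

operating_expenses = 80.0     # $M: O&M, fuel, A&G
depreciation_expense = 15.0   # $M, annual depreciation
taxes = 10.0                  # $M

revenue_requirement = (operating_expenses + depreciation_expense + taxes
                       + rate_base * wacc)
print(f"rate base = ${rate_base:.1f}M, WACC = {wacc:.2%}")
print(f"revenue requirement = ${revenue_requirement:.2f}M")
```

If normalized test-year revenue fell short of this figure, rates would be increased to make up the deficiency, as described above.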
The parties in a rate case often attempt to demonstrate the appropriate equity return through studies of the returns earned by a group of similar utilities (called comparables), or by other methods such as discounted-cash-flow analysis (DCF), or risk-premium analysis.
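The two formula-based methods mentioned here reduce to short calculations. A minimal sketch, with hypothetical inputs, of the constant-growth DCF estimate (expected dividend yield plus expected growth) and a simple risk-premium estimate:

```python
# Two common cost-of-equity estimates; the inputs are hypothetical, not
# drawn from any actual utility.

def dcf_cost_of_equity(next_dividend, share_price, growth_rate):
    """Constant-growth DCF: expected dividend yield plus growth."""
    return next_dividend / share_price + growth_rate

def risk_premium_cost_of_equity(bond_yield, equity_risk_premium):
    """Bond yield plus a premium for equity's added risk."""
    return bond_yield + equity_risk_premium

print(f"DCF:          {dcf_cost_of_equity(2.40, 48.00, 0.045):.2%}")
print(f"risk premium: {risk_premium_cost_of_equity(0.055, 0.04):.2%}")
```

In practice, parties dispute the inputs (the growth rate, the comparables group, the premium) far more than the formulas themselves.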
Only prudently incurred expenses are allowed in the definition of revenue requirements.
The rate-design process determines what portion of the revenue requirement will be collected from each customer class, and through what rate form, such as a fixed customer charge or a charge that varies with usage.
Rate design begins with a cost-of-service study (COSS). The COSS assigns the costs in each accounting category in three ways (by function, by cost classification, and by customer class) in order to identify the cost causer. In this process, some costs can be assigned directly, but many must be allocated based on a measurable ratio, such as the share of labor costs.
Disputes over rate design often focus on whether an appropriate allocation factor is used to assign the costs among functions (generation, transmission and distribution), classes of costs (customer, energy and demand), customer class (residential, commercial, agricultural and government), and time-of-use (seasonal, time of day).
Allocation of total revenue requirements by customer class is based on usage characteristics. This requires load research surveys to be carried out periodically on representative samples of customers. A variety of cost-allocation methods are available for assigning costs to customers according to patterns of usage.
Methods used in cost allocation include:
- Functional or average use;
- Peak responsibility (coincident and non-coincident);
- Base-extra capacity or average-excess;
- Fully-distributed cost.
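To illustrate one of these methods, the sketch below applies coincident-peak allocation, assigning a shared demand-related cost by each class’s share of load at the hour of system peak; the class loads and the cost figure are hypothetical:

```python
# Peak-responsibility (coincident-peak) allocation of a shared demand
# cost. Loads are illustrative MW at the hour of the system peak.
cp_mw = {"residential": 420.0, "commercial": 310.0, "industrial": 270.0}
demand_cost = 50.0  # $M of generation/transmission capacity cost

system_peak = sum(cp_mw.values())   # 1,000 MW in this example
allocation = {cls: demand_cost * mw / system_peak
              for cls, mw in cp_mw.items()}
for cls, dollars in allocation.items():
    print(f"{cls:12s} ${dollars:.2f}M")

# The allocations must exhaust the cost being assigned.
assert abs(sum(allocation.values()) - demand_cost) < 1e-9
```

A non-coincident-peak variant would instead use each class’s own maximum demand, which generally shifts costs toward classes that peak off the system peak.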
A key issue is establishing which rates are the default rates for standard service and which are optional. Default rates become part of the obligation to serve.
Good rate design involves making tradeoffs between competing objectives and consulting customers through an open and transparent public process. To accommodate different situations, electric utilities often will provide customers with a variety of optional rates. These include:
- Uniform or single-tariff pricing (consolidated, regional, or equalized rates);
- Budget billing (equalized payments across the months to ensure a constant monthly bill);
- Lifeline rates (first block priced affordably, often below marginal cost);
- Excess-capacity rates (discounted rates to encourage off-peak consumption);
- Economic-development rates (discounted rates to encourage certain types of economic activity);
- Negotiated rates (for large-volume users);
- Flexible rates (for large-volume users);
- Excess-use (based on an allowable electricity consumption budget per customer);
- Value-of-service pricing (based on the value customers place on service);
- Quality differentiated (level of treatment or reliability); and
- Spatially differentiated rates (zonal or district according to cost differences).
Each of these rate designs serves a different objective. For example:
- Uniform rates accomplish the goal of simplicity in rate design;
- Inverted block rates, seasonal rates, and TOU rates reduce growth in peak loads;
- Lifeline rates improve affordability;
- Marginal-cost pricing encourages efficiency;
- Penalties can induce conservation;
- Zonal rates achieve better efficiencies in spatial cost allocation;
- Single-tariff pricing promotes economic growth in high-cost areas; and
- Negotiated rates address economic development, customer retention, and (sometimes) competition.
To meet multiple objectives, a rate might jointly consider the objectives of affordability, equity, and efficiency:
- For affordability, attention is paid in particular to designing the first block;
- For efficiency, price variation in the tail block can be used to reflect significant differences in marginal cost; and
- For equity, rate averaging can be used across multiple customers and systems to recognize commonality and encourage beneficial regionalization.
A lifeline rate provides a subsidy to low-income customers who meet specified program criteria. The first electric usage, generally considered “essential” usage, is priced below the marginal cost of electric service. The difference required to fund the subsidy is recovered in subsequent blocks. Such a rate closely resembles some conservation-oriented rates.
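To see how the subsidy mechanics work, here is a minimal sketch of a two-block lifeline rate; the block size and both prices are hypothetical:

```python
# A bill under a hypothetical two-block lifeline rate: the first
# "essential" block is priced below cost, and the subsidy is recovered
# in the tail block.
def lifeline_bill(kwh, first_block_kwh=300, first_price=0.06,
                  tail_price=0.14):
    first = min(kwh, first_block_kwh)          # subsidized usage
    tail = max(kwh - first_block_kwh, 0)       # usage above the block
    return first * first_price + tail * tail_price

# A low-use customer stays entirely within the subsidized block.
print(f"${lifeline_bill(250):.2f}")
# A high-use customer pays the tail price on usage above 300 kWh.
print(f"${lifeline_bill(800):.2f}")
```

Because the marginal price in the tail block exceeds the first-block price, the same structure doubles as an inverted-block conservation rate, which is why the two designs resemble each other.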
In practice, choosing a rate structure can be a challenge, particularly given the many available options. When choosing a rate, regulators and utility decision-makers should:
(a) establish clear and explicit goals, priorities, and preferences;
(b) select a rate design that best achieves objectives, while maintaining consistency with accepted ratemaking principles; and
(c) involve stakeholders (particularly ratepayers or customers) to the greatest extent possible.
Involving key stakeholders is an important part of the ratemaking process. Some of the relevant stakeholders include:
- Residential customers;
- Commercial customers;
- Industrial customers;
- Consumer advocates;
- Environmental advocates;
- Business leaders; and
- Media representatives.
Rate structures can evolve with the needs and priorities of electric systems, as well as their capabilities. Strategies for implementing a change in rates or the rate structure include:
- Communicate goals clearly to all stakeholders;
- Recognize tradeoffs explicitly;
- Follow sound principles and practices;
- Provide opportunities for stakeholder input;
- Explore a full range of options;
- Weigh complexity against simplicity; and
- Phase in big changes gradually by conducting small-scale pilots and experiments to gauge customer acceptance and load response and to test associated technologies.
It is important to monitor and evaluate outcomes of new rates and to modify rates as needed to meet changing utility and customer needs.
The best tariffs in the marketplace are multi-part tariffs that include an access charge (or customer charge), an energy charge, and a demand charge (for large customers). Such tariffs often embody time-of-use elements. In addition, they feature a block structure where the charge varies with usage.
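As a concrete sketch, a monthly bill under such a multi-part tariff with time-of-use energy pricing might be computed as follows; the customer charge, period prices, and demand charge are hypothetical:

```python
# A hypothetical three-part tariff for a larger customer: a customer
# charge, time-of-use energy charges, and a demand charge on the
# customer's monthly maximum demand.
def monthly_bill(peak_kwh, offpeak_kwh, max_demand_kw):
    customer_charge = 25.00                          # $/month access
    energy = 0.12 * peak_kwh + 0.05 * offpeak_kwh    # $/kWh by period
    demand = 14.00 * max_demand_kw                   # $/kW-month
    return customer_charge + energy + demand

# 20,000 peak kWh, 60,000 off-peak kWh, 150 kW maximum demand.
print(f"${monthly_bill(20_000, 60_000, 150):,.2f}")
```

Each part tracks a different cost driver: the customer charge recovers customer-related costs, the energy charges track marginal energy costs by period, and the demand charge recovers capacity costs.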
More sophisticated tariffs also include an allowance for the customer’s willingness to pay for electricity, as measured by the price elasticity of demand. Such tariffs, based on a formulation that was originally developed in the theory of optimal taxation, are often called Ramsey tariffs.4 They suggest that the prices vary by segment in inverse proportion to that segment’s price elasticity of demand. Ramsey pricing is controversial because it leads to markups over marginal costs that are highest for those customers whose demands are the least price responsive. Since such customers are often smaller customers, it raises issues of fairness and equity, even though the pattern of consumption that would flow from it is designed to maximize economic efficiency.
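A sketch of the inverse-elasticity rule makes the fairness tension visible. The marginal cost, elasticities, and Ramsey constant k below are hypothetical; in practice k would be scaled so total revenues just meet the revenue requirement:

```python
# Ramsey pricing: the percentage markup over marginal cost, (P - MC)/P,
# is proportional to 1/|elasticity|. Inputs here are illustrative.
def ramsey_price(mc, elasticity, k=0.3):
    markup = k / abs(elasticity)     # (P - MC)/P = k / |e|
    assert markup < 1, "need |e| > k for a finite price"
    return mc / (1 - markup)         # solve for P

# The less elastic segment bears the larger markup over the same MC.
print(f"residential (e = -0.4): {ramsey_price(0.05, -0.4):.4f} $/kWh")
print(f"industrial  (e = -1.2): {ramsey_price(0.05, -1.2):.4f} $/kWh")
```

With these numbers the less price-responsive residential segment pays roughly three times the marginal cost, while the more elastic industrial segment pays a far smaller markup, which is the equity concern noted above.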
These are just some of the many issues to consider in regard to ratemaking and design.
1. For a survey of such models, see Jean-Jacques Laffont and David Martimort, The Theory of Incentives: The Principal-Agent Model, Princeton: Princeton University Press, 2002.
2. Conversely, if the utility accrued losses consistently, it would become insolvent and unable to serve its customers.
3. This is discussed in detail in Jean-Jacques Laffont and Jean Tirole, A Theory of Incentives in Regulation and Procurement, Cambridge: MIT Press, 1993.
4. Frank P. Ramsey, “A Contribution to the Theory of Taxation,” Economic Journal, 1927.