Assessing willingness to pay
The contingent valuation method (CVM) is a stated preference method to obtain individuals’ valuation of a particular good.
The summary below is mainly taken from Boardman et al. (2018).
Different methods
Direct elicitation methods
Open-ended willingness-to-pay method
Respondents are simply asked to state their maximum WTP for the good or policy in question. For example:
What is the most that you would be prepared to pay in additional federal income taxes to guarantee that the Wildwood wilderness area will remain closed to development?
This method has fallen out of favor because analysts feared that, without some initial guidance on valuations, respondents would give unrealistic responses. Concerns that open-ended questions result in unrealistically large estimates of WTP, however, appear to be unfounded. A more serious problem is that respondents who have low valuations of the good often state zero rather than the low value.
Closed-ended iterative bidding method
In the closed-ended iterative bidding method, respondents are asked whether they would pay a specified amount for the good or policy that has been described to them. If respondents answer affirmatively, the amount is incrementally increased until the respondent expresses an unwillingness to pay the amount specified. Similarly, if respondents answer negatively to the initial amount, the interviewer lowers the amount in increments until the respondent expresses a willingness to pay the amount specified.
Although iterative bidding was at one time the most common method in use, it is rarely used now because of considerable evidence that its results are highly sensitive to the initially presented (starting) value.
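Although the method is now rarely used, its mechanics are straightforward. Below is a minimal sketch of the bidding loop described above; the respondent callback, starting bid, and step size are illustrative assumptions, not part of any standard implementation.

```python
def iterative_bid(respondent_answers, start_bid, step):
    """Illustrative closed-ended iterative bidding loop.

    `respondent_answers(bid)` is a hypothetical callback returning True
    if the respondent says they would pay `bid`, False otherwise.
    Returns a (highest accepted, lowest rejected) bracket for WTP.
    """
    bid = start_bid
    if respondent_answers(bid):
        # Raise the bid in increments until the respondent refuses.
        while respondent_answers(bid + step):
            bid += step
        return bid, bid + step            # WTP lies between these bounds
    # Lower the bid in increments until the respondent accepts (or it hits zero).
    while bid - step > 0 and not respondent_answers(bid - step):
        bid -= step
    low = bid - step
    return (low if low > 0 else 0.0), bid

# Example: a hypothetical respondent whose true WTP is 37.
print(iterative_bid(lambda bid: bid <= 37, start_bid=50, step=10))  # (30, 40)
```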
Contingent ranking method
In the contingent ranking, or ranked choice, method, respondents are asked to rank specific feasible combinations of quantities of the good being valued and monetary payments.
Unlike the open-ended WTP and closed-ended iterative bidding methods, this approach does not elicit the WTP of interest directly; it must be inferred from the ordinal rankings. Additionally, responses appear to be sensitive to the order in which alternatives are presented to respondents.
Dichotomous choice method
In the dichotomous choice method, respondents are asked whether they would be willing to pay a specified price for a change in the quantity or quality of a good, or for the adoption of a policy that would induce such a change.
A major advantage of a binary choice formulation is that it meets a necessary condition for incentive compatibility – that is, the presence of incentives for respondents to give truthful rather than strategic answers.
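Because each respondent gives only a yes/no answer at an assigned bid, the WTP of interest must be estimated statistically. One common approach (not detailed above) is to fit a logit model of bid acceptance on the randomly assigned bid and recover mean WTP from the coefficients; for a linear-in-bid logit, mean (and median) WTP is -a/b, where a is the intercept and b the bid coefficient. The sketch below uses synthetic data and is only illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic survey: each respondent sees one randomly assigned bid and
# accepts (1) if it does not exceed their unobserved willingness to pay.
bids = rng.choice([5.0, 10.0, 20.0, 40.0, 80.0], size=500)
true_wtp = rng.lognormal(mean=3.0, sigma=0.6, size=500)
accept = (true_wtp >= bids).astype(int)

# Logit of acceptance on the bid: P(yes) = logistic(a + b * bid), with b < 0.
X = sm.add_constant(bids)
fit = sm.Logit(accept, X).fit(disp=False)
a, b = fit.params

# For the linear-in-bid logit, mean (= median) WTP is -a / b.
print(f"estimated mean WTP: {-a / b:.2f}")
```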
In the double dichotomous choice method (also sometimes called double-bounded dichotomous choice), a follow-up offer is made after the first offer: double the first offer if the respondent accepted it, half the first offer if the respondent rejected it.
There is a danger that the respondent’s exposure to the first offer may affect the probability that he or she will accept the follow-up offer. Those who initially rejected the bid may take the lower follow-up bid as an indication that the bid price is negotiable and reject bids that they previously would have accepted. Alternatively, those who initially accepted the bid may believe that there is some possibility that the good could be provided at the lower price and reject the higher second bid even though it is below their WTP. Also, in responding to the follow-up bid price, respondents may base their responses not on that price but on some weighted average of the two bid prices they encountered.
Whether the greater amount of information generated by the double dichotomous choice method is worth the risk of such response behaviors remains an open research question.
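To make the doubling-and-halving rule concrete, the sketch below constructs the follow-up bid and the interval within which the respondent’s WTP must lie, given the pair of answers. The callback and numbers are hypothetical.

```python
def double_bounded_interval(first_bid, answers_yes):
    """Return (follow-up bid, WTP interval) for one respondent.

    `answers_yes(bid)` is a hypothetical callback returning True if the
    respondent accepts `bid`. The follow-up bid is double the first bid
    after a 'yes' and half the first bid after a 'no'.
    """
    first_yes = answers_yes(first_bid)
    follow_up = first_bid * 2 if first_yes else first_bid / 2
    second_yes = answers_yes(follow_up)

    if first_yes and second_yes:        # yes, yes
        interval = (follow_up, float("inf"))
    elif first_yes:                     # yes, no
        interval = (first_bid, follow_up)
    elif second_yes:                    # no, yes
        interval = (follow_up, first_bid)
    else:                               # no, no
        interval = (0.0, follow_up)
    return follow_up, interval

# Example: a hypothetical respondent with true WTP of 30 and a first bid of 20.
print(double_bounded_interval(20, lambda bid: bid <= 30))  # (40, (20, 40))
```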
Contingent valuation issues
Specifying the payment vehicle
A payment vehicle describes how the good would be paid for if the policy were to be implemented. Depending on the policy and fiscal context, plausible payment vehicles could include taxes paid into a fund specifically earmarked for the good, increased utility bills, higher income or sales taxes, or higher product prices. Specifying a payment vehicle, along with reminders that payments reduce the availability of funds for expenditures on other goods, helps ensure that respondents perceive the questions as if they pose real economic choices.
Hypotheticality, meaning, and context problems
A major concern in CVM design is whether respondents are truly able to understand and place into context the questions they are being asked and can thus accurately value the good or policy in question.
Clearly specifying the project and its impacts increases the likelihood of correspondence between attitudes and behavior; so too does providing explicit detail about the payment vehicle. Visual aids also assist in understanding. One important class of visual aids useful in reducing hypotheticality is known as quality ladders. Quality ladders help respondents understand both what quality is under the status quo, and what particular increments of quality mean.
The only effective way to minimize hypotheticality and meaning problems in CVM surveys is to devote extensive effort to developing detailed, clear, informative, and highly contextual materials and to pretest these materials extensively on typical respondents.
Neutrality
As CBA deals with increasingly controversial and complex topics, the neutrality of the CVM questionnaire becomes an increasingly important issue.
Neutrality can best be ensured by pretesting the survey instrument with substantive experts who have no axe to grind in terms of the specific project that is being considered. If neutral experts cannot be found, then pretesting with opposing advocates can be an alternative, perhaps enabling researchers to avoid the most serious challenges from those with positions not supported by the results.
Decision-making and judgment biases
Non-commitment bias
It is well recognized in the marketing literature that respondents to surveys tend to overstate their willingness to purchase a product that is described to them.
An indirect way of testing for the bias is to introduce elements to the survey that encourage respondents to think more carefully about their income and their budget constraints.
For example, after respondents were asked about their WTP to avoid a specified oil spill, they were then asked about their valuations of environmental protection versus reduction in crime, homelessness, and other social problems (see Michael Kemp and Christopher Maxwell).
Asking questions that encourage respondents to think about how much of their income is discretionary may help avoid non-commitment bias.
A less complicated and likely effective approach for dichotomous choice elicitations has become common following research showing that adjusting bid acceptances to reflect respondent certainty yields estimates of WTP closer to those from revealed preference methods. The approach adds a follow-up question asking respondents who accept a bid how certain they are of their acceptance. Only acceptances from respondents who are “definitely sure” appear to produce adjusted acceptance rates very close to actual purchase rates. This suggests that analysts should routinely include a certainty question after a dichotomous choice elicitation and convert bid acceptances to bid rejections for respondents who were unsure of their initial acceptance.
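A minimal sketch of this certainty adjustment, assuming responses stored in a small table: acceptances from respondents who are not “definitely sure” are recoded as rejections before acceptance rates (or a dichotomous choice model) are computed. Column names and certainty categories are illustrative.

```python
import pandas as pd

# Illustrative responses: bid accepted (1/0) plus the follow-up certainty
# answer asked only of those who accepted.
df = pd.DataFrame({
    "bid":       [10, 10, 25, 25, 50],
    "accepted":  [1,  1,  1,  0,  1],
    "certainty": ["definitely sure", "probably sure", "definitely sure",
                  None, "not sure"],
})

# Take acceptances at face value only when the respondent is
# "definitely sure"; recode all other acceptances as rejections.
df["accepted_adj"] = ((df["accepted"] == 1) &
                      (df["certainty"] == "definitely sure")).astype(int)

print(df[["bid", "accepted", "accepted_adj"]])
```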
Order effects
CVM studies have also found important order, or sequence, effects. In principle, these findings could be explained by an income effect, a substitution effect, or a combination of the two, although it remains unclear whether substitution effects can account for much of the order inconsistency in CVM surveys of passive-use values. Critics argue that the phenomenon is explained by neither income nor substitution effects, but instead demonstrates that respondents cannot really understand these kinds of questions and so inevitably fall back on judgment heuristics that usually cause them to overstate valuations. In short, it is uncertain that income and substitution effects provide a complete explanation for order effects.
As the interpretation of order effects remains ambiguous, analysts generally avoid asking respondents to value more than one good.
Embedding effects
A fundamental axiom in the standard economic description of preferences is that individuals value more of a good more highly than less of it. A test of whether stated valuations increase with the quantity (scope) of the good is referred to as a scope test.
However, research indicates that individuals often do not readily distinguish between small and large quantities in their valuations of a good when the different quantities are embedded in one another. For example, William Desvousges and colleagues found that different samples of respondents valued 2,000 migratory birds approximately the same as 200,000 birds, and small oil spills much the same as much larger ones (because these are different samples, the comparison does not directly test whether a given individual prefers more to less).
When dealing with passive-use values, embedding is probably the most worrisome problem identified by critics of the use of CVM in CBA.
Critics argue that the empirical evidence suggests that in these contexts respondents are not actually expressing their valuations but instead are expressing broad moral attitudes toward environmental issues – a “warm glow” or “moral satisfaction.” However, Richard Carson and colleagues have argued that most studies that manifest embedding problems, or scope insensitivity, are poorly designed and executed, and that well-designed CVM instruments do not manifest the problem. Nonetheless, when valuing changes in non-use goods, analysts should conduct a scope test by randomly splitting respondents into two subsamples, each valuing a different magnitude of change in the good, even though this doubles the required sample size.
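A minimal sketch of such a split-sample scope test, using synthetic stated WTP values for simplicity: respondents are randomly assigned to value either the smaller or the larger change, and the two subsamples are compared (here with a two-sample t-test, one reasonable option among several). With dichotomous choice data, one would instead compare WTP estimated from each subsample’s responses.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Randomly split a hypothetical respondent pool into two subsamples.
n = 400
respondents = rng.permutation(n)
small_scope = respondents[: n // 2]    # asked to value the smaller change
large_scope = respondents[n // 2:]     # asked to value the larger change

# Synthetic stated WTP for each subsample (illustrative numbers only).
wtp_small = rng.normal(20, 8, size=small_scope.size)
wtp_large = rng.normal(26, 10, size=large_scope.size)

# Scope test: is stated WTP higher for the larger change?
t, p = stats.ttest_ind(wtp_large, wtp_small, equal_var=False)
print(f"mean small={wtp_small.mean():.1f}, "
      f"mean large={wtp_large.mean():.1f}, two-sided p={p:.4f}")
```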
Starting point bias
A problem arises in CVM when starting values are presented to respondents. The iterative bidding method is particularly prone to this problem because it provides respondents with a specific initial starting “price.”
The dichotomous choice question format seeks to eliminate starting point bias. It has been argued, however, that “responses to dichotomous choice questions are very strongly influenced by starting-point bias, because respondents are likely to take initial cues of resource value from the solicited contribution amount (e.g., assuming it to be his/her share of the needed contribution).” There is, in fact, some empirical evidence to support the contention that the dichotomous choice method is subject to starting point bias, although WTP estimated from dichotomous choice responses appears to be less affected by the provision of information about the total cost of providing a good or the size of the group receiving it than is WTP estimated from open-ended surveys. Whether or not starting point bias arises from information provided within a single elicitation, it is very likely to arise from information provided in the first of a series of elicitations to the same respondent. Like the order effect, starting point bias further supports the general practice of one elicitation per respondent.
The strategic response (honesty) problem
It is frequently argued that respondents in CVM surveys have incentives to behave strategically, that is, to misrepresent their true preferences in order to achieve a more desired outcome than would result if they honestly revealed their preferences.
We should not expect respondents’ answers necessarily to be consistent with economic theory, and hence appropriate for inclusion in CBA, unless respondents believe that the survey is consequential in the sense that it could potentially influence some outcomes about which they care. In contrast, if respondents believe that the survey will have absolutely no influence on outcomes about which they care, then it is inconsequential, and economic theory offers no predictions about the nature of responses.
Mechanisms that provide incentives for individuals to reveal their preferences truthfully are called incentive-compatible.
As it involves a binary response, the dichotomous choice method may be incentive compatible.
An important reason why the open-ended WTP method is now rarely used is that it will generally be subject to strategic response bias, most likely leading to aggregate WTP estimates that are too small if payments are coerced and too large if payments are voluntary.
Heuristics for the design and use of CVM surveys
- Design survey instrument to reduce hypotheticality bias
  - The survey instrument should provide respondents with sufficient and understandable information to remove their uncertainty about the good being valued. When possible, respondents should be familiar with the type of choice being posed or be given experience with it within the survey.
- Use the dichotomous choice method with one elicitation per respondent
- Always include budget reminder; consider priming with disposable income question
- Be more confident in eliciting WTP; use social budget constraints in eliciting WTA
- Include a split-sample scope test (especially for non-use goods)
- Include a follow-up certainty question to allow adjustment for non-commitment bias
- Use respondent income or disposable income to address “fat tails”
Acceptability of the contingent valuation method
The US federal courts have held that surveys of citizens’ valuations enjoy “rebuttable presumption” status in cases involving the assessment of damage to natural resources.
A blue-ribbon panel of social scientists convened by the National Oceanic and Atmospheric Administration (NOAA) further legitimized the use of CVM by concluding that it could be the basis for estimating passive use values for inclusion in natural resource damage assessment cases.