The unreliability of value elicitation methods in valuing development interventions.

Abstract

This study assesses the relative reliability of the most common incentive-compatible value elicitation techniques and compares the valuations each technique generates to those from a simple hypothetical question. Specifically, we collect valuations for 18 common aid interventions from 793 potential aid recipients using 6 randomly assigned elicitation methods. In a follow-up survey, respondents were given a ‘take-it-or-leave-it’ (TIOLI) offer for an intervention; we measure reliability as whether the valuation elicited by each method predicts the respondent’s choice at follow-up. Our results show that valuations are systematically overstated across methods and are generally not consistent with responses to a concrete TIOLI offer: only 40% of valuations were consistent with TIOLI choices. Valuations are also sensitive to the elicitation method used and to framing. Overall, incentive-compatible techniques do not perform meaningfully better than a hypothetical question. We conclude that valuations can be obtained inexpensively with a hypothetical question, but that policy makers should use valuation outputs with caution and refrain from treating them as ‘point estimates’ given their limited informational content.
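
For illustration, here is a minimal sketch (not drawn from the study's actual analysis code) of one plausible way the consistency measure could be computed: a stated valuation is scored as consistent with the TIOLI choice if the respondent accepts offers at or below that valuation and rejects offers above it. The function name, the exact decision rule, and the example records below are assumptions for illustration only.

```python
# Sketch: scoring consistency between an elicited valuation and a
# take-it-or-leave-it (TIOLI) offer. The consistency rule here is an
# assumption, not the authors' published definition.

def is_consistent(valuation: float, offer_price: float, accepted: bool) -> bool:
    """Consistent if the respondent accepts offers at or below their stated
    valuation and rejects offers above it."""
    return accepted if valuation >= offer_price else not accepted

# Illustrative (hypothetical) records: (stated valuation, offer price, accepted)
responses = [
    (500.0, 300.0, True),   # consistent: offer below valuation, accepted
    (200.0, 300.0, True),   # inconsistent: offer above valuation, accepted
    (400.0, 450.0, False),  # consistent: offer above valuation, rejected
]

consistency_rate = sum(is_consistent(v, p, a) for v, p, a in responses) / len(responses)
print(f"Share of valuations consistent with TIOLI choices: {consistency_rate:.0%}")
```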

  • Country
    Nigeria
  • Behavior
    Willingness to Pay
  • Sector
    Work and Productivity
  • Authors
    Jeremy Shapiro, Chaning Jang, Nicholas Owsley