Nudging isn’t easy…

Indranil Goswami and I have a new paper in the most recent issue of JMR on the effects of defaults (or suggested amounts) in fundraising.  There’s a nice writeup out today in the Wall Street Journal describing our findings.

We look at situations in which people choose from a menu of options (such as smaller and larger donation amounts), one of which can be set as the default. We wanted to figure out the optimal default level: would a charity raise more money by suggesting the typical donation amount, something smaller, or something larger?

The answer is that it’s complicated: small defaults lead to more people donating, but in smaller amounts; large defaults lead to fewer people donating, but in larger average amounts. The right strategy therefore depends on the charity’s goals and on whether donation amounts or participation are “stickier” in the specific setting. Our practical recommendation after a few years of studying this? Run an experiment to find out what works in your own unique setting. Not the most satisfying answer.
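To make the tradeoff concrete, here’s a minimal back-of-the-envelope sketch in Python. All the numbers are hypothetical, purely for illustration; the point is just that expected revenue per solicitation is the response rate times the average gift, so which default “wins” depends on how each margin moves:

```python
# Illustrative only: hypothetical response rates and average gifts for
# three suggested-amount (default) conditions. Expected revenue per
# solicitation = P(donate) * average donation among donors.

conditions = {
    "low default ($5)":      {"response_rate": 0.12, "avg_gift": 18.0},
    "typical default ($20)": {"response_rate": 0.08, "avg_gift": 25.0},
    "high default ($50)":    {"response_rate": 0.05, "avg_gift": 40.0},
}

for name, c in conditions.items():
    revenue = c["response_rate"] * c["avg_gift"]
    print(f"{name:>22}: {c['response_rate']:.0%} respond, "
          f"${c['avg_gift']:.0f} avg gift -> ${revenue:.2f} per ask")

# With these made-up numbers the low default wins per ask
# (0.12 * 18 = 2.16 vs. 0.08 * 25 = 2.00 vs. 0.05 * 40 = 2.00),
# but small shifts in either margin flip the ranking, which is why
# the only reliable answer is to run the experiment in your setting.
```

In a real deployment you would estimate those two quantities from a randomized test across default levels rather than assume them.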

This got us thinking more broadly about how “nudges” are developed and disseminated. Typically, there’s an academic research finding, often from a lab setting that isolates just the effect of interest, designed as a precise (or dramatic) test of a psychological theory. Then there might be a debate about why the effect occurs, with further studies designed to distinguish between competing explanations.

What emerges seems like a fully developed theory, ready for implementation. After all, the initial paper on defaults has been cited over 3,500 times already! But the academic research process often neglects questions that are especially important for practical use. Because we typically focus on situations in which a single effect can be studied in isolation, the resulting incomplete theories may have little to say about the preconditions for an intervention to succeed, whether it will have conflicting effects (e.g., on participation vs. amount), or whether multiple interventions will complement or undermine one another.

To their credit, groups like the “nudge unit” in the UK and the SBST in the US have used the academic literature as a source of hypotheses to test in their own field experiments, rather than as a source of ready-made solutions. But with growing public interest in “behavioral economics” (loosely defined), not everyone will be as humble about what the field knows and doesn’t know. Beware of behavioral scientists bearing solutions! Even when the psychology behind an intervention is right, its consequences can differ sharply from one context to another.
