The value of open data for teaching

Christopher Madan has a nice article on the usefulness of open data (i.e., making public the underlying data for a research article) for developing teaching materials. In business schools, MBA teaching relies heavily on cases, particularly Harvard Business Review cases. Since I started teaching, I’ve been puzzled that almost none of the cases provide data that can be statistically analyzed. Of the few that do, as far as I know, the data are simulated or at least not claimed to be the actual data. This seems like an odd way to teach students about the increasingly analytics-based practice of making business decisions.

For the past few months, I’ve been developing an MBA course on experimental methods (think online A/B testing, test markets, in-store stocking experiments, direct-mail tests, advertising and communications experiments, etc.). After not finding suitable cases, we started writing cases based on published field (not lab) experiments. The catch was that I wanted field experiments with publicly available data, so that students could go through the data analysis process themselves, using real data.
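To give a sense of what that process looks like, here is a minimal sketch in Python, with entirely made-up numbers rather than data from any actual case, of the kind of analysis students run on an online A/B test: comparing conversion rates between a control and a treatment group using a two-proportion z-test.

```python
from math import sqrt, erf

def normal_sf(z):
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * (1 - erf(z / sqrt(2)))

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_b - p_a, z, 2 * normal_sf(abs(z))

# Hypothetical field experiment: 12,000 customers per arm of an email test
lift, z, p = two_proportion_ztest(conv_a=540, n_a=12000, conv_b=612, n_b=12000)
print(f"lift = {lift:.3%}, z = {z:.2f}, p = {p:.3f}")
```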

I’ve found some very nice examples to develop into cases, but I was surprised at how difficult it was. Even in research journals that require (or at least strongly encourage) making data public, the data for many papers carried restrictions or were entirely unavailable because they were proprietary. Often this reflects conditions that companies impose before they will share data with researchers. I’m sensitive to companies’ concerns about how their data might be used if made public, of course. But the benefits of open data (and the value of papers that make their data public) go far beyond just checking up on the authors’ analyses.

 

 


A textbook case: Definitely true. Maybe.

Yesterday, I discussed a recent paper which looked at whether introductory psychology textbooks promote or correct “myths” about psychology.

A few years back, I came across an interesting 2009 book specifically about correcting myths in psychology, 50 Great Myths of Popular Psychology, by Lilienfeld et al. They discuss misconceptions about how much of our brains we use, hypnosis, polygraphs, positive attitudes and disease, self-esteem, criminal profiling, expert judgment, and so on. While some researchers are sure to disagree with their characterizations, at a minimum their critiques suggest substantial caution in accepting these claims as facts.

At the end of the book, in a postscript, they list 13 findings that they characterize as “Difficult to Believe, But True” (pp. 248–250).

The irony? One of the findings they list as true is implicit egotism, the theory that people are more likely to select options in major life choices (including professions, locations, and spouses) whose names resemble their own (i.e., “Dennis the Dentist”). Much of the evidence for this, however, was shown by Uri Simonsohn, in his 2011 paper “Spurious? Name similarity effect (implicit egotism) in marriage, job and moving decisions,” to be replicable but accounted for by other explanations.

Another on their list of true findings? The finding that holding a warm object makes people feel more warmly toward others, which other researchers have recently failed to replicate.

This is not to say that implicit egotism or social-warmth priming are now known to be false. Perhaps subsequent research will revise the currently negative prognoses of these effects.  But it is telling, I think, that even in a book about skepticism towards psychological theories, at least two of thirteen findings were oversold as being known to be true.

There has been a lot of discussion about how to change the publication process to try to make individual papers’ findings more reliable. This is definitely important.  But perhaps it’s just as important that we lower our expectations about what any one paper will achieve.  In most cases, we simply can’t adopt the conclusions of a paper until they have been subjected to critical debate and replication attempts, direct and conceptual, especially by those skeptical of the theory. The less enthusiastically a field supports such debate, the longer it will take until we can reliably consider a finding well-established. Until then, every finding is definitely true. But only maybe.

A textbook case.

A recent paper surveys coverage of famous conclusions and examples in psychology that are the topic of active debate, or that are outright incorrect.

Perhaps the most famous case they discuss is the Kitty Genovese bystander effect example. This 1964 murder case was once considered a classic illustration of the bystander effect: people in groups failing to act because each assumes someone else will or should take action. However, the truth turns out to be more complicated, with fewer witnesses than assumed, neglected calls to the police, and some questionable journalism.

The paper documents inaccurate coverage of some of these topics in introductory textbooks, particularly media violence, stereotype threat, and the bystander effect example. They discuss these inaccuracies in terms of the desire to support favored views (e.g., an ideological bias) or a preference for simplicity and conclusiveness that presents psychological progress in a positive light.

“Aside from this, textbooks had difficulty covering controversial areas of research carefully, often not noting scholarly debate or divergent evidence where it existed. … The errors on these issues were universally in the direction of presenting controversial research or scientific urban legends as more consistent or factual than they are.”

Much of social science is inherently noisy and even the most reliable findings are usually multiply determined. The impatient response to that is to downplay what we don’t know, sweep complexity and uncertainty under the rug, and prematurely declare hypotheses to be established theories and established theories to be scientific laws.

Perhaps it would make for less satisfying textbooks if we paid more attention to what we don’t know and discussed controversies in the literature instead. But one of the benefits that students could get from studying psychology is an accurate understanding of human behavior as complex and difficult to predict.

What disrupts continuity of personal identity?

[4th post in a series, start with the first post here]

I’ve been writing this week about Parfit’s idea that personal identity should be thought of as defined by the continuity or overlap in important psychological properties between the self at different times. One question raised by his work is which changes matter for personal identity: which changes are important enough to disrupt psychological connectedness to the future self, and which are merely superficial, not affecting how connected people feel to their future selves?

Nina Strohminger and Shaun Nichols have a fascinating paper looking at mental changes, specifically those caused by dementia, Alzheimer’s disease, and ALS (Lou Gehrig’s disease). They study family members of patients suffering from these diseases, looking at relationships with the patient, as well as perceptions of whether the patient is still the same person.

They find that dementia has the strongest negative effect on relationships with family members, and ALS the least, with Alzheimer’s somewhere in between. These differences in the diseases’ negative effects on patients’ relationships can be traced back to relatives’ greater perception of the dementia (vs. ALS) patients as being a different person. These perceptions, fundamentally about the continuity of another person’s identity, in turn seem to depend most strongly on the relative’s perception that the patient’s morality has changed.

To put it another way, the patient’s amnesia, depression, personality changes, and length of illness all had minimal effects on whether relatives saw the patient as no longer the same person. What mattered most was the relative’s perception that the patient’s morality had changed (along with, to a lesser degree, the patient having difficulty speaking). This points to morality as potentially being at the core of what needs to remain the same for people to feel connected to their future selves.

Stephanie Chen, along with Dan Bartels and me, has looked at this same question in a very different way. She measured people’s causal maps of aspects of their identity (see an example below) by having people identify which aspects of their identity caused which other aspects. She then asked people to identify which aspects, if changed, would be most disruptive to their identity.

[Figure: an example causal map of identity aspects]

She finds that morality is important, but not necessarily the most important, when people are contemplating possible future changes in their own identity. Instead, what seems to matter most on this way of looking at it are the causal interrelationships: the aspects that are linked to the most other aspects of the self, either as a cause or as an effect, are the ones that are most important for identity.

The aspects that have the most causal connections vary across people.  Of the aspects we tested, the ones that had the most links to other aspects and had the most impact on identity were personal goals and intelligence.  Wholesomeness, loyalty and honesty had somewhat lower levels of both links and perceived impact on identity, on average.

[Figure: causal links and perceived impact on identity, by aspect]

These results suggest that what matters for a person’s identity will depend on how that person sees their personal aspects. If I think of my moral values as connected to lots of other aspects of myself, then I am more likely to see a future change in my morality as disrupting the continuity of my identity. But if I see my moral values as separate from other aspects, then I will tend to see changes in those moral values as less disruptive than a change in something more central and connected — perhaps my intelligence, goals or memories.
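To make the causal centrality idea concrete, here is a minimal sketch in Python, using an invented causal map rather than data from our studies: each aspect’s importance is simply the number of links it has, whether as a cause or as an effect.

```python
from collections import Counter

# An invented causal map: (cause, effect) pairs linking aspects of identity.
causal_map = [
    ("intelligence", "personal goals"),
    ("intelligence", "honesty"),
    ("memories", "personal goals"),
    ("personal goals", "loyalty"),
    ("honesty", "loyalty"),
]

links = Counter()
for cause, effect in causal_map:
    links[cause] += 1    # the aspect appears as a cause
    links[effect] += 1   # the aspect appears as an effect

# Aspects with the most connections are predicted to be the most
# disruptive to identity if they were to change.
for aspect, n_links in links.most_common():
    print(f"{aspect}: {n_links} links")
```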

 

 

Parfit, connectedness and death

[3rd post in a series, start with the first post here]

On Tuesday, I wrote about Derek Parfit’s paper, Personal Identity, and his views about how thinking of the future self as not fully the same person as the present self (i.e., as not fully psychologically connected) might have implications for how people think about death:

“The second consequence which I shall mention is implied in the first. Egoism, the fear not of near but of distant death, the regret that so much of one’s only life should have gone by — these are not, I think, wholly natural or instinctive. They are all strengthened by the beliefs about personal identity which I have been attacking. If we give up these beliefs, they should be weakened.”

This is a view that he took seriously in his own life, and that he evidently found liberating.

In 2015, we organized a conference on personal identity and decision making. Shaun Nichols gave a fascinating talk on work he and his co-authors were doing on exactly this question, including comparing across religions with different views of personal identity. He briefly discusses his research in the video below (at the 1:10 mark).

It turns out that, for most other people, it doesn’t work quite the way Parfit described for himself. In Shaun and his co-authors’ experiments, making people feel more connected to their future selves did not seem to reduce their anxiety about death. They speculated that when people think about death, they are not thinking in terms of a future self, but are imagining it as occurring in the present.

I also wonder if the psychological will to live transcends how we think of tradeoffs between present and future. One potential implication of how Parfit thinks of death is that we are kind of constantly dying a little bit. Every day’s self is at least a bit disconnected from the previous day’s self, and that previous day’s self is gone. For Parfit, this makes the entire question of death moot, in a way — there is no single long-term enduring self that ceases to exist. But another way of thinking about this is to see your child self as (at least somewhat) dead, and to know that at some point in the future, your future self will consider your present self to be (at least somewhat) dead too.

In some sense that seems right. My 10-year-old self is gone, and in a weird way, I kind of miss that kid. A lot of what defined our past selves can remain in our present selves, of course, but some of it is gone. I think for most people, what is gone is a combination of those things that they wanted to change, those changes that are inevitable, and those aspects of the self that somehow just slip away when we’re focused on other things.

So, while I think Parfit’s view resonates with people’s experience to some degree, it’s not clear to me that the implication is to necessarily be less threatened by death.  If a person has low connectedness, believing that the future self will be fundamentally different, then that eventual death of the disconnected future self may be less threatening. But if the low connectedness itself is seen as a kind of gradual death of the current self, thinking in terms of connectedness could feel threatening rather than liberating.

[See the last post here].

More on Parfit

[2nd post in a series, start with the first post here]

Vox has a profile of Derek Parfit out today, covering the impact of his ideas on philosophy.

When we were working on the connectedness research I discussed yesterday, it was back in the days of paper surveys, and I brought unused blank surveys home for my kids to draw on. At one point, we got into a discussion about it, and I tried to explain to my then 9-year-old daughter the idea of connectedness to the future self. I wasn’t sure that my explanation was making all that much sense to her. But then a few days later I found this picture:

[Image: my daughter’s drawing]

I think it actually captures the idea of having high connectedness to the future self really well. We age and our bodies change, but if the core that defines me, whatever I think that is, remains the same, then I’m still mainly the same person.

[See the next post here].

Derek Parfit

The philosopher Derek Parfit has died. The New Yorker published an interesting profile of his work a few years ago.

I’ve spent a lot of time in the last several years working with Dan Bartels on research related to Parfit’s thinking. My own knowledge of Parfit’s work was always a bit secondhand. Dan had read his work closely and saw the potential relevance for how people value outcomes in the future, or to put it in psychological and economic terms, for time discounting. I had been working on time discounting, and Dan got me interested in how Parfit’s ideas might shed light on the topic.

Much of Parfit’s thinking is laid out in his 1984 book Reasons and Persons. His 1971 paper, “Personal Identity,” lays out a shorter case for how he thought about present and future selves.

Parfit is interested in an issue widely debated in philosophy: how personal identity can be defined. He starts with a scenario from David Wiggins, in which a person’s brain is removed, split in two, and transplanted into two new bodies, each of which will have the original person’s character and memories. The question is whether the original person has survived, and if so, whether as one person or two. After discussing this case, and an opposite case in which two people’s brains are merged into one, he introduces a third way of thinking about identity:

“But let us look, finally, at a third kind of being.

In this world there is neither division nor union.  There are a number of everlasting bodies, which gradually change in appearance.  And direct psychological relations, as before, hold only over limited periods of time.

Our beings would have one reason for thinking of themselves as immortal. The parts of each “line” are psychologically continuous. But the parts of each “line” are not all psychologically connected. Direct psychological relations hold only between those parts which are close to each other in time.”

In this paper, Parfit proposed that we rethink personal identity in terms of psychological overlap: the degree to which psychologically important features are held in common by the present self and a future or past self, which he calls “psychological connectedness.” He goes on to say:

“On this way of thinking, the word ‘I’ can be used to imply the greatest degree of psychological connectedness. When the connections are reduced, when there has been any marked change of character or style of life, or any marked loss of memory, our imagined beings would say, ‘It was not I who did that, but an earlier self.’ They could then describe in what ways, and to what degree, they are related to this earlier self.

This revised way of thinking would suit not only our ‘immortal’ beings.  It is also the way in which we ourselves could think about our lives. And it is, I suggest, surprisingly natural.”

What starts off as an abstract philosophical discussion suddenly has, if you agree with his point of view, clear implications for how we should think about the past and future when making decisions. Parfit goes on to elaborate on the implications for the norm of self-interest. In his view, if a person has low psychological connectedness with the future self, then it makes sense for that person to care less about that future self, and by extension to make decisions that don’t give much weight to long-term consequences. This is presented in normative terms (and has since been widely debated in philosophy).

Descriptively, psychologists and behavioral economists had long noted that people seemed to give more weight to present than future outcomes in making decisions (like choosing $50 now over $100 in 6 months). This occurs to a degree that is difficult to explain in purely economic terms (i.e. in terms of interest rates and inflation).
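A quick back-of-the-envelope calculation (my own illustration, not a result from any particular paper) shows why: preferring $50 now to $100 in six months implies a discount rate far beyond any plausible interest or inflation rate.

```python
# Preferring $50 now over $100 in six months implies indifference only at a
# six-month discount rate above 100%, i.e. roughly 300% annualized with
# compounding -- far beyond realistic interest rates or inflation.
now_amount, later_amount, years = 50.0, 100.0, 0.5

six_month_rate = later_amount / now_amount - 1                    # 1.00 -> 100%
annualized_rate = (later_amount / now_amount) ** (1 / years) - 1  # 3.00 -> 300%
print(f"implied 6-month rate >= {six_month_rate:.0%}, annualized >= {annualized_rate:.0%}")
```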

My first year at Chicago, I saw Dan present a paper he had been working on with Lance Rips, showing that differences in how people weighted near vs. distant future outcomes against the present could be at least partially explained by how people’s subjective psychological connectedness declines over greater lengths of time.

Dan and I then looked at how people who are (or are made to be) more or less connected to their future selves are more vs. less patient in economic decisions. We also studied how the effect of connectedness on decisions depends on whether the person tends to think about (or is reminded to think about) the future consequences of their decisions. Most recently, Stephanie Chen, Dan and I have investigated  which disruptions to an aspect of a person’s identity would have the most impact on reducing subjective connectedness. Dan and quite a few other people (including Hal Hershfield and Shane Frederick) have also been studying these and related questions in the last few years, and I’ve just finished a review paper summarizing the current state of research on this topic.

In Personal Identity, after discussing the norm of self-interest, Parfit then goes on to discuss possible implications for how people might think about death:

“The second consequence which I shall mention is implied in the first. Egoism, the fear not of near but of distant death, the regret that so much of one’s only life should have gone by — these are not, I think, wholly natural or instinctive. They are all strengthened by the beliefs about personal identity which I have been attacking. If we give up these beliefs, they should be weakened.”

[First in a series, see the next post here].

Which embodied cognition?

Andrew Gelman has a post up noting that the upcoming APS conference will feature a Presidential Symposium on embodied cognition (“Sense and Sensibility: How Our Bodies Do – and Don’t – Shape Our Minds”). He’s troubled by the inclusion of Amy Cuddy, whose work on “power posing” has not held up well to replication attempts, and whose co-author has questioned the findings and the methods used.

It’s worth noting, however, that research on what is called “embodied cognition” encompasses some very different approaches and assumptions.

One view, closely related to priming, focuses on the effects of bodily cues on thinking, often non-consciously and often via metaphors. This kind of research suggests, for example, that people holding a heavy clipboard would find the information more important (e.g., conflating physical weight with the metaphor of “weighty information”).

Another view is based more on an association between mental states and bodily states, so that the things we do as a response to a mental state can then also independently cause that mental state.  The classic example here is the idea that inducing or preventing a smile-like facial expression can increase or reduce feelings of happiness.  Cuddy’s hypotheses fall within this framework.

A third view is based on the observation that we use our bodies when engaging in abstract reasoning. Behaviors like gesturing while speaking, doing mathematics on an abacus, or using sign language all involve physical hand movements in mental reasoning processes. The idea is that, over time, those physical movements become an important part of the reasoning process. Kids who learn math on the abacus, for example, will make abacus-like hand movements even when doing mental arithmetic, and some research suggests that encouraging the use of gesture helps children learn mental arithmetic.

The concerns about non-replication have been focused on findings in the first two areas, as far as I know (e.g., here and here).  That could be because findings in those areas were more dramatic and got a lot more press and replicator attention, or because the findings supporting the third view are actually more robust.

I will say that the third view accords with my own experience in a way that the first two don’t.  I have moderate right-left confusion. I often make mistakes when I need to say “right” or “left”, which is a problem when I’m a passenger giving directions.  I find that I’m a lot more accurate if I point physically, rather than try to say it verbally.  So, the physical movement actually seems to help with a conceptual task, at least in my case.

Getting back to the APS session, it’s worth noting that several of the speakers in the session study what I’ve called the third kind of embodied cognition (gestures and sign language). This area of research actually has a long history, predating the more attention-getting recent stuff. Sometimes when more speculative findings in an area don’t hold up, the reaction is to then throw out related findings, even those that make more sense. If this session is a step towards the field sorting through which theories have more or less empirical support, debating when bodily states do and don’t impact thinking, and mapping out a sensible way forward, that would be a good thing.

Nudging isn’t easy…

Indranil Goswami and I have a new paper in the most recent issue of JMR on the effects of defaults (or suggested amounts) in fundraising.  There’s a nice writeup out today in the Wall Street Journal describing our findings.

We’re looking at situations in which people choose among a menu of options (such as smaller and larger donation amounts), and one of the options can be set as the default amount. We wanted to figure out what the optimal default level was. Would a charity raise more money if it suggested the typical donation amount, or something smaller, or something larger?

The answer is that it’s complicated — small defaults lead to more people donating, but in smaller amounts. Setting a large amount as the default leads to fewer people donating, but larger gifts from those who do. The right strategy then depends on the charity’s goals and on whether donation amounts or participation are “stickier” in the specific setting. Our practical recommendation after a few years of studying this? Run an experiment to find out what works in your own unique setting. Not the most satisfying answer.
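As a toy illustration of the tradeoff (with made-up numbers, not our actual results), expected revenue per person solicited is just the participation rate times the average gift among those who give, so either default can come out ahead:

```python
# Made-up numbers for illustration only (not results from our paper).
scenarios = {
    "low default":  {"participation": 0.12, "avg_gift": 20.0},
    "high default": {"participation": 0.07, "avg_gift": 40.0},
}

for name, s in scenarios.items():
    # Expected revenue per person solicited = participation rate x average gift
    revenue = s["participation"] * s["avg_gift"]
    print(f"{name}: ${revenue:.2f} per person solicited")

# Here the high default wins ($2.80 vs. $2.40), but a slightly bigger drop in
# participation would flip the answer -- hence "run an experiment."
```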

This got us thinking more broadly about how “nudges” are developed and disseminated. Typically, there’s an academic research finding, often from a lab setting chosen to isolate just the effect of interest, designed to be a precise (or dramatic) test of a psychological theory. Then, there might be a debate about why the effect occurs, with further studies designed to distinguish between different explanations.

What emerges seems like a fully developed theory, ready for implementation. After all, the initial paper on defaults has been cited over 3,500 times already! But the academic research process often neglects questions that are especially important for practical use. Because we typically focus on situations in which a single effect can be studied, the resulting incomplete theories may have little to say about the preconditions for an intervention to be successful, whether the intervention will have conflicting effects (e.g., on participation vs. amount), and whether multiple interventions will complement or detract from each other.

To their credit, groups like the “nudge unit” in the UK and the SBST in the US have used the academic literature as a source of hypotheses to test in their own field experiments, rather than as a source of solutions to implement. But with the increasing public interest in “behavioral economics” (loosely defined), not everyone will be as humble about what the field knows and doesn’t know.  Beware of behavioral scientists bearing solutions! Even when the psychology of the intervention is right, the consequences in one context can be very different from the consequences in another context.