Featured Collaborator of the Month: Nicholas Epley
Professor Nicholas Epley, author of Mindwise: How We Understand What Others Think, Believe, Feel, and Want, is the featured collaborator of the month for November.
This section includes:
- An interview with Professor Epley about his work and ethical systems design
- A video of a talk he’s given
- One of his academic articles
- A popular article published in Salon
- A summary of his recent book Mindwise
Interview with Professor and author Nicholas Epley
What is the main research for which you are known?
I study social cognition, what my colleague John Cacioppo once defined as “how thinking people think about other thinking people.” More specifically, I study a particularly focused version of social cognition that can be thought of as “mind reading”: how thinking people make inferences about others’ thoughts, beliefs, intentions, emotions, and other mental states. These inferences guide our social lives, and they are made so quickly and easily that we rarely pause to consider that we might be mistaken. And yet the mind of another person is immensely complex, and our inferences are far from perfect. Most interesting to me is understanding how we, as human beings, make these inferences so that we can enable people to understand each other better.
My interest in relating social cognition, or mind reading, to ethics began with my first research project in graduate school, conducted with one of my advisors, David Dunning. This project examined self-righteousness, an all-too-easily documented tendency that many people have: to think that they are more moral or ethical than others. Our interest was in the accuracy of these inferences. When people claim to be more ethical than others, which judgment is right – the optimistic judgment about themselves or the more pessimistic judgment about others? To find out, we conducted a series of experiments in which we asked one group of people to predict how they and another person would behave in an ethical situation, and then tested how people actually behaved in these situations. In one experiment, we asked undergraduates to predict how they and others would behave during “Daffodil Days,” an annual charity drive on campus that benefits the American Cancer Society. Our participants predicted that they would be more ethical than others, with 83% predicting that they would give money to the charity but that only 56% of their peers, on average, would do the same. More than a month later, we contacted these original participants again to find out how many of them had actually donated to the charity. In fact, only 44% had donated, suggesting that expectations about others were better calibrated than expectations about oneself.
More interesting is why we observe this difference in accuracy. In subsequent experiments, we found that it is explained by the different kinds of information we have about ourselves versus others. When thinking about ourselves, we have access to our good intentions and plans, but we do not have such ready access to others’ intentions and plans. Instead, we must rely on what we actually see others do. We found that this “inside access” to one’s own good intentions leads to mispredictions of ethical behavior: when people learn about others’ good intentions, their predictions of others’ behavior become every bit as optimistically biased. In this case, people misunderstood themselves because they had too much information, not because they had too little. This gap between the information we have about our own minds and what we infer about others has guided most of my work ever since.
How does your work help companies that want to improve themselves as ethical systems?
In two ways. First, it identifies the ways in which our thinking about ethical behavior might be mistaken, overestimating the importance of individual attributes such as good intentions or moral character and underestimating the power of contextual attributes that guide behavior in surprisingly powerful ways. Second, it identifies how people’s inferences about each other can enable unethical behavior. In recent years we have become increasingly interested in “dehumanization,” cases in which people think about others more as mindless animals or objects than as fully mindful human beings. We think this is a critical precursor to moral disengagement, one that enables people who think of others as animals or objects to actually treat them as such. Understanding how social cognition can shape ethical action helps organizations design systems that preclude this kind of thinking.
If we could only highlight one paper or research finding that relates to Ethical Systems, which one would it be and why?
I’d highlight one that on its face seems to have nothing to do with ethical behavior at all, but at a deeper level it shows the power of subtle changes in context to alter human judgment and behavior. This was a very simple set of experiments some years ago in which we gave people money in different ways and then looked at how they spent it. The critical tweak was that in one condition, participants were told that the money was a “bonus,” additional money that was not theirs to begin with. In the other condition, participants were told that the money was a “rebate,” money they had already paid into some larger system (such as university tuition) and were now getting back. Technically, both things were true (it was bonus money, and it was funded by my research grant, which was indirectly funded by tuition dollars), but it was the same $50 no matter what you called it. In one experiment, we gave people $50 framed either as a bonus or as a rebate. A week later, we got in touch with the participants again and asked them what they did with the money. We found that our participants reported spending, on average, $22.04 in the bonus condition but only $9.55 in the rebate condition. This framing effect was a small contextual change that altered the way our participants interpreted the money they received. That small change, however, had surprisingly large effects on subsequent behavior because it altered the way people construed the situation they were in. Evidence like this is what convinces me that a systems-based approach is critical for understanding how to enable more ethical behavior in everyday life.
The reference is: Epley, N., Mak, D., & Idson, L. (2006). Bonus or Rebate?: The impact of income framing on spending and saving. Journal of Behavioral Decision Making, 19, 213-227.
Tell us about one of your current or future research projects.
Juliana Schroeder, one of our outstanding Ph.D. students at the University of Chicago, and I have recently been studying what we think is the humanizing power of a person’s voice. That is, the ability of a person’s voice to reveal not just what might be on the mind of that person, but also what are seen as fundamentally human capacities, such as the capacity for sophisticated reasoning, rationality, and intellect. In one experiment, for instance, we asked MBA students at the University of Chicago to give a short “elevator pitch” to a potential employer. This is a roughly two-minute speech every student knows how to give that is intended to convince a potential employer to hire you. Essentially, this is an MBA student’s chance to show how smart he or she is. We asked our MBA students both to give a spoken pitch and to create a written pitch. We then transcribed their spoken pitch to text. Observers were then asked to listen to the pitch, to read the transcript, or to read the written pitch, to judge how intelligent, thoughtful, and competent the student was, and also to report how interested they would be in hiring the student. The MBA student participants themselves did not anticipate a significant difference in how they would be evaluated, and yet there was a significant difference. The MBA students were judged to be more mindful—more thoughtful, intelligent, and competent—when observers heard what they had to say than when they read a transcript of the same speech (with obvious dysfluencies removed) or read a written pitch. Observers also reported being more interested in hiring the person when they heard the candidate. We found that adding additional individuating cues, such as visual cues in a video, did not affect these judgments. A person’s mind, I think, is most clearly conveyed through a person’s mouth, through paralinguistic cues (particularly pitch variance) that reveal the presence of both thought and emotion.
Stripped of a voice, a person seems less mindful, less fully human. There is much more to say about this topic, and many more experiments we’ve conducted to describe, but I think this is potentially important for understanding ethical behavior towards others. Without a voice, a person may, in subtle ways, be evaluated and subsequently treated as slightly less human.
Featured video: Professor Epley speaks at the Myron Scholes forum sponsored by the Initiative on Global Markets at the University of Chicago.
Featured academic article: Waytz, A., & Epley, N. (2012). Social connection enables dehumanization. Journal of Experimental Social Psychology, 48, 70-76.
Featured popular article: The psychology of hate: How we deny human beings their humanity
Summary of Mindwise: How We Understand What Others Think, Believe, Feel, and Want