The electoral votes have confirmed Joe Biden won the 2020 United States presidential election. The presidential electors gave Biden 306 electoral votes to President Donald Trump’s 232 votes. Biden also won the popular vote by more than 7 million votes.
Nonetheless, results from a new NPR/PBS NewsHour/Marist survey found that approximately three-quarters of Republicans did not trust the election results. Corroborating this finding, a separate study of 24,000 Americans found that nearly two-thirds of Republicans lacked confidence in the fairness of the election and over 80 percent feared fraud, inaccuracy, bias, and illegality. In addition, nearly 60 lawsuits filed by Trump claiming various forms of election fraud have been dismissed, including two evaluated by the U.S. Supreme Court.
Of course, doubting the fairness of a disappointing decision is not a Republican phenomenon—it’s a human one.
People often come to a conclusion that is aligned with their self-interest—what psychologists call “motivated moral reasoning.”
When people get the outcome they want from a decision, they tend to see that outcome as fair. For example, people who apply for a promotion and receive it are likely to believe they deserved it. But when they don’t get the promotion, the reaction is often different. At that point, the process used to make the decision takes on utmost importance: Was it free of bias, consistent, and ethical?
To investigate this perplexing phenomenon, it’s important to understand the psychology of fairness. Research consistently finds that when people get an unfavorable outcome but believe the process used to make the decision was fair, they react more positively. They may be disappointed, but they tend to accept the decision and stay loyal to the institution that made the decision. This is known as the “fair process effect”: the tendency for fair procedures to mitigate negative reactions to an unfavorable decision.
However, research my colleagues and I conducted in 2009 identified an important caveat to this effect. We found that when an unfavorable decision matters deeply to someone—when it is central to their identity as a member of a group or to their personal values—they tend to look for flaws suggesting that the process used to make the decision was unfair.
In the first study, we asked 180 university students about a decision the administration would soon make about limiting students’ free speech. We manipulated whether the outcome was favorable: half of the students were told the administration planned to restrict free speech, and the other half were told there would be no restrictions. We also manipulated the process: some students were told they would have an opportunity to express their concerns in a public forum, while others were told they would not. We then assessed whether students felt the administration’s decision violated their identity as members of the university and their personal values.
We found that when students felt the decision violated their social or personal identity, they perceived both the process and the outcome as unfair, even when they had the opportunity to express their views at a public forum. In other words, for people whose identity was violated, there was a weak or no relationship between having a voice and perceiving the process as fair.
In the second study, we asked 277 adults with work experience to recall a workplace decision in which the outcome was favorable (or not) and the process was fair (or not). As in the prior study, we found that an objectively fair process did not improve fairness perceptions when an outcome violated one’s identity. Instead, these participants were more likely to say that there was a procedural flaw—they doubted the opinions they provided to the decision-maker were ever considered.
The fact that they did not get the outcome they wanted on something that was central to their identity led participants to seek out reasons that an objectively fair process was somehow flawed in a meaningful way. They felt the need to discredit the process.
These findings are consistent with other research showing that, for those who have a strong moral stance on an issue, judgments about whether the process and outcome are fair are determined more by whether the outcome was favorable than by whether the procedure was objectively fair.
For example, participants who supported abortion rights believed a trial was less fair when a defendant was acquitted of bombing a clinic that performed abortions than did participants who opposed abortion rights. Similarly, participants who opposed abortion rights believed a trial was less fair when a physician charged with providing illegal late-term abortions was acquitted than did those who supported abortion rights.
When we care deeply about an issue and get an unfavorable outcome, we question the process used to make the decision.
In an environment in which partisan and identity politics rule, perhaps it is not surprising that a decision that hurts one’s in-group—in this case, Republican supporters—is dismissed on the basis of perceived procedural flaws that render the election unfair, despite objective reality. Of course, the act of discounting the fairness of a decision process when a decision violates one’s identity is not limited to one political party. For example, after Brett Kavanaugh was confirmed as a Supreme Court justice, Democrats tended to believe that his confirmation hearings were unjust, including the withholding of important evidence.
Given that anyone can fall victim to this bias, several things can be done.
First, leaders should legitimize the decision process. For example, when an organization changes its policy to extend or reduce the number of remote workdays per week, leadership at all levels should make clear that a reasonable and fair process was used to reach the decision. Second, it helps to consult someone impartial. Because people wrestling with an ethical conundrum often reach conclusions aligned with their self-interest, a neutral party can assess the decision more accurately. Third, reducing how distinct and isolated people feel from members of another group—in particular, by not dehumanizing them—can weaken the belief that a decision process was rigged or biased.
People often do not get the outcome they want on issues central to their identity, so it is important to actively guard against questioning the legitimacy of an objective and fair process.
David Mayer is an Ethical Systems collaborator and a Professor of Management & Organizations at the University of Michigan.
Lead image: Chad Davis / Flickr
Reprinted from The Conversation.