As we reported in our September Newsletter, fraud allegations have been raised against one of Ethical Systems’ past collaborators, Francesca Gino. While the truth of the matter remains in dispute, Gino is suing Harvard University (her employer) after it placed her on leave and revoked her professorship. She is also suing the behavioral scientists at Data Colada who discovered and publicized what appears to be fraudulent data in four now-retracted studies (1,2,3,4) authored by Gino and colleagues.
Since juicy headlines about ethics researchers being unethical are irresistible to editors and readers, media reports have abounded. And I’ve found myself asking a lot of questions. Kindly don’t confuse my quest for explanations as one for excuses: Explaining how and why ethical failures occur can offer us tremendous opportunities to learn from them—and potentially prevent future lapses, as we’ll see below.
First, does this indicate that the profession of ethics is on the wrong track? Misconduct in research is commonplace. A scandal recently led Stanford’s president, a neuroscientist, to resign. But shouldn’t researchers in ethics operate with higher standards than those in other fields? Sadly, available research suggests that ethicists are no more principled than the rest of us.
Does this mean that the field of ethics research is particularly plagued by unethical data and protocol breaches and that its researchers are hypocrites who fail to practice what they preach? Or is the field more closely scrutinized due to the higher expectations it raises—and because some ethics experts feel that their best contribution might be to police the research of others? (The second explanation could serve as evidence that this field contains more ethical thinking than others, though more scrutiny might reflect a hunger for headlines, not truth.) Which brings us to a common query in ethics: Are these the bad actions of individuals or the typical actions of individuals in a bad system?
Do other experts fare better with respect to the subject of their research? Doctors tend to smoke less. But it’s questionable that cancer researchers have less cancer. Do geneticists have better genes? Of course not. Ethics may be more similar to (or determined by) genetics than we have previously recognized; experts cannot change who we are at a fundamental level and can only help describe our true nature. Awareness, attitudes, and even the threat of being caught might not be strong enough to overcome a researcher’s hard-wired self-interest. And cheating can advance careers and help overloaded academics manage high workloads.
What we experience as free will may operate in a narrow channel between genetics and the environment, but we certainly have choices. It’s difficult to overstate the potential value of solving these problems. Nearly every realm of society suffers from unethical behavior, and our solutions remain primitive.
A study on the actual (self-reported) behavior of philosophy professors engaged with ethics found that they do not behave better than others in general, the exception being that they eat less meat. Meat consumption by students can be reduced by teaching them relevant ethics. This isn’t very edifying, however: It’s not all that difficult to eat less meat; vegetables, grains, and legumes usually cost less; and abstaining from factory-farmed meat genuinely reduces the suffering of animals. Moreover, eating less meat can bring abstainers social praise (never guaranteed), enhancements to identity, and, potentially, improved health.
A more impressive behavioral change would be one that moves us to act against our human and selfish nature in a way that generally brings negative personal rewards while providing a net benefit for others or for the world at large. When an ethics researcher cheats, no animals are harmed. When they behave ethically, nobody notices.
When ethicists behave unethically, it probably affects the general credibility of ethics research. Like other hypocrisies, such as climate change activists taking private jets, behavior that contradicts the core message taints it. But fields tasked with overcoming humanity’s self-interested nature are likelier to be riddled with personal failures—and plenty of onlookers are pleased to gloat. Getting people to act against their self-interest for the greater good is akin to trying to get people to eat less—or worse, advocating for celibacy as a means of population control. That’s how difficult and antithetical to human nature these endeavors tend to be.
There has always been debate on whether true altruism exists. Do any of us genuinely do things for others without personal benefit? It’s possible that we are entirely selfish without exception, and that even something that appears to be altruistic was done for reciprocity, reputation, or positive emotional feedback. Or we may exercise altruism as an evolutionary adaptation to ensure the survival of our social group and species. The important thing here is that we don’t yet know the answer. We are still discovering what it means to share our human nature as we continue probing to what extent acting in our self-interest is hardwired and unavoidable.
The processes by which professional ethicists act unethically may offer exactly the kind of evidence that helps answer these open questions. This is why investigation and learning, rather than total condemnation, may be our best way forward. Perhaps fresh ethics researchers should work with those who have failed to be ethical; the latter might serve as valuable research subjects for in-depth case studies. Ethicists caught being unethical might well be asked to write articles or books about the entire experience for the benefit of a university or charity as a useful penance.
There is also the issue of success and its relationship to unethical behavior. Success seems to play a dual role in promoting unethical behavior. First, attempts to succeed can be enhanced by cheating and fabrication. Less obvious is that chasing success may lead to accidentally unethical behavior by way of running too many studies, processing them too quickly, or creating similar problems by valuing quantity over quality. Achieving success may even raise ethical problems. I’ve noted that many, if not most, successful people in my field (psychology) become burdened by their successes, particularly at the highest levels. Many of those with big names in the field tend to be chronically overworked, spread far too thin in response to overwhelming demand. All that speaking, writing, researching, teaching, and traveling can put them at the limit of their abilities. When everyone wants to collaborate or get something from them, their prominent names can wind up on studies they lacked time for. They must choose between taking a rare moment of rest and rigorously completing every task before them.
It is while in this state of keeping “too many irons in the fire” that people are most susceptible to ethical breakdown. When the paths to and from success are paved with effort that takes people to the limit amid multiple incentives for unethical behavior, something is seriously wrong with the framework in which they operate.
By changing its focus from improving human beings to improving their systems and frameworks, ethics could become the first field to design systems in which flawed people can function ethically. Leading by example cannot mean demanding that ethicists be saints of impeccable character. It could instead mean conceding that few such people exist. Ethics could become the first field to reorganize itself accordingly. This could provide the example humanity truly needs. After all, if we can’t solve ethics for ethics experts, how will we ever improve corporations and the government?
What would reorganizing the field of ethics look like if it were to prioritize the goal of working with, rather than against, human nature? Frankly, I can’t say with certainty. I am calling for this new perspective, not claiming to have all the answers. Still, it might look something like:
• publishing (and celebrating) null results as progress;
• providing greater job security/mobility in academia that’s not based on publishing success;
• moving incentives (acclaim, promotion, fame) to a later stage in the research process, following collaborative replication;
• expanding delegation of work to reduce overbearing workloads, thereby seeking quality over quantity.
Instead of rewarding promising new theories or a single paper, rewards would come after adequate replication and be awarded to a larger group of independent researchers or teams. Research would probably improve by having more people involved; issuing potentially spurious, one-time results would no longer be sufficient. And by recognizing that people have choices—and that punishment can be effective—broad agreement on clear and meaningfully negative consequences for ethical breaches would help a lot. Whether sanctions are monetary, require certain actions, or mandate a change in employment status, they should be so clear that everyone knows them.
There are few examples of breakthrough research that has solved ethical problems. What we think of as landmarks tend to uncover human faults that we can adapt to once we know them. Milgram showed us that people will harm others if they can claim to be following orders, and Zimbardo demonstrated that civilized college students become just like defiant prisoners and authoritarian guards (including embracing abusive behavior) when the situation demands it. (It’s worth noting that both experimenters rocketed to fame on research—conducted in earnest to understand human behavior—that was later deemed unethical!) Nobody solved the human tendencies they revealed. The progress that resulted grew out of an acceptance that these things cannot be changed; only the scenarios to which people are subjected can be modified.
Any way to inspire and support more ethical behavior, even if it derives from a pessimistic view of human nature, would constitute great progress that could be replicated in other areas. So if we accept that human nature cannot readily be changed, we might find ourselves positioned to advance the field. And while some might fear that shifting some responsibility from human beings to environments and systems reduces accountability, a systems-focused approach would hardly toss meritocracy out the window. We could reward effort and productivity (instead of publication) while retaining punishments that shape behavior. Rewards and sanctions would remain important parts of the system; they could simply be shifted and made more effective.