Sunday, February 26, 2023

Social production of moral indifference - 5d

Ethical blindness describes the risk that over time and under the pressure of their context, individuals lose the ability to see that what they do is wrong. It is important because it is the driving force behind big scandals. We all know why bad people do bad things. However, we will never understand large-scale systematic and systemic cases of immoral and illegal behaviour if we do not understand why and under what conditions good people are vulnerable to such behaviour. The fascinating question is why you and I, under certain circumstances, would have done what managers did at Volkswagen, Boeing, Purdue or Wells Fargo.

The various cases of corporate fraud - Enron, WorldCom, the 2008 financial crisis, etc. - lead to a fundamental question: Is dishonesty largely restricted to a few bad apples, or is it a more widespread problem? If the problem is not confined to a few outliers, that would mean that anyone could behave dishonestly at work and at home — you and I included. This is the problem that Dan Ariely seeks an answer to in The Honest Truth about Dishonesty. He shows that the rational cost-benefit forces that are presumed to drive dishonest behavior often do not, and the irrational forces that we think don’t matter often do.

In rational economics, the prevailing notion of cheating comes from the economist Gary Becker, a Nobel laureate who suggested that people commit crimes based on a rational analysis of each situation. He noted that in weighing the costs versus the benefits, there was no place for consideration of right or wrong; it was simply about the comparison of possible positive and negative outcomes. According to this model, we all seek our own advantage as we make our way through the world. Whether we do this by robbing banks or writing books is inconsequential to our rational calculations of costs and benefits.

In The Honest Truth about Dishonesty, Dan Ariely writes that if this theory is correct, then the response should be to a) increase the probability of being caught (through hiring more police officers and installing more surveillance cameras, for example) and b) increase the magnitude of punishment for people who get caught (for example, by imposing steeper prison sentences and fines). This is the model that is generally followed and accepted by policy-makers and the public.
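Becker's calculus is simple enough to write down. Here is a minimal sketch of it in Python; the function name and the numbers are illustrative assumptions, not figures from Becker or Ariely:

```python
# A toy version of Becker's rational model of crime: cheat whenever
# the expected gain exceeds the expected cost of getting caught.

def rational_agent_cheats(gain, p_caught, penalty):
    """True if a purely rational cost-benefit calculator would cheat."""
    expected_cost = p_caught * penalty
    return gain > expected_cost

# Stealing 1,000 with a 5% chance of a 10,000 fine:
# expected cost = 0.05 * 10,000 = 500, so the model says cheat.
print(rational_agent_cheats(1000, 0.05, 10_000))   # True

# The model's only policy levers: raise p_caught or raise penalty.
print(rational_agent_cheats(1000, 0.25, 10_000))   # False
```

Note that right and wrong appear nowhere in the function; detection and punishment are the only levers, which is exactly why the standard policy response is more police and steeper fines.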

Ariely’s experiments suggest that we don’t cheat and steal as much as we would if we were perfectly rational and acted only in our own self-interest. Cheating is not necessarily due to one guy doing a cost-benefit analysis and stealing a lot of money. Instead, it is more often an outcome of many people who quietly justify taking a little bit of cash or a little bit of merchandise over and over. Essentially, we cheat up to the level that allows us to retain our self-image as reasonably honest individuals.

Our behaviour is driven by two opposing motivations. On the one hand, we want to view ourselves as honest, honourable people. We want to be able to look at ourselves in the mirror and feel good about ourselves (psychologists call this ego motivation). On the other hand, we want to benefit from cheating and get as much money as possible (this is the standard financial motivation). Clearly these two motivations are in conflict. How can we secure the benefits of cheating and at the same time still view ourselves as honest, wonderful people?

This is where our amazing cognitive flexibility comes into play. Thanks to this human skill, as long as we cheat by only a little bit, we can benefit from cheating and still view ourselves as marvellous human beings. This balancing act is the process of rationalisation, and it is the basis of what Ariely calls the “fudge factor theory.” All of us continuously try to identify the line where we can benefit from dishonesty without damaging our own self-image. The question is: where is the line?

The fudge factor suggests that if we want to take a bite out of crime, we need to find a way to change the way in which we are able to rationalise our actions. When our ability to rationalise our selfish desires increases, so does our fudge factor, making us more comfortable with our own misbehaviour and cheating. The reverse is true as well; when our ability to rationalise our actions is reduced, our fudge factor shrinks, making us less comfortable with misbehaving and cheating. Various environmental forces increase or decrease honesty in our daily lives, including conflicts of interest, counterfeits, pledges, and simply being tired.
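As a contrast with Becker's agent above, the fudge factor can be sketched the same way. This is only a toy rendering of Ariely's idea; the threshold values are assumptions for illustration, not his experimental data:

```python
# A toy "fudge factor" agent: it cheats, but only as much as it can
# rationalise while still seeing itself as honest.

def amount_cheated(max_gain, fudge_factor):
    """Cheat up to the fraction of the gain that self-image tolerates.

    fudge_factor grows with rationalisations ("everybody does it",
    "it's for a good cause") and shrinks with reminders of honesty.
    """
    return min(max_gain, fudge_factor * max_gain)

# Becker's agent would take all 100; the fudge-factor agent takes only
# what it can justify to itself.
print(amount_cheated(100, 0.15))   # 15.0: a little, self-image intact
print(amount_cheated(100, 0.40))   # 40.0: more rationalisation, more cheating
```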

If we increase the psychological distance between a dishonest act and its consequences, the fudge factor increases and people cheat more. People are more apt to be dishonest in the presence of non-monetary objects than actual money — you are more likely to take a pad of paper or a pencil from the office than money from the petty cash box. When you take money, you can't help but think you're stealing. When you take a pencil, there are all kinds of stories you can tell yourself. You can say this is something everybody does. Or, if I take a pencil home, it's actually good for work because I can work more.

The situation changes our ability to rationalise. When rationalisation increases -- for example, when we tell ourselves that everybody does it, or that we are doing it for a good cause -- we cheat to a higher degree. Non-monetary exchanges allow people greater psychological latitude to cheat, leading to crimes that go well beyond pilfered pens to backdated stock options, falsified financial reports, and crony deals. Ariely writes:

From all the research I have done over the years, the idea that worries me the most is that the more cashless our society becomes, the more our moral compass slips. If being just one step removed from money can increase cheating to such a degree, just imagine what can happen as we become an increasingly cashless society.

Could it be that stealing a credit card number is much less difficult from a moral perspective than stealing cash from someone’s wallet? Of course, digital money (such as a debit or credit card) has many advantages, but it might also separate us from the reality of our actions to some degree.

If being one step removed from money liberates people from their moral shackles, what will happen as more and more banking is done online? What will happen to our personal and social morality as financial products become more obscure and less recognisably related to money (think, for example, about stock options, derivatives, and credit default swaps)?

Again, the idea is that once we are distanced from money, it is easier to feel that we are honest while nevertheless being dishonest. Moreover, you aren’t dealing with real cash; you are only playing with numbers that are many steps removed from cash. Their abstractness allows you to view your actions more as a game, and not as something that actually affects people’s homes, livelihoods, and retirement accounts. As Auden said in Letter to Lord Byron, ‘Today, thank God, we’ve got no snobbish feeling / Against the more efficient modes of stealing.’

The creation on Wall Street of mortgage-backed securities made it harder to be a good person. When being not such a nice guy suddenly gets more profitable, it becomes harder to resist temptation.  People didn’t get greedier, but the gains from dishonesty rose, so we got more dishonesty. And when you're surrounded by all these people who think the same way, it's very hard to think differently. Every time you reward someone’s dishonesty you are encouraging others to do the same.

Tuesday, February 14, 2023

Social production of moral indifference - 5c

Although the benefits of cooperation are shared equally among all members of the group, the costs are borne privately by each cooperator. The tension between “public goods—private costs” results in what is sometimes known as the Cooperator’s Dilemma. It turns out that a group consisting entirely of rational agents motivated solely by greed and fear is incapable of cooperation. But it is cooperation that underlies the functioning of human groups, whether economic organizations, such as firms and corporations, or political organizations, such as states.
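The dilemma can be made concrete with a standard public-goods game, in which each unit contributed is multiplied and shared equally while its cost stays with the contributor. The group size and multiplier below are illustrative assumptions:

```python
# A toy n-player public-goods game: contributions are doubled and
# shared equally (public good), but each contribution costs its
# contributor the full amount (private cost).

def payoff(i_contribute, others_contributing, n=4, multiplier=2.0):
    contributions = others_contributing + (1 if i_contribute else 0)
    public_share = multiplier * contributions / n   # shared equally by all
    private_cost = 1.0 if i_contribute else 0.0     # borne privately
    return public_share - private_cost

# Whatever the others do, the individually rational move is to defect...
print(payoff(True, 3), payoff(False, 3))   # 1.0 vs 1.5
print(payoff(True, 0), payoff(False, 0))   # -0.5 vs 0.0
# ...yet a group of cooperators (1.0 each) beats a group of defectors (0.0 each).
```

Defection strictly dominates for each individual, which is why a group of purely rational agents cannot cooperate even though everyone would be better off if they all did.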

In the years since the publication of The Selfish Gene, a great deal of work has been done exploring human-specific mechanisms for fostering cooperation. Take, for example, the idea of open-ended play. Two individuals play the Prisoner’s Dilemma, knowing that after a single round they’ll never meet again. Rationality decrees that you defect. What about two rounds? It requires noncooperation for the same reason the single-round game does: no punishment can follow the second round, so the second round is effectively a single-round game in which the rational strategy is to defect; knowing that, the players treat the first round the same way.

Three rounds? The same. In other words, playing for a known number of rounds biases against cooperation, and the more rational the players, the more they foresee this. Cooperation flourishes when games have an uncertain number of rounds. This produces the shadow of the future, where retribution is possible. Here our reputations precede us and produce a sense of obligation and reciprocity. The Nobel Prize–winning economist Kenneth Arrow has concluded, 'Virtually every commercial transaction has within itself an element of trust, certainly any transaction conducted over a period of time.' 
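A quick simulation makes the shadow of the future tangible. The sketch below uses the standard Prisoner's Dilemma payoffs and an assumed continuation probability; it is an illustration of the argument, not anyone's published model:

```python
import random

# Iterated Prisoner's Dilemma with an uncertain horizon: after each
# round the game continues with probability w, so there is always a
# possible future round in which defection can be punished.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strategy_a, strategy_b, w=0.9, rng=random.Random(0)):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    while True:
        a = strategy_a(hist_b)            # each sees the other's past moves
        b = strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
        if rng.random() > w:              # the shadow of the future runs out
            return score_a, score_b

tit_for_tat = lambda opp: 'C' if not opp else opp[-1]    # reciprocate
always_defect = lambda opp: 'D'                          # "pure rationality"

# With an expected horizon of ~10 rounds (w=0.9), two reciprocators
# steadily out-earn what a defector can extract from either of them.
print(play(tit_for_tat, tit_for_tat))     # both cooperate every round
print(play(always_defect, tit_for_tat))   # one exploit, then mutual defection
```

Push w toward zero and the horizon shrinks to a single round, where always_defect wins; as w rises, reciprocity pays.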

A gene-centric theory is unable to explain such obvious features of human social life as morality, sympathy, and generosity. As the evolutionary biologist David Sloan Wilson explains in Does Altruism Exist?, “. . . group-level functional organization evolves primarily by natural selection between groups.” Groups have multiple-round games and the means to spread news of someone being a jerk. On the other hand, when rational, self-serving calculations enter the equation, they tend to undermine ultrasociality; we see the results in “nepotism” and “cronyism.”

In Evolution for Everyone, David Sloan Wilson writes that William Muir, an animal breeder at Purdue University, tried to figure out how to make chickens produce the most eggs. He bred two groups of chickens. In both cases he housed the hens in cages, which is standard practice in the poultry industry. In the first method he did what the 'rank and yank' system does: he picked the hardest-working, most productive hen within each cage and used all of those to breed the next generation of hens. The second method involved selecting the most productive cages and using all the hens from those cages to breed the next generation.

Thus the same trait (egg productivity) is selected in both cases, although in the second method the whole cage is selected, even though the best cage might include some individual duds. The first method caused egg productivity to perversely decline, even though the most productive hens were chosen each and every generation. The second method caused egg productivity to increase 160 percent in six generations.

The first method favoured the nastiest hens, who achieved their productivity by suppressing the productivity of other hens. After six generations, Muir had produced a nation of psychopaths, who plucked and murdered each other in their incessant attacks. No wonder egg productivity plummeted! In the second approach he selected the most productive groups, and because these were groups that already worked well together, they were full of peaceful and cooperative hens.

Why was each superstar the prime egg-producing champion in her original group? Because she would aggressively peck subordinates enough to stress them into reduced fertility. Put all these mean champions together, and a group of formerly subordinate hens, now left in peace, will outproduce them. This is the circumstance of a genetically influenced trait that, while adaptive at the individual level, emerges as maladaptive when shared by a group. The “rank and yank” system is a recipe for disruptive, self-serving behaviors.
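Muir's two schemes are easy to mimic in a toy simulation. Everything below (the "nastiness" trait, its coefficients, the group sizes) is an illustrative assumption, not Muir's actual data; the point is only the direction of the two outcomes:

```python
import random

rng = random.Random(42)

# Each hen carries a "nastiness" trait: it raises her own output a
# little but suppresses her cagemates' output as well.

def outputs(cage):
    total = sum(cage)
    return [10 + 2 * n - 0.5 * (total - n) for n in cage]

def breed(parents, n_cages=20, cage_size=9):
    # Offspring inherit a parent's nastiness, plus a little noise.
    return [[max(0.0, rng.choice(parents) + rng.gauss(0, 0.2))
             for _ in range(cage_size)] for _ in range(n_cages)]

def run(select_individuals, generations=6):
    cages = [[rng.uniform(0, 2) for _ in range(9)] for _ in range(20)]
    for _ in range(generations):
        if select_individuals:
            # Method 1 ("rank and yank"): the best hen from every cage --
            # within a cage, that is always the nastiest hen.
            parents = [cage[outputs(cage).index(max(outputs(cage)))]
                       for cage in cages]
        else:
            # Method 2: every hen from the most productive cages.
            best = sorted(cages, key=lambda c: sum(outputs(c)), reverse=True)[:5]
            parents = [hen for cage in best for hen in cage]
        cages = breed(parents)
    return sum(sum(outputs(c)) for c in cages) / (20 * 9)  # mean output per hen

print(run(select_individuals=True))    # declines: nastiness is bred in
print(run(select_individuals=False))   # rises: peaceful cages are bred in
```

Selecting on exactly the same trait, the two methods push it in opposite directions, because the trait that wins within a cage is the one that shrinks the cage's total.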

Even the smartest person can be misled by a false view of human nature. In the long term, the success of the individual is inextricably bound up with the success of the group. Reciprocity is made up of a series of acts each of which is short-run altruistic (benefiting others at a cost to the altruist), but which together typically make every participant better off. The loss of the capacity to feel guilty and the consequent loss of a sense of responsibility may be the biggest problems facing the world today.

Economists have recently discovered that trusting communities, other things being equal, have a measurable economic advantage. This is probably because when individuals cooperate, what economists term “transaction costs” — the costs of the everyday business of life, as well as the costs of commercial transactions — are reduced. Moreover, students of public health find that life expectancy itself is enhanced in more trustful communities. Honesty and trust smooth the inevitable frictions of social life.

Putting all one's faith in a legal system, complete with courts and law enforcement, provides only a partial solution. If we needed legal advice and a police presence to formulate and enforce the simplest agreement, escalating transaction costs would prevent much mutually beneficial cooperation. As Diego Gambetta, a student of trust (and of the Mafia), points out, “Societies which rely heavily on the use of force are likely to be less efficient, more costly, and more unpleasant than those where trust is maintained by other means.” In Bowling Alone, Robert D. Putnam writes:

. . . social capital greases the wheels that allow communities to advance smoothly. Where people are trusting and trustworthy, and where they are subject to repeated interactions with fellow citizens, everyday business and social transactions are less costly. There is no need to spend time and money making sure that others will uphold their end of the arrangement or penalizing them if they don’t. Economists such as Oliver Williamson and political scientists such as Elinor Ostrom have demonstrated how social capital translates into financial capital and resource wealth for businesses and self-governing units.