Wednesday, May 27, 2020

The tyranny of algorithms - IV

It is commonly assumed that algorithms mindlessly execute their programs and see patterns in the data without any biases. This view fails to acknowledge that they reflect the minds and worldviews of their creators. When we outsource thinking to machines, we are actually outsourcing thinking to the organisations that run those machines. For example, both Amazon and Netflix give recommendations, about books and films respectively, but the nature of their recommendations differs. Amazon will nudge you towards the types of books with which you are already familiar, while Netflix will direct you towards unfamiliar movies: blockbusters cost more to stream, and Netflix makes more profit when users watch more obscure films.
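As a purely hypothetical sketch of how such an incentive can be wired into a recommender, consider a ranking function that blends a user's predicted interest with the platform's margin on each title. The item names, fields and weights below are invented for illustration, not anyone's actual system.

```python
# Hypothetical sketch: a ranking that encodes corporate incentives rather than
# pure "relevance". Item names, weights, and the margin field are invented.

def rank_recommendations(candidates, user_affinity, margin_weight=0.5):
    """Score candidate titles by predicted user interest blended with
    how profitable each title is for the platform to serve."""
    def score(item):
        relevance = user_affinity.get(item["id"], 0.0)   # how much the user likely wants it
        profitability = item["margin"]                   # e.g. cheap to stream => high margin
        return (1 - margin_weight) * relevance + margin_weight * profitability
    return sorted(candidates, key=score, reverse=True)

catalog = [
    {"id": "blockbuster", "margin": 0.1},    # expensive licence, low margin
    {"id": "obscure_film", "margin": 0.9},   # cheap to stream, high margin
]
prefs = {"blockbuster": 0.8, "obscure_film": 0.4}

# With margin_weight=0 the user's favourite wins; raise the weight and the
# cheaper-to-serve title floats to the top, without the user seeing the dial.
print([x["id"] for x in rank_recommendations(catalog, prefs, margin_weight=0.7)])
```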

Thus the algorithms are programmed to direct users towards whatever benefits the corporation, although the propaganda will be that this is done 'to enhance user experience'. The power to include, exclude, and rank is the power to ensure that certain public impressions become permanent, while others remain fleeting. How does Amazon decide which books to prioritize in searches? How does it identify fake or purchased reviews? Why do Facebook and Twitter highlight some political stories or sources at the expense of others? Although internet giants say their algorithms are scientific and neutral tools, it is very difficult to verify those claims.

When the Amazon boss Jeff Bezos started out, he said that Amazon intended to sell books as a way of gathering data on affluent, educated shoppers. The books would be priced close to cost in order to increase sales volume. After collecting data on millions of customers, Amazon could figure out how to sell everything else dirt cheap on the Internet. Books are no longer Amazon's only business: it also sells hardware and operates as a video distributor, a production studio, and a grocery deliverer. According to one publisher’s estimate, book sales in the U.S. now make up no more than seven per cent of the company’s annual revenue. Books were going to be the way to get the names and the data: a customer-acquisition strategy.

Amazon is ruled by computer engineers and M.B.A.s who value data above all and believe only in measurable truths. The vast majority of them fall into two or three similar categories, and Bezos is the same: the introverted, detail-oriented, engineer-type personality; not musicians, designers, or salesmen. A former Amazon employee who worked in the Kindle division said that few of his colleagues in Seattle had a real interest in books: “You never heard people say, ‘Hey, what are you reading?’ Everyone there is so engineering-oriented. They don’t know how to talk to novelists.” Amazon's writers were under pressure to prove that their work produced sales. If a customer clicked on a review or an interview and then left the page without making a purchase, it was logged as a Repel. Writers had to make sure that their repulsion rate was not too high.
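A minimal sketch of what a "repel rate" style metric might look like; the field names and the threshold are assumptions for illustration, not Amazon's actual implementation.

```python
# Minimal sketch of a "repel rate": the share of visits to a piece of editorial
# content that end without a purchase. Field names and threshold are invented.

def repel_rate(visits):
    """visits: list of dicts like {"clicked_review": True, "purchased": False}"""
    relevant = [v for v in visits if v["clicked_review"]]
    if not relevant:
        return 0.0
    repelled = sum(1 for v in relevant if not v["purchased"])
    return repelled / len(relevant)

log = [
    {"clicked_review": True, "purchased": False},   # read the review, left: a "Repel"
    {"clicked_review": True, "purchased": True},
    {"clicked_review": True, "purchased": False},
]
rate = repel_rate(log)
print(f"repel rate: {rate:.0%}")        # 67% in this toy log
if rate > 0.5:                           # hypothetical threshold
    print("editorial copy flagged as repelling customers")
```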

The customer has always been king in the Bezos ethos. "Amazon gives the customers what they want: low prices, vast selection and extreme convenience," he told a shareholders’ meeting. On these terms, Amazon’s success is stellar. It has more than 2 million titles on sale; bestselling books are routinely discounted by 50 percent or more; and it ranked first in Business Week’s "customer service champs" awards a couple of years ago. Dennis Johnson, an independent publisher, says that “Amazon has successfully fostered the idea that a book is a thing of minimal value — it’s a widget.” Adrian Chen of Gawker.com said, 'Do you remember books? A book is basically thousands of tweets printed out and stapled together between pieces of cardboard.'

Recently, Amazon even started creating its own “content” — publishing books. The old print world of scarcity — with a limited number of publishers and editors selecting which manuscripts to publish, and a limited number of bookstores selecting which titles to carry — is yielding to a world of digital abundance. Amazon will say that, because an unprecedented number of titles are available in an instant, “it’s never been a better time to be a reader.” It will point to the growth of online reader networks, such as GoodReads, which Amazon owns, as a welcome development: “Suddenly, we’re not locked into hearing the opinions of a small number of reviewers in newspapers.” The elimination of the “gatekeepers,” as Bezos calls the professionals who get in the customer’s way, is one of his pet themes; he believes it will rid the public sphere of their biases and inefficiencies.

Bezos believes that he provides an argument against elitist institutions and for “the democratization of the means of production”. “Even well-meaning gatekeepers slow innovation,” Bezos wrote in his 2011 letter to shareholders. “When a platform is self-service, even the improbable ideas can get tried, because there’s no expert gatekeeper ready to say ‘that will never work!’” Amazon believes that its approach encourages ever more people to tell their stories to ever more people, and turns writers into entrepreneurs; the price per unit might be cheap, but the higher number of units sold, and the accompanying royalties, will make authors wealthier. It believes that selling digital books at low prices will democratize reading.

When it comes to the books it stocks, Amazon makes no pretense of selectivity. Provided it carries an ISBN and isn’t offensive, Amazon is happy to sell any book anyone cares to publish. "We want to make every book available — the good, the bad and the ugly," Bezos once said. As Evgeny Morozov says in To Save Everything, Click Here, 'Whether these books are Sudoku puzzles or Tolstoy novels doesn't matter at all, for it is all about the number of books downloaded, pages flipped, and memes created.' Many would argue that the increase in the variety of books being published that Amazon has encouraged can only be a good thing, that it enriches cultural diversity and expands choice.

But is this the whole truth? Is the method of production the only criterion to consider? Is the only goal of publishing to produce as many books as possible? Gatekeepers are barriers against the complete commercialization of ideas, allowing new talent the time to develop and learn to tell difficult truths. Ideas and artistry are important in deciding what is sold but Amazon neglects these criteria and focuses only on measurable data like sales volumes and price points provided by 'unbiased' algorithms. There is also the paradox of choice: when people are offered a narrower range of options, their selections are likely to be more diverse than if they are presented with a number of choices so vast as to be overwhelming. When consumers are overwhelmed with choices, some experts argue, they all tend to buy the same well-known thing.

Amazon has the ability -- and willingness -- to lose money in order to put competitors out of business. It views losing money as a marketing expense, the cost of acquisition of a new customer. Very few companies besides Amazon could absorb such losses without being penalized by the market. A monopoly is dangerous because it concentrates so much economic power, but in the book business the prospect of a single owner of both the means of production and the modes of distribution is especially worrisome: it would give Amazon more control over the exchange of ideas than any company in history. Some have wondered if Amazon would eventually control so much of the market that it would stop selling books at cost and raise prices to become more profitable.

Saturday, May 16, 2020

The tyranny of algorithms - III

Because we focus so much on the miracles of Google, we are too often blind to the ways in which it exerts control over its domain, especially because it offers so many services free. But there is an implicit non-monetary transaction between Google and its users. Google rules the Web through its power to determine which sites get noticed, and thus trafficked. It stores “cookies” in our Web browsers to track our clicks and curiosities. Google gives us Web search, e-mail, the Blogger platform, and YouTube videos. In return, Google gets information about our habits and predilections so that it can more efficiently target advertisements at us. Google’s core business is consumer profiling: it generates dossiers on many of us. Our habits (trust, inertia, impatience) keep us from clicking past the first page of search results, and Google understands that default settings can work just as well as coercive technologies.

When confronted with questions about its dominance in certain markets, Google officials always protest that, on the Internet, barriers to entry are low, and thus any young firm with innovative services could displace Google the way Google displaced Yahoo and Alta Vista. As a Google lawyer said, “Competition is a click away.” That argument relies on the myth that Internet companies are weightless and virtual. It might be valid if Google were merely a collection of smart people and elegant computer code. Instead, Google is also a monumental collection of physical sites such as research labs, server farms, data networks, and sales offices. Replicating the vastness of Google’s processing power and server space is unimaginable for any technology company except Microsoft. The argument about user behavior might hold if boycotting or migrating from Google did not mean losing the advantages of integration with its other services and accepting a significant downgrade in service.

Google’s argument also ignores the “network effect” in communication markets: a service increases in value as more people use it. A telephone that is connected to only one other person has very limited value compared with one connected to 250 million people. YouTube is more valuable as a video platform because it attracts more contributors and viewers than any other comparable service. The more users it attracts, the more value each user derives from using it, and thus the more users it continues to attract. If only a few people used Google for Web searching, Google would not have the data it needs to improve the search experience. Network effects tend toward standardization and thus potential monopoly.
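A back-of-the-envelope illustration of the arithmetic behind this: if a network's potential value is taken to grow with the number of possible connections among its users (the familiar Metcalfe heuristic, n(n−1)/2), the gap between a small service and a dominant one becomes astronomical. The numbers below mean nothing in themselves; the point is how steeply relative value climbs as users join.

```python
# Back-of-the-envelope network effect: potential value grows with the number
# of possible connections, n*(n-1)/2. Purely illustrative.

def potential_connections(n_users: int) -> int:
    return n_users * (n_users - 1) // 2

for n in (2, 100, 1_000_000, 250_000_000):
    print(f"{n:>12,} users -> {potential_connections(n):>22,} possible connections")

# 2 users give 1 connection; 250 million give roughly 3.1e16. A service that is
# merely twice as big is far more than twice as attractive to the next user,
# which is why network effects tend toward a single dominant platform.
```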

Google and other online vendors shy away from presenting effective ways for users to manage their privacy. An important point that Siva Vaidhyanathan makes in The Googlisation of Everything is that “celebrating freedom and user autonomy is one of the great rhetorical ploys of the global information economy … meaningful freedom implies real control over the conditions of one’s life. Merely setting up a menu with switches does not serve the interests of any but the most adept, engaged, and well-informed”.

Google sells our fancies, fetishes, predilections, and preferences to advertisers. While Google provides users with the information that they seek, seemingly for free, it collects the gigabytes of personal information and creative content that millions of Google users provide to the Web for free every day, and sells this information to advertisers of millions of products and services. Google runs an instant auction among advertisers to determine which one is placed highest on the list of ads that run across the top or down the right-hand column of the search results page.
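The mechanics can be pictured with a toy auction of this kind: advertisers bid per click, bids are weighted by an estimated ad quality, and each winner pays roughly what is needed to beat the ad ranked below it. This is a generic second-price-style sketch with invented numbers, not Google's actual (and far more elaborate) mechanism.

```python
# Toy ad auction: rank by bid * quality, charge each winner just enough to
# beat the next ad down. All names and numbers are invented.

def run_auction(bids, slots=3):
    """bids: dict of advertiser -> (max_bid_per_click, quality_score 0..1)"""
    ranked = sorted(bids.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
    winners = []
    for i, (name, (bid, quality)) in enumerate(ranked[:slots]):
        if i + 1 < len(ranked):
            next_bid, next_q = ranked[i + 1][1]
            # pay just enough to keep the rank, never more than the stated bid
            price = min(bid, next_bid * next_q / quality + 0.01)
        else:
            price = 0.01  # nominal reserve price for the lowest winner
        winners.append((name, round(price, 2)))
    return winners

print(run_auction({
    "big_brand":  (2.50, 0.6),
    "small_firm": (1.20, 0.9),
    "competitor": (0.80, 0.7),
}))
```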

Although Google’s contextual advertising and instant auctions often serve the interests of small firms, its freedom to set such rates at any level it desires allows it to crowd out some of the small firms that have grown to depend on Google for their most valuable advertising outlets, including small firms that are Google’s potential competitors. Another way in which Google limits its competition is through a touchy issue called cross-subsidization. Siva Vaidhyanathan says in The Googlisation of Everything:
Google can use its prominence in people’s lives — the network effect — and its surplus revenues to support its other ventures — its online document business, for example. This poses a serious threat to small, creative companies that offer Web-based word processors, such as Zoho. If Google uses its profitable ventures to subsidize those activities destined to lose money, and if that practice kills off innovative potential competitors like Zoho, Google has crossed the line into shaky legal territory. 
Google refuses to acknowledge that its algorithms can sometimes malfunction and cause ethical problems. In To Save Everything, Click Here, Evgeny Morozov gives the example of Google's Autocomplete feature. When you start typing your query, Google's algorithms give four suggestions based on how other users have completed the query. It is a useful feature that saves a few seconds of the user's time. (Of course, it also often limits the query to the given options, because users are too lazy to complete their original query.)

Suppose that, in a deliberate attempt to smear your reputation, someone pays a large number of users to search for your name followed by the word 'pedophile'. Enough volume can be generated to make the term one of the search suggestions, replacing a more benign one. Now everyone who searches for your name is also informed that you may be a pedophile. You cannot appeal to the algorithms because they are supposed to be always right. Google will say, 'We believe that Google should not be held liable for terms that appear in Autocomplete as these are produced by computer algorithms based on searches from previous users, not by Google itself', even though it knows that its algorithms can be gamed.
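A toy model makes the vulnerability obvious: if suggestions are simply the most frequent past completions of a prefix, then flooding the query log shifts the list. The names and counts below are invented, and real suggestion systems use many more signals and filters, but the incentive to game raw frequency is the same.

```python
# Toy frequency-based autocomplete: suggestions are the most common past
# completions of a prefix, so coordinated searches can push a smear term in.

from collections import Counter

query_log = Counter({
    "jane doe author": 900,
    "jane doe books": 750,
    "jane doe interview": 400,
})

def suggest(prefix, log, k=4):
    matches = {q: n for q, n in log.items() if q.startswith(prefix)}
    return [q for q, _ in Counter(matches).most_common(k)]

print(suggest("jane doe", query_log))       # benign suggestions

# A coordinated campaign floods the log with a defamatory completion...
query_log["jane doe pedophile"] += 1200

print(suggest("jane doe", query_log))       # ...and it now tops the list
```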

It cannot be ascertained for certain whether Bettina Wulff, Germany's former first lady, was the victim of such a hit job. In 2012, she sued Google for 'auto-completing' searches for her name with terms like 'prostitute' and 'escort'. In Japan, Google was ordered to modify its Autocomplete search results after a man complained that they linked him to crimes he did not commit. In France, Google was ordered to modify its Autocomplete search results after a man complained that they suggested he was a 'satanist' and a 'rapist'.

Wednesday, April 29, 2020

The tyranny of algorithms - II

Once upon a time, everybody loved the internet. It would make us freer, richer, smarter. It would make us better citizens, better consumers, better humans. Most people continue to think of tech companies as businesses that sell some product or service at a profit: Amazon just wants to sell you stuff, Facebook wants to show you advertising, and Apple wants you to buy the latest iPhone. But your phone, your computer, and the various apps and programs you run come from just a few Silicon Valley overlords, and their companies very much want to shape what you do. They hope to automate the choices that we make every day. Unrestrained power, no matter how well-meaning or alluring, is something of which we should always be wary.

Google, Facebook, and Amazon use technology and data to sidestep traditional restrictions on monopoly power. Credit raters, search engines, major banks and the like collect data about us and convert it into scores, rankings, risk calculations, and watch lists that impact us daily. They use it to make important decisions about us and to influence the decisions we make for ourselves. A bad credit score may prevent a customer from getting any loan, yet he will never understand exactly how it was calculated. A predictive analytics firm may score someone as a “high cost” or “unreliable” worker, yet never tell her about the decision. As Frank Pasquale writes in The Black Box Society:
Continuing unease about black box scoring reflects long-standing anxiety about misapplications of natural science methods to the social realm. A civil engineer might use data from a thousand bridges to estimate which one might next collapse; now financial engineers scrutinize millions of transactions to predict consumer defaults. 
But unlike the engineer, whose studies do nothing to the bridges she examines, a credit scoring system increases the chance of a consumer defaulting once it labels him a risk and prices a loan accordingly. Moreover, the “science” of secret scoring does not adopt a key safeguard of the scientific method: publicly testable generalizations and observations.  As long as the analytics are secret, they will remain an opaque and troubling form of social sorting.
The proprietary algorithms by which digital economy companies collect and analyse data of their customers are immune from scrutiny. Internet companies collect more and more data on their users but fight regulations that would exercise some control over the resulting digital dossiers. The conclusions they come to — about the productivity of employees, or the relevance of websites, or the attractiveness of investments — are determined by complex formulas devised by lots of engineers and guarded by many lawyers. The law is very protective of secrecy in the world of commerce but is increasingly silent when it comes to the privacy of persons.
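To see why the person being scored cannot reconstruct the result, it is enough to imagine even a trivially simple scoring formula kept behind the corporate wall. The features, weights, and cut-off below are invented for illustration; real scoring models are vastly more complex, which only deepens the opacity.

```python
# Toy opaque score: a handful of secretly weighted inputs collapsed into one
# number. Features, weights, and cut-off are invented for illustration.

SECRET_WEIGHTS = {          # never disclosed to the person being scored
    "utilization": -220.0,  # fraction of credit limit in use
    "late_payments": -45.0,
    "account_age_years": 6.0,
    "recent_inquiries": -15.0,
}
BASELINE = 700

def credit_score(profile: dict) -> int:
    raw = BASELINE + sum(SECRET_WEIGHTS[k] * profile.get(k, 0) for k in SECRET_WEIGHTS)
    return int(max(300, min(850, raw)))

applicant = {"utilization": 0.6, "late_payments": 2, "account_age_years": 4, "recent_inquiries": 3}
score = credit_score(applicant)
print(score, "-> loan refused" if score < 620 else "-> loan offered")
# The applicant sees only the final number (and perhaps the refusal),
# never the weights that produced it.
```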

Are these algorithmic applications fair? Why, for instance, does YouTube (owned by Google) so regularly beat out other video sites in Google’s video search results? How does one particular restaurant or auto stock make it to the top of the hit list while another does not? It’s anyone’s guess, as long as the algorithms involved are kept secret. Without knowing what Google actually does when it ranks sites, we cannot assess when it is acting in good faith to help users, and when it is biasing results to favor its own commercial interests. The same goes for status updates on Facebook, trending topics on Twitter, or recommendations on Amazon. Franklin Foer says in World Without Mind:
Computer scientists have an aphorism that describes how algorithms relentlessly hunt for patterns. They talk about torturing the data until it confesses. Yet this metaphor contains unexamined implications. Data, like victims of torture, tells its interrogators what it wants to hear. 
Google is the dominant way we navigate the Internet, which makes it the primary lens through which we experience the world. Google has steadily added to the roles it plays in people’s lives: it hosts e-mail; it owns the free blog-hosting service Blogger; it offers online software such as a word processor, spreadsheets, presentation software, and a calendar service; it has its own Web browser, Chrome; its Google Books project has scanned millions and millions of volumes and made many of them available online at no cost; and it offers Google Maps, Street View, and Google Earth. As we shift more of our Internet use to Google-branded services such as Gmail and YouTube, Google is on the verge of becoming indistinguishable from the Web itself.

This gives it enormous power to set agendas and alter perceptions. But we should be cautious about putting our faith in the goodwill of an enterprise whose mission is “to organize the world’s information and make it universally accessible and useful.” We see so clearly how it makes our lives better, our projects easier, and our world smaller that we fail to consider the costs, risks, options, and long-term consequences of our over-reliance on it. Its biases (valuing popularity over accuracy, established sites over new ones, etc.) are built into its algorithms and affect how we value and perceive things. These may be legitimate choices by Google, but one has to recognize that they are choices, not science.

Friday, April 10, 2020

The tyranny of algorithms - I

[T]here is one world in common for those who are awake, but [when] men are asleep each turns away into a world of his own. — Heracleitus

It was assumed that the internet would level the playing field and invite new competition into markets that had always had high barriers to entry. Thoreau remarked that our inventions are but improved means to an unimproved end. Those who think they see clearly the direction in which a new technology will take us, especially the inventors of that technology, are blind to unforeseen consequences. In Technopoly, Neil Postman says that we are currently surrounded by people who see only the positives of a technology and are blind to its negatives. ‘They gaze on technology as a lover does on his beloved, seeing it as without blemish and entertaining no apprehension for the future. They are therefore dangerous and are to be approached cautiously.’

To illustrate his point about there being winners and losers in the development and spread of any technology, Postman writes about computer technology. It has increased the power of large-scale organizations like the armed forces, or airline companies or banks or tax-collecting agencies. But to what extent has computer technology been an advantage to the masses of people like steelworkers, vegetable-store owners, garage mechanics, musicians, bricklayers, etc.? He writes:

Their private matters have been made more accessible to powerful institutions. They are more easily tracked and controlled; are subjected to more examinations; are increasingly mystified by the decisions made about them; are often reduced to mere numerical objects. They are inundated by junk mail. They are easy targets for advertising agencies and political organizations. 
[SNIP]
It is to be expected that the winners will encourage the losers to be enthusiastic about computer technology. That is the way of winners, and so they sometimes tell the losers that with personal computers the average person can balance a checkbook more neatly, keep better track of recipes, and make more logical shopping lists. They also tell them that their lives will be conducted more efficiently. But discreetly they neglect to say from whose point of view the efficiency is warranted or what might be its costs. 
Should the losers grow skeptical, the winners dazzle them with the wondrous feats of computers, almost all of which have only marginal relevance to the quality of the losers' lives but which are nonetheless impressive. Eventually, the losers succumb, in part because they believe. . . that the specialized knowledge of the masters of a new technology is a form of wisdom. The masters come to believe this as well . . . The result is that certain questions do not arise. For example, to whom will the technology give greater power and freedom? And whose power and freedom will be reduced by it? 
About 15-20 years ago, the Internet was promising a new era of transparency, in which open access to information would result in extraordinary liberty. Law professor Glenn Reynolds predicted that “an army of Davids” would overthrow smug, self-satisfied elites. But the powerful players in the worlds of business, finance, and search deployed strategies of obfuscation and secrecy to consolidate power and wealth. The fates of individuals, businesses, and even our financial systems are at the mercy of hidden databases and dubious correlations generated by ‘unbiased’ algorithms. It has always been the case that those who have control over the workings of a particular technology accumulate power and inevitably form a kind of conspiracy against those who have no access to the specialized knowledge made available by the technology.

Technology gurus of Silicon Valley are creating a new universal narrative that legitimises the authority of algorithms that manipulate huge amounts of data. Every day people take in huge amounts of data through emails, phone calls and articles; process the data; and transmit back new bits through more emails, phone calls and articles. According to Yuval Noah Harari, the motto of these data mavericks says: “If you experience something — record it. If you record something — upload it. If you upload something — share it.” Internet companies will claim that the more you tell them, the more they can help you.

A key trend in how the Internet is developing today is the drive toward the personalization of our online experience. Everything we click, read, search, and watch online is increasingly the result of some complicated optimization effort (which is immune from scrutiny), whereby our previous clicks, searches, “likes,” purchases, and interactions determine what appears in our browsers and apps. Internet companies want that endless array of data points so that they can develop detailed profiles of the people who use them. Pattern recognition is the name of the game — connecting the dots of past behavior to predict the future. Every business wants a data advantage that will let it target its ideal customers. Since it costs us nothing, we don't think twice about it.

But it is a myth that sharing data has no costs. Those who cultivate competence in the use of a new technology become an elite group that are granted undeserved authority and prestige by those who have no such competence. For every discount or shortcut big data may offer, it’s probably imposing other, hidden costs or wild goose chases. Your data is a source of huge profit to other people, but often at your expense. When we click on an ad promising a discount, there’s probably a program behind the scenes calculating how much more it can charge us on the basis of our location or whether we’re using a Mac or PC. Recommendation engines at Amazon and YouTube affect an automated familiarity, gently suggesting offerings they think we’ll like. What finance firms do with money, leading Internet companies do with attention. They direct it toward some ideas, goods, and services, and away from others.
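A toy version of that behind-the-scenes calculation might look like the following. The rules and numbers are invented, and real systems infer far more from the data trail we leave, but the asymmetry is the same: the seller knows the formula, the buyer only sees the price.

```python
# Toy device- and location-based price steering. Rules and numbers invented.

BASE_PRICE = 100.00

def quoted_price(user_agent: str, region: str) -> float:
    price = BASE_PRICE
    if "Macintosh" in user_agent:         # crude proxy for presumed willingness to pay
        price *= 1.10
    if region == "affluent_suburb":       # coarse location signal
        price *= 1.05
    return round(price, 2)

print(quoted_price("Mozilla/5.0 (Macintosh; Intel Mac OS X)", "affluent_suburb"))  # 115.5
print(quoted_price("Mozilla/5.0 (Windows NT 10.0)", "elsewhere"))                  # 100.0
```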

Harari says that this reliance on Big Data to make important decisions will shift authority from humans to algorithms. Big Data could then empower Big Brother. Given enough biometric data and enough computing power, external algorithms could know us better than we know ourselves, and governments and corporations could then predict our decisions, manipulate our emotions and gain absolute control over our lives. If and when artificial intelligence (AI) surpasses human intelligence, it may be given control of weapon systems and crucial decisions, with potentially calamitous consequences.

Wednesday, March 18, 2020

Reality check on nuclear waste

After the US dropped atomic bombs on Hiroshima and Nagasaki during the Second World War, the erstwhile USSR also decided to join the nuclear arms race, and nuclear stockpiles grew manifold. The standard argument is that the deterrent nature of these weapons provides a security guarantee to many states. Kenneth Waltz, widely recognized as the father of neorealism in international relations, argued that the consequences of nuclear proliferation are likely to be positive. The power of a nuclear weapon state actually lies not in using the weapon but in having it, because once a state uses such weapons, it risks the wrath of the entire international community.

It is argued that nuclear weapons are thus not weapons of offence but of deterrence. Even their use for deterrence might be justified only when a state faces the gravest threat to its security and survival. Since 2014, the United Nations has annually observed the International Day for the Total Elimination of Nuclear Weapons. The biggest fear today is that nuclear weapons will fall into the hands of non-state actors, such as terrorist groups, who could use them to inflict tremendous harm on humanity at large.

The present era has been called the era of the balance of terror. The nuclear weapon powers hold populations of nations as mutual hostages. Many scientists support the destructive deeds of nations and politicians. Not surprisingly, the best scientists usually live and work in countries that are rich as well as strong. Many scientists are amoral and opportunistic, prone to claim credit for the good done in the name of science, while hastily repudiating the evil, claiming that the latter was the responsibility of either the technologist or his political and economic mentors, not that of the scientist.

The existence in social consciousness of the perception that a scientist's inventions cannot be separated from his moral values is illustrated by the fact that Frankenstein, the creator of the monster in Mary Shelley's story, has become the name of the monster in the public's mind. What is technically possible is not necessarily morally admissible. In all the rational, realpolitik discussions about nuclear weapons, what is often ignored is the problem of nuclear waste. In Tomorrow is Already Here, Robert Jungk writes:
Most strongly supervised of all are the "burial grounds" in which radioactive refuse is interred. These are dismal squares in the desert surrounded by red painted cement stakes. Each is under the care of a "burial operator," an atomic cemetery custodian, and is serviced by heavily masked workers.
Here, in long deep graves are buried the contaminated objects made of solid materials, such as receptacles, cans, metal caps, under a layer of earth a yard thick. Fluid refuse goes from the factory through subterranean pipes directly into deep under-ground tanks. These atomic graves increase in dimensions year by year. They provide the Atomic Energy Commission with more headaches than any other phase of its activity. 
For the materials buried here in the northwest inland desert will outlive us, the generation who have freed them through nuclear fission, by thousands, in part even by millions, of years before they lose their life-destroying power. Therefore the grave-yards must be marked so clearly and durably that each succeeding generation will know to shun them. Woe if the knowledge of the exact position of these poisoned zones were to be lost in the course of time! 
But there is also the danger that the "buried" in the Hanford graveyard may not be lying as quiet as their custodians wish. It is possible, even probable, that the radioactive poisons may be gradually working their way through the subsoil water and conceivably even through the layers of earth to regions not yet contaminated. A constant supervision of the entire geological sub-structure not only during our lifetime but increasingly during the lives of our grandchildren, great-grandchildren and more remote descendants is therefore indispensable. 
All other attempts at "removal of waste" through encasement in cement blocks which were sunk into the sea, interspersion with certain forms of bacteria and seaweed, mixture with special sorts of loam, have so far shown themselves uncertain and not particularly promising. There has even been some thought of the possibility later on of shooting the bits of refuse with rockets out of our atmosphere into space. Only in this way, it is said, shall we be truly rid of them. 
"In the long run," a research worker at one of the Hanford laboratories said to me, "this problem seems weightier to me than the question of atomic-weapon control. For even if the powers were finally to agree and an atomic war should never be fought, the fact still remains that by splitting the atom we have released life-destroying forces into the world with which the future will have to deal. With each century it will be more difficult to control the mounting quantity of atomic waste. Everything made by man has faded, fallen into ruin or rotted within measurable time. For the first time we have produced something by our own interference with nature which if not eternal, is, by our measures, nearly eternal. A dangerous inheritance which may far outlive all our other creations, a bit of near-eternity: a bit of hell." 
The technical achievements of advanced industrial society and the effective manipulation of mental and material productivity have brought about a change in how mystification is achieved. In modern society, the rational rather than the irrational has become the most effective vehicle of mystification. Previously, floods, earthquakes and other natural calamities were explained as the wrath of gods. Now it is the rational mobilization of the material and mental machinery that does the job of mystifying society.

Apart from some scientists and technicians, nobody knows how the gadgets they use do what they do. The modern myth-makers or tellers of fairy tales are commonly called advertising executives, web designers, reputation managers, image makers, and so on. (Rationality coins impressive titles for con-men.) This mystification makes individuals incapable of seeing “behind" the machinery. Herbert Marcuse says in One-Dimensional Man:
Today, the mystifying elements are mastered and employed in productive publicity, propaganda, and politics. Magic, witchcraft, and ecstatic surrender are practiced in the daily routine of the home, the shop, and the office, and the rational accomplishments conceal the irrationality of the whole. For example, the scientific approach to the vexing problem of mutual annihilation — the mathematics and calculations of kill and over-kill, the measurement of spreading or not-quite-so-spreading fallout, the experiments of endurance in abnormal situations — is mystifying to the extent to which it promotes (and even demands) behavior which accepts the insanity. 
It thus counter-acts a truly rational behavior — namely, the refusal to go along, and the effort to do away with the conditions which produce the insanity.

Monday, March 2, 2020

Objective science and its human consequences - VI

The targeting of individuals whose views are not to the liking of the state is common even in democracies. Robert Oppenheimer followed the American military line on every issue during the making of the atom bomb but later raised objections to making the hydrogen bomb. He soon fell from the status of an American hero to that of a hesitant egghead who was a security risk. Militarist pressure groups maneuvered an investigation into Oppenheimer's activities, and he was deprived of his security clearance and stripped of his honors.

The persecution of Oppenheimer illustrates a key objection that Gandhi had to modernity and modernization: it renders individuals impotent by making them subservient to institutions and unable to act according to the dictates of their conscience. He emphasizes that things are not always what they seem and continually draws attention to what is ignored. He does not deny the benefits that modernity brings but draws attention to the costs that individuals will have to bear in order to get those benefits. Take the case of freedom, for example: you may be free to pursue pleasures and comforts, but you may not be free to make moral choices as you see fit. You will always be captive to fear and live at the mercy of the powerful. Ronald Terchek writes in Gandhi: Struggling for Autonomy:
. . .Gandhi recognizes that the costs involved in pursuing a person's moral principles are often high and that many refuse to pay the price; and he is not ready to condemn ordinary men and women who fail to rise to the highest sacrifice. He continually seeks to design institutional arrangements that lessen the costs to ordinary people of meeting their moral responsibilities. In his ideal society, men and women are not constantly placed in morally tragic situations in which the only way to follow the good is at continued high personal sacrifice. 
Scientists and Gandhi focused on different issues. Scientists focused on their research and said the technologies that resulted from their discoveries were not their responsibility. Gandhi maintained that theories were irrelevant and that the only issue of consequence was how scientific research was used. Nowadays, only a small percentage of scientists are engaged in pure science; the vast majority are involved in technology, much of it defense technology. As Sir Solly Zuckerman says, 'The needs of defense, or the presumed needs of defense ... condition the kind of technology, and ... the kind of science, that is encouraged in countries which by political circumstances have been forced into the arms race.' Ashis Nandy writes in Science, Hegemony and Violence:
Yet, at the same time, we can be reasonably sure that the concept of pure science and the conceptual difference between science and technology will be carefully retained. It will be retained not because of the demands of the philosophers of science but because it is only by distinguishing between science and technology that all social criticism of science can continue to be deflected away from science towards technology. A shadowy, ethereal concept of science that has little to do with the real-life endeavors of practicing scientists can then be politically defended as the pursuit of truth uncontaminated by human greed, violence and search for power. 
One key principle that Gandhi espoused was that the end rarely justifies the means. For him, means were invariably more important than ends. In the goal-driven and competitive environment of many academic settings, it is easy to forgo moral principles. I heard of an interesting way in which scientists avoid taking responsibility for the products of their research. Who discovered penicillin? Vaccination? Every schoolchild knows the names, and they are mentioned in school textbooks. Scientists are eager to showcase them because they are seen as examples of the benefits of science for mankind.

Now comes the other side of the ledger. Who invented Agent Orange? Nerve gas? Napalm? Nobody will know the answers. This is because the scientists concerned, and the enterprise of science as a collectivity, want to avoid taking responsibility for them. They pass the responsibility for the nefarious uses of the products of their research on to the state. They will say that they only provide the means, and that whether these are used for good or bad purposes depends on who is using them. This brings us to the question of the value neutrality of science and technology.

Value-neutrality is a principle that directs us to keep our emotions and biases in check when dealing with certain products. Scientific research requires an investment of money and time, and it is often chosen with the possible uses of its outcomes and results in mind. Receiving funds and practicing scientific research are therefore not value-neutral activities: by approving and accepting research projects, scientists tacitly agree with their goals by providing the means to reach them.

Scientists are required to adhere to moral norms and values and to take responsibility for the research they conduct. It is the responsibility of scientists to consider the implications and uses of their findings, since they know much more about them than the general public does. They cannot ignore the consequences of their scientific conduct for society. This is all the more so because science enters moral, social, and political decision-making as a strong presence, not just as an impersonal set of formulae.

Scientists can no longer hope naively that science will only be used for the public good and will not be hijacked by the greed for dominance and power. An assessment of the desirability of pursuing a particular project has to be part of the mental make-up of scientists. Jungk cites von Laue's statement that 'no one ever invents anything that he does not really want to invent'. Like all other people, scientists are responsible both for what they intend to achieve and for the applications of their work that are readily foreseeable. There is nothing sacrosanct about being a scientist that removes this burden of responsibility.

The gap between power and responsibility has widened more than ever before. According to Hannah Arendt's diagnosis of the contemporary predicament, processes with unfathomable consequences are being released in a society of beings too absorbed in consumption to take any responsibility for the human world or to understand their political capacities. She observes in her prologue to The Human Condition that "thoughtlessness" (itself related to the loss of the common human world) is "among the outstanding characteristics of our time".

She says, 'If it should turn out to be true that knowledge (in the modern sense of know-how) and thought have parted company for good, then we would indeed become the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.' But she points out that in human affairs it is actually quite reasonable to expect the unexpected, and that new beginnings cannot be ruled out even when society seems locked in stagnation or set on an inexorable course.

PS: In Tomorrow is Already Here (first published in 1954, so its 'tomorrow' is our present), Robert Jungk gives a glimpse of the world that a deterministic science is attempting to build. The major assumption of such a science, when shorn of all flowery language, is that the world is like a machine whose uncertainties can be eliminated by planning with more data. For its practitioners, nature's shortcomings are, as Donald Worster puts it, "but an invitation to man to become nature's engineer and create a paradise on Earth of his own design, whose functioning he can plan and direct in all its detail."

Friday, February 14, 2020

Objective science and its human consequences - V

The power and prestige of modern science is huge. One of its characteristics is its willingness to put itself at the service of the state. One reason for its power comes from its striking ability to perfect ever more deadly means of warfare. Many scientists take their job to be to dream up new weapons systems and persuade the state that the security of the country depends upon buying what they have dreamed up. The more consequential a decision is, the more difficult it is to take responsibility for it and the easier it is to persuade yourself that you did a good thing. ‘I did the best thing for the nation.’

Society has to take part of the blame for this. One of the middle-class heroes in India is A. P. J. Abdul Kalam, one of the leading scientists involved in India's acquisition of nuclear weapons. One online poll rated him as the most important Indian since Gandhi. Hannah Arendt says in The Human Condition, 'We are perhaps the first generation which has become fully aware of the murderous consequences inherent in a line of thought that forces one to admit that all means, provided that they are efficient, are permissible and justified to pursue something defined as an end.' Robert Jungk mentions the lament of a scientist he met at Los Alamos, which exemplifies Gandhi's complaint against modern civilization, that modern man is trapped by the institutions he creates:
What an extraordinary and incomprehensible thing! My whole youth was absolutely devoted to truth, freedom, and peace; and yet fate has seen fit to deposit me here where my freedom of movement is limited; the truth that I am trying to discover is locked behind massive gates; and the ultimate aim of my work has to be the construction of the most hideous weapons of war. Could fate have been more perverse? 
Are scientists responsible for the potentially negative impacts of their work? You could say that Einstein was not responsible for the use of his equation E = mc² to build an atomic bomb, or for its use in wartime. He said later that if he had known his discoveries would ultimately result in the making of an atom bomb, he would have preferred to be a watchmaker. But if it is readily foreseeable that such knowledge could be used for nefarious purposes, the scientists who introduce such new technological capacities are partially responsible for an attack that could ultimately cause millions of deaths.
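A back-of-the-envelope calculation (mine, not the post's) shows the scale the equation points to. Converting one gram of matter entirely into energy gives:

```latex
% Rough arithmetic for illustration only: the energy equivalent of one gram of matter.
\[
E = mc^{2} = (10^{-3}\,\mathrm{kg}) \times (3\times 10^{8}\,\mathrm{m/s})^{2}
  = 9\times 10^{13}\,\mathrm{J},
\qquad
\frac{9\times 10^{13}\,\mathrm{J}}{4.184\times 10^{12}\,\mathrm{J/kt\ TNT}} \approx 21\ \text{kilotons of TNT}.
\]
```

That is roughly the yield of the Nagasaki bomb from a single gram of converted mass; an actual fission weapon converts only a small fraction of its core, but the order of magnitude of the destruction was already visible in the physics.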

The scientists at Los Alamos certainly were responsible for their creation. There was genuine fear in the beginning that Germany might make the bomb first. But the scientists continued to work feverishly on the bomb long after it was known that Germany was not in the race. They agreed to drop the bomb on Japan even when it was known that Japan was only weeks away from being defeated. They also gave technical inputs on how the bombs should be dropped for maximum effectiveness.

Two different types of atomic bombs were dropped on Japan: one was plutonium, the other uranium. The plutonium bomb was tested in the U.S. at Alamogordo, and later dropped on Nagasaki as a weapon. But the uranium bomb was the first of its kind in history; it was tested out on the people of Hiroshima in the manner of a scientific experiment. It has been argued that Auschwitz and Hiroshima are not aberrations but the logical playing out of the idea of modern civilization in which scientists have had a starring role. In The Human Condition, Hannah Arendt writes:
Günther Anders, in an interesting essay on the atom bomb (Die Antiquiertheit des Menschen [1956]), argues convincingly that the term "experiment" is no longer applicable to nuclear experiments involving explosions of the new bombs. For it was characteristic of experiments that the space where they took place was strictly limited and isolated against the surrounding world. The effects of the bombs are so enormous that "their laboratory becomes co-extensive with the globe".
When General Groves, overall military co-ordinator of the atom bomb, observed the initial retreat from the company town of Los Alamos back to the freedom of the university after the atom bomb project, he retorted that 'his little sheep would find their way back'. He was right. By 1947, the scientists' crusade against the hydrogen bomb had failed and they were trudging back to Los Alamos. Groves remarked later, 'What happened is what I expected, that after they had this extreme freedom for six months, their feet began to itch, and, as you know, almost everyone has come back to government research because it was just too exciting.’

Financial pressures make disinterested research difficult to sustain. Big business provides funds for specific research, such as the invention of new products that will increase its profits irrespective of their social and environmental impact. National governments are driven by their defense policies, budgets and profit, which determine where research and development money goes. Military establishments spend substantial amounts on research projects of specific interest; for example, they may push for the development of polymers capable of withstanding the impact of bullets and explosives.

Scientists are trapped between their conscience and a need for funds that only the government can provide. But accepting funds from the government means your work is tied to defense research. Patriotism is perhaps the greatest temptation. As one scientist has recently pointed out: "While scientists see more clearly than can others the terrible consequences of the use of the weapons they are developing, they see with equal clarity also the possible consequences of their nation being left at the mercy of an enemy equipped with them". This is the real dilemma that faces us in our time.

It is easy to say that 'our choices and actions reflect our understanding of good and evil' or that 'we alone are responsible for our conduct'. This assumes an abstract individual divorced from his social and institutional setting. To act in a morally upright fashion could invite penalties not only on the person involved but also on those dependent on him. Institutions, with their structures of dominance and control, constrain the choices of individuals and direct them towards certain pre-determined alternatives. Whistle-blowing requires a courage, and an indifference to personal consequences, that few people possess.

One of the difficulties that modernity has created in assuming responsibility is caused by a social phenomenon called diffusion of responsibility: whenever a task is placed before a group of people, there is a strong tendency for each individual to assume someone else will take responsibility for it, so no one does. These days scientists participate in large-scale collective work, and in this context it is very easy to avoid the question of responsibility.

The overwhelming majority of scientists would not be able to live up to Gandhi's idea of responsibility, which places a heavy burden on the individual. In his view, responsibility has to be taken not only for what we do but also for what we tolerate. Tolerating institutional actions even though they go against our deepest convictions means, for him, that we surrender our autonomy. When this happens, we look for actions that will please our superiors, and this was not acceptable to Gandhi.