Tuesday, July 14, 2020

The tyranny of algorithms - VIII

Some magazines now employ a company called Narrative Science to automatically generate online articles about what to expect from upcoming corporate earnings statements. Just feed it some statistics and, within seconds, the clever software produces highly readable stories. Or, as Forbes puts it, “Narrative Science, through its proprietary artificial intelligence platform, transforms data into stories and insights.” In an article, “A Robot Stole My Pulitzer!”, Evgeny Morozov writes:
Don’t miss the irony here: Automated platforms are now “writing” news reports about companies that make their money from automated trading. These reports are eventually fed back into the financial system, helping the algorithms to spot even more lucrative deals. Essentially, this is journalism done by robots and for robots. The only upside here is that humans get to keep all the cash. 
Apart from sports, finance, and real estate, in which news stories tend to revolve around statistics, Narrative Science has also entered the political reporting arena. It is much cheaper than paying full-time journalists, who tend to get sick and demand respect, and there is no one to fret about terrible working conditions. An article takes only a second to compose, a deadline that no journalist can beat. Narrative Science promises to be more comprehensive — and objective — than any human reporter. Few journalists have the time to find, process, and analyze millions of tweets, but Narrative Science can do so easily and instantaneously.

In the long run, the civic impact of such technologies may be more problematic. Everything we click, read, search, and watch online is increasingly the result of some optimization effort, whereby our previous clicks, searches, “likes,” purchases, and interactions determine what appears in our browsers and apps. Such personalization of the Internet may usher in a world in which we see only articles that reflect our existing interests and never venture outside our comfort zones. What if we click on the same link that, in theory, leads to the same article, but end up reading very different texts? In the same article, Morozov writes:
How will it work? Imagine that my online history suggests that I hold an advanced degree and that I spend a lot of time on the websites of the Economist or the New York Review of Books; as a result, I get to see a more sophisticated, challenging, and informative version of the same story than my USA Today-reading neighbor. 
If one can infer that I’m also interested in international news and global justice, a computer-generated news article about Angelina Jolie might end by mentioning her new film about the war in Bosnia. My celebrity-obsessed neighbor, on the other hand, would see the same story end with some useless gossipy tidbit about Brad Pitt. 
Producing and tweaking stories on the spot, customized to suit the interests and intellectual habits of just one particular reader, is exactly what automated journalism allows — and why it’s worth worrying about. Advertisers and publishers love such individuation, which could push users to spend more time on their sites. But the social implications are quite dubious. 
At the very least, there’s a danger that some people might get stuck in a vicious news circle, consuming nothing but information junk food and having little clue that there is a different, more intelligent world out there. And the communal nature of social media would reassure them that they aren’t really missing anything. 
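The mechanics Morozov describes can be imagined as simple template assembly: one base story, with a closing paragraph chosen to match the reader's inferred interests. The sketch below is purely illustrative; the profile fields, scores, and story endings are all invented, and real systems would be far more elaborate.

```python
# Hypothetical sketch of reader-personalized story assembly.
# Profile fields and templates are invented for illustration.

BASE_STORY = "Angelina Jolie announced a new project today."

# Alternate closing paragraphs keyed to inferred reader interests.
ENDINGS = {
    "international_news": "The piece ends by noting her film about the war in Bosnia.",
    "celebrity_gossip": "The piece ends with a gossipy tidbit about Brad Pitt.",
}

def personalize(base_story: str, reader_profile: dict) -> str:
    """Append the ending that best matches the reader's inferred interests."""
    # Pick the interest with the highest inferred score.
    top_interest = max(reader_profile, key=reader_profile.get)
    ending = ENDINGS.get(top_interest, "")
    return f"{base_story} {ending}".strip()

# Two readers click the same link but read different texts.
scholar = {"international_news": 0.9, "celebrity_gossip": 0.1}
neighbor = {"international_news": 0.2, "celebrity_gossip": 0.8}

print(personalize(BASE_STORY, scholar))
print(personalize(BASE_STORY, neighbor))
```

The same URL, two different articles: exactly the individuation that advertisers love and that worries Morozov.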
Another product of human creativity that people presume cannot be created by machines is music. Emotions are not some mystical phenomenon — they are a biochemical process. Hence, given enough biometric data and enough computing power, external algorithms may come to understand and manipulate human emotions better than Shakespeare, Picasso, or Lennon could. Allow a learning machine to go over millions of musical experiences, and it will learn how particular inputs result in particular outputs.

David Cope, a musicology professor at the University of California in Santa Cruz, created a computer program called EMI (Experiments in Musical Intelligence), which specialized in imitating the style of Johann Sebastian Bach. In a public showdown at the University of Oregon, an audience of university students and professors listened to three pieces — one a genuine Bach, another produced by EMI and a third composed by a local musicology professor, Steve Larson. The audience was then asked to vote on who composed which piece. The result? The audience thought that EMI’s piece was genuine Bach, that Bach’s piece was composed by Larson, and that Larson’s piece was produced by a computer.

Hence in the long run, algorithms may learn how to compose entire tunes, playing on human emotions as if they were a piano. Will this result in great art? As Yuval Noah Harari says, 'To enter the art market, algorithms won’t have to begin by straightaway surpassing Beethoven. It is enough if they outperform Justin Bieber.' In The World Without Mind, Franklin Foer writes:

If algorithms can replicate the process of creativity, then there is little reason to nurture human creativity. Why bother with the tortuous, inefficient process of writing or painting if a computer can produce something seemingly as good and in a painless flash? . . . No human endeavour has resisted automation, so why should creative endeavours be any different?
The engineering mind has little patience for the fetishization of words and images, for the mystique of art, for moral complexity and emotional expression. It views humans as data, components of systems, abstractions. . . The whole effort is to make human beings predictable . . . With this sort of cold-blooded thinking . . . it's easy to see how long-standing values begin to seem like an annoyance . . .

Wednesday, July 1, 2020

The tyranny of algorithms - VII

As Robert Jungk says in Tomorrow is Already Here, ‘The devil has many names, and in this century he likes to call himself "Mr. Profit" or "Mr. Efficiency".’ He further writes: ‘Planning down to the smallest detail, control of each labor process, the abolition of all possible waste of time, characterize the American earning system. The same smooth functioning is required of the man as of the machine. "Efficiency" has grown more important in the working world than freedom.’ One manifestation of this obsession with efficiency is “clopening”: closing a store late at night and opening it again just a few hours later.

In the US, many businesses rely on scheduling software that uses sales patterns and other data from thousands of locations to determine how many workers are required and exactly when they are required. They can bring in more hands in anticipation of a delivery truck pulling in or the weather changing, and send workers home when real-time analyses show sales are slowing. Managers are often compensated based on the efficiency of their staffing. Scheduling is now a powerful tool to bolster profits, allowing businesses to cut labor costs with a few keystrokes. Yet those advances are injecting turbulence into parents’ routines and personal relationships, undermining efforts to expand preschool access, driving some mothers out of the work force and redistributing some of the uncertainty of doing business from corporations to families.
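In grossly simplified form, such software converts an hourly sales forecast into head counts. Everything in this sketch (the productivity target, the delivery-truck adjustment) is an invented illustration, not any vendor's actual logic.

```python
# Toy sketch of demand-driven staffing: forecast sales per hour,
# bump the forecast for known events, then derive head counts.
import math

SALES_PER_WORKER_HOUR = 40  # assumed productivity target

def staff_needed(forecast_sales_per_hour):
    """Convert an hourly sales forecast into required head counts."""
    return [math.ceil(s / SALES_PER_WORKER_HOUR) for s in forecast_sales_per_hour]

# Forecast for a 4-hour window, adjusted upward when a delivery is expected.
forecast = [120, 200, 80, 60]
delivery_hours = {1}  # a truck is expected in the second hour
adjusted = [s * 1.5 if h in delivery_hours else s for h, s in enumerate(forecast)]

print(staff_needed(adjusted))  # extra hands only during the delivery hour
```

Because the forecast can change up to the last minute, so can the schedule, which is why workers may learn of a 'clopening' only a day or two ahead.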

Having the same employee close the store late at night and open it again at dawn makes logistical sense for a company. Many employees find out only a day or two in advance that they are scheduled for 'clopening'. The result is unpredictable work schedules, preventing parents from committing to regular drop-off times or answering standard questions on subsidy forms and applications for aid: “How many hours do you work?” and “What do you earn?” Previously, inefficiencies in the workplace benefited workers by giving them regular hours and time to read or study. Now workers are under the control of software that keeps them busy every minute (praised by managers as 'hard work'), making them 'better' cogs in the wheel.

The automation of many entry-level roles will make it even harder for young people to gain traction in the working world. It is thought that two-thirds of the automation-related job losses for young people could occur in food, hospitality, or retail, and one-third in white-collar jobs, including entry-level roles in accounting, finance, human resources, and administration. In the legal profession, for example, AI can handle document review and case-law search — not a favorite task of junior attorneys, but one that provided valuable opportunities for learning. Now AI is outperforming humans at these tasks. This means aspiring young professionals will need to enter the labor force in higher-level roles. But employers have been saying for years that too many new hires, even those with college degrees, are not work-ready.

Online commerce allows even conscientious consumers to forget that other people are involved. Amazon employs or subcontracts tens of thousands of warehouse workers, with seasonal variation. Accounts from inside the centers describe the work of picking, boxing, and shipping books and dog food and beard trimmers as a high-tech version of the dehumanized factory floor satirized in Chaplin’s “Modern Times.” Pickers holding computerized handsets are perpetually timed and measured as they fast-walk up to eleven miles per shift around a million-square-foot warehouse, expected to collect orders in as little as thirty-three seconds. Warehouse jobs are gradually being taken over by robots. Bezos recently predicted that, in five years, packages will be delivered by small drones. Then Amazon will have eliminated the human factor from shopping.

In 21 Lessons for the 21st Century, Yuval Noah Harari says that computer scientists are developing artificial intelligence (AI) algorithms that can learn and analyse massive amounts of data and recognize patterns with superhuman efficiency. At the same time, biologists and social scientists are deciphering human emotions, desires and intuitions. The merger of info-tech and biotech is giving rise to algorithms that can successfully analyse us and communicate with us, and that may soon outperform human doctors, drivers, soldiers and bankers in such tasks. These algorithms could eventually push hundreds of millions out of the job market.

This has already happened in the field of medicine. The most important medical decisions in your life are increasingly based not on your feelings of illness or wellness, or even on the informed predictions of your doctor — but on the calculations of computers that know you better than you know yourself. This situation is likely to arise in more and more fields. It starts with simple things, like which book to buy and read. Harari speculates that in the 21st century we will create more powerful myths and more totalitarian religions than in any previous era. With the help of biotechnology and computer algorithms, these religions will not only control our minute-by-minute existence but will be able to shape our bodies, brains and minds.

Monday, June 22, 2020

The tyranny of algorithms - VI

Facebook was founded by an undergraduate with good intentions but with a flawed understanding of human nature. While it has been beneficial in general terms for individuals, improving communication with friends, relatives, and even people whom we could never have hoped to keep in touch with before its arrival, Facebook has done significant damage to society as a whole. People use it for all kinds of things, many of them innocuous, but some of them absolutely pernicious. They use it to try to influence democratic elections, to threaten and harass others, to spread fake news, to publish revenge porn, and to perform a host of other antisocial acts.

It has no effective competitors, so it is a monopoly, and a global one at that. Facebook's strategy has been to buy potential rivals before they can get too big. Of “social networking apps”, Facebook owns the top three: Facebook, Instagram, and WhatsApp. Mark Zuckerberg said that there is a breakdown in global communities and that Facebook’s mission is to help build communities and make the world a better place. A few months later, the Cambridge Analytica scandal broke, showing that the personal data of Facebook users could be leaked to third parties and used to influence elections around the world. When your business model is built on taking the data of your users and selling it to advertisers, you cannot build lasting communities.

Facebook derives its revenues solely by monetizing the data provided by its users – the photographs they upload, the status updates they post, the things they “like”, their friendship groups, the pages they follow, etc. This enables it to build a detailed profile of each user, which can then be used for even more precisely targeted advertising. Thus the more “user engagement” there is, the better. Facebook is optimized to push our emotional buttons and so increase the number of ‘engagements’. This type of design ensures that the most inflammatory and sensational items circulate the most, because they generate the maximum engagement. It thus concentrates and amplifies our prejudices. Sober, balanced, well-researched reports don’t stand a chance. Siva Vaidhyanathan says in Antisocial Media:
If you wanted to build a machine that would distribute propaganda to millions of people, distract them from important issues, energise hatred and bigotry, erode social trust, undermine journalism, foster doubts about science, and engage in massive surveillance all at once, you would build something a lot like Facebook. 
The precise targeting of ads by Facebook (and Google), using massive surveillance to create elaborate personal dossiers, is something that cannot be matched by other media companies. Thus a firm with a small advertising budget is likely to shift its ad spend towards Facebook and Google, forcing reputable news organisations to lay off staff and thus affecting the quality of their work. The editors and publishers of these organisations spend much of their time trying to design their content to be picked up by Facebook’s algorithms. They have to feed the very monster that is killing them in order to stay alive.

When we visit the site, we scroll through updates from our friends. The machine appears to be only a neutral go-between. We do not see that Facebook's engineers can tweak its algorithms to change what we see: whether text or photos are prioritized, which news sources appear in news feeds, etc. It runs psychological experiments on its users without their being aware of it. For example, it once sought to discover whether emotions are contagious. For one group it removed the positive words from the posts in its news feed, while for another group it removed the negative words. It concluded that each group wrote posts that reflected the mood of the posts it was exposed to.

Facebook’s success, Mr. Vaidhyanathan argues, is based on two elements. The first is that Facebook is deliberately engineered to be addictive, rewarding interactions, likes, and shares in ways similar to how casinos keep their guests playing. The second is that it has become 'one of the most effective advertising machines in history.' Facebook knows so much about us, and offers advertisers levels of targeting never before dreamed of, that it is unparalleled as a sales tool.

If you frequently click on certain sites, friends or web pages, the Facebook algorithm knows that you are highly engaged with them. So it gives you more of the stuff with which you would engage and less of the stuff you would ignore. The ability of the Facebook algorithm to predict your behavior improves over time with your willing cooperation. Thus over time, your news feed becomes narrower in perspective and you find yourself in an echo chamber as it is less likely that you will find information coming from outside the group. Thus Facebook users are unable to engage with people outside their group because they don’t share a body of truths.
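The feedback loop described above can be reduced to a toy ranking rule: score each item by the user's past clicks on its source. The sources and scoring here are invented; Facebook's actual ranking models use vastly richer signals.

```python
# Toy sketch of engagement-based feed ranking and the narrowing it produces.
from collections import Counter

def rank_feed(items, click_history):
    """Order items by how often the user clicked that source before."""
    clicks = Counter(click_history)
    return sorted(items, key=lambda item: clicks[item["source"]], reverse=True)

feed = [
    {"source": "partisan_page", "title": "Outrage of the day"},
    {"source": "science_desk", "title": "New vaccine trial results"},
    {"source": "local_news", "title": "Council meeting recap"},
]

# A user who mostly clicks one source sees it pushed to the top,
# which earns it more clicks, which pushes it higher still next time.
history = ["partisan_page", "partisan_page", "science_desk"]
ranked = rank_feed(feed, history)
print([item["source"] for item in ranked])
```

Each round of clicks feeds the next round of ranking, so the feed converges on what the user already engages with: the echo chamber in miniature.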

The easy availability of various internet tools has led to what is called 'clicktivism'. The premise behind clicktivism is that social media allows quick and easy ways to support an organization or cause, but this leads only to slacktivism, a pejorative term for "feel-good" measures in support of an issue or social cause. The "Like" button on Facebook is a popular slacktivist tool. Other slacktivist activities include signing Internet petitions, joining a community organization without contributing to its efforts, copying and pasting social-network statuses or messages, or altering one's personal data or avatar on social network services. People can now express concern about social or political issues with nothing more than the click of a mouse, since they can easily "like", "share" or "tweet" about something interesting. The sociologist Zygmunt Bauman said in an interview, 'Social media are a trap':
The question of identity has changed from being something you are born with to a task: you have to create your own community. But communities aren’t created, and you either have one or you don’t. What the social networks can create is a substitute. The difference between a community and a network is that you belong to a community, but a network belongs to you. You feel in control. You can add friends if you wish, you can delete them if you wish. 
You are in control of the important people to whom you relate. People feel a little better as a result, because loneliness, abandonment, is the great fear in our individualist age. But it’s so easy to add or remove friends on the internet that people fail to learn the real social skills, which you need when you go to the street, when you go to your workplace, where you find lots of people who you need to enter into sensible interaction with. 
Pope Francis, who is a great man, gave his first interview after being elected to Eugenio Scalfari, an Italian journalist who is also a self-proclaimed atheist. It was a sign: real dialogue isn’t about talking to people who believe the same things as you. Social media don’t teach us to dialogue because it is so easy to avoid controversy. But most people use social media not to unite, not to open their horizons wider, but on the contrary, to cut themselves a comfort zone where the only sounds they hear are the echoes of their own voice, where the only things they see are the reflections of their own face. Social media are very useful, they provide pleasure, but they are a trap.

Monday, June 8, 2020

The tyranny of algorithms - V

Book retailers, such as Barnes & Noble, negotiate “co-op,” or cooperative promotional fees, from publishers in exchange for prominent product placement. Amazon has been particularly good at squeezing this money out of publishers. They have to pay lots of money for a book to be prominently featured on the home page. Judgments about which books should be featured on the site are increasingly driven by promotional fees. In its drive for profitability, Amazon does not raise retail prices; it simply squeezes its suppliers harder. Amazon demands ever-larger co-op fees and better shipping terms; publishers know that they would stop being favored by the site’s recommendation algorithms if they don’t comply. (Few customers realize that the results generated by Amazon’s search engine are partly determined by promotional fees.)

This squeezing of co-op fees from publishers is due to one tenet of Amazon’s business philosophy: low prices are always good for customers. In addition to regularly offering bestsellers at more than 50 percent off, Amazon offers a wide range of titles for around a third off the recommended price. Such low prices have forced its competitors to follow suit. Of course, everyone loves low prices, but as with breadth of choice, the matter is more complex than it first appears. To achieve such low prices retailers must seek ever deeper discounts from publishers who have seen their revenues fall, forcing many to make cutbacks and concentrate more on lead titles, the blockbusters that are the most profitable component of their business.

Authors, too, can be added to the list of price-cutting’s victims. It is thought that the money for serious fiction and nonfiction has eroded dramatically in recent years; advances on what are called mid-list titles — books that are expected to sell modestly but whose quality gives them a strong chance of enduring — have declined. These are the kinds of books that particularly benefit from the attention of editors and marketers, and that attract gifted people to publishing. Without sufficient advances, many writers will not be able to undertake long, difficult, risky projects. 

Lower advances and royalties make for less well-researched books and an author pool increasingly populated by hobbyists rather than those who are good at writing. 'Writing is being outsourced, because the only people who can afford to write books make money elsewhere — academics, rich people, celebrities',  Colin Robinson, a veteran publisher, said. 'The real talent, the people who are writers because they happen to be really good at writing — they aren’t going to be able to afford to do it.' The accumulated effect of Amazon’s pricing policy, its massive volume and its metric-based recommendations system is, in fact, to diminish real choice for the consumer.

The manner of purchasing books is different in brick-and-mortar stores compared to online stores. When you enter the Amazon virtual store, a message pops up and tells you: “I know which books you liked in the past. People with similar tastes also tend to love this or that new book.” Devices such as Amazon’s Kindle are able to collect data constantly on their users while they are reading books. Your Kindle can monitor which parts of a book you read quickly and which slowly, on which page you took a break, on which sentence you abandoned the book never to pick it up again, which words you look up in Kindle's dictionary, which paragraphs you underline most frequently, and so on. Soon, books will read you while you are reading them.

If Kindle were to be upgraded with face-recognition software and biometric sensors, it would know how each sentence influenced your heart rate and blood pressure. It would know what made you laugh, what made you sad, what made you angry. And whereas you quickly forget most of what you read, computer programs need never forget. Such data should eventually enable Amazon to choose books for you with uncanny precision. It will also allow Amazon to know exactly who you are, and how to press your emotional buttons. It could even enable Amazon to replace authors with algorithms that churn out books tailored precisely to suit customers' preferences.
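To make the telemetry concrete, here is a hypothetical reduction of per-page reading events into a couple of engagement signals. The event format and thresholds are invented for illustration and do not reflect the Kindle's actual instrumentation.

```python
# Hypothetical reading telemetry: per-page timing events reduced
# to "where did the reader stop?" and "what did they linger on?".

def summarize_reading(events):
    """events: list of (page, seconds_on_page) tuples in reading order."""
    if not events:
        return {"abandoned_at": None, "slow_pages": []}
    avg = sum(t for _, t in events) / len(events)
    # Pages lingered on for well over the average reading pace.
    slow_pages = [p for p, t in events if t > 2 * avg]
    # The last page seen is where the reader stopped.
    abandoned_at = events[-1][0]
    return {"abandoned_at": abandoned_at, "slow_pages": slow_pages}

# One reading session: the reader slows sharply on page 3, stops at page 5.
session = [(1, 30), (2, 28), (3, 200), (4, 25), (5, 27)]
print(summarize_reading(session))
```

Aggregated over millions of readers, even signals this crude would tell a retailer which chapters grip and which chapters kill a book.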

Shopping for books on Amazon can be called 'a directed experience'. If you know the kind of book you are looking for, it can be a rewarding experience. Further recommendations by Amazon's algorithms will direct you towards books on similar topics. In brick and mortar stores, you may stumble on great books you had not heard of. The loss of serendipity that comes with not knowing exactly what one is looking for is a cost of shopping on Amazon. As ex-Amazon editor James Marcus says, 'Personalization strikes me as a mixed blessing. While it gives people what they want — or what they think they want — it also engineers spontaneity out of the picture. The happy accident, the freakish discovery, ceases to exist. And that’s a problem.'

Even the existing experience of shopping in physical book stores will be changed by Amazon. It is creating a chain of physical book stores, called Amazon Books, to take the place of the book stores the company has destroyed. Amazon Books does not accept cash and instead lets Prime members use the Amazon app on their smartphones to pay for purchases. Non-members can use a credit or debit card. In these stores, there are no price tags at all: you scan the items with your phone and have a price delivered to you, personalized by Amazon. “Our goal with Amazon Prime, make no mistake,” says Amazon CEO Jeff Bezos, “is to make sure that if you are not a Prime member, you are being irresponsible.” For example, there is speculation that Jeff Bezos is going to offer the COVID-19 test as part of Prime membership.

The Spanish sociologist Manuel Castells predicted that in the networked age, more value would accrue from controlling flows of information than from controlling the content itself. Controlling content increasingly involves autonomous, self-teaching systems that are increasingly inscrutable to humans. Gatekeepers like authors, publishers and professional reviewers are immersed in books and writing styles their whole lives and regard books as sacred objects. They have discerning eyes and separate great novels from trash more often than the average man on the street. When these old-world gatekeepers are gone, only one gatekeeper will be left: Amazon with its spreadsheet maniacs. What kind of books will be available when that happens? Evgeny Morozov says in To Save Everything, Click Here:

If one thinks that the goal of literature is to maximize the well-being of memes or to ensure that all readers are satisfied (and why wouldn't they be, given that the books they read already reflect their subconscious inclinations and preferences?), then Amazon should be seen as the savior of literature. 
But if one believes that some ideas are worse than others, that some memes should be put to rest rather than spread around, that many authors are public intellectuals who serve important civic functions that surely cannot be outsourced to algorithms, and that one of the goals of literature is to challenge and annihilate rather than to appease and amplify - then there is very little to celebrate in Amazon's fantasy world without gatekeepers. 
This is only books, but Amazon's ambition extends to every other commodity on earth. To understand the depth and breadth of Jeff Bezos’ ambitions for the company he built, consider the original name he chose: 'Relentless'. He still retains the domain name, and if you type www.relentless.com into your browser it will redirect to Amazon, the company aptly, and ambitiously, nicknamed The Everything Store. He tells his shareholders that the company will act like an aggressive startup — that at Amazon, it is always Day One. There will be many ways for Amazon to use its power, and you can be sure that it will exploit them relentlessly. For example, you may see different prices depending on how you interact with Amazon.
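Price personalization of the kind speculated about here could, in caricature, look like the following. The signals and multipliers are entirely invented and reflect nothing about Amazon's actual practice; the point is only how easily a profile becomes a price.

```python
# Speculative sketch of personalized pricing from crude behavioral signals.
BASE_PRICE = 20.00

def personalized_price(profile):
    """Nudge a base price using invented behavioral signals."""
    price = BASE_PRICE
    if profile.get("is_prime"):
        price *= 0.95          # loyalty discount to reinforce membership
    if profile.get("price_sensitive"):
        price *= 0.90          # cheaper for shoppers who comparison-shop
    if profile.get("urgent_need"):
        price *= 1.10          # willingness-to-pay inferred from browsing urgency
    return round(price, 2)

# Two shoppers, two prices for the same item.
print(personalized_price({"is_prime": True, "price_sensitive": False}))
print(personalized_price({"is_prime": False, "urgent_need": True}))
```

With no printed price tags in the store, nothing stops the number on your phone from differing from the number on your neighbor's.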

Wednesday, May 27, 2020

The tyranny of algorithms - IV

It is commonly assumed that algorithms mindlessly execute their programs and see patterns in the data without any biases. This view fails to acknowledge that they reflect the minds and worldviews of their creators. When we outsource thinking to machines, we are actually outsourcing thinking to the organisations that run those machines. For example, both Amazon and Netflix give recommendations, about books and films respectively, but the nature of their recommendations differs. Amazon will nudge you towards the types of books with which you are familiar, while Netflix will direct users towards unfamiliar movies. Blockbuster movies cost more to stream, and Netflix makes more profit when users watch more obscure films.

Thus the algorithms are programmed to direct users towards what benefits the corporation, although the propaganda will be 'to enhance user experience'. The power to include, exclude, and rank is the power to ensure that certain public impressions become permanent, while others remain fleeting. How does Amazon decide which books to prioritize in searches? How does it identify fake or purchased reviews? Why do Facebook and Twitter highlight some political stories or sources at the expense of others? Although internet giants say their algorithms are scientific and neutral tools, it is very difficult to verify those claims.

When the Amazon boss Jeff Bezos started out, he said that Amazon intended to sell books as a way of gathering data on affluent, educated shoppers. The books would be priced close to cost in order to increase sales volume. After collecting data on millions of customers, Amazon could figure out how to sell everything else dirt cheap on the Internet. Now books are not the only business of Amazon. It also sells hardware and is a video distributor, a production studio, and a grocery deliverer. According to one publisher’s estimate, book sales in the U.S. now make up no more than seven per cent of the company’s annual revenue. Books were going to be the way to get the names and the data: a customer-acquisition strategy.

Amazon is ruled by computer engineers and M.B.A.s who value data most and believe only in measurable truths. The vast majority of them can be classified into two or three similar categories, and Bezos is the same: introverted, detail-oriented, engineer-type personality. Not musicians, designers, salesmen. A former Amazon employee who worked in the Kindle division said that few of his colleagues in Seattle had a real interest in books: “You never heard people say, ‘Hey, what are you reading?’ Everyone there is so engineering-oriented. They don’t know how to talk to novelists.” Amazon's writers were under pressure to prove that their work produced sales. If a customer clicked on a review or an interview, then left the page without making a purchase, it was logged as a Repel. Writers had to make sure that their repulsion rate was not too high.

The customer has always been king in the Bezos ethos. "Amazon gives the customers what they want: low prices, vast selection and extreme convenience," he told a shareholders’ meeting. On these terms, Amazon’s success is stellar. It has more than 2 million titles on sale; bestselling books are routinely discounted by 50 percent or more; and it ranked first in Business Week‘s "customer service champs" awards a couple of years ago. Dennis Johnson, an independent publisher, says that “Amazon has successfully fostered the idea that a book is a thing of minimal value — it’s a widget.” Adrian Chen of Gawker.com said, 'Do you remember books? A book is basically thousands of tweets printed out and stapled together between pieces of cardboard.'

Recently, Amazon even started creating its own “content” — publishing books. The old print world of scarcity — with a limited number of publishers and editors selecting which manuscripts to publish, and a limited number of bookstores selecting which titles to carry — is yielding to a world of digital abundance. Amazon will say that, because an unprecedented number of titles are available in an instant, “it’s never been a better time to be a reader.” It will point to the growth of online reader networks, such as Goodreads, which Amazon owns, as a welcome development: “Suddenly, we’re not locked into hearing the opinions of a small number of reviewers in newspapers.” The elimination of the “gatekeepers,” as Bezos calls the professionals who get in the customer’s way, is one of his pet themes; he thinks it will rid the public sphere of their biases and inefficiencies.

Bezos believes that he provides an argument against elitist institutions and for “the democratization of the means of production”. “Even well-meaning gatekeepers slow innovation,” Bezos wrote in his 2011 letter to shareholders. “When a platform is self-service, even the improbable ideas can get tried, because there’s no expert gatekeeper ready to say ‘that will never work!’" Amazon believes that its approach encourages ever more people to tell their stories to ever more people, and turns writers into entrepreneurs; the price per unit might be cheap, but the higher number of units sold, and the accompanying royalties, will make authors wealthier. It believes that selling digital books at low prices will democratize reading.

When it comes to the books it stocks, Amazon makes no pretense of selectivity. Provided it carries an ISBN and isn’t offensive, Amazon is happy to sell any book anyone cares to publish. "We want to make every book available — the good, the bad and the ugly," Bezos once said. As Evgeny Morozov says in To Save Everything, Click Here, 'Whether these books are Sudoku puzzles or Tolstoy novels doesn't matter at all, for it is all about the number of books downloaded, pages flipped, and memes created.' Many would argue that the increase in the variety of books being published that Amazon has encouraged can only be a good thing, that it enriches cultural diversity and expands choice.

But is this the whole truth? Is the method of production the only criterion to consider? Is the only goal of publishing to produce as many books as possible? Gatekeepers are barriers against the complete commercialization of ideas, allowing new talent the time to develop and learn to tell difficult truths. Ideas and artistry are important in deciding what is sold but Amazon neglects these criteria and focuses only on measurable data like sales volumes and price points provided by 'unbiased' algorithms. There is also the paradox of choice: when people are offered a narrower range of options, their selections are likely to be more diverse than if they are presented with a number of choices so vast as to be overwhelming. When consumers are overwhelmed with choices, some experts argue, they all tend to buy the same well-known thing.

Amazon has the ability -- and willingness -- to lose money in order to put competitors out of business. It views losing money as a marketing expense, the cost of acquisition of a new customer. Very few companies besides Amazon could absorb such losses without being penalized by the market. A monopoly is dangerous because it concentrates so much economic power, but in the book business the prospect of a single owner of both the means of production and the modes of distribution is especially worrisome: it would give Amazon more control over the exchange of ideas than any company in history. Some have wondered if Amazon would eventually control so much of the market that it would stop selling books at cost and raise prices to become more profitable.

Saturday, May 16, 2020

The tyranny of algorithms - III

Because we focus so much on the miracles of Google, we are too often blind to the ways in which Google exerts control over its domain, especially because it offers many services free. But there is an implicit non-monetary transaction between Google and its users. Google rules the Web through its power to determine which sites get noticed, and thus trafficked. It stores “cookies” in our Web browsers to track our clicks and curiosities. Google gives us Web search, e-mail, Blogger platforms, and YouTube videos. In return, Google gets information about our habits and predilections so that it can more efficiently target advertisements at us. Google’s core business is consumer profiling: it generates dossiers on many of us. Google’s defaults, and our habits (trust, inertia, impatience), keep us from clicking past the first page of search results. Google understands that default settings can work just as well as coercive technologies.

When confronted with questions about its dominance in certain markets, Google officials always protest that, on the Internet, barriers to entry are low, and thus any young firm with innovative services could displace Google the way Google displaced Yahoo and AltaVista. As a Google lawyer said, “Competition is a click away.” That argument relies on the myth that Internet companies are weightless and virtual. It might be valid if Google were merely a collection of smart people and elegant computer code. Instead, Google is also a monumental collection of physical sites such as research labs, server farms, data networks, and sales offices. Replicating the vastness of Google’s processing power and server space is unimaginable for any technology company except Microsoft. The argument about user behavior might also be valid if boycotting or migrating from Google did not mean a significant downgrade in service through the loss of integration with other Google services.

Google’s argument also ignores the “network effect” in communication markets: a service increases in value as more people use it. A telephone that is connected to only one other person has very limited value compared with one connected to 250 million people. YouTube is more valuable as a video platform because it attracts more contributors and viewers than any other comparable service. The more users it attracts, the more value each user derives from using it, and thus the more users it continues to attract. If only a few people used Google for Web searching, Google would not have the data it needs to improve the search experience. Network effects tend toward standardization and thus potential monopoly.
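The arithmetic behind the network effect is simple to sketch. Under the rough assumption that a service's value tracks the number of possible connections among its users (a Metcalfe-style proxy, not a claim about Google's actual economics), value grows quadratically while the user base grows linearly:

```python
# Toy illustration of the network effect: if each user can connect to
# every other user, the number of possible connections (a rough proxy
# for a service's value) grows quadratically with the number of users.

def possible_connections(users: int) -> int:
    """Distinct pairs among `users` people: n * (n - 1) / 2."""
    return users * (users - 1) // 2

for n in [2, 10, 100, 1000]:
    print(f"{n:>5} users -> {possible_connections(n):>7} possible connections")
```

Doubling the user base roughly quadruples the connection count, which is why a service that gets ahead tends to stay ahead, and why network effects drift toward standardization and potential monopoly.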

Google and other online vendors shy away from presenting effective ways for users to manage their privacy. An important point that Siva Vaidhyanathan makes in The Googlisation of Everything is that “celebrating freedom and user autonomy is one of the great rhetorical ploys of the global information economy … meaningful freedom implies real control over the conditions of one’s life. Merely setting up a menu with switches does not serve the interests of any but the most adept, engaged, and well-informed”.

Google sells our fancies, fetishes, predilections, and preferences to advertisers. While Google provides users with the information that they seek, seemingly for free, it collects gigabytes of personal information and creative content that millions of Google users provide to the Web every day, and sells this information to advertisers of millions of products and services. Google runs an instant auction among advertisers to determine which one is placed highest on the list of ads that run across the top or down the right-hand column of the search results page.
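The mechanics of such an instant auction can be sketched with a simplified second-price rule: the highest bidder wins the top slot but pays roughly the runner-up's bid. This is a hedged illustration only; real ad auctions also weigh predicted click-through rates and quality scores, and the advertiser names here are invented.

```python
# Simplified second-price auction: highest bidder wins, but pays
# (approximately) the second-highest bid. Real ad auctions are more
# complex -- they also factor in quality scores and predicted clicks.

def run_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Return (winner, price), where price is the runner-up's bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

winner, price = run_auction({"acme": 2.50, "globex": 1.75, "initech": 0.90})
print(winner, price)  # acme wins the slot and pays 1.75
```

The second-price design rewards truthful bidding, which is one reason auctions of this family became the standard for search advertising.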

Although Google’s contextual advertising and instant auctions often serve the interests of small firms, its freedom to set such rates at any level it desires allows it to crowd out some of the small firms that have grown to depend on Google for their most valuable advertising outlets, including small firms that are Google’s potential competitors. Another way in which Google limits its competition involves a touchy practice called cross-subsidization. Siva Vaidhyanathan says in The Googlisation of Everything:
Google can use its prominence in people’s lives — the network effect — and its surplus revenues to support its other ventures — its online document business, for example. This poses a serious threat to small, creative companies that offer Web-based word processors, such as Zoho. If Google uses its profitable ventures to subsidize those activities destined to lose money, and if that practice kills off innovative potential competitors like Zoho, Google has crossed the line into shaky legal territory. 
Google refuses to acknowledge that its algorithms can sometimes malfunction and cause ethical problems. In To Save Everything, Click Here, Evgeny Morozov gives the example of Google's Autocomplete feature. When you start typing your query, Google's algorithms give four suggestions based on how other users have completed the query. It is a useful feature that saves a few seconds of the user's time. (Of course, it often also limits the query to the given options because users are too lazy to complete their original query.)

Suppose, in a deliberate attempt to smear your reputation, someone pays a large number of users to search for your name followed by the word 'pedophile'. Enough volume can be generated to make the term one of the search suggestions, replacing a more benign term. Now everyone who searches for your name is also informed that you may be a pedophile. You cannot appeal to the algorithms because they are supposed to be always right. Google will say, 'We believe that Google should not be held liable for terms that appear in Autocomplete as these are produced by computer algorithms based on searches from previous users, not by Google itself', even though it knows that its algorithms can be gamed.
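The gaming Morozov describes falls out of the design itself. A minimal sketch of a frequency-based autocomplete (an assumption about how such a system might work, not Google's actual implementation; the name and query counts are invented) shows how a coordinated flood of queries displaces benign suggestions:

```python
# Minimal frequency-based autocomplete: suggestions are simply the most
# common completions of a prefix. The same mechanism shows how a paid
# flood of malicious queries can push a smear term to the top.

from collections import Counter

class Autocomplete:
    def __init__(self):
        self.counts = Counter()

    def log_query(self, query: str, times: int = 1):
        self.counts[query] += times

    def suggest(self, prefix: str, k: int = 4):
        matches = {q: c for q, c in self.counts.items() if q.startswith(prefix)}
        return [q for q, _ in Counter(matches).most_common(k)]

ac = Autocomplete()
ac.log_query("jane doe author", times=120)   # organic searches
ac.log_query("jane doe books", times=90)
ac.log_query("jane doe pedophile", times=500)  # coordinated smear campaign
print(ac.suggest("jane doe"))  # the smear term now ranks first
```

Nothing in the system distinguishes genuine curiosity from paid manipulation; it only counts.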

It cannot be ascertained for certain whether Bettina Wulff, Germany's former first lady, was the victim of such a hit job. In 2012, she sued Google for 'auto-completing' searches for her name with terms like 'prostitute' and 'escort'. In Japan, Google was ordered to modify its Autocomplete search results after a man complained that they linked him to crimes he did not commit. In France, Google was ordered to modify its Autocomplete results after a man complained that they suggested he was a 'satanist' and 'rapist'.

Wednesday, April 29, 2020

The tyranny of algorithms - II

Once upon a time, everybody loved the internet. It would make us freer, richer, smarter. It would make us better citizens, better consumers, better humans. Most people continue to think of tech companies as businesses that sell some product or service at a profit. Amazon just wants to sell you stuff, Facebook wants to show you advertising, and Apple wants you to buy the latest iPhone. Your phone, your computer, and the various apps and programs you use come from just a few Silicon Valley overlords. Their companies want very much to shape what you do. They hope to automate the choices that we make every day. Unrestrained power, no matter how well-meaning or alluring, is something of which we should always be wary.

Google, Facebook, and Amazon use technology and data to sidestep traditional restrictions on monopoly power. Credit raters, search engines, and major banks collect data about us and convert it into scores, rankings, risk calculations, and watch lists which impact us daily. They use it to make important decisions about us and to influence the decisions we make for ourselves. A bad credit score may prevent a customer from accessing any loans, but he will never understand exactly how it was calculated. A predictive analytics firm may score someone as a “high cost” or “unreliable” worker, yet never tell her about the decision. As Frank Pasquale writes in The Black Box Society:
Continuing unease about black box scoring reflects long-standing anxiety about misapplications of natural science methods to the social realm. A civil engineer might use data from a thousand bridges to estimate which one might next collapse; now financial engineers scrutinize millions of transactions to predict consumer defaults. 
But unlike the engineer, whose studies do nothing to the bridges she examines, a credit scoring system increases the chance of a consumer defaulting once it labels him a risk and prices a loan accordingly. Moreover, the “science” of secret scoring does not adopt a key safeguard of the scientific method: publicly testable generalizations and observations.  As long as the analytics are secret, they will remain an opaque and troubling form of social sorting.
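The feedback loop Pasquale points to, where the label itself changes the outcome, can be made concrete with a toy model. Every number and coefficient below is invented purely for illustration; the only assumption is that default risk rises with the payment-to-income burden:

```python
# Toy sketch of a scoring feedback loop: a borrower labeled "risky" is
# charged a higher rate, and the higher payment itself raises the real
# chance of default. All figures here are invented for illustration.

def default_probability(income: float, annual_payment: float) -> float:
    """Assume default risk grows with the payment-to-income ratio."""
    burden = annual_payment / income
    return min(1.0, 0.05 + 0.8 * burden)

income, loan = 40_000, 10_000
low_rate_payment = loan * 0.06   # rate offered to a "safe" borrower
high_rate_payment = loan * 0.18  # rate offered after a "risky" label

print(default_probability(income, low_rate_payment))
print(default_probability(income, high_rate_payment))
```

The same person, with the same income, becomes measurably riskier simply because the model said so: the prediction is partly self-fulfilling, which is exactly what separates secret scoring from the bridge engineer's measurements.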
The proprietary algorithms by which digital economy companies collect and analyze their customers' data are immune from scrutiny. Internet companies collect more and more data on their users but fight regulations that would exercise some control over the resulting digital dossiers. The conclusions they come to — about the productivity of employees, or the relevance of websites, or the attractiveness of investments — are determined by complex formulas devised by lots of engineers and guarded by many lawyers. The law is very protective of secrecy in the world of commerce but is increasingly silent when it comes to the privacy of persons.

Are these algorithmic applications fair? Why, for instance, does YouTube (owned by Google) so regularly beat out other video sites in Google’s video search results? How does one particular restaurant or auto stock make it to the top of the hit list while another does not? It’s anyone’s guess, as long as the algorithms involved are kept secret. Without knowing what Google actually does when it ranks sites, we cannot assess when it is acting in good faith to help users, and when it is biasing results to favor its own commercial interests. The same goes for status updates on Facebook, trending topics on Twitter, or recommendations on Amazon. Franklin Foer says in World Without Mind:
Computer scientists have an aphorism that describes how algorithms relentlessly hunt for patterns. They talk about torturing the data until it confesses. Yet this metaphor contains unexamined implications. Data, like victims of torture, tells its interrogators what it wants to hear. 
Google is the dominant way we navigate the Internet, which makes it the primary lens through which we experience the world. Google has steadily added to the roles it plays in people's lives: it hosts e-mail; it owns the free blog-hosting service Blogger; it offers online software such as a word processor, spreadsheets, presentation software, and a calendar service; it has its own Web browser, Chrome; its Google Books project has scanned millions of volumes and made many of them available online at no cost; and it has Google Maps, Street View, and Google Earth. As we shift more of our Internet use to Google-branded services such as Gmail and YouTube, Google is on the verge of becoming indistinguishable from the Web itself.

This gives it enormous power to set agendas and alter perceptions. But we should be cautious about putting our faith in the goodwill of an enterprise whose mission is “to organize the world’s information and make it universally accessible and useful.” We see so clearly how it makes our lives better, our projects easier, and our world smaller that we fail to consider the costs, risks, options, and long-term consequences of our over-reliance on it. Its biases (valuing popularity over accuracy, established sites over new, etc.) are built into its algorithms, and they affect how we value and perceive things. These may be legitimate choices by Google, but one has to recognize that they are choices, not science.