Tuesday, July 14, 2020

The tyranny of algorithms - VIII

Some magazines now employ a company called Narrative Science to automatically generate online articles about what to expect from upcoming corporate earnings statements. Just feed it some statistics and, within seconds, the clever software produces highly readable stories. Or, as Forbes puts it, “Narrative Science, through its proprietary artificial intelligence platform, transforms data into stories and insights.” In an article, “A Robot Stole My Pulitzer!”, Evgeny Morozov writes:
Don’t miss the irony here: Automated platforms are now “writing” news reports about companies that make their money from automated trading. These reports are eventually fed back into the financial system, helping the algorithms to spot even more lucrative deals. Essentially, this is journalism done by robots and for robots. The only upside here is that humans get to keep all the cash. 
Apart from sports, finance, and real estate, in which news stories tend to revolve around statistics, Narrative Science has also entered the political reporting arena. It’s much cheaper than paying full-time journalists, who tend to get sick and demand respect, and there is no one to fret about terrible working conditions. An article takes only a second to compose, a deadline that no journalist can beat. Narrative Science also promises to be more comprehensive — and objective — than any human reporter. Few journalists have the time to find, process, and analyze millions of tweets, but Narrative Science can do so easily and instantaneously.

In the long run, the civic impact of such technologies may be more problematic. Everything we click, read, search, and watch online is increasingly the result of some optimization effort, whereby our previous clicks, searches, “likes,” purchases, and interactions determine what appears in our browsers and apps. Such personalization of the Internet may usher in a world in which we see only articles that reflect our existing interests and never venture outside our comfort zones. What if we click on the same link that, in theory, leads to the same article but end up reading very different texts? In the same article, Morozov writes:
How will it work? Imagine that my online history suggests that I hold an advanced degree and that I spend a lot of time on the websites of the Economist or the New York Review of Books; as a result, I get to see a more sophisticated, challenging, and informative version of the same story than my USA Today-reading neighbor. 
If one can infer that I’m also interested in international news and global justice, a computer-generated news article about Angelina Jolie might end by mentioning her new film about the war in Bosnia. My celebrity-obsessed neighbor, on the other hand, would see the same story end with some useless gossipy tidbit about Brad Pitt. 
Producing and tweaking stories on the spot, customized to suit the interests and intellectual habits of just one particular reader, is exactly what automated journalism allows — and why it’s worth worrying about. Advertisers and publishers love such individuation, which could push users to spend more time on their sites. But the social implications are quite dubious. 
At the very least, there’s a danger that some people might get stuck in a vicious news circle, consuming nothing but information junk food and having little clue that there is a different, more intelligent world out there. And the communal nature of social media would reassure them that they aren’t really missing anything. 
Another human creation that people presume machines cannot produce is music. But emotions are not some mystical phenomenon — they are a biochemical process. Hence, given enough biometric data and enough computing power, external algorithms might come to understand and manipulate human emotions better than Shakespeare, Picasso, or Lennon. Allow a learning machine to go over millions of musical experiences, and it will learn how particular inputs result in particular outputs.

David Cope, a musicology professor at the University of California, Santa Cruz, created a computer program called EMI (Experiments in Musical Intelligence), which specialized in imitating the style of Johann Sebastian Bach. In a public showdown at the University of Oregon, an audience of university students and professors listened to three pieces — one a genuine Bach, another produced by EMI, and a third composed by a local musicology professor, Steve Larson. The audience was then asked to vote on who composed which piece. The result? The audience thought that EMI’s piece was genuine Bach, that Bach’s piece was composed by Larson, and that Larson’s piece was produced by a computer.

Hence, in the long run, algorithms may learn how to compose entire tunes, playing on human emotions as if they were a piano. Will this result in great art? As Yuval Noah Harari says, “To enter the art market, algorithms won’t have to begin by straightaway surpassing Beethoven. It is enough if they outperform Justin Bieber.” In The World Without Mind, Franklin Foer writes:

If algorithms can replicate the process of creativity, then there is little reason to nurture human creativity. Why bother with the tortuous, inefficient process of writing or painting if a computer can produce something seemingly as good and in a painless flash? . . . No human endeavour has resisted automation, so why should creative endeavours be any different?
The engineering mind has little patience for the fetishization of words and images, for the mystique of art, for moral complexity and emotional expression. It views humans as data, components of systems, abstractions. . . The whole effort is to make human beings predictable . . . With this sort of cold-blooded thinking . . . it's easy to see how long-standing values begin to seem like an annoyance . . .
