The algorithm method: Programming our lives away


The Globe and Mail

Ira Basen

November 26, 2010


Here are two stories about love in the age of the algorithm.

The first one is from the hit sitcom The Big Bang Theory, which features a cast of science geeks trying to navigate through the non-geek world.

In one episode, the nerdiest of the nerds, Sheldon Cooper, is trying to score a piece of lab equipment from a colleague. He needs to befriend the colleague, but he has no idea how to make a new friend. So he does what he does best: He draws up an elaborate flow chart, which he calls a “friendship algorithm,” to help guide him through the phone call.

“Perhaps we could share a meal together?” Sheldon asks. If the answer is yes, he moves on to negotiate a time and a place. If the answer is no, he defaults to the next question, “How about a hot beverage?” followed by, “Perhaps we share a recreational interest?”
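Sheldon's flow chart is, in effect, a short program. Here is a minimal sketch in Python; the function name and return strings are my own invention, and the sequence of proposals is paraphrased from the episode:

```python
def friendship_algorithm(responses):
    """A toy rendition of Sheldon's flow chart: propose activities in
    order until one is accepted, then move on to arranging details.
    `responses` maps each proposal to True (yes) or False (no)."""
    proposals = [
        "share a meal",
        "have a hot beverage",
        "share a recreational interest",
    ]
    for proposal in proposals:
        if responses.get(proposal, False):
            return f"Agreed: let's {proposal}. Negotiating time and place."
    return "All proposals declined. Friendship not initiated."
```

The point of the joke, of course, is that the same if-then structure that works on lab equipment is being applied to another human being.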

“I believe I've isolated the algorithm for making friends,” Sheldon gushes to his friend Leonard.
“Sheldon,” Leonard replies, “there is no algorithm for making friends.”

Not so fast, Leonard. Fortune magazine recently featured a story about the 10-year-old matchmaking site eHarmony, which has embraced the age of the algorithm. It has developed a formula that looks at hundreds of factors to determine whether two people might be compatible, including the way they use eHarmony: For example, it collects data on how long a user takes to respond to an e-mail about a match, presuming that procrastinators will be attracted to other procrastinators and vice-versa.

Algorithms are turning up in the most unlikely places, promising to inject mathematical probability into corners of our lives where intuition, instinct and hunches have long held sway.

Increasingly, algorithms are used to determine whether we can get access to credit, insurance and government services. They are posing a challenge to human decision-making in the arts. They are being used by prospective employers to decide if we should be hired. They can determine whether your online business will succeed or fail, and they have revolutionized the world of high finance.

And yet these algorithms remain a mystery to us, their inner workings protected by various intellectual property and trade-secrecy laws. Critics are beginning to wonder if we are surrendering too much human agency to the all-powerful gods of mathematics.

A recipe for Web searching, book buying, apple pie or love
But wait, non-Sheldon types may be asking: What is an algorithm?

Algorithms are deceptively easy to define, and they have been around since the beginning of recorded history: They are written formulas for solving problems. An apple-pie recipe is an algorithm, so long as someone could successfully bake the pie by following the instructions. A computer program is simply an algorithm written for a computer.

Algorithms are frequently displayed as flow charts, like Sheldon's friendship algorithm. They outline “yes” or “no” options to a series of questions. If I purchase the new Philip Roth novel on a bookstore website, the site's algorithm is programmed to start asking questions about me: Do I read fiction? Yes. Do I like contemporary American novelists? Yes. Jewish? Yes. … May we also recommend Saul Bellow?
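That chain of yes/no questions can be written out in a few lines of code. A toy sketch in Python, with invented field names and an invented fallback recommendation; a real bookstore's model would weigh far more signals:

```python
def recommend(profile):
    """Walk a chain of yes/no questions, narrowing toward a suggestion.
    Returns None if the reader falls outside the decision tree."""
    if not profile.get("reads_fiction"):
        return None
    if not profile.get("likes_contemporary_american_novelists"):
        return None
    if profile.get("likes_jewish_american_authors"):
        return "Saul Bellow"
    # Hypothetical fallback branch, not from the article's example.
    return "Don DeLillo"
```

Each purchase answers another question, so the tree gets deeper, and the recommendations more confident, the more you buy.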

The most useful algorithms can incorporate enormous amounts of data that we make available to them – sometimes wittingly, sometimes not – to make connections between seemingly unconnected pieces of information and predict our behaviour. Every credit-card purchase, every search, every click of the computer mouse adds to this massive database of our interests and intentions.

As eHarmony has discovered, the more questions and the more data, the greater the likelihood the end user will be satisfied with the result. That used to be a problem, back when data storage was expensive and bandwidth was scarce. But today, bandwidth is plentiful and the cost of storage has dropped to almost zero. There is now virtually no limit to the size and complexity of algorithms.

Take, for example, the 21st century's most ubiquitous and influential algorithm – the one developed by Google. Taking the two or three words that I have entered into its search box, and possibly incorporating some knowledge of my previous search history, the algorithm can scan billions of websites in its index and deliver relevant results almost instantaneously.

How it does that remains a closely guarded secret.

That's the thing about algorithms. Most are what engineers call “black boxes” – they convert inputs into outputs without revealing how they do it. The good ones are enormously valuable. Google's is the spine of a business that currently generates about $28-billion a year in revenues. Its secrets are not for sharing.

In many cases, we surrender our authority to algorithms willingly, even eagerly. We know that Google collects massive amounts of data about us whenever we search, or use Gmail or any of the other products the company offers. We know we have the right to remain anonymous, but most of us don't. We accept the deal that Google is offering: Tell us more about yourself and our algorithm will deliver more relevant search results and ads for products you might want to buy.

But often, the data being fed into algorithms are being collected without our explicit consent and used in ways we are not even aware of.

They know you might quit your job – even before you do
Some companies now use algorithms to make decisions around hiring and firing. At Google, which boasts that “almost every [personnel] decision is based on quantitative analysis,” engineers have developed an algorithm to identify those employees most likely to leave to work for a competitor or strike out on their own. Employee surveys, peer reviews, evaluations, promotion and pay histories feed into the algorithm. It helps us “get inside people's heads even before they know they might leave,” a Google manager told The Wall Street Journal last year.

At IBM, programmers put together mathematical models of 50,000 of the company's tech consultants. They crunched massive amounts of data on them – how many e-mails they sent, who got them, who read the documents they wrote – and used this information to help assess the employees' effectiveness and deploy their skills in the most cost-efficient way.

It's easy to understand the appeal for employers. The advantages for employees are less clear. Workers have no idea how the algorithm weighs the factors it is measuring or whether the data it uses are accurate. Getting turned down for a promotion because a secret black box has determined you are not suitable, or has predicted you may soon decide to leave the company, is hardly a model of transparency.

The Google algorithm for ranking websites, the company's most valuable piece of intellectual property, is equally opaque. Securing a good placement on the search results page confers enormous economic advantage. The algorithm looks at more than 200 factors in determining who gets those rankings, but Google does not disclose what those factors are or how they are weighted.

And the company tweaks its algorithm an average of once a day, mostly to stay ahead of spammers.

An online business may find itself suddenly going from No. 1 in a search ranking to No. 20, simply because Google has decided to make a change to its algorithm.

Credit-card companies also now use sophisticated algorithms to help them assess customers. The companies are understandably looking to be able to predict who will ultimately turn out to be a bad risk. And they use the motherlode of data gleaned from our credit-card purchases to determine whose card will be cancelled, whose credit limit will be raised or lowered and numerous other decisions.

Last year, The New York Times Magazine profiled J.P. Martin, who in 2002, while working as an executive at Canadian Tire, built a mathematical model to analyze a year's worth of transactions made using the company's popular credit card.

He found that people who bought carbon-monoxide detectors for their homes or premium birdseed or those little felt pads that stopped chair legs from scratching the floor almost never missed a credit-card payment. Those people apparently felt a sense of responsibility and commitment to their homes and wildlife that also extended to their monthly bills. On the other hand, people who bought chrome-skull car accessories or frequented a Montreal establishment called Sharx Pool Bar were more likely to miss repeated payments.
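A model like Mr. Martin's can be imagined, in grossly simplified form, as a weighted sum over purchase categories. The weights below are invented for illustration and bear no relation to Canadian Tire's actual numbers:

```python
def toy_risk_score(purchases):
    """Illustrative only: a higher score means a higher predicted risk
    of missed payments. Purchases suggesting care for home and hearth
    lower the score; the article's riskier purchases raise it."""
    weights = {
        "carbon_monoxide_detector": -2.0,  # invented weight
        "premium_birdseed": -1.5,          # invented weight
        "felt_furniture_pads": -1.0,       # invented weight
        "chrome_skull_accessory": +2.0,    # invented weight
    }
    # Items the model has never seen contribute nothing.
    return sum(weights.get(item, 0.0) for item in purchases)
```

A real credit model would be fitted to millions of transactions rather than hand-assigned, but the logic is the same: purchases become variables, and variables become a verdict.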

Most consumers are unaware that their ability to gain access to credit is now being determined by variables such as what time they are logging in to the card company's website (late-night logins could be a sign of anxiety-induced sleeplessness) or whether they are using the card to pay for groceries or therapy sessions. As the research head of one large firm that analyzes credit risk said, “We may look at 300 different characteristics just to predict their delinquency risk.”

As the size and sophistication of these algorithms grow – as the number of variables they scan rises from dozens to hundreds to thousands – the confidence in their predictive power increases. Business writer Stephen Baker, author of The Numerati, has concluded that “the mathematical modelling of humanity promises to be one of the great undertakings of the 21st century.”

Who governs the decision engines?
The algorithmic expansion may be great in scale, but is it also great for us? Clay Shirky, who has written extensively about the impact of technology, says from his office at New York University that to a large degree authority is shifting away from “experts” to sources of information that no one is really in charge of. “The workings are hidden and, in some cases, not understandable.”

That means that the keepers of the algorithms have become figures of enormous importance. “In a society with an increasing amount of ‘algorithmic authority,’ ” Mr. Shirky argues, “understanding who manages the algorithm that society relies on, and how those algorithms are shaped and used, becomes a key question of governance – just as, in a previous age, understanding the limits of military or police power became a key part of democratic governments.”

The key issue, he says, is not whether algorithms have too much power, but whether society is able to keep them in check. “You would start worrying about a tool when it became powerful enough that the owners of that algorithm could start to shape outcomes to their own needs.”

That is precisely the reason why many people are worried about the impact that algorithms now have on financial markets. To find algorithmic authority in full effect, you need look no further than banks, trading floors, investment houses and hedge funds.

Algorithms are now responsible for about 70 per cent of all trades in U.S. equities. They execute trades at speeds and volumes that were, until recently, simply unimaginable. The average number of shares traded on the New York Stock Exchange increased by 181 per cent from 2005 to 2009 and the time required to execute a trade dropped to 650 microseconds. Algorithms scan the markets for the tiniest imperfections and pounce with lightning speed, often holding shares for less than a second.

At those volumes and speeds, human intervention is simply not possible, and even the smallest errors can quickly spiral out of control. On May 6 at 2:42 p.m., the New York Stock Exchange dropped an extraordinary 600 points in just five minutes. Twenty minutes later, it had made almost a full recovery. Subsequent investigations revealed that the “Flash Crash” of 2010 was caused by a rogue algorithm unleashed by a mutual fund in Kansas City.

High-frequency trading has become a very profitable business on Wall Street. The handful of specialized firms that do it can bring in up to $100,000 a day. And the programmers who develop the algorithms that drive the trades have become the new masters of the universe.

The problem is that even the developers sometimes lose control of their creations. Earlier this year, Credit Suisse was fined $150,000 by the New York Stock Exchange for “failing to adequately supervise the development, deployment, and operation of a proprietary algorithm.” It turns out that the bank wasn't aware that its algorithm was issuing thousands of cancellation orders for stocks that had not even been ordered.

In June, Delaware Senator Ted Kaufman warned that “our financial markets should not be reduced to a battle of algorithms in which capital formation is an afterthought, and long-term investors are relegated to second-tier status.”

Are you a gadget? Please check yes or no
Mr. Kaufman's warning may have come too late. As algorithmic authority grows, we become more confident in the wisdom offered up by computer programs and less confident in our own decision-making abilities. We look to eliminate the guesswork and go for the sure thing, even in the creative arts.

A British company called Epagogix has developed an algorithm that challenges screenwriter William Goldman's famous assertion that, when it comes to knowing how to make a successful movie, “nobody knows anything.”

Epagogix's computer program treats screenplays as mathematical propositions. It looks at hundreds of factors and can predict with impressive accuracy how much money the film will gross at the box office. Other companies are developing similar programs for producing hit songs.

This might be good news for record companies and movie studios, but bad news for fans of creativity, innovation and risk-taking.

In journalism, the judgment of editors is also yielding to the authority of algorithms. “I don't give people what they want,” Esquire editor David Granger boasted this year at a forum in Toronto. “I want to give them what they never could have expected.”

But editors such as Mr. Granger can misjudge their readers' appetites and interests, and ultimately lose money. Meanwhile, their corporate bosses notice that the same massive databases being used to determine our credit worthiness could also predict what stories we will read or watch on TV.

Want to eliminate the guesswork about what audiences want? Just look at the topics they are searching for and serve up stories to match.

Short-circuiting our more complicated desires
Christopher Anderson, who teaches at the City University of New York, is not convinced that using search engines to “give people what they want” is really a step forward.

“I think it's dangerous when you boil down what people want to a simple mathematical formula,” Mr. Anderson says. “The real danger of these algorithms is that they're reducing the scope of what ‘want’ means. Want is complicated, and it's more complicated than clicking on a link.”

Jaron Lanier agrees. He's a Silicon Valley veteran and a pioneer in the development of virtual reality who has now become one of the Web's most persistent critics. His book, You Are Not a Gadget: A Manifesto, published this year, calls for “a new digital humanism” to counteract the trend toward “cybernetic totalism.”

Mr. Lanier urges readers not to succumb to an ideology being peddled by the gurus of Silicon Valley that seeks to devalue human creativity. He believes that they are asking us to abandon our faith in ourselves and, instead, to put our trust “in the crowd, in the algorithms that remove the risks of creativity in ways too sophisticated for any mere person to understand.”

They want us to believe, he concludes, “that the computer is evolving into a life form that can understand people better than people can understand themselves.”

So how do we fight back? How do we achieve the appropriate level of governance over algorithms that Mr. Shirky insists is necessary? Opening up the black box will not be easy, given the legal protections that proprietary algorithms enjoy.

It would be nice to think that there is some way of verifying whether the information being fed into the algorithm is accurate, but the mountain of data that now exists about each of us – pulled from our e-mails, searches and online purchases – is so enormous, it is hard to imagine how that would be done.

For Mr. Anderson, the first step in reining in the power of algorithms is simply to think more deeply and more critically about them. We have to understand, he says, that while we may create tools to serve us, we often wind up being the servants of those tools.

“Algorithms are not wrong in and of themselves,” he says. “But they are wrong if we give them god-like status. If we assume that algorithms stand outside the world, then we are giving them too much credit.

“It's so easy to say, ‘Well, they're science. They're mathematical. They're true.' I think nothing can stand outside the world in that way, including our formulas.”