This month Daniel Kahneman turned 80. Long revered among experts in the decision sciences, his work reached much wider public attention with the publication of the bestseller Thinking, Fast and Slow. The central tenet of the book, what he calls a ‘useful fiction’, is that we have more than one way of thinking. The ‘fast’ way — imagine answering ‘What is two plus two?’ — is unconscious, effortless, decisive and quick. The second — ‘What is 17 times 34?’ — is conscious, effortful, dithery and slow.
There’s nothing new about mental dualism, of course. But what is useful about Kahneman’s simple model is that he names the two modes neutrally — ‘System One’ and ‘System Two’ — and acknowledges we need both. There are fervent debates about the relative strengths of instinct and reason, so let’s just say it is best to leave emergency braking to System One; if you are designing a suspension bridge, on the other hand, you might want to get System Two involved. Both systems are capable of error: the knack lies in knowing when to use each.
I think this metaphor now needs to be extended one stage further. For, in the past 30 years, the huge explosion in computer processing power has effectively created a kind of ‘System Three’: a third decision-making apparatus with its own distinct strengths and weaknesses. Here again the task ahead will involve deciding what this new power can do better than us — and where it shouldn’t be used at all. Equally important will be the task of working out ways to use the power of ‘System Three’ in tandem with Systems One and Two.
What do I mean here? Well, it is widely believed the triumph of Deep Blue over Garry Kasparov in 1997 marked the final victory of machine power in chess. However, don’t write off the brain just yet. As is clear from the fast-growing field of advanced (or freestyle) chess, in which human players are allowed to pair up with chess programs, a good human player working with a good computer program routinely trounces the best human or the best computer playing alone. Meteorology likewise involves massive computing power — but uses human expertise in tandem.
Again, it’s all a question of balance. Sometimes we may be too reluctant to listen to what computers tell us. (Quite a few studies have revealed little correlation between university degree class and success in employment, yet no one seems to adapt their recruitment practices at all.) Equally there may be times when employees or businesses or public sector organisations are far too ready to cede decision-making to computational models, since doing so effectively absolves them of any blame or responsibility — a tendency perfectly illustrated in the series of comedy sketches which end with the line ‘Computer says no.’
There are ethical dilemmas, too. In one study it was found that, whereas degree class does not predict employee performance at all, one variable that was extremely reliable in deciding whom to hire was the brand of internet browser on which prospective employees completed their online application. Is it really acceptable to reject an employee for such a seemingly trifling reason, however robust the finding? What about those cases where the algorithm is so complicated that you simply cannot explain why someone’s application — for credit, for a job, for a promotion — has been rejected?
There are many dystopian visions of the future where the machines seize power. Perhaps there is another, equally dystopian vision where we simply surrender power far too readily. Online dating may be one example — people are using System Three algorithms to influence something which for a million years or so has been the job of System One.
Rory Sutherland is vice-chairman of Ogilvy Group UK.