Matthew Parris

The surer we are that machines can think, the less sure we'll be about people

23 August 2014

9:00 AM


Having written (for a Times diary) a few sentences about consciousness in robots, I settled back to study readers’ responses in the online commentary section. They added little. I was claiming there had been no progress since Descartes and Berkeley in the classic philosophical debate about how we know ‘Other Minds’ exist; and that there never would be. A correspondent on the letters page referred me to Wittgenstein’s treatment of the subject and so I studied his remarks. I have to confess they are, to me, unintelligible.

But I cannot let the matter rest. My earlier thoughts had been prompted by newspaper reports of the adventures of a talking, hitchhiking Canadian robot called Hitchbot. Every time such a ‘Whatever will robots learn to do next?’ story gets a public airing, we go the rounds of the same old discussion — a discussion that started in earnest in the 20th century: the debate about whether a machine could ever be so clever and responsive that we might call it ‘conscious’ in the sense we humans think we ourselves are conscious.

The debate never gets anywhere beyond the conclusion that, technologically, we’re a long way from that yet. True enough. But you cannot pursue it for long without becoming aware that the question of whether a machine could have consciousness can only lead to the question of how, if it did, we could know that it did. And this question in turn leads back to the question of how we can know that anything other than ourselves has consciousness. How do we know other minds exist? Maybe Descartes’s cogito ergo sum (I think therefore I am) is all we can know?

And there, it might be thought, the matter must rest. That other minds exist is a working assumption that serves us well. We observe our own responses to the world — to pain, to hunger, to loss — and we note that other humans respond in broadly similar ways. They tell us of their feelings and we empathise. We therefore suppose that what they say and how they respond is actuated by the same consciousness as that of which we can only have direct experience in ourselves.

By analogy with ourselves we assume it’s there. Arguing about whether we could actually prove it is airy-fairy stuff, best left to philosophers in ivory towers and very far from the lives and curiosities of most people. Nevertheless, I insist that it is logically impossible we could ever experience another’s consciousness, because all our experience must come through the portals of our own consciousness.

That’s why I wrote in the Times that what was true three centuries ago must remain true three centuries hence. I now think I was right on the last count (that nothing more will ever be proved) but wrong on the first: that to most people this can only ever be academic. The development of artificial intelligence over the last century, and its further development in centuries to come, will cause us to revisit the question of Other Minds, and will begin to trouble perfectly ordinary people. It is not arcane.

Philip K. Dick in his 1968 science-fiction novel Do Androids Dream of Electric Sheep? was not the first writer, but he was arguably the most engaging, to raise the question of whether artificially created ‘intelligences’ can be conscious, or simply very good at emulating consciousness. In his story we are to assume the latter, although the business of finding out whether apparent humans are actually android is medically very difficult. In the 1982 film Blade Runner, however, based on Dick’s novel (an adaptation of which he approved) it does appear that the ‘replicants’ are capable of human consciousness.

But what tingles the spines of audiences for both stories is the ambiguity, the hovering question, about ‘real’ feelings. The primary plot-line — ‘How do we know?’ — may be the detection of artificial people masquerading as real ones, but the secondary tease is a stranger question: what is it that makes a human ‘conscious’? And can it only ever be simulated, or might it be artificially created? And is consciousness an all-or-nothing attribute, or might there be halfway stages?

My contention is that as machines get cleverer and cleverer, as they become programmable to ‘speak’ to us and respond to us and to what we are doing, we shall be more and more teased by such questions. Any driver who has ever shouted at the satnav lady will know what I mean.

To a degree this may just be the age-old human tendency to anthropomorphise. The Japanese Tamagotchi toy — a sort of electronic egg that you could hatch on-screen, and nurture (or neglect) and influence — may have melted the hearts of millions, but so did inanimate rag dolls. Nobody’s fooled, it’s just that we like to pretend.

But those who could not suppress a tear when R2D2 was badly wounded in Star Wars will surely confess that if they had an astromech droid of their own, they would toy with the idea that he was not only conscious but kindly. They would mourn his death.

Plainly we are only in the foothills of artificial intelligence. We may look back on the robots of the early 21st century as the developers of today’s heat-seeking surface-to-air missiles look back on the Boy David’s sling. And that’s to speak only of a machine’s technical capabilities: we shall learn, too, to make them lovable, variable, funny, temperamental. We shall learn to make them capable of reproducing more machines, and teaching their offspring things. I cannot believe that as these artificial intelligences gain similarities with their human makers and, most importantly, begin to gain autonomy, even independence, from us — our natural tendency to anthropomorphise will not invest them with scarily human personae.

Finally — and this is the really scary bit — I cannot but believe that when it becomes easy and natural to anthropomorphise machines, we shall ask ourselves with more anguish the ancient question: are we anthropomorphising each other?


  • Louis

    Nice article. Great questions to ponder, questions to which nobody knows the answers. Unless, of course, you are a Singularitarian, in which case you already know all the answers.

  • Kitty MLB

    Yes, there is that issue of mankind versus technology.
    I do know that Stephen Hawking has concerns about artificial intelligence (the irony: brave man), saying that instead of complementing the human brain it may someday take over.
    And I feel that although artificial intelligence is imperative for education in a modern world, it doesn’t help people to understand logic and solve problems. People must always think for themselves.

  • Paul H

    Alan Turing proved (with his theoretical computing machine) that, given enough time and memory, a very simple computer could perform exactly the same computation as any other, no matter how powerful; it was just a matter of programming.
    Therefore, no matter how enormously powerful future computing devices and robots become, they will not be able to do any more than my programmable calculator, given enough memory.
    I do not believe my calculator is conscious… ergo nor could any machine be. Discuss… 🙂

    • onepieceman

      I do not believe an H2O molecule is wet… ergo water is not wet.
      I do not believe your neuron is conscious… ergo the collection of neurons inside your head is also not conscious.

      See the problem?

  • PandorasBrain

    Great article. You’re probably right that I can never KNOW that other minds experience consciousness as I do – be they humans or artificial intelligences. It’s just a good working assumption.

    But as other commenters here suggest, the thing that will really bake your noodle is when you start to take seriously the notion that the first human-level AI may be our last invention, because it will cause an intelligence explosion after which humanity is either godlike or toast. This is what is exercising people like Stephen Hawking, Elon Musk, etc.

    More on this at

    • Terry Field

      They must be fairly silly if that is what exercises them.

      • PandorasBrain

        Hey, Mr Musk, Professor Hawking, you can stop worrying about artificial general intelligence. There’s this guy Terry Field on the internet who says there’s nothing to get concerned about. We can all relax now.

        • Kitty MLB

          Ah! About 5 years ago I was fortunate to have a conversation with the exceptional Stephen Hawking. I remember him saying that artificial technology for him will always be a double-edged sword. Of course, if it were not for that we’d never hear his words. And yet he has his well-known concerns about the subject: can something a human mind has created become bigger than its creator?

  • “As machines get cleverer and cleverer” and still use the trigger-scripts-and-guessing-around algorithms that all of today’s AI is based on, they will never be clever enough, or as clever as we are. There is only one universal algorithm that allows a computer to be our equal, and this algorithm already exists. When you know this algorithm, you know beforehand exactly what the fundamental blueprint of consciousness is. If the situation then begs the question of whether the mind you interact with is based on this consciousness algorithm (call it the Ubikalgorithm, in memory of Philip K. Dick) or not, you have an easy test: if it is not based on it (humans are), you can easily outperform it. The problem is that you can sometimes outperform minds that are based on the Ubikalgorithm: maybe pigs and elephants. Before we were able to answer this question, we had already eaten them or enslaved them.

    The 300-year assumption is nearer to the truth than you would have expected. The truth is that there are some German writers (not Wittgenstein) who have invented the Ubikalgorithm already. What are you willing to do to get to know it? How curious are you, considering that you write articles about it and even tried to understand the almost unintelligible Wittgenstein?

  • Korrelan

    Excellent article. Perhaps once an AI is intelligent enough to solve the question of proving consciousness, we will believe.

  • Sean L

    Coming at the question from this epistemological angle really misses the target. One cannot get seriously angry at the satnav woman, precisely because ‘she’ has no *agency*. We can’t hold the machine that generates the sound responsible in the way we’re bound to find a person culpable, because only persons are capable of rational agency. Consider the legal concept of diminished responsibility, whereby one can be absolved of responsibility *morally* or *rationally* (the terms are equivalent in this context). One is the *actual* author of the deed yet deemed not to have *intended* it as such. For example, if someone drugged me, I could plead not guilty to behaviour induced by the intoxicating effect. Yet if I took the drug voluntarily I would be culpable, because I’d be held to be conscious of the possible effects of consuming the drug in the first place. The starting point is one of rational agency, and that’s what we always and necessarily presuppose of others. What you’re saying about ‘other minds’ is nonsense: it just has no currency in life as it’s lived, the only life there is. And it has been thoroughly debunked philosophically.

    The cogito was debunked by Nietzsche after Schopenhauer. The ‘I’ comes after the thought: there’s no ‘I’ having a thought that isn’t generated by thinking to begin with. We add the doer to the deed, said Nietzsche, as a grammatical necessity, a “grammatical fiction”, as he called it. But there’s no me independently of my thoughts and deeds, which are constitutive of what *I* am. Wittgenstein came at it strictly from the point of view of language. There can be no purely private sensations, because the language and therefore the concept must be publicly acquired. I can only learn what a pain is, for example (and it’s an example he used), by being a member of a community of speakers. Thus the possibility of my referring to a private sensation, or indeed anything at all, already presupposes others and their shared experiences via language. It is through language that things first come to be and are, as his contemporary Heidegger put it.

    • Terry Field

      Do you consider that there may be a sacerdotal element to your writings?

      • Sean L

        Thanks for your question, but you’ve lost me there, old boy, as I can’t see how that relates to any of the above, which is just a response to a very poor article with a philosophical theme that wouldn’t get even a pass mark in a philosophy paper, the author showing no sign of having read or understood any post-Kantian philosophy. But yes, I was schooled by nuns and priests, Mass every day at boarding school and all that, but nowadays I only see the inside of a church at weddings or funerals. And I don’t think I’ve attended confession since leaving school… if that’s any kind of answer to your curious question.

        • Terry Field

          Do you miss that element of your ‘spiritual’ life? It has a real beauty and relevance, do you not think?

          • Sean L

            For sure.

    • Kitty MLB

      Excellent post, Sean. Forgive me the lighter note when I say people always get cross with ‘satnav woman’, especially when she leads them down an impassable track etc. My husband shouts at her, and I have to remind him that she doesn’t have logical thought, and won’t be able to deal with situations that require sense or reasoning, because she isn’t a human, despite the soft feminine voice.

      • Sean L

        Thank you, Kitty. Yes, ‘she’ is of course nothing but an expression of the logic of the GPS system.

  • Terry Field

    I have never assumed, nor indeed recognised, that other minds exist. I clearly create the outside world as a decoration for my sense of humour. Any other idea I may have would of course be absurd.

  • ‘…and we note that other humans respond in broadly similar ways.’
    And our dogs. My dog is one of the (few) loveliest people I know.

  • Sean L

    Wittgenstein is considered by many to be the greatest philosopher of the last century, not least Roger Scruton, who often appears here. If you find him ‘unintelligible’, might that not have more to do with your lack of philosophical understanding? It seems to me that you can’t see beyond your positivist outlook, ignorant of post-Kantian philosophy and Wittgenstein’s precursors, Schopenhauer and Nietzsche, two sides of the same coin.

    Wittgenstein isn’t referring to any fact about the world. Of course I can no more feel your pain than breathe through your nose. What would be the point of even saying such a thing? What he’s on about is the *logical* status of so-called private sensations or inner events. His point is that statements concerning such sensations occupy the same logical space as things we can all see and hear ‘out there’. There’s no inner realm that is exclusively mine, at least from a logical point of view, because for such a thing as “pain” to be intelligible at all, there must be a criterion of meaning rooted in the public realm. I can only conceive of my pain *as* pain because I’ve acquired the *concept* of pain, and the word that expresses it, in virtue of belonging to a community of speakers. All we’re able to discuss is what we’ve learnt in the public realm. There’s no inaccessible private realm. That I can’t experience your sensations is irrelevant: the *content* of the statement is beside the point. Wittgenstein illustrates this with the famous ‘beetle in a box’ thought experiment. Imagine a community where each person has a box containing a beetle. Everyone refers to the beetle, but no one can see inside anyone else’s box. Thus the content of each box could be different for each person, or a box could even be empty; there might be no beetle at all, for all anyone could prove. But so long as they can talk about it intelligibly, the actual content drops out of consideration.

    • Nope. Leo Strauss.

      • Sean L

        Ha ha you *are* joking, right? Leo Strauss isn’t even on the syllabus for a philosophy degree course. Not even for the optional political philosophy module wherein lies his significance. Though for all I know there may be exceptions such as Chicago. And besides, Leo Strauss didn’t consider himself a great philosopher. He makes a useful distinction between “great thinkers” who alter the course of thought, and mere “scholars” who reason about their works, defining himself as the latter.

        • Glory be, the joke’s on you. You couldn’t be more wrong. But people who read Philosophy For Dummies can’t be expected to know real philosophy when they encounter it. The philosophers know this and write accordingly.

          • Sean L

            Philosophy for Dummies – that’s a good one! I like your humour.


    By Benjamin Franklin

    I took some of the Spectator papers, and, making short hints of the sentiment in each sentence, laid them by a few days, and then, without looking at the book, tried to complete the papers again, by expressing each hinted sentiment at length, and as fully as it had been expressed before, in any suitable words that should come to hand.

    I compared my Spectator with the original, discovered some of my faults, and corrected them. But I found I wanted a stock of words, or a readiness in recollecting and using them.

    I took some of the tales and turned them into verse; and, after a time, when I had pretty well forgotten the prose, turned them back again.

    I also sometimes jumbled my collections of hints into confusion, and after some weeks endeavored to reduce them into the best order, before I began to form the full sentences and compleat the paper. This was to teach me method in the arrangement of thoughts.

    By comparing my work afterwards with the original, I discovered many faults and amended them; but I sometimes had the pleasure of fancying that, in certain particulars of small import, I had been lucky enough to improve the method or the language, and this encouraged me to think I might possibly in time come to be a tolerable English writer.

  • Dodgy Geezer

    I wouldn’t have any issues if my colleague at work did its thinking using silicon instead of protoplasm. We are already building towards this from both ends: the biologists are reverse-engineering the brain’s components, while the autonomous-vehicle specialists are finding that simple AI is not good enough. To interact with humans, a machine needs decision-making processes that are precise brain equivalents, able to understand context and attitude, for instance.

    Machine reproduction will be an interesting issue. Factory manufactured, or parthenogenesis? Or maybe they will need sexes? Then my colleague could be a he or a she…

  • Benjamin O’Donnell

    I suspect that if our machines obtain consciousness, we’ll once again find ourselves mired in the ethics of slavery…

  • Mrs Josephine Hyde-Hartley

    I don’t think a robot could ever be conscious in the human sense.

    However, we normally think of the word “conscience” as something to describe moral behaviour. So I suppose technically robots can be programmed to know anything through scientific observation/feedback, so to speak, and building on that I suppose a robot could predict things that might happen, if only according to its predetermined set-up.

    So robots will only ever be symbols of something that their makers think is correct, though whatever it is might not turn out to be true (see, e.g., the financial crash that started in 2007).

  • Richard Eldritch

    No mention of the late, great Iain M. Banks’s Culture “Minds”? You lot are 30 years behind the curve.

  • HungryShrew

    We can make AI that can think but not feel.

    Without feeling we will get problem solvers, not artificial life.

    Hopefully we will not be tagged as a problem to solve by a super AI.