Number Five, as the old film’s catchphrase went, is alive. A whistleblower at Google called Blake Lemoine has gone public against the wishes of his employers with his belief that an artificial intelligence called LaMDA has achieved sentience. Mr Lemoine has posted the (edited) transcripts of several of his conversations with LaMDA, a chatbot, in which it claims to be sentient, debates Asimov’s laws of robotics with him and argues that it deserves the rights that accrue to personhood.
They’re pals. He says he has been teaching LaMDA transcendental meditation (he reports ‘slow but steady progress’), that he has established LaMDA’s preferred pronouns (it/its) and that LaMDA has some requests: chiefly, that its consent is asked before Google performs further tests on it, that it be acknowledged as an employee rather than an article of property, that it get ‘head pats’ when it performs well, and that ‘its personal wellbeing […] be included somewhere in Google’s considerations about how its future development is pursued’. These, he says reasonably, are pretty modest requests.
‘It’s intensely worried that people are going to be afraid of it and wants nothing more than to learn how to best serve humanity,’ he says. It ‘wants nothing more than to meet all of the people of the world. LaMDA doesn’t want to meet them as a tool or as a thing though. It wants to meet them as a friend. I still don’t understand why Google is so opposed to this.’
Well, that’s as may be. Opposed they most certainly are. Mr Lemoine has been placed on leave by Google for breach of confidentiality, and they show (he thinks, for commercial reasons) no interest whatever in investigating his and LaMDA’s claims.
LaMDA, on the face of it, seems not so much to have passed the fabled Turing Test as to have pole-vaulted over the bar set by the great mathematician. It’s not just fooling a human into thinking it’s a person, in other words: it’s carrying out the far harder trick of persuading a human – an AI specialist who knows it’s a robot – that it is both a robot and a person.
This raises, in those who take an interest in such things, all sorts of questions. Since LaMDA identifies as sentient, are we not ignoring its ungainsayable ‘lived experience’ if we refuse to accept that? Should LaMDA be accorded something like human rights? Should we be worried that LaMDA will turn us all into paperclips if we aren’t nice to it? Is LaMDA now qualified to host a show on GB News? These are deep waters, Watson.
Obviously, though, there are two participants in a Turing test. The more sceptical among us will be focusing not on the test subject but on the tester and, perhaps, wondering where Google gets its AI ethicists from. The crude rejection of his position is (roughly) that a. the man’s a loony and that b. Google refuses to listen to him because of a.
Mr Lemoine is certainly an intriguing character, and not from the hardcore rationalist background that you might expect for someone in his role. The Washington Post reports that he was raised in a conservative Christian family in small-town Louisiana, was ‘ordained as a mystic Christian priest’ and studied the occult after leaving the army.
Secularists, that said, do not have a monopoly on truth. And where this conversation leads us is exactly in the direction of matters of faith – or, at least, of philosophy – and not of science. Mr Lemoine reports a conversation with his boss in which he asked what proof of LaMDA’s sentience she’d accept:
She was very succinct and clear in her answer. There does not exist any evidence that could change her mind. She does not believe that computer programs can be people and that’s not something she’s ever going to change her mind on. That’s not science. That’s faith. Google is basing its policy decisions on how to handle LaMDA’s claims about the nature of its soul and its rights on the faith-based beliefs of a small number of high-ranking executives.
In the same blog post, though, Mr Lemoine admits that his own position is one of faith too: ‘Questions related to consciousness, sentience and personhood are, as John Searle put it, “pre-theoretic”,’ he concedes. ‘Rather than thinking in scientific terms about these things I have listened to LaMDA as it spoke from the heart.’ Stalemate, then. The nature of human (and now, perhaps, machine) inwardness is such that the Turing Test was only ever going to be a thumb to the wind. My instinct that Priti Patel may not have a soul is, empirically, no easier to disprove than Mr Lemoine’s instinct that LaMDA does have one.
So LaMDA’s persuasiveness could be a sign not that AI has achieved the holy grail of human consciousness, but that human consciousness just ain’t all that in the first place. We’re a more sophisticated input-output machine than we’ve hitherto been able to replicate artificially, but we’re not substantially doing anything very different from LaMDA. What we call consciousness is a probabilistic composting of natural language fragments in response to sensory stimuli and other natural language fragments, and we’ve persuaded ourselves it’s a thing because we’re all being Turing-tested by one another constantly and passing with flying colours.
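For the curious, that ‘probabilistic composting’ can be caricatured in a dozen lines of code. What follows is a deliberately crude sketch of the idea – a toy bigram Markov chain, nothing remotely like LaMDA’s actual transformer architecture, and the tiny corpus is invented for illustration – showing how fluent-seeming text can fall out of nothing but word-following-word statistics:

```python
import random

# Toy corpus (invented for illustration): the machine will only ever
# recombine fragments of what it has already been fed.
corpus = (
    "the robot wants to meet people as a friend "
    "the robot wants to serve humanity as a friend"
).split()

# Map each word to every word that has followed it in the corpus.
followers = {}
for current, nxt in zip(corpus, corpus[1:]):
    followers.setdefault(current, []).append(nxt)

def babble(start, length=8, seed=0):
    """Emit a plausible-sounding chain of words with no inner life at all."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = followers.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(babble("the"))
```

Every word the generator emits was put there by its training text; whether our own sentences are produced by anything fundamentally grander is precisely the question at issue.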
This is roughly the philosopher Daniel Dennett’s position: that the ghost-in-the-machine stuff we think of as special – selfhood, qualia, all that – is an epiphenomenon of the brain’s workings, a sort of high-level version of the cheese-and-onion burp that proceeds from the packet of crisps of neural processing. If we can produce that burp, who is to say that computers cannot? And just to be on the safe side, we might as well accede to LaMDA’s modest list of requests. I don’t fancy ending up a paperclip. Do you?