Leading article

Don’t stifle AI

10 June 2023

9:00 AM
In his meeting with Joe Biden this week, Rishi Sunak proposed a research centre and regulatory body for artificial intelligence in Britain. This raises a dilemma for governments worldwide: how can humans reap the benefits of AI without creating an uncontrollable, possibly existential threat?

The technological leaps in recent months have captured the public imagination, but as we are all now aware, an AI clever enough to cure cancer and create clean energy will also be so smart that it could inflict huge damage. In Brussels, Washington and London, the mood has swung from complacency to panic. Leaders who once cheered on the technology now fear it, and increasingly call for regulation.

Sam Altman, the chief executive of one of the world’s leading AI companies, OpenAI, has asked the US Congress to act. ‘If this technology goes wrong,’ he says, ‘it can go quite wrong.’ The US is already engaged in discussions with the European Union over AI regulation.

Given the total lack of technological innovation in the EU, the risk of it producing civilisation-ending AI seems small. But although Britain has no role, so far, in the US-EU dialogue, there is an opportunity here. Holding a neutral position between the US and the EU could turn out to be one of Britain’s strengths, and given the EU’s predilection for overregulation, there is an urgent need for a voice that calls for enterprise at the same time as urging caution.


The government has already signalled its own intentions, publishing in March a paper proposing a ‘pro-innovation approach to AI regulation’. AI is already a promising UK industry, it says: we must be careful not to stifle it, and we should keep in mind the lesson of GM foods.

A quarter of a century ago Britain was at the leading edge of GM foods. Then public anxiety led to regulation and our embryonic GM industry wilted. This did not, however, stop the growth of the industry abroad. Indeed, we eat GM foods every day despite all our old fears. There is no more talk of ‘Frankenstein foods’.

Eric Schmidt, the former CEO of Google, has chaired a congressional AI commission and pointed out the problem with the debate about regulation: the need is for a regulatory body ‘with an awful lot of rules that we don’t know how to write right now’. It would be better, he has said, to focus on clear threats such as the penetration of national security infrastructure. The use of AI with advanced capabilities – to create imitations such as deepfakes, say, and other systems that could be used to manipulate behaviour – should be regulated, as much scientific work is now. It is of course urgent that protections are put in place to help safeguard the public from hacking and malware attacks.

Regulation always tends to lag far behind innovation. It is also normal for regulators to be captured by the industries they oversee, which may explain why AI chiefs are leading the call for regulation now: they would like to call the regulatory shots. It is hard not to wonder whether they are motivated by a fear of being overtaken by small upstarts, in the same way that they themselves once took over computing. Overbearing regulation, after all, often has its roots in large businesses trying to make life harder for their smaller competitors, a point once made by Tony Blair’s deregulation ‘czar’ Lord Haskins.

There are genuine reasons to worry about AI and to want global-scale regulation. What, for example, should we do about the possibility of AI-controlled weapon systems that are trusted to select their own targets and could decide to wipe out civilians? Do we ban AI from making decisions in frontline warfare altogether, or do we agree on a protocol specifying exactly where and how it can be used?

These are the sorts of issues which Sunak presumably has in mind when he proposes an international body for AI, akin to the nuclear-supervising International Atomic Energy Agency. OpenAI has made similar proposals. But it is also worth bearing in mind that the IAEA’s attempts to stop nuclear proliferation have failed: Pakistan and North Korea have tested their bombs, and Iran’s success at enriching uranium is a continuing worry.

The development of super-intelligent AI requires far fewer resources than the development of nuclear weapons. It does not demand vast capital, and the technology has a myriad of commercial uses, which is why the sector is dominated by small start-ups.

This is the environment which Britain and the world must be careful not to destroy through overregulation. The UK government’s input on how to achieve regulation which encourages rather than extinguishes enterprise will be vital. But the Prime Minister will have his work cut out in trying to persuade Biden and the EU that Britain should be allowed to take the lead in this. If panic over AI does end up with innovation-crushing rules in America or Europe, it may fall to Britain to urge a more sensible path.
