This article was originally published in The Sunday Times, 4/6/2023: https://www.thetimes.co.uk/article/on-election-day-i-fear-well-see-how-…
In 1953, Dwight Eisenhower stepped up to the podium at the UN general assembly. He told the world: “I feel impelled to speak today in a language . . . I would have preferred never to use. That new language is the language of atomic warfare.”
This general-turned-president could see that the transformational technology of atomic energy brought immense opportunities but also posed a clear danger. His speech that day, Atoms for Peace, was a wake-up call to build the institutions required to maintain peace and stability — and it kick-started the creation of the International Atomic Energy Agency (IAEA).
Seventy years later we have arrived at another critical international moment. Artificial intelligence (AI) may or may not pose the same level of existential threat but in recent months the world has woken up to its revolutionary power. Rishi Sunak, the prime minister, gets this and he will make a timely visit to the US this week with AI high on the agenda.
With the convergence of big data, cloud computing, the internet of things and AI, the unprecedented pace of technological development seen over the past three decades is about to go into hyperdrive. AI will enable tremendous positive change, including on the most pressing challenges such as healthcare, economic productivity and climate change.
But the heart of the debate about AI is its “dual use” potential to enable dangerous and disruptive capabilities. These are particularly troubling when their evolution can be hard to forecast, even by their creators. In the wrong hands, future AI will be able to conduct offensive cyber operations to enable mass fraud; manipulate and generate content to undermine trust and democracy; and imitate deep and personal relationships to facilitate abuse. As a former home secretary, I can also easily see the capabilities AI brings to serious crime and hostile state activity.
I am especially worried that the UK and US elections next year will become testing grounds for some of the most harmful capabilities. The Electoral Commission, which is only now getting to grips with social media, is clearly not ready for AI.

When any new technology is created, a new class of responsibilities is required. The gold rush to develop AI has so far neglected this, and the moment to put guardrails in place during the first phase has passed. While some form of domestic regulation will be needed, the major challenge of AI comes from its ability to cut across borders. That puts front and centre critical questions about responsibility, accountability and protections for society at an international level. At the moment there are precious few signs of the co-ordination and action we desperately need.
This will be the central challenge for policymakers in the years ahead: how to turn words of caution into meaningful policy while balancing the need to champion innovation. The speed of advancements will make a comprehensive system difficult to achieve but with enough leadership we can establish an IAEA-esque framework of principles, responsibilities and communication. Achieving this is absolutely critical before the next phase of “superintelligence” capabilities starts to roll out.
This initial work should focus on three goals. First, a system of accountability for AI developers. This would place greater expectations on them to support measures such as robust risk evaluations of AI models, open-data sharing and increased resources dedicated to AI safety.
Second, an ethical framework to protect the interests of society. This should be based on making AI deployment safe and transparent, including efforts to make all AI-generated content marked and identifiable.
Third, incentives to promote development towards shared global objectives. Bringing together a global fund to encourage capital investment in humanitarian goals should be a new focus for our overseas aid budget.
The nature of these measures will inevitably evolve but what is essential right now is to establish a framework to start the right conversations and identify realistic solutions. Britain has a long, proud history of building and supporting institutions to strengthen the rules-based international order. There are competing approaches on how, or whether, to regulate AI in the US, China and the EU. This needs urgent convening, and the UK, while also harnessing our strong relationships in other emerging regions, is well placed to help get them to the negotiating table. It should be one of our highest foreign policy priorities.
We can and must be a central part of protecting some degree of order as the AI revolution accelerates. We must start to lay the foundations on which the institutions of the future will be built. If we learn to speak a new shared language with shared goals, we can help to create an “AI for peace” international moment. If we don’t, the AI revolution will happen to us, not with us — and that rarely ends well.
Sajid Javid MP is a former chancellor and home secretary, and a former adviser to the US artificial intelligence firm C3 AI