World
How the world is grappling with AI: acts, pacts and declarations
This week is set to be an important one for AI, with an international summit in South Korea and the sign-off of the AI Act in Brussels.
The AI Safety Summit hosted by South Korea (21 May) and attended by governments, companies and civil society is likely to result in more voluntary commitments, following an inaugural summit hosted by the UK last year. This initiative – with France, the UK and South Korea as key movers – is one of a growing swathe of AI rules, pacts, laws and agreements proliferating globally. Here’s a look at the key developments to watch from the perspective of European businesses and policymakers.
1. AI Safety Summit
The summit, held in Seoul and co-hosted by the UK, opens today and will build on the legacy of the first edition in November last year to “advance global discussions on AI”. Talks will focus on AI safety and on addressing the potential capabilities of the most advanced AI models.
Among last year’s attendees, besides government leaders, were Elon Musk, CEO of Tesla and owner of Twitter, as well as Sam Altman, CEO of OpenAI, and Nick Clegg, president of global affairs at Meta. This edition is expected to attract fewer governments: 19, compared with 28 last time.
Researchers praised the progress made at Bletchley Park in 2023, as well as the establishment of an AI Safety Institute in the UK to test new AI models and address their potentially harmful uses. Any outcomes will be taken forward by France, which will host the next Safety Summit later this year.
2. AI Act
The EU’s AI Act, the world’s first risk-based legislation on artificial intelligence, will also be signed off by EU ministers tomorrow (21 May), meaning that the rules start applying in June. A big difference from all the other initiatives is that the AI Act is actual law. Companies can therefore be held accountable for breaches, and ultimately face fines.
Under the Act – put forward by the European Commission in 2021 – AI systems will be divided into four main categories according to the potential risk they pose to society. The general-purpose AI rules will apply one year after entry into force, in mid-2025, while obligations for high-risk systems will only start to kick in after three years. Enforcement will fall to national authorities, supported by the AI Office inside the European Commission.
3. AI Pact
In a bid to help companies get ready for the AI Act, the Commission came up with the AI Pact. It aims to help so-called front-runners test and share their AI solutions with other companies, in anticipation of the upcoming regulatory framework.
Lucilla Sioli, director of AI at the Commission, told the European Business Summit last week (15 May) that the Pact is not intended as a means of compliance enforcement by the EU executive, but more as a sandbox where businesses can see if the rules are fit for purpose. “More than 400 companies signed up. We organise a monthly workshop, which we will continue, for companies to be able to prepare well,” Sioli said.
4. OECD
Moving away from the EU: the Organisation for Economic Co-operation and Development (OECD) first published its AI principles in 2019. These have become a global reference point for AI policymaking: the EU, the Council of Europe, the US and the UN all use the OECD’s definition of an AI system and its lifecycle in their legislative and regulatory frameworks.
An updated version was adopted earlier this month (3 May) to take into account recent developments in AI, such as the emergence of general-purpose and generative AI tools, including programs like ChatGPT. The principles now also address AI-related challenges around privacy, intellectual property and information integrity.
Audrey Plonk, head of the OECD’s Digital Economy Policy Division, who spoke at the same European Business Summit as Sioli, said that while the EU has shown leadership by regulating first, “all democracies around the world will ultimately have an AI law”, and there is a lot of similarity in their objectives. The OECD has 38 members, including EU countries as well as Canada, Japan, Australia, Norway, the UK and the US.
5. Council of Europe
The Council of Europe (CoE) – an international organisation that promotes and protects human rights and democracy – comprises 46 countries, including all EU member states plus countries like Albania and Turkey.
Last week (16 May), the CoE adopted a treaty that covers the entire lifecycle of AI systems and addresses the risks they may pose, while promoting responsible innovation. It aims to ensure that human rights and the rule of law are upheld in situations where AI systems assist or replace human decision-making.
There is one caveat with these international rules: each country can decide whether or not to sign the convention.
6. G7
The smaller G7 group of countries – Italy, Canada, France, Germany, Japan, the UK and the US – will meet in Italy next month to discuss AI. The meeting will include a special visit from Pope Francis, who has called for the development of ethical AI.
Last year, Japan launched the so-called Hiroshima Process under its G7 presidency, with the aim of promoting safe, secure and trustworthy AI. Its 11 guiding principles and voluntary Code of Conduct aim to complement, at the international level, the legally binding EU AI Act.
Ursula von der Leyen, President of the European Commission, who attended the meeting, said that with this, the EU also contributes “to AI guardrails and governance at global level.”
7. United Nations
A more symbolic approach was taken by the United Nations (UN), which adopted a US-led draft resolution in March highlighting the respect, protection and promotion of human rights in the design, development and use of AI. The text was backed by more than 120 of the 193 member states.