What Europe’s AI regulation moment will mean for the world
Melbourne (Australia), Jul 31 (360info) The European Union’s AI regulation has some predicting a spate of Brussels copycats. Close, but not quite.
“It is the AI moment.”
So went the declaration from International Telecommunication Union Secretary-General Doreen Bogdan-Martin at the conclusion of a UN summit in Geneva on 7 July 2023.
At a historic UN Security Council meeting 11 days later, Secretary-General António Guterres agreed. So did nations and regulators.
A desire has emerged from powerful quarters to protect citizens from the potential harms of AI: issues that are known (discrimination, privacy violations, copyright theft) and those which are not. Yet.
Most nations have approached such issues by allowing individual sectors to regulate AI themselves, as in aircraft design and flight safety. The infamous Boeing 737 MAX, grounded for over 18 months after two crashes within five months killed 346 people, is one egregious example of regulatory failure.
Other fields that have proactively regulated AI include medicine (covering robot surgery and scan analysis), automated vehicles (the yet-to-materialise Tesla robot taxis and ‘Full Self Drive’ [sic]) and the policing of social media networks to protect against harms like disinformation.
Some countries, such as the US, Japan and the UK, don’t see the need for regulation to go beyond a combination of adaptive sectoral regulation and potential international agreements addressing more speculative risks, as discussed in the so-called G7 Hiroshima Process.
Others want to go further.
More can be done: generic laws could regulate AI across broader society. China has already published its law governing AI as part of its social control measures, which include internet filtering through the ‘Great Firewall of China’ and a social credit scoring system.
China intends to strictly control the use of AI, much as it has with social media, having banned Facebook, Google and TikTok from operating inside its borders (even though the latter has a Chinese parent company).
Liberal democracies will not adopt the Chinese approach but may go further than the US, UK and Japan. The largest consumer market, the European Economic Area, is planning to adopt the so-called ‘AI Act’, which is in fact a European Regulation on AI.
More than two years after it was first proposed, the Act is locked in negotiations within the EU, and it may take until April 2024 to pass. But it is not possible simply to lift the EU’s AI Act and implant it in a different jurisdiction: it is part of a series of laws embedded in European institutions, and such an Act would be lost in translation.
There’s a name for when EU law is adopted and adapted in other nations: the ‘Brussels Effect’, named after the city which hosts the EU’s headquarters.
It is most often invoked to describe the reaction to the EU’s General Data Protection Regulation (GDPR), which came into force in 2018 and has been widely hailed as setting a much-copied global standard for data protection.
But an unnuanced analysis of the Brussels Effect is problematic. Many countries did not adopt the GDPR, instead signing a separate instrument, Convention 108+, from the Council of Europe, a Strasbourg-based human rights organisation of 46 members which predates the EU.
In 2023, an interdisciplinary group of experts unanimously concluded that the Brussels Effect either wasn’t possible or, if it was, would be limited.
They found the AI Act would sit within the ‘digital acquis’, a large body of previously agreed laws with an interlocking web of powers, all of which would need reproducing to make sense of the additions the AI Act provides.
If a Brussels Effect in AI is unlikely or highly constrained, there is another model nations could adopt.
The Organisation for Economic Co-operation and Development (OECD) and the United Nations Educational, Scientific and Cultural Organization (UNESCO) have both agreed declarations of AI ethics principles, but these are non-binding.
That leaves Strasbourg’s Council of Europe.
Unlike EU laws, Conventions agreed by the Council of Europe do not take direct effect in national law. Nations beyond the Council’s 46 members can also sign onto Conventions through international agreement.
For instance, the Council’s Convention 108+ has 55 parties, including Canada and countries across Latin America and Africa.
The Council of Europe has brokered agreements beyond its membership for decades, notably the 2001 Budapest Convention on Cybercrime, which included not only Canada but also Japan and the US.
Convention 108+ is proof of what the University of Oslo’s Lee Bygrave has described as a ‘Strasbourg Effect’, an alternative to the Brussels phenomenon.
The Strasbourg Effect could shape the way forward on AI. The Council’s AI convention will likely be similar to the EU’s AI Act, but with key distinctions. It is being negotiated with the US, UK and Japan and is likely to adopt a more flexible approach, with more co-regulation involving industry and independent experts where appropriate.
As the Council of Europe is primarily a human rights organisation, it is also likely to pay more attention to the human rights implications of AI deployment.
The Convention also has the advantage of being drafted from mid-2023, whereas work on the EU’s Act began in 2021. This means it can better address the foundation models and large language models that emerged in late 2022 and early 2023, such as ChatGPT, Bard and others.
In 2024, as the EU’s AI Act and the Council’s AI Convention are finalised, other liberal democracies, such as Australia, the UK, Brazil, Japan and the US, are expected to adopt and adapt these laws.
When the rush starts, there is more likely to be a Strasbourg Effect of nations copying the Convention than any Brussels Effect.
The AI regulation ‘moment’ that Bogdan-Martin heralded in July will last for years and be an exercise in international legal coordination. It had best be comprehensive and careful, to ensure the power of AI is deployed for the good of humanity.
Chris Marsden is Professor of Artificial Intelligence, Technology and the Law at Monash University and an expert on Internet and digital technology law, having researched and taught in the field since 1995.
George Christou is Professor of European Politics and Security at the University of Warwick, UK. He has published widely on European Politics and Security and he is the Editor for Palgrave’s New Security Challenges Series.