CCI Mooseworks: Hot AI Summer Pt. 2: The EU and China
It’s August — the birds are chirping, the sun is shining, the beer is cold, the water’s warm. And folks, the AI summer? It’s hot. In July, we looked at the way the United Kingdom and the United States are approaching artificial intelligence regulation. Next month, we’ll be taking a deep dive into Canada’s approach.
For now, though, let’s take a look at Europe and China.
The EU: Regulation With A Twist
The European Union has not shied away from legislating, pressing ahead with an ambitious Digital Strategy. The General Data Protection Regulation is often hailed as the global gold standard for privacy protection. Compared with the United States, which has ventured into tech regulation only piecemeal and somewhat haphazardly, the difference is night and day.
So, perhaps unsurprisingly, the EU has moved early and quickly to regulate AI. The Artificial Intelligence Act began to wind its way through the tortuous European legislative process in 2021 and passed a crucial second reading vote in June of this year. This was strikingly fast: for context, in the 2014–2019 legislative term, bills took an average of 40 months to pass. While the legislative process is not quite finished, the law has probably cleared the tallest hurdle on its path to taking effect in 2026.
With that context out of the way, let’s dive into the details.
The law orients itself around four tiers for AI systems based on their risk to human safety, livelihoods and rights: unacceptable, high, limited and minimal. (Minimal-risk systems are essentially not regulated at all under the new framework, and only have to adhere to existing privacy and consumer protection rules.)
The first category covers Orwellian government uses like 'social scoring' and constant facial recognition tracking in public; such uses are simply banned, along with private-sector systems that manipulate people into harming themselves or others.
The meat of the new law deals with applications and systems deemed to be high risk. This is a broad category: it includes any system used in products covered by EU product safety laws, plus use cases prescribed in an annex to the law that the European Commission (the executive arm of the EU) can expand. The annex currently covers systems that make or inform decisions with direct consequences for people and the potential to unfairly discriminate or cause harm, such as screening job applications, deciding admission to educational institutions, biometric identification and law enforcement applications.
High-risk systems, per the law, must have their own risk management systems to identify, evaluate and mitigate these impacts, as well as technical documentation that demonstrates compliance. In the name of transparency, high-risk systems must also maintain logs of decisions and provide users with basic information about their intended use and risks. The EU law also mandates human oversight and adequate cybersecurity, and providers of high-risk AI services must notify national governments before making them available.
At the other end of the spectrum, AI systems deemed to be of limited risk (by their exclusion from the higher-risk categories) will not be as meticulously governed. Instead, they will simply have to notify users that they are interacting with an AI system (chatbots, for example). The same applies to AI systems that generate images or other content portraying real people in such a way that viewers would assume it was real. Think Pope Francis in a puffer jacket: if the EU AI Act had been in effect when that AI-powered papal fit went viral, the image would have needed a prominent 'NOT REAL' notice alongside it. That might seem frivolous or silly, but the same rule applies to more nefarious 'deepfakes.'
The Act creates a limited carveout to promote innovation: regulatory sandboxes, supervised environments in which regulators and providers collaboratively develop and test novel goods and services (and collect evidence for new regulation), with the aim of getting innovations to market quickly and safely. The EU law gives start-ups and 'small-scale providers' priority access to this tool and directs member states to set fees for these programs in proportion to providers' scale. Finally, the Act creates a European AI Board, made up of national AI authorities and the European Data Protection Supervisor, to advise the Commission on AI issues and promote best practices.
The Act is comprehensive and intended to be technologically neutral (i.e. it governs types of use and risk rather than specific products and services). Ahead of its passage, executives from 150 European business giants publicly criticized the law in an open letter as a threat to Europe's competitiveness in AI applications, particularly generative AI, calling instead for legislation that would "[state] broad principles in a risk-based approach," with the details left to regulators who can iterate on them.
A weakness of the EU's tier-based approach to risk is that it leaves little room for the inherent slipperiness of a family of technologies and models whose fundamental value-add is the ability to continuously take on new information and adjust outputs accordingly. One way to deal with this is to provide for the recognition of standards, which the law does; this is a major strength compared to Canada's current draft AI law. The other way is amendment, and the EU has already begun amending the law: a successful first reading vote on a huge package of amendments was held on the same day as the second reading vote on the law itself. But as we saw above, this is not a nimble process.
We'll look in more detail at the inherent trade-offs between prescription, certainty and flexibility next month, when we take a closer look at our own AIDA, the Artificial Intelligence and Data Act.
A last note on the EU: while legislation is slow, agreements can be faster. In late May, the EU and US agreed to draft a (voluntary) code of conduct for AI that will be a starting point for international standards to be approved by G7 governments (including Canada). While this won’t have real teeth, it will be interesting to see how a code of conduct shapes other countries’ approaches and how effective it can be in setting actual standards.
China: Strategic Control
I’ll wrap up Hot AI Summer with a quick word on China, an enormous player in the AI world whose policies have not gotten as much attention. To some extent, fair enough. China isn’t a democracy, so there are not as many direct lessons applicable to our political and institutional context.
On the other hand, we ignore developments there at our peril. China is not only a titanically huge market; the country also holds the lion's share of global AI patents, with Chinese inventors and companies filing 389,571 patents over the last decade, 74.7% of the global total. At the same time, US efforts to lock China out of advanced semiconductor capacity could complicate its ability to capitalize on that lead, and the need for compute will only grow more acute as AI technology matures and uses proliferate.
Without the space to go into too much detail here, I recommend this piece from an expert on AI governance in China. There are interesting points of commonality (labelling of AI image outputs and registration of AI systems, for instance) but also points of deep distinction and practical difficulty, such as the requirement that both the training data and the outputs of generative AI products be "true and accurate."
Enjoy the rest of your summer, and we’ll see you back in September for a closer look at Canada’s AIDA.
CCI Mooseworks is the Canadian innovation and economic policy newsletter of the Council of Canadian Innovators — a national business council dedicated exclusively to high-growth scale-up technology companies.
You can read a few of our recent posts here:
- Hot AI Summer Pt. 1: The US and the UK
- Canada could be a cybersecurity powerhouse with this one little trick
- The Three Deadly Innovation Traps of FDI
- A Fascinating View Into a Tech Tax Credit
If you want to receive future updates from CCI straight to your inbox, subscribe here.