Europe’s grand AI legislative experiment was always going to be a high-risk, high-reward strategy. But few people imagined it would end up as a massive legal self-own. The EU can rightly be proud of its rights-first legislative approach via the GDPR, the General Data Protection Regulation that took effect in 2018 – even if you hate the cookie banners it caused.

The GDPR is perhaps the most consequential bit of tech law in history, spawning copycat rules around the world, including California’s own Consumer Privacy Act (CCPA).

But Brussels has carved holes in that groundbreaking law to make its new AI Act workable for industry. The Commission’s simplification package, unveiled just a few weeks back, eased AI compliance for big tech companies in Europe and pushed back the deadlines by which they must meet European demands. It’s a massive pivot, prompted by fears of losing the race to the United States and China, and by relentless lobbying from big tech firms, which are spending more than ever in Europe, according to Corporate Europe Observatory data.

The simplification package shored up what has long been a politically ill-fitting AI Act, but it also undercuts the one major contribution the EU has made to global tech regulation and demolishes its claim to upholding strong privacy rules.

The move to create a legal basis for training AI models on people’s data runs counter to nearly a decade of European data-protection practice, an area where Europe could rightly claim to lead the world. What we’re now likely to see is the normalised scraping and processing of personal data – often from public sources – to improve models, shifting control away from individuals and into the hands of AI firms. Privacy advocates say this means the EU is dismantling the GDPR to save the AI Act’s promise of European AI.

How did we get here? Sam Altman’s global roadshow of capitals, including in Europe in recent years, has made existential risk and competitive peril two main considerations when drafting AI law. European lawmakers appear to have recognised that Europe has borne down on AI too hard, to the extent that it could strangle its own startups while American and Chinese rivals race ahead and ignore Europe as a market for their latest AI tools.

Meta has publicly balked at rolling out some advanced models in the EU, citing legal uncertainty, while OpenAI has also limited the release of some tools, including early versions of the video app Sora. Rather than resolving the issue with guidance and targeted case law, the Commission seems ready to lower the bar instead. “If these changes are implemented, Draghi will have handed giant U.S. and Chinese firms a great gift, and done Europe a great disservice,” writes Johnny Ryan of the Irish Council for Civil Liberties.

Yes, the AI Act was too restrictive for startups in Europe. But flinging open Europe’s data privacy law to rewrites isn’t the fix. If the bloc guts meaningful consent for data usage within the GDPR and narrows the definition of sensitive data to aid AI training, it’s simply importing the worst of others’ rulebooks while destroying its own.

Europe is right to revisit its AI rules, which were hastily rewritten to try to encompass the ChatGPT moment. And its startups need support. But it’s wrong to pretend that loosening the GDPR is a harmless housekeeping exercise.

If Europe wants competitive AI on European terms, it has to prove that rights-preserving innovation is not an oxymoron. Rather than trying to salvage the AI Act by sacrificing the one bit of tech legislation that actually does what it was intended to do, and is begrudgingly admired even by its haters, it ought to do the harder, duller work of fixing the AI Act itself with targeted tweaks and credible enforcement.

There’s no need to sacrifice two bits of law to try to save one.
