EU lawmakers face a struggle to finalise landmark AI rules as the technology races ahead.

EU lawmakers are struggling to agree on new landmark AI laws as rapid technological advances complicate efforts, according to reports this week. 

The European Commission proposed draft rules nearly two years ago in a bid to protect citizens from the potential dangers of emerging AI technology, which has experienced a boom in investment and consumer popularity in recent months. 

However, as the industry continues to innovate and new systems are released, regulation has become a real challenge.

Lawmakers are working through more than 3,000 tabled amendments, covering everything from the creation of a new AI office to the scope of the new AI Act's rules. 

Legislators have sought to strike a balance between encouraging innovation and protecting citizens’ fundamental rights.

This has led to different AI tools being classified according to their perceived risk level: from minimal through to limited, high, and unacceptable. 

High-risk tools will not be banned, but will require companies to be highly transparent in their operations.

The discussions have raised concerns for companies - from small startups to Big Tech - about how the regulations might affect their business and whether they would be at a competitive disadvantage against rivals from other continents.

Behind the scenes, big tech companies, which have invested billions of dollars in the new technology, have lobbied hard to keep their innovations outside the ambit of the high-risk classification, which would mean more compliance, more costs and more accountability around their products.

Representatives from tech companies have pushed back against such classification, insisting their own in-house guidelines are robust enough to ensure the technology is deployed safely, and even suggesting the Act should have an opt-in clause under which firms can decide for themselves whether the regulations apply.

A recent survey by the industry body appliedAI showed that 51 percent of respondents expect a slowdown in AI development activities as a result of the AI Act.

Some Members of the European Parliament (MEPs) say the Act will be subject to regular reviews, allowing for updates as and when new issues with AI emerge. 

But, with European elections on the horizon in 2024, they are under pressure to deliver something substantial the first time around.

“Different AI tools are being classified according to their perceived risk level, but the pace at which new systems are being released makes regulation a real challenge,” says Daniel Leufer, a senior policy analyst at rights group Access Now. 

“It's a fast-moving target, but there are measures that remain relevant despite the speed of development: transparency, quality control, and measures to assert their fundamental rights.”

Daniel Ek, chief executive of audio streaming platform Spotify, said AI technology is “a double-edged sword” and his team is working actively with regulators to ensure it benefits as many people as possible and is as safe as possible. 
