The recent gathering of global leaders in Paris to discuss artificial intelligence has thrust AI into the spotlight once again. The February 2025 Paris Artificial Intelligence Action Summit sought to strengthen international action in favour of a more sustainable AI serving collective progress and the general interest.

Meanwhile, on 2 February 2025, the EU's more cautious regulatory approach was demonstrated as the first provisions of the EU AI Act came into force. These include an outright ban on AI systems posing an 'unacceptable risk', whether placed on the market before or after that date.

The purpose of the EU AI Act is to lay down a uniform legal framework for the development and use of AI systems, whilst ensuring a high level of protection of public interests including health, safety and fundamental rights.

The contrast between the EU and US approaches to AI is stark. Speaking at the summit in Paris, US vice president JD Vance said: 'We believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off.'

With America arguably leading the AI revolution, such a position could have long-term repercussions.

The EU AI Act came into force on 1 August 2024, but with different provisions scheduled to apply later. Most provisions will come into effect on 2 August 2026.

The coming into effect of the first stages of the EU AI Act will not only impact EU-based companies, but also foreign businesses. This is because Article 2 of the Act sets out its extraterritorial effect, where the output of an AI system is used within the EU.

It is also anticipated that the EU AI Act could become something of a global regulatory benchmark, much like GDPR. Some may therefore seek to harmonise with the EU AI Act as a matter of best practice and risk management.

The EU AI Act takes a risk-based approach tailored to the system's level of risk:

  • Unacceptable risk AI systems, that deploy manipulative, exploitative, social control or surveillance practices, are now banned.
  • High-risk AI systems, which create a significant risk of harm to natural persons' health, safety or fundamental rights, are regulated.
  • Limited risk AI systems, such as chatbots or deepfakes, are subject to transparency obligations.
  • Minimal risk AI systems, such as spam filters, are unregulated by the EU AI Act but remain subject to applicable regulations such as the GDPR.

The European Commission has published two sets of guidelines, one on the definition of an AI system and one on prohibited AI practices, to increase legal clarity and ensure uniform application of the EU AI Act.

Yet these broad guidelines are non-binding, and the interpretation of the EU AI Act is ultimately a matter for the Court of Justice of the European Union. Precisely how broadly the Act will be interpreted remains to be seen.

In practice, UK businesses trading into the EU, whether providers, deployers, importers, distributors, or representatives of AI systems, should ensure that they are not using AI systems that provide social scoring of natural persons or conduct untargeted scraping of facial images to populate facial recognition databases. These systems are now prohibited in the EU.

The penalties for non-compliance are substantial, with infringements relating to prohibited AI potentially attracting 7% of the offender’s global annual turnover or €35 million – whichever is greater.

There are exceptions to the AI Act, notably regarding AI systems developed for private use or for scientific research and development purposes, or systems released under free and open-source licences (except when they qualify as high-risk). In addition, the AI Act supports innovation by enabling the creation of AI regulatory sandboxes (controlled environments that foster innovation by facilitating the development, training and testing of innovative AI systems) and by providing a framework for the testing of high-risk AI systems in real-world conditions. Specific measures for small and medium-sized enterprises and start-ups were also adopted to help them enter the AI market and compete with established businesses.

Will this be enough to make the EU an attractive destination for AI research and startups, or will these various provisions stifle innovation? One could argue that such a legal framework for the development and use of AI systems in accordance with EU values and fundamental rights may help increase users' trust in AI, which would boost demand in this field. To drive innovation, French president Emmanuel Macron announced some €100 billion in AI-related investments in France during the recent summit in Paris. However, no incentive will relieve businesses of the burden of complying not only with the EU AI Act, but also with other applicable European regulations. For example, any AI system processing personal data will have to comply with the GDPR. In this respect, the French Data Protection Authority (CNIL) has already published recommendations for ensuring compliance with the GDPR.

It remains to be seen whether the EU will emerge as a true AI innovator, but the race to lead the AI revolution is widely regarded as a two-horse race between the US and China.


Counsel Mathilde Gérot and associate Inès Aramouni, Signature Litigation