Artificial intelligence (AI) is reshaping industries at an unprecedented pace, bringing both opportunities and regulatory challenges. With the EU AI Act introducing a risk-based framework and the UK government taking a more flexible, sector-led approach, businesses must carefully assess their obligations.
This article explores the evolving AI regulatory landscape, clarifying the key legal considerations for UK businesses, the importance of effective AI assurance mechanisms and the growing role of AI-specific contractual clauses.
Broad reach of EU AI Act
On 2 February 2025, the first provisions of the EU AI Act came into effect. The regulation classifies AI systems into four risk categories:
- Unacceptable risk – AI systems considered a threat to people, such as social scoring or the untargeted scraping of facial images to build recognition databases, are banned.
- High risk – AI systems that could negatively affect safety or fundamental rights, including those used in recruitment and employee monitoring, are subject to strict regulations.
- Limited risk – AI systems that raise transparency concerns are subject to disclosure obligations: providers must ensure that AI-generated content is clearly identifiable and that the technical solutions used to mark it are effective, interoperable, robust and reliable.
- Minimal risk – AI systems classified as minimal risk are not subject to additional regulatory obligations.
Although the EU AI Act is an EU regulation, its impact extends beyond EU borders. UK businesses may still fall within its scope if they provide AI systems that are used within the EU or produce effects there. For example, a UK-based AI software provider with European customers may be required to comply, even if it has no physical presence in the EU.
With hefty fines for non-compliance – up to €35m or 7% of global annual turnover, whichever is higher – UK businesses must assess whether they are subject to these rules.
Regulation in the UK
Unlike the EU’s legislative approach, the UK government has opted for a more flexible, principles-based framework. Rather than a single AI act, the UK relies on existing laws and sector-specific guidance to regulate AI.
For example:
- The Information Commissioner’s Office has issued draft guidance on automated decision-making in recruitment and has been scrutinising AI-powered systems for data protection risks.
- The Department for Science, Innovation and Technology has introduced a Responsible AI Toolkit to help businesses ensure responsible AI use.
While this approach provides businesses with flexibility, it also creates uncertainty. UK organisations must navigate a patchwork of legal requirements rather than a single comprehensive AI law, which can make compliance more complex.
Assurance is no silver bullet
Given these regulatory complexities, businesses must ensure their AI systems are compliant and ethical. This is where AI assurance comes in.
AI assurance involves using various mechanisms to evaluate and verify AI systems. These can be broadly categorised as:
- Qualitative assessments – Used in high-uncertainty areas, such as assessing ethical concerns, fairness and societal impact.
- Quantitative assessments – Applied to measurable criteria, such as model performance, accuracy and compliance with legal standards.
No single technique will provide complete assurance – organisations must combine multiple methods throughout the AI lifecycle. For example, an AI system used in recruitment may require bias assessments, fairness audits and technical performance evaluations.
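To make the quantitative side of assurance concrete, the minimal sketch below shows one such check: comparing selection rates across candidate groups in a recruitment tool’s outputs. All group names and data are hypothetical, and the four-fifths (80%) threshold is a screening heuristic borrowed from US practice rather than a UK or EU legal test; a genuine fairness audit would be considerably more sophisticated.

```python
from collections import defaultdict

# Hypothetical outcomes from an AI shortlisting tool: (candidate group, shortlisted?).
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, shortlisted = defaultdict(int), defaultdict(int)
for group, was_shortlisted in outcomes:
    totals[group] += 1
    if was_shortlisted:
        shortlisted[group] += 1

# Selection rate per group, compared against the highest-scoring group.
rates = {g: shortlisted[g] / totals[g] for g in totals}
highest = max(rates.values())
for group in sorted(rates):
    ratio = rates[group] / highest
    # The 0.8 cut-off is the US 'four-fifths' screening heuristic, used here
    # purely as an illustration, not as a UK or EU legal standard.
    flag = "flag for review" if ratio < 0.8 else "within heuristic"
    print(f"{group}: selection rate {rates[group]:.0%}, ratio {ratio:.2f} ({flag})")
```

Even a crude screen like this illustrates the point above: a single metric can flag potential issues, but cannot on its own provide assurance.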
To enhance consistency, businesses should align AI assurance mechanisms with global technical standards, such as those developed by the International Organization for Standardization. These standards provide a structured framework for evaluating AI safety, robustness and compliance.
Contractual considerations
As AI adoption grows, organisations must also consider how AI-related risks are managed within commercial contracts. AI technologies create novel risks that must be reflected in contractual agreements to mitigate potential liabilities.
In addition to standard provisions found in service or technology agreements – such as warranties regarding fitness for purpose, satisfactory quality and assurances that upgrades will not negatively impact functionality – AI-specific contractual clauses are becoming increasingly important. These may include:
- AI training and data use – Contracts should specify how AI models are trained, what data sources are used and who holds ownership of training datasets.
- Ethical use and compliance – AI solutions should comply with ethical guidelines and legal requirements, including transparency obligations and human oversight mechanisms.
- Bias and non-discrimination – AI systems must be assessed for bias, and contracts should outline how fairness audits are conducted and what remediation steps will be taken if bias is detected.
- Explainability and transparency – Businesses deploying AI should require vendors to provide explanations for AI-driven decisions, particularly in high-risk applications.
- Liability and indemnity – Contracts should clarify responsibilities for AI-related failures, including errors, data breaches or harm caused by biased or incorrect outputs.
As AI becomes the subject of more transactional contracts, businesses must work closely with legal and compliance teams to ensure these agreements adequately address AI-specific risks.
Key takeaways for UK businesses
- Check if the EU AI Act applies to your business. Its broad scope means UK organisations could be affected, even if they operate outside the EU.
- Understand the UK’s regulatory landscape. While there is no AI-specific legislation, existing laws, such as data protection legislation, still apply to AI-related activities.
- Adopt a structured AI assurance approach. Businesses should combine qualitative and quantitative assessments, using global technical standards where applicable.
- Monitor regulatory developments. The UK’s approach may evolve, with increasing calls for sector-specific AI regulations.
- Incorporate AI-specific contractual clauses. Organisations must ensure their agreements reflect the unique risks posed by AI technologies, including liability allocation, ethical use and compliance with emerging regulations.
- Engage legal and compliance teams early. Lawyers play a crucial role in ensuring AI systems align with evolving legal and ethical expectations.
Conclusion
AI offers immense potential, but businesses cannot afford to take a reactive approach to regulation. By understanding their obligations, implementing robust AI assurance mechanisms and addressing AI-related risks within contractual agreements, UK organisations can navigate the complexities of AI compliance while fostering innovation.
Winona Chan is legal counsel at Aldermore Bank plc, London