
UK's Context-Based AI Regulation Framework: The Government's Response


White & Case Tech Newsflash

In March 2023, the UK government published the AI Regulation White Paper (the White Paper), setting out its proposed regulatory framework for AI.1

Unlike the EU's AI Act, which will create new compliance obligations for a range of AI actors (such as providers, importers, distributors and deployers),2 the UK government is developing a principles-based framework for existing regulators to interpret and apply within their sector-specific domains.

Following the White Paper's publication, the government ran a consultation, receiving more than 400 written responses and engaging with more than 300 roundtable and workshop participants. Separately, the government also hosted the first international AI Safety Summit at Bletchley Park in November 2023.3

After considering this feedback, the government published its response to the consultation on 6 February 2024 (the "Response").4 We discuss the key points from the Response below.

Regulatory framework

The government confirmed its commitment to move ahead with a "proportionate, context-based approach" to regulating AI, noting that this approach is both "pro-innovation" and "pro-safety". 

The government emphasised that it received strong support for its five cross-sectoral principles, which regulators are to interpret and apply within their existing sector-specific domains:5

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The Response acknowledged calls to expand these principles to explicitly include human rights, operational resilience, data quality, international alignment, systemic risks and wider societal impacts, sustainability, and education and literacy. The government responded that these considerations are largely already enshrined in existing law.

The Response also acknowledged that the majority of consultation respondents disagreed that implementing the principles through existing legal frameworks would fairly and effectively allocate legal responsibility for AI across the life cycle. The government stated that it will continue to iterate the AI regulation framework and will consider introducing measures to allocate accountability effectively and distribute legal responsibility fairly to those actors in the AI life cycle best able to mitigate AI-related risks.

The government further noted that several regulators have started taking action in line with the proposed regulatory approach. For example, the Competition and Markets Authority (CMA) has published a review of foundation models, and the Information Commissioner's Office (ICO) has issued guidance on data protection and AI. Additional guidance is expected in the coming months, as the government has asked a number of UK regulators to publish updates on their strategic approach to AI by 30 April 2024. The government will continue to engage with regulators and to learn from their existing work on AI regulation and life cycle accountability.

In line with the White Paper, the Response also notes that regulators are not yet under a statutory duty to have due regard to the cross-sectoral principles noted above. The government will continue to monitor whether such a duty is necessary.

Future responsibilities on the developers of highly capable general-purpose AI systems

Although the Response does not introduce or propose any new laws or regulations, it notes that the government expects to introduce legislation if the exponential growth of AI capabilities continues and the industry's voluntary measures6 prove incommensurate with the risks involved.

To develop a proportionate regulatory approach that effectively addresses the risks posed by the most powerful AI systems, the government currently distinguishes between:

  1. Highly capable general-purpose AI – Foundation models7 that can perform a wide variety of tasks and match or exceed the capabilities present in today's most advanced models. 
  2. Highly capable narrow AI – Foundation models that can perform a narrow set of tasks, normally within a specific field such as biology, with capabilities that match or exceed those present in today's most advanced models.
  3. Agentic AI or AI agents – An emerging subset of AI technologies that can competently complete tasks over long timeframes and with multiple steps.

The Response notes that highly capable general-purpose AI poses a particularly tough challenge to the UK's context-based regulatory approach. Accordingly, the government aims to publish an update on its work on new responsibilities for developers of highly capable general-purpose AI later this year.

AI governance landscape and the new central AI risk function

As proposed in the White Paper, the government has set up a new central function to monitor and assess AI risks across the economy, support regulator coordination and address potential regulatory gaps. This will be supported by a new steering committee, including regulator representatives. The government will also conduct targeted consultations on its AI risk register and will continue to assess the regulatory framework.

In addition (and among other efforts), the government and the Digital Regulation Cooperation Forum have launched a pilot AI and Digital Hub. The hub brings together four key regulators of AI and digital technologies: the CMA, the ICO, Ofcom and the Financial Conduct Authority.

Abandonment of work on the code of practice on copyright and AI

While the Response was mostly forward-looking when discussing various risks, the government provided an important update on copyright and AI.

The UK Intellectual Property Office (IPO) had convened a working group of copyright holders and AI developers, with the aim of producing a new code of practice on copyright and AI that would strike the right balance between the two groups' interests.

However, the Response confirms that the IPO working group could not agree on an effective voluntary code of practice, and that the government will now take the lead on this matter. As a result, issues relating to the use of copyrighted materials in AI models remain unresolved for the time being.

Next steps

The Response lists the UK government's 2024 roadmap of next steps, which include continuing to develop a domestic policy position on AI regulation, developing a plan to monitor and evaluate the efficacy of its regime as AI technologies change, promoting AI opportunities while also tackling AI risks, and supporting international collaboration on AI governance.

Accordingly, we expect the UK government, Parliament, and various UK regulators to publish frequent updates on AI issues this year.

1 A pro-innovation approach to AI regulation
2 See The pre-final text of the EU’s AI Act leaked online
3 See Guardians of the AI Galaxy: Lessons from Bletchley Park
4 A pro-innovation approach to AI regulation: government response
5 For more information about these principles, please see White Paper, Section 3.2.3 – A principles-based approach; and Annex A – Implementation of the principles by regulators
6 The Response notes, for example, that following the voluntary commitments brokered by the White House, the UK government wrote to seven frontier AI companies prior to the AI Safety Summit requesting that they publish their safety policies, and all of them published their policies.
7 The government has defined "foundation models" as a type of machine learning model trained on very large amounts of data that can be adapted to a wide range of tasks. However, the Response clarifies that highly capable AI systems could be underpinned by other technologies as well.

White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.

This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.

© 2024 White & Case LLP
