UK officials from the Department for Science, Innovation and Technology are drafting legislation to regulate AI models, but it is not yet known how long it will take for the UK government to pass it into law.
Business news provider Bloomberg reported on the development, along with the news that the UK’s copyright rules may be changed to make it easier for rights holders to opt their work out of AI training datasets.
Following the first global AI Safety Summit at Bletchley Park in November last year, the UK set up the AI Safety Institute, which has so far focused on evaluating AI models for safety.
Some technology companies have requested further information on what would happen if they failed to meet the institute’s standards. Without regulations in place, the institute’s powers are evidently limited.
Shortly before the AI Safety Summit took place, UK Prime Minister Rishi Sunak spoke about the major risks associated with AI. AP News quoted Sunak as saying:
“[AI developers who] don’t always fully understand what their models could become capable of [should not be] marking their own homework.”
“Only governments can properly assess the risks to national security. And only nation-states have the power and legitimacy to keep their people safe.”
At the same time, however, Sunak warned against introducing AI regulations too quickly, before the risks of the technology are fully understood. While addressing AI’s most serious dangers, including any potential threat to humanity itself, is a global priority, the technology could also deliver numerous positive transformations.
Technology companies central to AI innovation, including Meta, Google DeepMind, and OpenAI, also said they would allow regulators to test their products before they are released to market. While this may slow the progress of AI development, it will also reduce the possibility of hazardous technology being released.
Without proper regulations, AI tech companies are more or less free to shoot first and ask questions later.
The EU has already reached a provisional agreement for its own AI regulatory framework, which will allow it to fine companies that violate its safety standards.
The question is, has the EU been too quick to act, or is the UK’s decision to go slow a sign of complacency?
UK and US Sign AI Safety Agreement
One proactive step the UK has taken regarding AI safety is an agreement to collaborate with the US on safety research for AI models, including a commitment to conduct at least one joint safety test.
The partnership, signed by US Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, involves both nations aligning their scientific approaches to accelerate the development of robust evaluations for AI models, systems, and agents.
The partnership takes effect immediately, enabling the two countries’ safety bodies to collaborate seamlessly. Recognising the rapid advancement of AI, both governments acknowledge the imperative to proactively address emerging risks through a shared approach to AI safety.
Additionally, as they enhance their collaboration on AI safety, they pledge to establish similar partnerships with other nations to advance AI safety worldwide.