UK and US Sign First-Of-Its-Kind Agreement On AI Safety

The UK and US governments signed a memorandum of understanding this week to collaborate on AI safety and research


Published: April 3, 2024

Kieran Devlin

The UK and US governments have formally agreed to collaborate on AI safety and research.

Signed by US Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, the partnership entails both nations aligning their scientific approaches to accelerate and rapidly iterate robust suites of evaluations for AI models, systems, and agents.

The memorandum of understanding, signed on Monday, April 1, is a landmark declaration that builds upon the foundations of the UK AI Summit held in November.

“AI is the defining technology of our generation,” said Raimondo. “This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society. Our partnership makes clear that we aren’t running away from these concerns – we’re running at them.”

“Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance. By working together, we are furthering the long-lasting special relationship between the U.S. and UK and laying the groundwork to ensure that we’re keeping AI safe both now and in the future.”

The US and UK AI Safety Institutes have outlined plans to establish a unified approach to AI safety testing and to exchange capabilities to address these risks effectively. Their initiatives include conducting joint testing exercises on publicly accessible models and exploring personnel exchanges to leverage a collective pool of expertise.

The partnership will begin immediately, enabling both organisations to collaborate seamlessly. Recognising the rapid advancement of AI, both governments acknowledge the imperative to proactively address emerging risks through a shared approach to AI safety. Additionally, as they enhance their collaboration on AI safety, they pledge to establish similar partnerships with other nations to advance AI safety worldwide.

“This agreement represents a landmark moment, as the UK and the United States deepen our enduring special relationship to address the defining technology challenge of our generation,” added Donelan. “We have always been clear that ensuring the safe development of AI is a shared global issue. Only by working together can we address the technology’s risks head-on and harness its enormous potential to help us all live easier and healthier lives.”

What Happened At Last Year’s AI Summit?

At the end of October, British Prime Minister Rishi Sunak announced the UK’s own AI Safety Institute ahead of the UK AI Summit, where figures including US Vice-President Kamala Harris, X owner Elon Musk, Google DeepMind’s Demis Hassabis, and OpenAI CEO Sam Altman gathered at Bletchley Park to discuss AI and its potential impact on our futures.

The mission statements of the US and UK AI Safety Institutes outlined their aims to evaluate both open and closed-source AI systems.

In a statement named the “Bletchley Declaration,” signatories, including the US, the UK, the EU, China, and dozens of other nations, stated they would seek to develop:

“Respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.”

How Will This Potentially Impact UC and Collaboration?

While there aren’t concrete specifics yet on how the agreement will function in practice, there are several reasonable inferences we can draw about how it will impact the UC and collaboration space. While there will be inevitable concerns about regulation impeding innovation, this will likely be a positive step for the industry.

The introduction of stricter testing requirements for AI products in UC&C implies that businesses will need to undergo government-approved testing to safeguard against unlawful data collection. This may demand investments in secure data storage solutions and improved encryption methods to ensure user privacy. Providers will also likely need to obtain clear user consent for data collection and AI usage.

Government-mandated security assessments could lead to the development of more secure AI applications, enhancing the protection of user data and communications. Transparency around AI algorithms’ decision-making processes will also be central to the US and UK AI Safety Institutes’ principles.

These stricter guidelines could drive the establishment of industry standards for AI in UC&C, promoting interoperability across different platforms. This focus on standardisation may encourage greater consistency in the user experience across various UC and collaboration services, further enhancing the sector’s development.

Going further, the unified approach to AI safety testing could attract attention and participation from other countries and international organisations, including the EU. This could lead to the formation of global partnerships and collaborations within the UC&C space, further advancing the development of communication and collaboration technologies with a focus on AI safety.

Lastly, collaboration between US and UK Institutes could promote discussions around the ethical implications of AI technologies within the UC&C space. This could lead to the development of guidelines or principles for responsible communication and collaboration in the context of AI, ensuring that these technologies are used ethically and responsibly.

The US and UK Aren’t Alone in Pursuing AI Collaboration and Regulation

While the UK and US’s bilateral agreement on AI safety is pioneering on its own terms, the two governments aren’t alone in tangibly pursuing greater AI collaboration and regulation.

In December, the European Union (EU) reached a provisional agreement on a groundbreaking series of rules to govern AI in Europe.

The EU’s Artificial Intelligence Act (AI Act) is the world’s first comprehensive set of rules to govern AI, and it may signal future guidelines for other governments and bodies considering AI regulatory laws, including the US and UK once they establish more concrete rules and regulations.

The agreement mandates that AI systems must adhere to specific regulatory standards, such as incident reporting, risk assessments, and adversarial testing. It also requires transparency in AI systems, compelling the production of technical documents and summaries outlining the utilisation of user-generated content for AI training.

Additionally, EU citizens will have the legal entitlement to lodge complaints about AI systems and receive explanations regarding “high-risk” decisions made by AI companies with their systems.
