Microsoft and Meta Expand AI Partnership

At Inspire, Meta and Microsoft announced support for the Llama 2 family of large language models (LLMs) on Azure and Windows

Published: July 18, 2023

Kieran Devlin

Microsoft and Meta have expanded their AI partnership by announcing support for the Llama 2 family of large language models (LLMs) on Azure and Windows.

Llama 2 intends to empower developers and organisations to build generative AI-powered tools and experiences. Meta and Microsoft have stated a shared commitment to democratising AI and its benefits, as illustrated by Meta’s open and commercial approach with Llama 2.

John Montgomery, Corporate Vice President of Azure AI at Microsoft, wrote in an accompanying blog post:

“Today’s announcement builds on our partnership to accelerate innovation in the era of AI and further extends Microsoft’s open model ecosystem and position as the world’s supercomputing platform for AI.”

Meta and Microsoft are long-term partners in AI development, beginning with a collaboration to integrate ONNX Runtime with PyTorch, which created a refined developer experience for PyTorch on Azure, and continuing with Meta’s choice of Azure as a strategic cloud provider.

Microsoft Azure customers can now customise and deploy the 7B, 13B, and 70B-parameter Llama 2 models more easily and securely on Azure, which Montgomery described as the platform for “the most widely adopted frontier and open models”.

Furthermore, Llama 2 will be optimised to run locally on Windows. Windows developers can use Llama 2 by targeting the DirectML execution provider through the ONNX Runtime, enabling a seamless workflow for integrating generative AI experiences into their apps.
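
As a rough illustration of that workflow, the sketch below loads an exported ONNX model and runs it through ONNX Runtime’s DirectML execution provider; the model path, input names and dummy inputs are placeholder assumptions, not details from the announcement.

```python
# Minimal sketch: running an exported ONNX model on Windows through the
# DirectML execution provider in ONNX Runtime (onnxruntime-directml package).
# "llama-2-7b.onnx" is a placeholder path; a real Llama 2 export would also
# need a tokenizer and past-key-value inputs, omitted here for brevity.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "llama-2-7b.onnx",                    # placeholder model path
    providers=["DmlExecutionProvider",    # DirectML (GPU) first...
               "CPUExecutionProvider"],   # ...with CPU as a fallback
)

# Feed dummy token IDs purely to show the call shape; the actual input names
# and shapes depend on how the model was exported.
input_name = session.get_inputs()[0].name
dummy_ids = np.array([[1, 2, 3, 4]], dtype=np.int64)
outputs = session.run(None, {input_name: dummy_ids})
print(outputs[0].shape)
```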

Llama 2 is the latest addition to Microsoft’s growing Azure AI model catalogue, which is currently in public preview. The catalogue serves as a hub of foundation models, allowing developers and machine learning professionals to discover, analyse, personalise and deploy pre-built large AI models.

“The catalogue eliminates the need for users to manage all infrastructure dependencies when operationalising Llama 2,” Montgomery wrote. “It provides turnkey support for model fine-tuning and evaluation, including powerful optimisation techniques such as DeepSpeed and ONNX Runtime, that can significantly enhance the speed of model fine-tuning.”
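
As a rough sketch of what discovering one of these models through the catalogue might look like with the Azure ML Python SDK (v2), the example below fetches a Llama 2 model from the public registry; the registry name, the model name and the follow-on deployment step are assumptions rather than details given in the announcement.

```python
# Minimal sketch: looking up a Llama 2 model in the Azure AI model catalogue
# with the Azure ML Python SDK v2. The registry name ("azureml-meta") and the
# model name are assumptions; check the catalogue for the exact identifiers.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Client scoped to the public registry assumed to host Meta's models.
registry = MLClient(credential=DefaultAzureCredential(),
                    registry_name="azureml-meta")

# Fetch the latest version of the (assumed) 7B model entry.
model = registry.models.get(name="Llama-2-7b", label="latest")
print(model.name, model.version)

# From here the model could be deployed to a managed online endpoint or used
# as the base model in a fine-tuning job, per the catalogue documentation.
```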

Windows developers can build new experiences using Llama 2, which is accessible through its GitHub repo. With Windows Subsystem for Linux and capable GPUs, developers can fine-tune LLMs to meet their specific requirements on their Windows PCs.
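
For a sense of what that local fine-tuning workflow might look like under WSL with a CUDA-capable GPU, here is a minimal sketch using the Hugging Face transformers and peft libraries with LoRA adapters; the model ID, the inline toy dataset and the hyperparameters are illustrative assumptions, and access to the Llama 2 weights must be requested separately.

```python
# Illustrative sketch: parameter-efficient (LoRA) fine-tuning of Llama 2 inside
# WSL on a CUDA-capable GPU. Model ID, dataset and hyperparameters are
# assumptions for illustration only.
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_id = "meta-llama/Llama-2-7b-hf"          # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token      # Llama 2 defines no pad token

# Half precision so the 7B weights fit on a single consumer GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto")

# Attach small trainable LoRA adapters instead of updating all 7B parameters.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Tiny inline dataset as a stand-in for the developer's own training data.
texts = ["Example instruction and response text.",
         "Another short training example."]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-lora-out",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1,
                           logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```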
