Are Generic AI Models Failing Your UC Setup? Dialpad AI Officer Talks the Growth of Industry-Specific LLMs

Dialpad Chief AI Officer Jim Palmer tells UC Today what businesses lose out on by using generic LLMs for their AI copilots and what specialized LLMs bring

Published: May 5, 2025

Kristian McCann

As AI increasingly dominates the business landscape, companies are racing to latch on to anything with an AI tag in a bid to capture the efficiency gains it promises.

Many of the AI features a company relies on arrive through its UC solutions, such as Copilot in Microsoft Teams or Slack AI in Slack.

However, in a bid to be quick, companies may be trying to use a spade for a job that could be better done by a shovel.

In other words, their AI setups work, but are they really optimized for their specific business?

“While general-purpose LLMs are impressive, they lack industry-specific contexts that UC solutions rely on for accurate transcriptions and summarizations of business conversations,”

Jim Palmer, Dialpad Chief AI Officer, told UC Today.

Although these problems may not flare up immediately, they chip away slowly over time, eroding how smoothly the AI tools you rely on actually work.

The Issue with Generic LLMs in Your UC

Large language models (LLMs) are the building blocks that make up advanced AI systems.

They are trained on vast datasets, enabling them to handle a wide range of tasks such as text generation, summarization, code creation, and sentiment analysis across multiple industries.

Built on billions of data points gathered from across the online world, they are versatile all-rounders, useful to everyone from marketing teams to retailers.

However, this generalist nature can become a limitation when applied to highly specific business domains.

“These models are not trained on specialized business terminology, so there’s more room for misinterpretations and errors when handling industry-specific language,”

Palmer said.

This limitation presents a significant challenge for businesses operating in specialized sectors, where accuracy in communication can directly impact customer relationships or even regulatory compliance.

For example, in the financial services sector, LLMs might misinterpret complex terms like “swaption,” leading to inaccurate responses or flawed document summaries.

If a bank employee uses Copilot, backed by its generic LLM, to answer a client’s query, it might return incorrect information about financial products or regulatory obligations.

This underscores the importance of supplementing LLMs with domain-specific training or integrating them with knowledge bases tailored to the industry, ensuring both accuracy and reliability in business-critical communications.
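
As a rough illustration of that second option, the sketch below shows how a UC workflow might inject entries from an industry glossary into the prompt before a general-purpose model summarizes a call. The glossary contents, the call_llm() helper, and the prompt wording are hypothetical placeholders for illustration, not any vendor’s actual implementation.

```python
# Minimal, hypothetical sketch: supplement a general-purpose LLM with a
# domain knowledge base (here, a tiny finance glossary) at prompt time.

FINANCE_GLOSSARY = {
    "swaption": "An option granting the right, but not the obligation, "
                "to enter into an interest rate swap.",
    "basis point": "One hundredth of one percent (0.01%).",
}

def call_llm(prompt: str) -> str:
    """Placeholder for whichever general-purpose LLM the UC platform uses."""
    raise NotImplementedError("Wire this up to a model provider of choice.")

def summarize_call(transcript: str) -> str:
    # Pull in only the glossary terms that actually appear in the transcript,
    # so domain context is added without bloating the prompt.
    relevant = {
        term: meaning
        for term, meaning in FINANCE_GLOSSARY.items()
        if term.lower() in transcript.lower()
    }
    context = "\n".join(f"- {term}: {meaning}" for term, meaning in relevant.items())
    prompt = (
        "You are summarizing a financial services call. "
        "Use these definitions when they apply:\n"
        f"{context}\n\nTranscript:\n{transcript}\n\nSummary:"
    )
    return call_llm(prompt)
```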

Despite these limitations, many companies continue to leverage general-purpose models.

“General LLMs offer a quick implementation path – companies can rapidly bolt on AI capabilities without significant investment in specialized AI talent or infrastructure. It’s the path of least resistance,”

Palmer said.

This approach, while expedient, should represent a starting point rather than an optimal long-term solution.

However, for some companies, simply staying put and dealing with the issues of a general LLM may seem easier than starting again with AI.

“Building and maintaining specialized LLMs requires substantial expertise, data, and computational resources that not every company has access to,” Palmer explained.

Yet with 72% of organizations looking to implement these technologies, simply using AI will soon no longer provide the edge over competitors that it once did.

Thus, industry-specific LLMs may become a critical consideration for businesses seeking to gain a competitive edge.

The Economic Case for Industry-Specific AI

The economics of specialized AI present an evolving landscape. While the initial investment may be higher, the return on investment becomes increasingly compelling as these models deliver tangible business outcomes.

“While upfront investment in specialized models remains higher, the ROI for industry-specific LLMs becomes more compelling when measured against real business outcomes: agent efficiency, reduced call handling times, and enhanced customer satisfaction,”

Palmer noted.

“As these specialized models continue to deliver measurable business value, the economic equation will increasingly tip in their favor over generic solutions.”

This shift in the cost-benefit analysis is already playing out at companies like Dialpad, where Palmer explains they are measuring the performance benefits for customers against the investment in specialized models.

Equally, advancements in hardware and ML architecture continue to improve processes like fine-tuning and pre-training, helping organizations find the right balance between training costs, accuracy, and scalability.
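
For a sense of what that fine-tuning step can look like in practice, the sketch below uses a common parameter-efficient approach (LoRA) with the open-source Hugging Face transformers and peft libraries. The base model name and adapter settings are assumptions chosen for illustration; the article does not describe Dialpad’s actual training setup.

```python
# Rough sketch of parameter-efficient fine-tuning (LoRA) for domain adaptation.
# Base model and hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # any open base model would do

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA trains small adapter matrices instead of the full model, which is what
# keeps the cost of domain adaptation manageable as hardware and methods improve.
lora_config = LoraConfig(
    r=8,                      # adapter rank: smaller means fewer trainable weights
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model

# From here, a standard training loop over curated business conversations
# would produce the domain-adapted adapter weights.
```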

Specialized LLMs have drawbacks of their own, however. Because they are more niche, they lose some of the broad understanding that generic LLMs offer.

However, when evaluating model performance, Palmer believes that domain expertise proves more valuable in the long run: “In my experience, domain expertise often outweighs raw model size in practical business applications. At Dialpad, we’ve consistently found that a moderately sized model trained on relevant, high-quality business conversations far outperforms larger general models for our specific, as well as few-shot and zero-shot, use cases.”

This insight challenges the common assumption that bigger is always better when it comes to AI models.

For many specialized applications, targeted training on high-quality, relevant data proves more valuable than raw computational scale.

Building for the Future of Business Communications

With organizations locked in an AI stalemate with their competitors, those that consider the transition from general-purpose to specialized LLMs could find what they need to break through.

But Palmer warns against going gung-ho.

Instead, he advocates that “organizations should consider specialized LLMs only when they have clear, domain-specific use cases where general models consistently underperform. Some of these key indicators include access to substantial relevant training data, identifying concrete business outcomes that would benefit from improved accuracy, facing industry-specific compliance or security requirements that demand data sovereignty, and clear measurement frameworks to evaluate the ROI of more specialized AI compared to general-purpose alternatives.”

This approach helps businesses avoid premature investment in specialized solutions when general models might suffice, while also ensuring they don’t miss opportunities for significant improvements through specialized AI.

Equally, as AI becomes increasingly customer-facing through AI agents and automated interactions, the quality of a company’s models will directly influence brand perception and customer loyalty.

“We’re seeing a clear correlation between thoughtful AI implementation and business success, especially as AI matures,”

Palmer said.

Although specialization seems like the next step from generalized LLMs, Palmer envisions a complementary ecosystem rather than an either/or choice between general and specialized models.

“I see a complementary ecosystem developing, where general-purpose models handle broad knowledge retrieval while industry-specific models manage domain-critical tasks requiring specialized expertise. We’re already seeing this with retrieval-augmented generation (RAG) techniques that combine general knowledge with specific data sources.”
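
For readers unfamiliar with the pattern, the sketch below shows the bare bones of that RAG approach: a small embedding model retrieves the most relevant domain documents, and a general-purpose LLM answers using them as context. The sample documents, the embedding model choice, and the call_llm() helper are illustrative assumptions, not a description of Dialpad’s system.

```python
# Bare-bones RAG sketch: retrieve domain documents by embedding similarity,
# then hand them to a general-purpose LLM as context.
import numpy as np
from sentence_transformers import SentenceTransformer

DOMAIN_DOCS = [
    "A swaption gives the holder the right to enter an interest rate swap.",
    "Call handling time is measured from answer to wrap-up completion.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(DOMAIN_DOCS)

def call_llm(prompt: str) -> str:
    """Placeholder for the general-purpose model handling the final answer."""
    raise NotImplementedError("Wire this up to a model provider of choice.")

def answer(question: str, top_k: int = 1) -> str:
    # Embed the question and score each domain document by cosine similarity.
    q = encoder.encode([question])[0]
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n".join(DOMAIN_DOCS[i] for i in np.argsort(scores)[-top_k:])
    prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)
```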

The future of AI-powered communications may not lie entirely with specialized LLMs, but the companies that embrace both specialized and general models will be best placed to get the most out of their AI.
