AI’s Biggest Problem Isn’t The Tech – It’s Your Messy Files

Most corporate AI fails not because of the technology, but because messy, scattered information feeds models “confident nonsense.”


Published: December 5, 2025

Christopher Carey

When businesses talk about artificial intelligence, they tend to picture sleek algorithms humming away like digital oracles, spinning gold from the data they already possess.

But according to Philip Brittan, CEO of knowledge-management firm Bloomfire, this vision is dangerously misleading.

“People think AI is magic,” he says.

“They expect it to understand them perfectly and give perfect answers. But AI only knows what you give it. If the information is messy, the output will be messy – just said more confidently.”

This is at the heart of a problem quietly undermining corporate AI: it isn’t that companies lack data.

They have decades of duplicated, outdated, contradictory information scattered across intranets, shared drives, cloud folders, regional databases and long-abandoned wikis.

When a language model tries to reason over this sprawl, it can produce answers that sound authoritative but are simply wrong.

Brittan calls it “credible nonsense” – essentially hallucinations with impeccable grammar.

The Failure Rate No One Wants To Mention

Behind the scenes, executives are discovering that slapping a gen-AI interface on top of unstructured corporate information doesn’t deliver value. It often breaks things.

Brittan says he has met many firms that proudly built a “pilot chatbot” only to watch it fail in testing.

Some were even more candid: they’d built pilots three times, each with a different LLM, and still didn’t know why the answers were unreliable.

The reason is remarkably simple – the models were given contradictory information and forced to guess.

“Traditional software breaks visibly,” Brittan explains.

“If you give an old system the wrong input, it crashes or gives an obvious error. But LLMs don’t crash. They just give you a beautiful paragraph of something that seems plausible. That’s what makes them dangerous. They can be wrong without looking wrong.”

The problem is not a lack of clever algorithms; it’s a lack of clean, reliable, version-controlled knowledge for those algorithms to use.

The Coming Divide

A quiet shift is underway inside many enterprises.

The winners in the next wave of AI adoption may not be those with the biggest models or the most GPUs.

They could in fact be the companies that are willing to do the boring work: cleaning, labelling and structuring the information they already have.

Brittan argues that firms which invest in proper knowledge curation see an immediate difference.

“We’ve seen hallucinations drop from around 25 percent to almost zero once organisations remove duplication and ensure the AI is only reading validated content,” he says.

“It’s not glamorous work, but it’s the most important thing.”
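The curation Brittan describes — dropping unvalidated content and duplicates before the model ever reads them — can be sketched in a few lines. This is a minimal illustration, not Bloomfire’s method: the `Doc` record, its `validated` flag and the example documents are all hypothetical, and real pipelines would also catch near-duplicates, not just byte-identical copies.

```python
from dataclasses import dataclass
from hashlib import sha256

@dataclass
class Doc:
    path: str
    text: str
    validated: bool  # hypothetical flag set by a human owner during curation

def curate(docs: list[Doc]) -> list[Doc]:
    """Keep only validated docs and drop exact duplicates by content hash."""
    seen: set[str] = set()
    kept: list[Doc] = []
    for doc in docs:
        if not doc.validated:
            continue  # never index unreviewed content
        digest = sha256(doc.text.encode("utf-8")).hexdigest()
        if digest in seen:
            continue  # same content already indexed under another path
        seen.add(digest)
        kept.append(doc)
    return kept

docs = [
    Doc("wiki/policy-2019.md", "Refunds within 30 days.", validated=False),
    Doc("intranet/policy.md", "Refunds within 14 days.", validated=True),
    Doc("drive/policy-copy.md", "Refunds within 14 days.", validated=True),
]
print([d.path for d in curate(docs)])  # only the first validated copy survives
```

In this toy corpus the outdated 2019 policy and the shared-drive duplicate are both filtered out, so a retrieval layer built on the result cannot surface contradictory answers.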

This perspective challenges the dominant narrative of the AI boom.

In the consumer world, LLMs appear magical – chatty, flexible, endlessly creative. But in the enterprise world, where accuracy matters, they can’t be allowed to improvise.

“Enterprises need repeatability,” Brittan says. “If I ask the same question on Tuesday and Friday, the answer needs to be the same.”
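One common way teams approach the Tuesday-versus-Friday problem (a general pattern, not something Brittan specifies) is to cache validated answers keyed by a normalised form of the question, so a repeat ask returns the stored answer instead of a fresh, possibly different generation. The `AnswerCache` class and its `generate` callback here are illustrative assumptions:

```python
import hashlib

def normalise(question: str) -> str:
    # Collapse case and whitespace so trivial rephrasings hit the same key.
    return " ".join(question.lower().split())

class AnswerCache:
    """Serve a stored answer for a repeated question instead of re-generating."""

    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    def _key(self, question: str) -> str:
        return hashlib.sha256(normalise(question).encode("utf-8")).hexdigest()

    def get_or_set(self, question: str, generate) -> str:
        k = self._key(question)
        if k not in self._store:
            self._store[k] = generate(question)  # only called on a cache miss
        return self._store[k]

cache = AnswerCache()
tuesday = cache.get_or_set("What is our refund window?", lambda q: "14 days")
friday = cache.get_or_set("what is our  refund window?", lambda q: "never called")
print(tuesday == friday)  # True: Friday's ask reuses Tuesday's answer
```

The second call never invokes its generator, which is exactly the repeatability guarantee: identical questions yield identical answers until the underlying knowledge is deliberately updated and the cache invalidated.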

AI Won’t Replace People – It Needs Them

The hype around AI has sparked speculation about mass job cuts and automated workforces. But Brittan says many firms that tried to replace staff too early are quietly reversing their decisions.

“We’ve seen companies that made layoffs because they expected AI to fill the gap,” he notes.

“But they quickly realised that AI needs knowledgeable people to supervise it, feed it good information and interpret its output. Without that, productivity actually drops.”

Instead of replacing staff, AI is essentially reshaping roles.

Companies are discovering they need people who understand not just the content they manage, but the workflows, context and meaning behind it.

These are things an LLM cannot infer automatically.

The more realistic future, Brittan suggests, is one where workers become AI literate: able to collaborate with models, verify their output and maintain clear knowledge boundaries.

“AI becomes a partner,” he says, “not a wizard.”

The Real AI Transformation

Strip away the hype, and a familiar truth emerges: successful AI is built on the same foundations as successful IT.

Clear goals and systems, reliable inputs, proper governance and maintenance, and skilled people.

But unlike past waves of enterprise tech, AI exposes weaknesses brutally – messy content can lead to confident misinformation.

Outdated documents resurface as if current, policies can conflict, and the model arbitrates blindly.

The firms that will thrive are those that treat knowledge not as a dusty archive but as a living system.

“AI forces companies to confront the state of their information,” Brittan says. “It’s not tidy. But once they clean it up, the benefits are enormous.”

The revolution, in other words, may be less about AI itself and more about the organisational spring-cleaning it has forced upon the corporate world.

