You’ve got pilots that never scale, tools no one uses, and execs demanding “more AI” without a clear goal.
Welcome to AI Purgatory—the uncomfortable place between expectation and execution. In this piece, we look at how companies fall into the AI trap, but also how they can escape!
A recent Reddit thread on r/ITManagers revealed the very real, very relatable frustrations of tech leaders trying to get AI off the ground. This post pulls together the most potent insights and quotes—plus fundamental strategies to break free.
The 6 Symptoms of AI Purgatory
No Clear ROI
“Actual meaningful ROI is the biggest issue.” – u/OracleofFl
It’s not that AI doesn’t work; the returns often don’t justify the cost, complexity, and time involved. Tools like chatbots or customer support copilots might save a few hours or headcount, but they rarely transform the bottom line.
“Big whoop eliminating a few dozen headcounts… while having to add some high-dollar employees to run the AI systems.” – u/OracleofFl
In short, you’ve likely lost the plot if your AI effort needs a team of ML engineers to support something that saves a few seconds per task. ROI must be tangible, not theoretical.
Stuck in Pilot Hell
“AI isn’t actually as good under scrutiny as it may seem at first glance.” – u/potatoqualityguy
The story is familiar: someone in leadership gets excited about AI, a flashy pilot is launched, but after some testing, results don’t justify scaling it—and the project fades into obscurity. Multiply that across five or ten projects, and you’ll have an innovation graveyard.
The issue often isn’t failure—it’s a lack of commitment, or worse, a lack of clarity on what success looks like.
The Use Case Vacuum
“Any time I ask ‘what problem do we have that this solves,’ the room is eerily quiet.” – u/Mindestiny
AI can’t be a solution in search of a problem. Even the most advanced tools will feel gimmicky without a real pain point.
Before you spend your precious budget on AI, ask: What are we trying to improve? What’s the inefficiency, the friction, the blocker? If you can’t articulate the need, you’re not ready to implement.
Looking for Copilot use cases? Check out our latest article – Which Copilot? Choosing an AI Copilot for Business Use Cases
The Hype Hangover
“AI is a bubble.” – u/sonofalando
“Just the same s*** like the metaverse. Wait until it’s over.” – u/swissthoemu
AI is having its “blockchain in 2018” moment. The hype is immense, but implementation is lagging. The result? Disillusionment.
IT teams are caught between unrealistic expectations from above and actual limitations from below. The hype wave sets the bar sky-high, then leaves IT to explain why we aren’t seeing miracles.
Security & Privacy Paralysis
“With a database, we could delete data. With AI, how does it unlearn confidential information?” – u/BlueNeisseria
For many organizations, the biggest blocker isn’t capability—it’s risk. GenAI tools raise questions about data exposure that haven’t yet been answered. If ChatGPT or Copilot gets fed sensitive financial data, can you prove it’s not stored, reused, or leaked?
Some teams have gone ultra-conservative, banning public AI use entirely. Others are experimenting carefully in sandboxes. However, trust and compliance must be baked in for a large-scale rollout.
Cisco recently launched its security solution for enterprise users; read about it here – Cisco Unveils AI Defense: End-to-End Security for Enterprise AI Use
Misunderstood by the Workforce
“40% think copilots are supposed to think for them. 40% think it’s just faster Googling.” – u/Kitchen-Buddy6758
User expectations are often wildly off. Some think AI will replace their job. Others think it’ll fetch data faster. The truth is somewhere in between.
In the latest Techtelligence AI Adoption report, ‘Employee Resistance’ was cited as the biggest barrier to AI adoption.
AI isn’t a replacement for critical thinking—it’s a tool that amplifies it. If your users don’t know how to ask good questions or verify answers, you’ll have poor adoption and even worse output.
The UC Today team recently attended Enterprise Connect to talk about some of the pain points surrounding AI adoption; you can catch up with that coverage here.
Breaking Free from AI Purgatory
Escaping purgatory doesn’t require a moonshot. It requires focus, discipline, and empathy. For a public sector example of how AI can generate clear ROI, see our latest interview with Bath & North East Somerset Council.
Here’s how smart teams are doing it:
Start with Real, Painful Problems
“Staff: ‘Can’t we automate this role?’
IT: ‘Map out what they do… oh, they talk to suppliers about quote nuances? What AI can handle that?’” – u/mattis_rattis
The best AI use cases solve boring, annoying problems—not glamorous ones. Use it for:
- Repetitive document formatting
- Email drafting
- Data lookup and summarization
- Code generation
- Meeting transcription
Don’t start by trying to replace a senior engineer. Start by saving them two hours a week.
Make ROI Measurable (and Modest)
“If it’s shorter time to resolution… then it is adding value.” – u/Chumphy
Time saved, not heads cut. That’s the real win. For example:
- AI in ticket triage = faster routing = happier users
- AI for coding = fewer Stack Overflow rabbit holes
- AI summaries = less time spent reviewing long documents
ROI isn’t about magic—it’s about micro-improvements that compound. Check out these success stories where businesses have got their Copilot deployments right.
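To put rough numbers on “measurable and modest”, a back-of-envelope calculation like the sketch below keeps the conversation anchored in hours saved rather than heads cut. Every figure in it is a hypothetical placeholder; substitute your own user counts, loaded costs, and license fees.

```python
# Back-of-envelope AI ROI estimate. Every figure is a hypothetical placeholder.
HOURS_SAVED_PER_USER_PER_WEEK = 2.0    # e.g. drafting, summarizing, ticket triage
USERS = 150                            # staff who actually use the tool
LOADED_HOURLY_COST = 45.0              # fully loaded cost per employee hour
WORKING_WEEKS = 46                     # allow for leave and holidays
ANNUAL_LICENSE_COST = USERS * 30 * 12  # e.g. a per-user monthly license fee

time_value = HOURS_SAVED_PER_USER_PER_WEEK * USERS * LOADED_HOURLY_COST * WORKING_WEEKS
net_benefit = time_value - ANNUAL_LICENSE_COST

print(f"Estimated annual value of time saved: {time_value:,.0f}")
print(f"Annual license cost:                  {ANNUAL_LICENSE_COST:,.0f}")
print(f"Net benefit:                          {net_benefit:,.0f}")
```

If the net benefit looks marginal at realistic usage levels, that is a finding in itself, and a far cheaper one than discovering it after a company-wide rollout.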
Choose the Right AI for the Job
“We use ML models extensively for forecasting and predictive maintenance.” – u/ScheduleSame258
You don’t need GenAI for everything. Traditional AI (machine learning, OCR, computer vision) has been quietly solving business problems for years.
Don’t let the latest trend blind you. Use the best tool for the job, whether it’s an LLM, a classifier, or a good old spreadsheet macro.
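As a concrete illustration, here is a minimal sketch of the kind of unglamorous classical model the forecasting quote describes. It uses scikit-learn on synthetic data, so the numbers mean nothing in themselves; the point is that a plain regression, not an LLM, is often the right tool for a trend-forecasting job.

```python
# A classical forecasting baseline (scikit-learn), no LLM required.
# The data below is synthetic; swap in your own maintenance or demand history.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
weeks = np.arange(104).reshape(-1, 1)                          # two years of weekly data
tickets = 120 + 0.8 * weeks.ravel() + rng.normal(0, 10, 104)   # gentle upward trend plus noise

model = LinearRegression().fit(weeks, tickets)                 # fit the historical trend
next_quarter = np.arange(104, 117).reshape(-1, 1)              # the next 13 weeks
forecast = model.predict(next_quarter)

print("Forecast ticket volume, next 13 weeks:", forecast.round(0).astype(int).tolist())
```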
Secure It—or Keep It Local
“How possible is it to be actually 100% local? Like, confirmed zero outbound traffic local?” – u/Kitchen-Buddy6758
If you’re in a compliance-heavy environment, cloud-based AI can be a deal-breaker. Open-source models like LLaMA 2, DeepSeek, or Mistral can run locally, offering transparency and control.
This gives you AI superpowers—without sending sensitive data into the unknown.
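For teams that want to test the “confirmed zero outbound traffic” scenario, here is a minimal local-inference sketch. It assumes the open-source llama-cpp-python package and a quantized GGUF model file you have already downloaded; the model path and prompts are placeholders, not recommendations.

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path is a placeholder: point it at any GGUF file you have downloaded,
# e.g. a quantized Mistral or Llama 2 build. Inference runs entirely on this machine.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,       # context window
    verbose=False,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You summarize internal documents. Be concise."},
        {"role": "user", "content": "Summarize the attached incident report in three bullet points."},
    ],
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```

Pair this with egress firewall rules or an air-gapped host if you need to prove, not just assume, that nothing leaves the machine.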
Teach People How to Use It
“Changing to an AI-enabled way of thinking requires critical thinking, which many lack.” – u/ScheduleSame258
Success with AI doesn’t depend just on the tool—it depends on the people using it. Invest in basic prompt engineering training. Teach staff how to verify, iterate, and validate.
If you want value, you need curious, literate users—not just licenses.
Find Low-Hanging Fruit and Scale What Works
“One client uses a smart chatbot to triage leads… low-hanging fruit, and it works.” – u/OracleofFl
Start with a simple win, prove it works, socialize the result internally, and then expand. AI is best implemented in stages, not sprints.
Think of it as “agile AI”: test, iterate, improve.
Smarter > Sooner
“AI just isn’t yet to the point where it’s beneficial, hype notwithstanding. Let your competitors waste time and money on it.” – u/iheartrms
Or… learn from their mistakes and deploy smarter.
AI isn’t magic. It’s a tool. It won’t replace your team but can supercharge them if you do it right.
If you’re stuck in AI purgatory, don’t panic. Breathe. Refocus. Solve a real problem. Then, solve another.
That’s how you escape.
Want to share your experiences on social? Join the conversation here.