Across offices and Slack channels everywhere, a quiet paradox is emerging.
Generative AI tools are being rolled out with fanfare, hailed as the next great leap in workplace productivity.
Employees are using them enthusiastically, leaders are investing heavily – and yet, for most organisations, the payoff simply isn’t there.
AI use at work has nearly doubled since 2023, and the number of companies running fully AI-led processes has doubled too. Yet a recent MIT Media Lab report found that 95 percent of organisations see no measurable return on their AI investments.
The problem isn’t so much the technology itself – it’s how people are using it.
When Work Looks Good, But Isn’t
In a joint study by BetterUp Labs and the Stanford Social Media Lab, researchers identified a phenomenon spreading quietly through offices: AI workslop.
Borrowed from “AI slop” – a phrase for low-quality, machine-generated content flooding social media – workslop refers to AI-created output that looks like quality work but lacks the depth, accuracy, or context to actually move a project forward.
It’s the report that’s beautifully formatted but factually thin, or the summary that sounds smart but misses the point.
“AI has made sloppy work faster,” Jamie Aitken, VP of HR Transformation at Betterworks, told UC Today.
“It’s not that AI creates the problem – it just accelerates it.”
The result is a new kind of burden: the illusion of productivity.
The creator feels efficient, but the receiver inherits the confusion. Someone has to decode, correct, or redo what the machine produced, shifting the cognitive load downstream.
The Workslop Tax
BetterUp’s research suggests the problem is far from rare.
In a recent survey of more than a thousand full-time US employees, 40 percent reported receiving workslop in the past month.
Those incidents take an average of one hour and 56 minutes each to deal with – roughly $186 per employee per month in wasted time, or nearly $9 million a year for a 10,000-person organisation.
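As a rough sanity check, the back-of-envelope sketch below (in Python) shows how those figures appear to fit together. It assumes, since the survey itself does not spell this out here, that the annual total counts only the roughly 40 percent of employees who report receiving workslop in a given month; on that assumption, $186 per affected employee per month scales to just under $9 million a year for a 10,000-person organisation.

```python
# Back-of-envelope check on the workslop figures cited above.
# Assumption (not stated explicitly in the article): the ~$9M annual total
# applies the $186/month cost only to the ~40% of employees who report
# receiving workslop; applied to every employee it would be far higher.

monthly_cost_per_affected_employee = 186  # USD, per the BetterUp survey
share_receiving_workslop = 0.40           # 40% reported receiving workslop last month
headcount = 10_000                        # example organisation size
months_per_year = 12

annual_cost = (monthly_cost_per_affected_employee
               * share_receiving_workslop
               * headcount
               * months_per_year)

print(f"Estimated annual cost: ${annual_cost:,.0f}")  # ≈ $8,928,000
```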
But the hidden costs go beyond the balance sheet – receiving AI-generated low-effort work leaves more than half of employees feeling annoyed, confused, or even offended.
Forty-two percent say they view the sender as less trustworthy afterward, while a third say they’d rather not work with that person again.
“Low-effort AI work isn’t just inefficient – it’s corrosive,” Aitken says. “It breaks down trust and collaboration, which are exactly what you need for AI to work well.”
Over time, these small acts of artificial efficiency can quietly undermine the human connections that make work function.
Each instance of workslop carries an invisible tax – not just in time, but in relationships.
Pilots and Passengers
Behind these behaviours lies a deeper divide in mindset. BetterUp’s researchers describe two archetypes emerging in the age of AI: pilots and passengers.
Pilots use AI purposefully – to explore ideas, sharpen creativity, and accelerate meaningful progress.
They treat AI as a collaborator, not a crutch. Passengers, on the other hand, rely on it to avoid the hard parts of thinking. They copy, paste, and move on.
Pilots use AI 75 percent more often at work than passengers do, and 95 percent more often outside of it – but with intention.
Their output enhances human insight, but passengers’ output hides the absence of it.
“The best employees are pilots, not passengers,” Aitken says.
“They’re curious, creative, and responsible. They use AI to elevate their thinking – not replace it.”
From Policy to Practice
Most leaders know they need an AI strategy, but fewer know what that actually looks like.
Many have responded with sweeping mandates to use AI everywhere.
But indiscriminate imperatives often lead to indiscriminate use, and when people don’t know when or how AI should be applied, they tend to apply it in places where it doesn’t belong.
Aitken suggests that curiosity and experimentation are essential, but they must be matched with accountability.
“AI should be your coach, not your crutch,” she says.
“Teams need room to explore, but they also need guardrails to prevent carelessness from becoming culture.”
The most successful organisations, Aitken argues, are those that maintain continuous, honest dialogue between managers and employees – not just annual performance reviews, but ongoing check-ins about goals, quality, and accountability.
“That’s how you catch workslop early and build trust at the same time,” she says.
The New Collaboration
Generative AI was supposed to streamline teamwork, but in reality, it’s making collaboration more complex. Prompts, feedback, and fact-checking are all now part of the process – and they require even tighter coordination. Workslop exposes what happens when AI enters that mix without shared norms: efficiency turns into confusion, and innovation turns into cleanup.
The future of work will demand a new kind of collaboration – not just between humans, but between humans and machines. Seamless teams will be those that integrate AI into workflows with intention, using it to advance shared outcomes rather than dodge responsibility.
“AI doesn’t fix bad habits; it just scales them,” Aitken says. “If you push speed above quality, you’ll get workslop.”
Be the Pilot
AI is rewriting the rules of productivity, but not always for the better. The organisations that thrive in this new era will be those that stay human – and thoughtful – at scale.
That means modelling discernment from the top, setting clear norms for how AI should be used, and holding “bionic” human–AI work to the same standards of excellence as human-only work.
“AI isn’t the enemy,” Aitken says. “It’s how we use it that matters. Be the pilot, not the passenger.”