It ought to be a corporate dream: machines that shoulder the drudgery, leaving employees to think bigger and work smarter – yet reality is proving far less seamless.
According to the new 2025 Global State of AI at Work Report by Asana’s Work Innovation Lab, employees expect to offload over a third of their tasks to AI within a year – but only 27 percent feel ready to do so today.
The hurdle? Reliability. Nearly two-thirds (62 percent) of employees say AI agents are unreliable, with many reporting that the tech ignores feedback or confidently shares incorrect information.
When things do go wrong, accountability is also murky – with a third of workers saying either “no one” is responsible, or they don’t know who is.
Speaking to UC Today, Asana’s Senior Director of Product Strategy Victoria Chin says part of the problem is that the technology often lacks the basic context required to be useful.
“If AI doesn’t know who is supposed to do what, by when, and why, it’s not going to deliver the outcomes that you need,” Chin explained.
“Without proper governance models and policies in place, without proper training in place, that’s one of the challenges we see.”
All in on AI
Yet despite this uncertainty, adoption is surging.
The report found over three-quarters of employees (77 percent) already use AI agents, and 76 percent view them as a transformative shift – not just another productivity tool.
The most popular tasks included meeting notes (43 percent), document organisation (31 percent), and scheduling (27 percent).
And 70 percent would rather delegate certain repetitive tasks to AI than to a human colleague.
But without proper training or clear rules, AI is stuck at the “admin” level, and could be creating more work than it saves.
Over half of workers said agents force teams to redo outputs due to mistakes. Nearly half also noted that AI lacks context about team priorities, which can amplify inefficiencies rather than reduce them.
“From the technology perspective, AI is incredibly powerful – but it still can’t do everything,” Chin added.
“AI still makes mistakes, and people have seen these mistakes and make certain assumptions.”
The Training Gap
The training gap is also stark: while 82 percent of employees say training is essential for effective AI use, only 38 percent of organisations provide it.
Many employees are asking for clearer boundaries between human and AI responsibilities and formal usage guidelines – but companies aren’t keeping pace.
The report says the solution lies in treating AI agents like teammates, not tools. That means giving them proper context, defining responsibilities, embedding feedback loops, and training employees to work alongside them.
“AI agents are reshaping work, but trust and accountability haven’t kept pace,” said Mark Hoffman, Ph.D., Work Innovation Lead at Asana.
“Without guardrails, companies risk missing out on the real productivity gains these agents can unlock.”
The Stakes for AI at Work
Without trust and clear governance, AI risks becoming an underutilised experiment rather than a productivity revolution – and may even amplify workplace dysfunctions like inconsistent accountability, information overload, and uneven access to resources.
Employees who distrust AI can feel stressed, disengaged, or fearful for their job security, seeing it as an unpredictable colleague that makes mistakes but never takes the blame.
This divide could create a two-speed workforce: early adopters pulling ahead while others resist or underuse the technology, leaving productivity gains uneven.
Companies that deploy AI without clear structures risk “AI debt” – inefficiencies, compliance issues, and reputational fallout from preventable mistakes.
The future of AI at work will depend less on algorithmic sophistication and more on the systems around it.
Firms that define responsibilities, embed training, and build feedback loops will gain a competitive edge, while those that don’t may see employees sidelining AI tools.
In the short term, workers manage AI outputs; in the long term, systems could handle complex tasks with minimal oversight.
The overall message is clear: AI is no longer optional and, as Chin warns, the cost of hesitation could be high.
“What’s going to separate the winners from the losers is the ability to move things forward – to understand why AI stalled, for which teams it stalled, and for which use cases… It’s this experimentation mindset that enables innovation.
“It’s the ability to be comfortable with something a little bit uncertain, a little bit risky, that is going to enable organisations to move forward.”