For decades, the modern workplace has been defined by the quiet clatter of keyboards.
Open-plan offices may have replaced cubicles, laptops may have replaced desktops, but the core interface of work has barely changed.
That, according to Paul Sephton, is about to feel as antiquated as dial-up internet.
Sephton, Head of AI and Digital Innovation at Jabra, believes we are standing on the edge of a fundamental shift in how humans interact with technology at work – one that will see voice overtake keyboards as the primary way we engage with AI within the next three years.
And it’s not just about convenience. It’s about trust, productivity, and making technology feel human again.
“Voice,” Sephton told UC Today, “is becoming the operating system of work.”
Voice Is Already Everywhere
The idea that voice could replace keyboards might sound radical, but in many ways, it’s already happening – just not where we spend most of our working hours.
Jabra’s research shows that in certain situations, particularly in on-the-go environments, voice interaction is already deeply embedded. Up to 70 percent of employees report using voice in specific contexts such as commuting, multitasking, or working away from a desk.
The hesitation, Sephton explains, isn’t about capability. It’s about culture.
“We haven’t normalised talking to technology in the office in the same way we have elsewhere,” he says. “And at the same time, the technology still has some distance to go before it becomes truly seamless.”
To understand what might come next, Jabra conducted a behavioural lab-based study focused not on current usage but on preference. The question wasn’t how often people use voice today, but whether they want to.
The answer was striking.
“There’s a very promising signal,” Sephton says. “One that suggests that within the next three years, voice interaction will become the primary mode of engagement with AI platforms, agents, and copilots.”
People Trust AI More When They Speak To It
Perhaps the most surprising finding from Jabra’s research was not about speed or efficiency, but emotion.
When participants interacted with AI using voice rather than typing, trust increased by 33 percent.
That jump, Sephton believes, comes down to instinct.
“Typing is not a natural human behaviour,” he says. “It’s not how we communicate with one another. Speech is.”
Voice carries tone, pacing, sentiment – all the subtle cues that make conversation feel alive. When AI responds in kind, the interaction feels less like using a machine and more like engaging with a collaborator.
“When people speak to AI and can have a back-and-forth that feels lifelike, trust goes up,” Sephton explains. “Typing keeps the experience firmly in ‘computer mode’. Voice moves it closer to human conversation.”
That emotional shift could prove critical as AI moves from being a tool we occasionally consult to something that works alongside us every day.
Will Keyboards Really Disappear?
Ask whether keyboards will become obsolete by 2028 and Sephton is quick to clarify one misconception.
“Text isn’t going away,” he says. “Screens aren’t going away. What’s changing is how we interact with them.”
Even in a voice-first future, text will remain essential as a record – a ledger of work, decisions, and communication. What will fade, he argues, is the keyboard’s dominance as the primary interface.
Sephton likens the moment to the introduction of the computer mouse.
“When Bill Gates first demonstrated a mouse on stage, it looked unnatural,” he says. “But it solved a problem. It made computing more ergonomic.”
The keyboard, he suggests, served a similar purpose – but it was never the most intuitive option.
“As we move into a screen-first, AI-powered era, voice becomes the most natural interface,” he says. “The keyboard won’t vanish overnight, but it will become more of a ‘nice to have’ than a ‘need to have’.”
Much like the mouse today – useful, but no longer essential.
From Time Saved To Value Created
In the early days of workplace AI, productivity has largely been measured in minutes shaved off tasks: drafting emails faster, automating meeting notes, managing calendars.
That framing, Sephton believes, is short-sighted.
“For the first few years, we’ve talked about AI as a time-saving tool,” he says. “But that’s not how we evaluate our colleagues.”
Instead, as AI becomes more intelligent – and as voice makes interaction more fluid – organisations will begin to assess AI by value created, not time saved.
“When you start speaking to AI like a teammate rather than a tool, the mindset changes,” Sephton explains. “You stop asking how much time it saved and start asking what it helped you achieve.”
That shift opens the door to a new kind of hybrid workforce – one where teams are made up of humans and AI agents working side by side.
The Obstacles: Behaviour, Privacy And Hardware
Despite the promise, Sephton is clear that voice-first work won’t happen by accident. It requires deliberate change.
First, there’s behaviour.
“A few years ago, wearing earbuds while speaking to someone felt rude,” he notes. “Now it’s normal.”
Talking to AI in an office setting needs a similar cultural reset – one led from the top.
“Leaders need to model the behaviour,” he says. “They need to set etiquette and make it feel as normal as talking on a Teams call in an open office.”
Then there’s privacy – perhaps the biggest concern surrounding voice data.
“There’s a growing awareness that voice feels more personal than typing,” Sephton acknowledges. “People want to know who’s listening and how that data is used.”
The solution, he argues, lies in radical transparency – from tech vendors, software providers and organisations alike. Clear communication around data usage, alongside investment in edge and on-device processing, will be essential to building trust.
Finally, there’s the practical matter of sound quality.
“If voice capture is poor, AI output will be poor,” Sephton says bluntly. “You can’t separate the software from the hardware.”
Professional-grade headsets, microphones and noise-cancelling technology will become critical infrastructure, not accessories, particularly as open offices grow noisier.
“This isn’t just an IT rollout,” he adds. “It’s an ecosystem change.”
Why Voice Beats Text For Generative AI
For generative AI tasks in particular, voice has two decisive advantages.
The first is speed.
“It’s around four times quicker to speak than to type,” Sephton says. “That alone has huge productivity implications.”
But speed is only part of the story. Voice also aligns better with how ideas form.
“Thoughts aren’t linear,” he explains. “Typing forces them to be.”
Speaking allows people to unpack ideas at a natural pace, refine them, explore tangents and then – if needed – edit the output on screen afterwards.
That flexibility mirrors broader changes in how we work.
“Our best ideas don’t always happen at a desk at nine in the morning,” Sephton says. “Voice enables work to happen whenever and wherever inspiration strikes.”
What A Voice-First Workplace Could Look Like By 2026
Trying to picture a fully voice-integrated workplace is difficult – and that, Sephton suggests, is the point.
He references a quote often attributed to Henry Ford: “If I had asked people what they wanted, they would have said a faster horse.”
“Voice AI is like that,” he says. “We haven’t fully reimagined work processes yet.”
By the end of 2026, Sephton envisions workplaces where AI agents are seamlessly embedded into daily workflows, enhancing rather than interrupting human effort. Meetings are fewer but more focused. Employees are more engaged. Technology fades into the background.
“Every major technological shift has brought uncertainty,” he says. “But historically, it’s been overwhelmingly positive for workers in the long run.”
The goal, he emphasises, isn’t more technology – it’s more human work.
Can Voice Really Replace Multitasking?
One common concern is whether voice can support the kind of multitasking that keyboards allow, especially in meetings.
Sephton acknowledges the challenge, but says innovation is already underway.
“There’s a lot of R&D happening around invoking AI agents while you’re on a call,” he reveals. “Sidebar conversations, multiple voice interactions – it’s coming.”
At the same time, he questions whether constant multitasking should be the goal at all.
“We’ve normalised doing other work during meetings because we have too many of them,” he says.
A future powered by voice and AI, he suggests, could enable fewer, better meetings – where people are fully present, supported by intelligent agents when needed, rather than half-listening while typing emails.
“That,” he says, “would be the real prize.”
Technology That Feels Less Like Technology
Ultimately, Sephton believes the success of voice AI will be measured by how invisible it becomes.
“The more advanced technology gets, the less apparent it should feel,” he says.
Rather than creating more tech-dependent workplaces, innovation in voice and AI could allow computing to recede into the background, enabling people to focus on meaningful work in the most natural way possible.
“If we do this right,” Sephton says, “technology won’t make work more mechanical. It will make it more human.”
And when that happens, the sound that defines the office of the future won’t be the tapping of keys, but the quiet hum of conversation.