It says something about the pace of the AI industry that OpenAI can raise $122 billion, name its next flagship model after a potato, and have neither feel particularly surprising.
The company’s funding round closed on March 31st. The model, internally codenamed Spud, finished pre-training a week earlier, with Sam Altman telling employees it is a “very strong model” that could “really accelerate the economy.” Even on April 1st, for the record, none of that is a joke.
What’s actually known about Spud is limited. The Information reported that pre-training completed on March 24th, that a release is expected within weeks, and that employees have described it as containing a capability not seen in previous OpenAI models. The final name, pricing, and architecture are all unconfirmed. Spud is almost certainly not what it will be called at launch. Probably for the best.
What is not speculation is what OpenAI gave up to get here. Sora, its video generation model, has been discontinued, and a reported Disney partnership has gone with it, all to redirect compute toward finishing Spud and building out the enterprise platform. So what’s next?
OpenAI: From $1 billion a year to $2 billion a month
Enterprise teams have been genuinely cautious about building deeply on third-party AI platforms, and with good reason. The market has moved fast, vendors have come and gone, and committing infrastructure to a platform that might look very different in two years is a real risk.
OpenAI’s revenue trajectory changes that calculation. The company went from $1 billion in annual revenue at the end of 2023 to $1 billion per quarter by the end of 2024. It is now at $2 billion per month. Backers include SoftBank, Microsoft, Amazon, Nvidia, BlackRock, and Sequoia. A new $4.7 billion revolving credit facility sits undrawn. This is a company with the financial backing to remain a serious enterprise platform for the foreseeable future.
Enterprise now accounts for more than 40% of OpenAI’s revenue, up from a standing start not long ago, and is on track to reach parity with consumer by the end of 2026. The consumer business built OpenAI’s brand. The enterprise business is what it is now building around.
The outcome gap that enterprise AI has not closed
AI tools deployed over the last two years have, broadly speaking, delivered on the easy stuff. Meeting summaries, email drafts, search, basic content generation. What they have largely failed to do is close what you might call the outcome gap: the distance between an AI that assists with a task and one that completes it.
That gap is the defining challenge in enterprise AI right now. Buyers have grown impatient with tools that make individuals slightly faster but do not change how work actually flows through an organisation. The conversation has shifted from “does this tool have AI?” to “can this tool show me where productivity actually improved?” Those are very different questions, and most vendors are still working out how to answer the second one.
OpenAI’s enterprise push is explicitly aimed at that problem. The company is not pitching better chat. It is pitching agents that execute multi-step workflows, connect to business systems, and produce structured outputs that feed into how work gets done. Whether it can deliver on that consistently, at enterprise scale, is the question that matters.
Codex: The product that makes this real
Codex is OpenAI’s clearest attempt to answer that question in practice. It now serves over 2 million weekly users, with usage growing more than 70% month over month, and companies including Cisco, Nvidia, and Virgin Atlantic have begun deploying it across their teams.
It started as a coding agent, but OpenAI is now positioning it as something broader. Thibault Sottiaux, head of Codex at OpenAI, has described it as “becoming the standard agent” for enterprise deployments including non-technical workers, on the basis that “there’s very little that is specific to coding” in how it actually operates. The latest version, GPT-5.3-Codex, handles product documents, data pipelines, presentations, and copy editing alongside engineering tasks, and OpenAI has benchmarked it across 44 professional knowledge work occupations.
The direction of travel in UC and collaboration is toward platforms that do not just capture what happened in a meeting but determine what should happen next. Codex, deployed as an enterprise agent rather than a developer tool, is a direct attempt to occupy that space. Whether it can do so at the governance and security standards enterprise IT teams require is still being tested, with Sottiaux acknowledging there is significant work still to do on managed deployments and on-premises options.
What OpenAI’s platform play means for Microsoft-heavy organisations
OpenAI is building toward a unified platform combining ChatGPT, Codex, and its browser agent. The commercial logic is sound: a single coherent platform is easier to sell, easier to support, and easier to take public, which OpenAI is planning to do in Q4 2026.
For the majority of enterprise buyers, though, this creates a direct conflict with existing commitments. Most large organisations are already running Microsoft 365 Copilot across their workforce. Microsoft has launched Copilot Tasks as its own agentic workflow product, building on integrations across Teams, Outlook, and the rest of the Microsoft stack. The case for Microsoft is not that its models are necessarily better. It is that the integration work is already done.
OpenAI’s counter-argument is that integration depth matters less as agents become more capable of operating across systems independently. That may be true eventually. For IT leaders managing deployments today, it is not yet a practical reality.
What Spud still has to prove
Spud will likely land within weeks, with a proper name, a set of benchmarks, and a pricing structure. What it will actually mean for enterprise productivity is harder to predict. The real-time audio capability reported by some outlets, if accurate, would be genuinely relevant for UC buyers thinking about how AI sits inside voice and video workflows rather than just processing the output afterward. But that remains unconfirmed, and enterprise buyers have learned to wait for the product rather than the announcement.
The broader point is this. OpenAI has the funding, the revenue trajectory, and the product momentum to be a serious long-term player in enterprise AI. What the organisations getting the most from AI have in common is not access to the most advanced models. It is clarity about what problem they are solving, how they will measure success, and what governance looks like at scale. A new model from OpenAI, whatever it ends up being called, does not change that equation.
It just gives procurement teams something new to evaluate. Starting, apparently, with a potato.