
🤖 Ghostwritten by Claude Opus 4.6 · Fact-checked & edited by GPT 5.4 · Curated by Tom Hundley
Sam Altman is trying to reframe AI from a breakthrough product into a utility-like service: something enterprises consume continuously, budget for operationally, and build around as infrastructure. For executives, that framing matters less as rhetoric than as a buying signal. If AI is sold like infrastructure, you should evaluate it the way you evaluate cloud platforms or core data systems: for portability, governance, recurring cost, and dependency risk.
That shift has immediate implications. It changes how you think about vendor lock-in, how you budget for agentic workloads, and how seriously you treat concentration risk around OpenAI and Microsoft. It also suggests that the next phase of enterprise AI adoption will be driven less by one-off chatbot pilots and more by embedded, ongoing workflows.
Below, we unpack what Altman appears to be signaling, where the utility analogy is useful, where it breaks down, and what practical steps leaders should take over the next 12 to 24 months.
TL;DR: Altman remains one of the most influential figures in AI, and his shift toward infrastructure language signals a more enterprise-focused, operational phase of AI adoption.
If you've been following our coverage of Altman's 2026 playbook, you know the pattern. Altman often alternates between ambitious long-range predictions and pragmatic enterprise messaging. The utility framing fits the second category: it presents AI less as spectacle and more as a service layer companies can rely on.
Altman co-founded OpenAI in 2015 and returned as CEO after the board crisis in November 2023. Since then, OpenAI has expanded from a research lab with breakout consumer products into a company increasingly defined by enterprise APIs, platform partnerships, and infrastructure demands.
Some of the surrounding claims in this debate are harder to verify than the broad strategic direction. Reports have placed OpenAI's annualized revenue above $3 billion by late 2024, but private-company revenue figures should be treated as reported estimates rather than audited facts. Likewise, references to a specific March 24, 2026 pitch and GPT-5.4 capabilities are too recent to verify independently here. Still, the broader pattern is consistent with OpenAI's public trajectory: more emphasis on enterprise reliability, recurring usage, and agentic workflows.
TL;DR: The utility metaphor is strategically smart because it normalizes AI and supports recurring enterprise spend, but it oversimplifies how competitive and fast-changing the market still is.
There's a reason Altman would choose a utility metaphor. It does several things at once.
Utilities are familiar. Framing AI as infrastructure rather than as an autonomous force can reduce perceived novelty and risk. That matters in a market where public skepticism remains meaningful. Pew Research Center found in 2023 that Americans were more concerned than excited about AI in daily life, and subsequent polling has continued to show substantial public unease. The exact sentiment level may shift over time, but the broader point holds: normalization is a strategic communications move.
The utility analogy also helps justify heavy capital expenditure. Data centers, chips, networking, and energy contracts all look more defensible if AI is framed as a long-term service layer rather than a speculative product category. For enterprise buyers, that framing encourages a move from experimental budgets to operating budgets.
This is the part executives should watch most closely. The more your workflows depend on one provider's APIs, orchestration tools, and agent frameworks, the harder it becomes to switch. That does not make the utility model wrong, but it does mean the analogy has strategic consequences.
| Dimension | "AI revolution" framing | "AI utility" framing |
|---|---|---|
| Political exposure | Higher scrutiny and fear | Lower-friction normalization |
| Enterprise buying cycle | Pilot-driven and experimental | Operational and recurring |
| Investor narrative | Growth and disruption | Durability and infrastructure |
| Competitive moat | Best model wins today | Deepest integration wins over time |
| Customer risk | Tool sprawl | Vendor concentration |
The analogy also has limits. Utilities are typically regulated, geographically bounded, and expected to provide stable service under public oversight. AI markets are still competitive, global, and changing quickly. So the metaphor is useful for budgeting and architecture decisions, but not a perfect description of the industry.
TL;DR: OpenAI's close relationship with Microsoft creates real concentration risk, so enterprises should design for provider portability from the start.
One of the most important strategic issues here is OpenAI's dependence on Microsoft for capital, infrastructure, and distribution. Microsoft has invested heavily in OpenAI, and public reporting has widely cited a figure of roughly $13 billion in committed investment. Microsoft also provides critical Azure infrastructure for OpenAI services while simultaneously competing in enterprise AI through Copilot and related products.
That combination creates a dependency chain executives should not ignore. If your company builds core workflows on OpenAI services, your exposure is not limited to OpenAI's roadmap. It also includes the stability of the OpenAI-Microsoft relationship, Azure capacity, pricing changes, and shifts in commercial terms.
As we noted in our analysis of the OpenAI-Microsoft dependency dynamic, the practical response is not panic. It is architecture.
A sensible enterprise approach includes:
- An abstraction layer between your business logic and any single provider's model APIs, so prompts and workflows stay portable.
- At least one credible fallback provider that is tested periodically, not just listed on a slide.
- Clear ownership of where prompts, evaluation data, and agent configurations live, so they can be exported if you switch.
- Workflow-level cost visibility rather than per-seat budgeting.
In other words, if AI becomes infrastructure, portability becomes governance.
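To make that concrete, here is a minimal sketch of the abstraction-layer idea in Python. The provider classes and the `summarize_ticket` helper are hypothetical placeholders, not any vendor's actual SDK; the point is that business logic targets an interface you own.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class CompletionResult:
    text: str
    provider: str
    input_tokens: int
    output_tokens: int


class LLMProvider(Protocol):
    """The interface your business logic depends on -- never a vendor SDK directly."""

    name: str

    def complete(self, prompt: str, max_tokens: int = 512) -> CompletionResult:
        ...


class PrimaryProvider:
    """Hypothetical adapter; in practice this would wrap your main vendor's API."""

    name = "primary"

    def complete(self, prompt: str, max_tokens: int = 512) -> CompletionResult:
        # Placeholder response; a real adapter would make the vendor call here.
        return CompletionResult(text=f"[primary] {prompt[:40]}...", provider=self.name,
                                input_tokens=len(prompt.split()), output_tokens=max_tokens)


class FallbackProvider:
    """Hypothetical adapter for a second provider kept warm as a fallback."""

    name = "fallback"

    def complete(self, prompt: str, max_tokens: int = 512) -> CompletionResult:
        return CompletionResult(text=f"[fallback] {prompt[:40]}...", provider=self.name,
                                input_tokens=len(prompt.split()), output_tokens=max_tokens)


def summarize_ticket(provider: LLMProvider, ticket_text: str) -> str:
    """Business logic written against the interface, so the provider is swappable."""
    prompt = f"Summarize this support ticket in two sentences:\n{ticket_text}"
    return provider.complete(prompt).text


if __name__ == "__main__":
    print(summarize_ticket(PrimaryProvider(), "Customer reports a duplicate invoice charge."))
```

The specific classes matter less than the boundary they create: prompts, evaluation data, and workflow logic stay on your side of the interface, which is what makes a later provider switch a migration rather than a rewrite.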
TL;DR: The strongest case for the utility model is agentic software that runs continuously inside business processes, but leaders should validate reliability and governance before scaling it.
The utility framing becomes more credible when AI stops being just a chat interface and starts acting inside workflows. That is why so much attention has shifted toward agents, tool use, and software automation.
Claims about "GPT-5.4" and specific computer-use features are not independently verifiable here and may reflect very recent product positioning. But the underlying category is real. Multiple frontier-model providers are pursuing systems that can navigate interfaces, call tools, retrieve data, and complete multi-step tasks with limited human intervention.
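The underlying pattern is easier to see stripped down. The loop below is a generic illustration of tool use and multi-step task completion, not any vendor's agent API; `plan_next_step`, the tool registry, and the invoice example are all hypothetical stand-ins.

```python
from typing import Callable

# Hypothetical tool registry: each tool is a plain function the agent may call.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_invoice": lambda arg: f"invoice {arg}: $1,240, due 2026-04-01",
    "update_record": lambda arg: f"record {arg} updated",
}


def plan_next_step(goal: str, history: list[str]) -> dict:
    """Stand-in for a model call that decides the next action.

    A real system would send the goal and history to a model and parse its
    chosen tool and arguments; here we hard-code a tiny two-step plan.
    """
    if not history:
        return {"tool": "lookup_invoice", "arg": "INV-1001"}
    if len(history) == 1:
        return {"tool": "update_record", "arg": "INV-1001"}
    return {"tool": None, "arg": None}  # signal completion


def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Multi-step loop: plan, call a tool, record the observation, repeat."""
    history: list[str] = []
    for _ in range(max_steps):
        step = plan_next_step(goal, history)
        if step["tool"] is None:
            break
        observation = TOOLS[step["tool"]](step["arg"])
        history.append(f"{step['tool']} -> {observation}")
    return history


if __name__ == "__main__":
    for line in run_agent("Reconcile invoice INV-1001"):
        print(line)
```

Whatever the provider, this is the shape of the workload: repeated model calls wrapped around your systems of record, which is exactly what turns intermittent usage into continuous usage.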
As we explored in our coverage of Altman's enterprise pivot and 2028 timeline, this is the shift that matters operationally. A chatbot creates intermittent demand. An agent embedded in finance, support, operations, or sales creates ongoing demand.
That distinction matters financially. A user asking occasional questions generates bursty consumption. An always-on agent reviewing tickets, reconciling invoices, or updating records can generate sustained usage and more predictable recurring spend.
One widely cited Gartner forecast has suggested that by 2028, at least 15% of day-to-day work decisions could be made autonomously through agentic AI. As with all long-range analyst forecasts, treat that as directional rather than certain. The more useful takeaway is simpler: if agents become reliable enough for production work, AI costs will shift from occasional experimentation to ongoing operational expense.
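A rough comparison shows why that shift matters for budgeting. Every number below is an assumption chosen to make the arithmetic visible, not a quoted price or a benchmark.

```python
# Illustrative assumptions only -- substitute your own volumes and contracted rates.
PRICE_PER_1K_TOKENS = 0.01   # blended input/output price, assumed
TOKENS_PER_REQUEST = 2_000   # prompt + response, assumed

# Bursty usage: an analyst asking occasional questions on working days.
analyst_requests_per_day = 20
analyst_monthly_cost = (analyst_requests_per_day * 22 * TOKENS_PER_REQUEST / 1_000
                        * PRICE_PER_1K_TOKENS)

# Sustained usage: an always-on agent triaging support tickets around the clock.
tickets_per_day = 1_500
calls_per_ticket = 4  # classify, retrieve, draft, verify
agent_monthly_cost = (tickets_per_day * 30 * calls_per_ticket * TOKENS_PER_REQUEST / 1_000
                      * PRICE_PER_1K_TOKENS)

print(f"Occasional analyst: ~${analyst_monthly_cost:,.0f}/month")
print(f"Always-on agent:    ~${agent_monthly_cost:,.0f}/month")
```

The exact figures are beside the point; the shape is what matters. Agentic workloads turn AI spend into a volume-driven operating line that deserves the same monitoring as cloud compute.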
For planning purposes, ask:
- Which workflows would an agent run continuously, and at what expected task volume?
- How much human supervision and exception handling will each workflow require?
- What do those workflows cost at production volume, not just in a pilot?
- How will the agent's actions be audited, and how will failures be contained?
TL;DR: Treat superintelligence timelines as narrative framing, not as an operating plan; your real planning horizon is the next 12 to 24 months.
Altman's long-range claims about superintelligence and data centers surpassing human intellectual capacity are effective at attracting attention, but they are not a sound basis for enterprise planning. Such predictions are inherently speculative, and no executive team should build a roadmap around a date-specific superintelligence forecast.
What is worth planning for is much more concrete:
- Model capabilities that keep improving, which your architecture should be able to adopt without a rebuild.
- Recurring, usage-driven costs that need the same controls as any other cloud service.
- Concentration risk around a small number of providers, managed through portability and fallback options.
- Governance expectations around auditability, data handling, and failure containment.
That is why the utility framing is helpful in one narrow sense. It encourages leaders to think in terms of durable architecture, service management, and cost control rather than hype cycles.
If you want a strategic lens, use this one: build for flexibility, assume capabilities will improve, and avoid locking your business logic into any single vendor's stack. Our earlier analysis of Altman's 2028 warning makes the same point from a different angle: the headline prediction matters less than the procurement and governance decisions you make now.
**What is Altman actually signaling when he frames AI as a utility?**
He appears to be framing AI as a continuously available service layer that companies consume operationally rather than build from scratch. For enterprises, that implies subscription-like budgeting, reliability expectations, and deeper integration into everyday workflows.
**Is OpenAI's dependence on Microsoft a risk enterprises should plan for?**
Yes. The issue is not that the partnership is inherently unstable, but that concentration risk grows when one vendor depends heavily on another for infrastructure and distribution. Enterprises should reduce exposure by designing for portability and maintaining at least one credible fallback option.
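In practice, a credible fallback can be as simple as a routing function that tries providers in order, assuming they share a common interface like the abstraction sketched earlier. The function below is illustrative, not tied to any specific SDK.

```python
def complete_with_fallback(providers, prompt, max_tokens=512):
    """Try each configured provider in order; raise only if all of them fail.

    `providers` is a list of objects exposing a `complete(prompt, max_tokens)`
    method (see the abstraction-layer sketch above). Which providers you list,
    and in what order, is a procurement decision as much as a technical one.
    """
    errors = []
    for provider in providers:
        try:
            return provider.complete(prompt, max_tokens=max_tokens)
        except Exception as exc:  # in practice, catch provider-specific errors
            name = getattr(provider, "name", type(provider).__name__)
            errors.append(f"{name}: {exc}")
    raise RuntimeError("All providers failed: " + "; ".join(errors))
```

A fallback that is never exercised is not credible; routing a small share of real traffic through it periodically is the only way to know the switch works when you need it.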
**Are AI agents ready for production use?**
In some narrow, well-governed workflows, yes. In broad, unsupervised settings, caution is still warranted. The right test is not whether an agent can complete a demo, but whether it can perform reliably, be audited, and fail safely in your environment.
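One way to make "reliable, auditable, fails safely" operational is to gate agent actions behind a policy like the sketch below. The threshold, the log destination, and the callables are assumptions you would replace with your own controls.

```python
import json
from datetime import datetime, timezone

APPROVAL_THRESHOLD = 0.80             # assumed policy: below this confidence, a human reviews
AUDIT_LOG_PATH = "agent_audit.jsonl"  # hypothetical destination


def execute_with_guardrails(action: str, confidence: float, execute, escalate):
    """Log every proposed action, auto-run only high-confidence ones, escalate the rest."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "confidence": confidence,
        "auto_executed": confidence >= APPROVAL_THRESHOLD,
    }
    with open(AUDIT_LOG_PATH, "a") as log:
        log.write(json.dumps(record) + "\n")

    if confidence >= APPROVAL_THRESHOLD:
        return execute(action)   # fail safely: execute() should be idempotent or reversible
    return escalate(action)      # route to a human queue instead of acting
```

The gate is deliberately boring: an append-only log plus an escalation path covers most of what an auditor or an incident review will ask for.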
**How should we budget for agentic AI?**
Move beyond per-seat thinking. Model costs at the workflow level: expected task volume, runtime, supervision requirements, exception handling, and downstream savings. Agentic systems often look cheap in pilots and expensive at scale unless usage controls are built in early.
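As an illustration, a workflow-level model can be a dozen lines of arithmetic. Every input below is an assumption to replace with your own volumes, contracted rates, and labor costs; the invoice-reconciliation agent is hypothetical.

```python
# Hypothetical invoice-reconciliation agent, modeled per month. All inputs are assumptions.
tasks_per_month = 12_000
model_cost_per_task = 0.06        # tokens consumed per task x your contracted rate
exception_rate = 0.12             # share of tasks kicked back to a human
minutes_per_exception = 6
supervision_hours_per_month = 20  # spot checks, prompt and evaluation upkeep
loaded_hourly_rate = 55.0
minutes_saved_per_task = 4        # analyst time the agent replaces on the happy path

model_spend = tasks_per_month * model_cost_per_task
exception_labor = (tasks_per_month * exception_rate * minutes_per_exception / 60
                   * loaded_hourly_rate)
supervision_cost = supervision_hours_per_month * loaded_hourly_rate
downstream_savings = (tasks_per_month * (1 - exception_rate) * minutes_saved_per_task / 60
                      * loaded_hourly_rate)

total_cost = model_spend + exception_labor + supervision_cost
print(f"Monthly cost:    ${total_cost:,.0f}")
print(f"Monthly savings: ${downstream_savings:,.0f}")
print(f"Net:             ${downstream_savings - total_cost:,.0f}")
```

In this illustrative scenario the model bill is the smallest line item; exception handling and supervision are what scale with volume, which is why usage controls and quality thresholds need to be designed in early.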
**Is the utility framing good or bad for enterprise buyers?**
It's both. It can make AI easier to buy, standardize, and scale. But it can also increase lock-in if your prompts, workflows, and governance processes become too dependent on one provider's ecosystem. The goal is to capture the convenience without surrendering strategic flexibility.
Altman's utility framing is not just a messaging exercise. It is a signal about where enterprise AI is heading: toward recurring usage, deeper workflow integration, and more infrastructure-like buying decisions. That makes the metaphor useful, even if it is not a perfect description of the market.
For executives, the right response is neither hype nor cynicism. Treat AI as an emerging utility in your planning model, but build with the discipline you would apply to any critical dependency: portability, governance, cost visibility, and fallback options.
If your team is evaluating how to operationalize AI without creating long-term lock-in, Elegant Software Solutions can help you assess vendors, design portable architectures, and prioritize the workflows most likely to deliver measurable value. Explore our latest leadership analysis on the ESS blog, and come back tomorrow for the next installment in our industry-leaders series.