AI could be sold like electricity on a usage meter, OpenAI CEO Sam Altman says

March 16, 2026: OpenAI CEO Sam Altman said artificial intelligence may eventually be sold as a metered utility similar to electricity or water. The model would charge customers based on usage of “tokens,” the units used by AI systems to process input and generate responses.

Altman outlined the idea during remarks at the BlackRock Infrastructure Summit in Washington, D.C., where he described how AI providers may structure their business models as demand for computing capacity grows.

“We see a future where intelligence is a utility like electricity or water and people buy it from us on a meter and use it for whatever they want to use it for,” Altman said.

The concept reflects how generative AI services are already priced at the API level. Each request to an AI model consumes compute resources measured in tokens, which represent chunks of processed text or data used to generate responses.
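Per-token metering of this kind can be sketched in a few lines. The rates and token counts below are hypothetical, chosen only to illustrate the billing mechanics; they are not OpenAI's actual prices.

```python
# Illustrative sketch of metered, per-token billing.
# Rates are hypothetical, not any provider's real pricing.

def metered_cost(input_tokens: int, output_tokens: int,
                 input_rate: float, output_rate: float) -> float:
    """Return the cost of one request, billed per million tokens,
    the way generative AI APIs typically meter usage."""
    return ((input_tokens / 1_000_000) * input_rate
            + (output_tokens / 1_000_000) * output_rate)

# Example: 2,000 input tokens and 500 output tokens at hypothetical
# rates of $1.00 and $4.00 per million tokens respectively.
cost = metered_cost(2_000, 500, input_rate=1.00, output_rate=4.00)
print(f"${cost:.4f}")  # -> $0.0040
```

Under a utility model, the same meter would simply run continuously, with the bill reflecting total tokens consumed over a period rather than per request.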

Altman said the availability of compute, the infrastructure required to train and run AI systems, will ultimately determine how widely these services can be distributed. If companies cannot build enough computing capacity to meet demand, access could become constrained or prices could rise significantly.

Technology companies are investing heavily in infrastructure to expand that capacity. Industry estimates suggest hundreds of billions of dollars will be spent on AI data centres and computing systems in the coming years as demand for generative AI tools accelerates.

The scale of computing required is growing rapidly. At CES 2026, AMD CEO Lisa Su said global AI systems may require more than 10 yottaflops of computing power within five years. For context, that is roughly 10,000 times greater than global AI capacity in 2022.
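The units here are easy to check: one yottaflop is 10^24 floating-point operations per second, so the cited figures imply a 2022 baseline of roughly one zettaflop (10^21 FLOPS). A quick sanity check of that arithmetic:

```python
# Sanity check on the cited scale: 10 yottaflops is 10**25 FLOPS.
# If that is ~10,000x the 2022 figure, the implied 2022 capacity
# is about 10**21 FLOPS, i.e. roughly one zettaflop.
YOTTA = 10**24

projected = 10 * YOTTA            # 10 yottaflops
implied_2022 = projected // 10_000
print(f"{implied_2022:.0e} FLOPS")  # -> 1e+21 FLOPS
```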

That expansion presents practical challenges. Large AI data centres can consume electricity comparable to that of small cities, and infrastructure constraints such as grid capacity, transformer shortages and slow permitting for power transmission projects could affect how quickly new compute facilities are built.

Inside technology companies, access to computing resources is already tightly managed. Engineers often compete for GPU capacity to train models or run experiments, and AI compute budgets are increasingly becoming part of internal planning.

According to Altman, the long-term goal for AI providers is to move away from a “capacity constrained” environment by building enough infrastructure to support widespread use of AI systems.




Jim Love

Jim is an author and podcast host with over 40 years in technology.
