AI could be sold like electricity on a usage meter, OpenAI CEO Sam Altman says

March 16, 2026

OpenAI CEO Sam Altman said artificial intelligence may eventually be sold as a metered utility similar to electricity or water. The model would charge customers based on usage of “tokens,” the units used by AI systems to process input and generate responses.

Altman outlined the idea during remarks at the BlackRock Infrastructure Summit in Washington, D.C., where he described how AI providers may structure their business models as demand for computing capacity grows.

“We see a future where intelligence is a utility like electricity or water and people buy it from us on a meter and use it for whatever they want to use it for,” Altman said.

The concept reflects how generative AI services are already priced internally. Each request to an AI model consumes compute resources measured in tokens, which represent chunks of processed text or data used to generate responses.
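To make the metering idea concrete, here is a minimal sketch of how per-token billing might be tallied. The function name, rates, and token counts are all hypothetical, invented for illustration; they are not OpenAI's actual pricing.

```python
# Hypothetical illustration of metered, token-based AI billing.
# Rates and token counts are invented for this example.

def metered_cost(input_tokens, output_tokens,
                 input_rate_per_1k=0.0005, output_rate_per_1k=0.0015):
    """Return the charge for one request, billed per 1,000 tokens,
    much as a utility meter bills per kilowatt-hour."""
    return (input_tokens / 1000) * input_rate_per_1k \
         + (output_tokens / 1000) * output_rate_per_1k

# A month's bill is just the sum over every metered request.
requests = [(1200, 350), (800, 500), (15000, 4000)]  # (input, output) tokens
monthly_bill = sum(metered_cost(i, o) for i, o in requests)
print(f"${monthly_bill:.4f}")  # prints $0.0158
```

The analogy to electricity holds because the unit of consumption is small, uniform, and counted automatically: the customer pays for what flows through the meter, not for a fixed subscription tier.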

Altman said the availability of compute, the infrastructure required to train and run AI systems, will ultimately determine how widely these services can be distributed. If companies cannot build enough computing capacity to meet demand, access could become constrained or prices could rise significantly.

Technology companies are investing heavily in infrastructure to expand that capacity. Industry estimates suggest hundreds of billions of dollars will be spent on AI data centres and computing systems in the coming years as demand for generative AI tools accelerates.

The scale of computing required is growing rapidly. At CES 2026, AMD CEO Lisa Su said global AI systems may require more than 10 yottaflops of computing power within five years. For context, that is roughly 10,000 times greater than global AI capacity in 2022.

That expansion presents practical challenges. Large AI data centres can consume electricity comparable to that of small cities, and infrastructure constraints such as grid capacity, transformer shortages and slow permitting for power transmission projects could affect how quickly new compute facilities are built.

Inside technology companies, access to computing resources is already tightly managed. Engineers often compete for GPU capacity to train models or run experiments, and AI compute budgets are increasingly becoming part of internal planning.

According to Altman, the long-term goal for AI providers is to move away from a “capacity constrained” environment by building enough infrastructure to support widespread use of AI systems.




Jim Love

Jim is an author and podcast host with over 40 years in technology.
