OpenAI is reportedly running out of GPUs almost every day, as the company struggles to balance explosive user growth with limited computing resources.
Until its new “Stargate” supercomputer or additional compute capacity goes live next year, OpenAI is expected to face ongoing pressure on its infrastructure.
Growing Demand, Limited Compute

OpenAI’s user base has skyrocketed, reportedly surpassing Google and Meta in daily AI interactions. However, its access to GPUs remains far smaller than that of either tech giant.
This growing imbalance is pushing OpenAI’s systems to the limit, especially as the company continues to release powerful, GPU-heavy features.
Recent updates like the new GPT-5 “thinking model” for free users have intensified the problem. The model allows deeper reasoning and more advanced responses but consumes much more compute power per query.
While the update has boosted engagement and user satisfaction, it has also placed massive strain on OpenAI’s GPU infrastructure.
Key points:
- OpenAI runs out of GPUs almost daily.
- The GPT-5 “thinking model” uses much more compute.
- Stargate supercomputer, expected in 2026, may ease the shortage.
- OpenAI has more users than Google or Meta, but less GPU power.
Sora and Other AI Features Add More Pressure

Another major source of GPU demand is Sora, OpenAI’s video generation model. The tool requires significant GPU resources to generate and render complex video and AI-driven animations.
Together with GPT-5, Sora is forcing OpenAI to carefully manage compute allocation between models, prioritizing stability over new releases.
Insiders suggest that OpenAI is rationing compute power across different features to ensure uptime for its most popular models.
Until the new Stargate infrastructure or other large-scale compute solutions come online, OpenAI’s engineers will continue to face tough choices about resource allocation.
Despite the challenges, OpenAI remains committed to expanding access and maintaining model performance — even as the OpenAI GPU shortage highlights the growing limits of current AI hardware capacity.