OpenAI, AWS ink $38bn deal to power next-generation AI systems

Joy Agwunobi 

OpenAI has entered into a multi-year, $38 billion partnership with Amazon Web Services (AWS) to access and scale massive computing infrastructure required for its next-generation artificial intelligence (AI) workloads.

Under the agreement, OpenAI will leverage AWS’s Amazon EC2 UltraServers, which feature hundreds of thousands of chips and can scale to tens of millions of CPUs, to support its rapidly growing AI operations, from training frontier models to serving global users of ChatGPT and other generative AI systems.

In a joint statement, both companies said the deal will span seven years, giving OpenAI immediate and expanding access to AWS’s high-performance infrastructure comprising hundreds of thousands of NVIDIA GPUs. This will enable the company to efficiently run complex AI models at unprecedented scale while benefiting from the security, reliability, and performance that AWS’s cloud infrastructure is known for.

AWS said it will deploy a sophisticated architecture designed to optimize AI processing and performance, with GPU clusters, including the new NVIDIA GB200 and GB300 units, connected through Amazon EC2 UltraServers on a single, high-speed network. This configuration will allow OpenAI to run training and inference workloads with low latency and maximum efficiency.

The infrastructure rollout, according to the statement, will be completed by the end of 2026, with room for further expansion into 2027 and beyond.

Commenting on the development, Sam Altman, co-founder and CEO of OpenAI, said the partnership reinforces OpenAI’s mission to make advanced AI accessible to all.

“Scaling frontier AI requires massive, reliable compute. Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone,” he said.

On his part, Matt Garman, CEO of AWS, described the collaboration as a significant alignment between the two technology leaders.

“As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as a backbone for their AI ambitions,” Garman stated, adding, “The breadth and immediate availability of optimised compute demonstrates why AWS is uniquely positioned to support OpenAI’s vast AI workloads.”

The deal underscores the intensifying race among tech giants to secure computing power for large-scale AI development, amid surging global demand for infrastructure capable of training and deploying frontier models. AWS currently operates clusters exceeding 500,000 chips, positioning it as one of the few providers capable of delivering such large-scale AI compute reliably and securely.

The companies also noted that their collaboration extends beyond infrastructure. Earlier this year, OpenAI’s open-weight foundation models were made available on Amazon Bedrock, AWS’s platform for generative AI, giving millions of developers and enterprises access to OpenAI’s capabilities. The models have since become some of the most popular on the service, with clients such as Comscore, Peloton, Thomson Reuters, Triomics, and Verana Health using them for tasks ranging from scientific analysis and mathematical problem-solving to coding and agentic workflows.

Through this new phase of the partnership, OpenAI is expected to accelerate the development of its frontier models, deepen AI adoption across industries, and ensure its systems continue to deliver high-quality performance to millions of users globally, all while anchored in the scale, security, and reliability of AWS’s global cloud ecosystem.
