
Meta has entered into a major long-term agreement with Amazon Web Services to significantly expand its computing infrastructure using Amazon’s in-house designed processor chips. The multi-year deal, valued in the billions, will involve Meta deploying tens of millions of ARM-based Graviton CPU cores across its data centers. These chips will primarily support AI workloads like inference and large-scale data processing rather than model training. The move is aimed at reducing dependence on traditional GPU suppliers while improving efficiency and cost at scale.
The agreement marks one of the largest enterprise adoptions of custom cloud silicon to date. Rather than relying exclusively on third-party chipmakers, the social media behemoth is increasingly integrating vertically with cloud providers that offer tightly optimized hardware-software stacks. Last year, for example, Meta signed a more than $10 billion, six-year cloud infrastructure deal with Google Cloud to secure additional AI computing capacity. AWS's Graviton processors, built on the ARM architecture, are known for delivering strong performance per watt, letting companies run high-volume workloads at lower energy and operational costs than conventional x86-based systems.
“Graviton is built on the AWS Nitro System, which uses dedicated hardware and software to deliver high performance, high availability, and high security. The Graviton5 chip features 192 cores and a cache that is five times larger than the previous generation, which reduces delays in how quickly those cores communicate with each other by up to 33%,” Amazon noted.
Meta's demand for computing power has surged as it aggressively expands its AI capabilities across its ecosystem, including Facebook, Instagram, WhatsApp, and its growing suite of generative AI products. The company is investing heavily in AI agents, recommendation systems, and real-time content generation tools, all of which require not just high-end GPUs for training but also massive fleets of CPUs for deployment and scaling. In production environments, CPUs play a critical role in handling inference workloads, managing distributed systems, processing data pipelines, and coordinating AI services in real time.
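To make the CPU-side serving role concrete, here is a minimal sketch of the kind of pattern such fleets run at scale: many lightweight requests micro-batched and scored in parallel across cores. All names (`ToyModel`, `serve_batch`) and the dot-product "model" are illustrative stand-ins, not Meta's or AWS's actual stack.

```python
# Sketch of CPU-side inference serving: score a batch of requests
# in parallel across worker threads (a proxy for many CPU cores).
from concurrent.futures import ThreadPoolExecutor

class ToyModel:
    """Stand-in for a small deployed model: a fixed weight vector."""
    def __init__(self, weights):
        self.weights = weights

    def score(self, features):
        # Dot product as a placeholder for real inference math.
        return sum(w * x for w, x in zip(self.weights, features))

def serve_batch(model, requests, workers=4):
    """Score a micro-batch of feature vectors across CPU workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(model.score, requests))

model = ToyModel([0.5, -1.0, 2.0])
batch = [[1, 1, 1], [2, 0, 1]]
print(serve_batch(model, batch))  # [1.5, 3.0]
```

The point of the pattern, and of a high-core-count chip like Graviton, is throughput: each individual request is cheap, so the economics hinge on how many of them a machine can serve per watt, not on peak single-request speed.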
By committing to tens of millions of CPU cores, Meta is effectively building one of the largest ARM-based compute footprints in the industry. The deal also highlights a strategic effort by the company to diversify its hardware dependencies. While GPUs, particularly those from Nvidia, remain central to training large language models, supply constraints and rising costs have prompted major tech firms to explore alternatives. Meta continues to collaborate with other semiconductor players such as AMD and Broadcom as part of a broader multi-vendor strategy. Just recently, the Mark Zuckerberg-led firm announced a $60 billion AI chip agreement with AMD, which includes an option that could allow Meta to acquire up to a 10% stake in the chipmaker.