
    Meta locks in 1GW chip deal with Broadcom as CEO exits board


[Meta](https://meta.com) just made its biggest bet yet on custom silicon, committing to deploy one gigawatt of compute capacity built on its in-house MTIA chips, co-designed with [Broadcom](https://broadcom.com), in a sweeping multiyear deal announced Tuesday. The move signals Meta’s aggressive push to reduce dependence on [Nvidia](https://nvidia.com) while scaling its AI infrastructure to unprecedented levels. In an unexpected twist, Broadcom CEO Hock Tan will step down from Meta’s board to avoid conflicts of interest as the partnership deepens.

[Meta](https://meta.com) is going all-in on custom chips. The social media giant announced Tuesday it will deploy a full gigawatt of its Meta Training and Inference Accelerator (MTIA) chips, designed in partnership with semiconductor powerhouse [Broadcom](https://broadcom.com). The scale is staggering: one gigawatt of compute power could theoretically handle inference for hundreds of millions of AI queries daily, positioning Meta to run much of its AI inference stack on proprietary silicon.

The partnership comes with a significant governance shakeup. [Broadcom](https://broadcom.com) CEO Hock Tan will leave Meta’s board of directors to avoid conflicts of interest as the commercial relationship balloons into what industry observers are calling one of the largest custom chip deals in tech history. Tan joined Meta’s board in 2024, but the expanding partnership makes his position untenable under typical corporate governance standards.

    Meta’s MTIA chips target AI inference workloads – the process of running trained models to generate responses, recommendations, and content across Facebook, Instagram, and WhatsApp. While [Nvidia](https://nvidia.com) dominates the AI training market with its H100 and upcoming Blackwell GPUs, Meta is betting it can build more cost-effective inference chips tailored specifically to its workloads. The company first unveiled MTIA prototypes in 2023, but this Broadcom deal represents the first industrial-scale deployment commitment.

    The one-gigawatt figure isn’t just marketing speak. For context, that’s roughly equivalent to the power consumption of a small city or enough to run approximately 350,000 high-end Nvidia H100 GPUs simultaneously. Meta plans to deploy these chips across its data center footprint over the next several years, with [Broadcom](https://broadcom.com) handling the intricate chip design work, packaging, and manufacturing coordination with foundry partners like TSMC.
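The GPU-equivalence arithmetic depends heavily on what counts as per-unit power. A minimal sketch of the calculation, where every per-unit wattage and overhead figure is an illustrative assumption rather than a number from the deal:

```python
# Back-of-envelope: how many accelerators fit inside a 1 GW power budget?
# All per-unit figures below are illustrative assumptions, not disclosed terms.

def accelerators_per_budget(budget_watts: float, unit_watts: float, pue: float) -> int:
    """Divide a facility power budget by per-unit draw scaled by PUE
    (power usage effectiveness, i.e. cooling and facility overhead)."""
    return int(budget_watts / (unit_watts * pue))

ONE_GIGAWATT = 1_000_000_000  # watts

# Counting only an H100's ~700 W TDP with a typical hyperscaler PUE of ~1.2:
print(accelerators_per_budget(ONE_GIGAWATT, 700, 1.2))

# Counting each GPU's full server share (host CPUs, memory, networking),
# the per-unit figure might be 2-3x higher; an assumed ~2,400 W share
# lands in the same ballpark as the article's ~350,000 estimate:
print(accelerators_per_budget(ONE_GIGAWATT, 2400, 1.2))
```

The spread between the two results shows why published GPU-per-gigawatt estimates vary by several hundred thousand units: the answer hinges on whether you count bare chip TDP or the fully loaded data-center draw.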

    This isn’t Meta’s first rodeo with custom silicon. The company already designs its own video transcoding chips and network processors, but MTIA represents its most ambitious semiconductor project yet. By working with [Broadcom](https://broadcom.com) rather than going fully in-house like [Google](https://google.com) did with TPUs, Meta gets access to world-class chip design expertise without building an entire semiconductor division from scratch.

    The timing couldn’t be more strategic. As AI costs spiral and Nvidia’s GPUs remain in short supply despite massive production increases, hyperscalers are racing to develop alternatives. [Amazon](https://amazon.com) has its Trainium and Inferentia chips, [Google](https://google.com) keeps refining TPUs, and [Microsoft](https://microsoft.com) recently announced its own Maia AI accelerators. Meta was conspicuously behind in this race – until now.

    Broadcom benefits enormously from the arrangement. The company has positioned itself as the go-to partner for hyperscalers wanting custom AI chips without building internal expertise. Similar deals with [Google](https://google.com) for TPU development have generated billions in revenue for [Broadcom](https://broadcom.com). The Meta commitment likely represents several billion dollars in design fees and ongoing royalties over the contract’s lifetime.

But the deal raises questions about Meta’s [Nvidia](https://nvidia.com) strategy going forward. The company has already ordered hundreds of thousands of H100 GPUs and is expected to be a major customer for Nvidia’s next-gen Blackwell architecture. Industry sources suggest Meta will run a hybrid approach – using Nvidia chips for cutting-edge AI training and research while routing production inference workloads to cheaper, more efficient MTIA silicon.

    The board departure adds an intriguing subplot. Hock Tan’s exit eliminates potential conflicts but also removes a key semiconductor industry voice from Meta’s governance. Tan brought deep hardware expertise to a board otherwise dominated by software and social media veterans. His departure suggests the Broadcom partnership will be substantial enough that his continued board presence would raise serious independence concerns.

    Meta hasn’t disclosed the financial terms, but comparable custom chip partnerships typically involve upfront design fees in the hundreds of millions plus per-chip royalties. With a gigawatt of deployment, even modest per-chip costs add up fast. Analysts estimate the total contract value could easily exceed $5 billion over its full term, though Meta’s vertical integration should still deliver savings compared to buying equivalent Nvidia hardware at retail prices.
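The “costs add up fast” point is easy to make concrete. A hedged sketch of the back-of-envelope math, where the per-chip power draw, per-chip cost, and upfront fee are all hypothetical placeholders (Meta disclosed no terms):

```python
# Rough contract-value arithmetic for a gigawatt-scale custom chip deal.
# Every figure here is an illustrative assumption; no terms were disclosed.

def deployment_cost(budget_watts: float, watts_per_chip: float,
                    cost_per_chip: float, upfront_fees: float) -> float:
    """Chips supported by the power budget, times per-chip cost,
    plus one-time design fees."""
    chips = budget_watts / watts_per_chip
    return chips * cost_per_chip + upfront_fees

# Assuming 500 W per MTIA accelerator, $2,000 per chip in royalties and
# manufacturing margin, and $500M in upfront design fees:
total = deployment_cost(1e9, 500, 2_000, 500e6)
print(f"${total / 1e9:.1f}B")  # → $4.5B
```

Even these deliberately modest assumptions land in the multi-billion-dollar range, consistent with analyst estimates that the full contract could exceed $5 billion.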

The announcement sent ripples through the semiconductor sector Tuesday afternoon, with [Broadcom](https://broadcom.com) shares climbing on the news while [Nvidia](https://nvidia.com) slipped modestly as investors reassessed how dependent hyperscalers will remain on its GPUs. The custom chip trend clearly threatens Nvidia’s long-term dominance, even if the GPU giant remains unchallenged for cutting-edge training workloads.

    Meta’s gigawatt-scale chip commitment with Broadcom marks a turning point in the AI infrastructure wars. The deal proves custom silicon has moved from experimental side project to core strategic imperative for hyperscalers drowning in AI compute costs. While Nvidia isn’t going anywhere – training cutting-edge models still demands its GPUs – the inference market is clearly up for grabs. For Meta, success means running its AI empire on chips purpose-built for its workloads at a fraction of retail GPU costs. For the industry, it’s another data point in the slow but inevitable fragmentation of AI hardware away from Nvidia’s near-monopoly. Watch whether other tech giants follow with similar gigawatt-scale custom chip announcements in coming months.
