Mirror Review
November 28, 2025
Meta Platforms, known for building the world’s biggest social apps, is making headlines over its AI chip strategy and a potential shift toward Google’s TPUs.
Reports say Meta is in talks to spend billions to use these chips in its data centers by 2027, and may rent TPUs from Google Cloud as early as next year.
This is a serious effort to diversify beyond NVIDIA, which has long dominated the AI hardware market.
Yet, this development raises an important question.
As Meta builds its own silicon while exploring Google’s chip ecosystem, is the company positioning itself to challenge NVIDIA on cost, scale, and efficiency?
This article explains what is driving the shift and why Meta may be better positioned for the next phase of AI infrastructure.
Why Meta Is Moving Beyond a Single Supplier
AI has become a capital-intensive race, and Meta is among the biggest spenders.
The company expects data-center and AI-related spending to reach $70 billion to $72 billion this year.
Relying solely on NVIDIA for such a massive buildout creates several challenges.
- First, supply constraints: NVIDIA GPUs remain in short supply due to global demand from big tech, startups, and governments. This limits how fast Meta can scale its models.
- Second, rising costs: NVIDIA’s latest Blackwell chips deliver top-tier performance, but they are expensive and require substantial power and cooling infrastructure.
- Third, flexibility: Meta’s AI workloads range from recommendation systems to foundation models, and no single chip architecture serves all of them efficiently.
This is why Meta is exploring a multi-chip strategy. A deal with Google would provide TPUs as an additional training and inference platform, reducing dependence on NVIDIA and increasing Meta’s control over future infrastructure.
A Google spokesperson said Google Cloud is seeing “accelerating demand for both our custom TPUs and NVIDIA GPUs.”
This reflects a wider industry shift toward mixing chip types rather than choosing a single supplier.
Meta’s Own Silicon Strategy: The Missing Piece
Beyond external suppliers, Meta is building its own silicon.
Its MTIA line (Meta Training and Inference Accelerator) is designed to handle Meta’s core models efficiently while lowering cost and power consumption.
Meta AI chips bring several immediate benefits:
- They are optimized for Meta’s ranking and recommendation workloads.
- They cut energy use in production systems.
- They reduce the need to buy GPUs for tasks that don’t require extreme compute.
By combining MTIA with NVIDIA GPUs and potentially Google TPUs, Meta gains flexibility and long-term stability.
This structure mirrors other tech giants that reached hyperscale, like Google with TPUs, Amazon with Trainium and Inferentia, and Microsoft with Azure Maia.
Meta is building the same kind of vertical control that lets companies innovate faster and reduce operational uncertainty.
Why Google’s TPUs Change the Competition
Google’s latest TPU generation, Ironwood, is one of the most discussed chips in the industry.
The company says Ironwood is nearly 30 times more power-efficient than its first Cloud TPU from 2018. It also powers Google’s new model, Gemini 3, which received strong endorsements from the developer community.
Salesforce CEO Marc Benioff praised Gemini 3 after testing it, saying “the leap is insane”, highlighting how TPU-tuned models can outperform expectations.
Until now, Google has mainly used TPUs internally or offered them through Google Cloud. Allowing another giant like Meta to deploy TPUs at scale in its own data centers marks a major shift in Google’s strategy and signals that Google wants to compete directly for the hyperscale chip business.
Such a partnership would strengthen Google’s position in the AI hardware market and give Meta a proven alternative to GPUs during a critical expansion phase.
What’s Actually in Meta’s Favour
Meta’s position in the chip race is stronger than it appears. The company benefits from several structural advantages that could let it challenge NVIDIA’s AI infrastructure dominance over time.
- A multi-chip strategy reduces long-term risk
Meta is now building an architecture that uses:
- NVIDIA GPUs for large-scale training
- Google TPUs for additional training and inference capacity
- Meta AI chips for internal inference
This layered approach keeps Meta from being locked to a single supplier, supply shock, or pricing cycle; the toy routing sketch below makes the division of labor concrete.
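To see how such a split might work in practice, here is a toy routing policy in Python. Everything in it is an assumption for illustration: the backend names, the 70-billion-parameter threshold, and the `Workload` fields are invented, and Meta has not published how it actually schedules jobs across hardware.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    kind: str        # "training" or "inference" (simplified taxonomy)
    params_b: float  # model size in billions of parameters (illustrative)

def route(job: Workload) -> str:
    """Toy policy mirroring the layered split described above."""
    if job.kind == "training" and job.params_b >= 70:
        return "nvidia_gpu"   # frontier-scale training stays on GPUs
    if job.kind == "training":
        return "google_tpu"   # mid-scale training absorbs rented TPU capacity
    return "mtia"             # ranking/recommendation inference on in-house silicon

# Hypothetical jobs, purely for demonstration.
jobs = [
    Workload("frontier-llm-pretrain", "training", 400),
    Workload("ads-ranker-retrain", "training", 8),
    Workload("feed-recommendation", "inference", 2),
]
for job in jobs:
    print(f"{job.name:24} -> {route(job)}")
```

The point of the sketch is not the thresholds but the shape of the decision: each class of workload has a default home, and no single vendor sits on the critical path for everything.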
- Custom silicon brings efficiency gains
Meta AI chips work best for specific tasks that don’t require the flexibility of GPUs. Purpose-built ASICs like MTIA often deliver:
- lower energy use
- more throughput per watt
- cheaper inference at scale
Deployed across Meta’s global data centers, even modest per-query gains like these can compound into billions of dollars in annual savings, as the back-of-the-envelope calculation below illustrates.
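As a rough illustration of why throughput per watt matters at this scale, this sketch compares energy cost per million inference queries for a general-purpose GPU and a purpose-built ASIC. All the numbers, including queries per second, power draw, and electricity price, are placeholder assumptions, not published specs for MTIA or any NVIDIA part.

```python
# Back-of-the-envelope: energy cost to serve one million queries.
# Every figure below is an illustrative placeholder.

def cost_per_million_queries(queries_per_sec: float,
                             power_watts: float,
                             usd_per_kwh: float = 0.08) -> float:
    """Energy cost (USD) to serve 1M queries on one accelerator."""
    seconds = 1_000_000 / queries_per_sec
    kwh = power_watts * seconds / 3_600_000  # watt-seconds -> kWh
    return kwh * usd_per_kwh

gpu  = cost_per_million_queries(queries_per_sec=5_000, power_watts=700)
asic = cost_per_million_queries(queries_per_sec=4_000, power_watts=150)

print(f"general-purpose GPU: ${gpu:.4f} per 1M queries")
print(f"purpose-built ASIC:  ${asic:.4f} per 1M queries")
# With these placeholder numbers, the ASIC's higher throughput-per-watt
# cuts energy cost to roughly a quarter of the GPU's; across billions of
# daily queries, small per-query savings compound quickly.
```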
- Meta learns from operating two advanced ecosystems
Running both TPUs and GPUs lets Meta compare performance head-to-head, optimize its internal models for each platform, and feed what it learns into future MTIA designs. Few companies get hands-on experience with two leading-edge chip platforms at this scale.
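A hypothetical sketch of what that side-by-side evaluation could look like: a small harness that times the same step on each backend. The backend names and the stand-in workloads here are placeholders; a real harness would dispatch to CUDA, XLA, or MTIA runtimes and measure on-device time rather than a CPU loop.

```python
import time
from typing import Callable, Dict

def time_per_step(run_step: Callable[[], None],
                  warmup: int = 3, iters: int = 20) -> float:
    """Average wall-clock seconds per step after a short warmup."""
    for _ in range(warmup):
        run_step()
    start = time.perf_counter()
    for _ in range(iters):
        run_step()
    return (time.perf_counter() - start) / iters

# Stand-in workloads; real ones would launch kernels on each device.
backends: Dict[str, Callable[[], None]] = {
    "gpu": lambda: sum(i * i for i in range(100_000)),
    "tpu": lambda: sum(i * i for i in range(100_000)),
}

results = {name: time_per_step(fn) for name, fn in backends.items()}
for name, secs in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name}: {secs * 1e3:.2f} ms/step")
```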
- Market momentum is shifting toward diversification
Google recently struck a deal with Anthropic to supply up to one million TPUs, which analysts called “powerful validation.” If Meta signs a similar deal, it will accelerate a multi-vendor approach across the industry and weaken NVIDIA’s tight grip on the accelerator market.
NVIDIA has acknowledged the rising competition, stating it remains “a generation ahead of the industry” in both flexibility and performance. That claim may hold today, but the buying patterns of major customers show a clear preference for multiple chip paths.
Where Meta Faces Challenges Against NVIDIA
Despite the progress, NVIDIA is not easily displaced. The company still holds more than 90% of the AI chip market. Its CUDA software ecosystem, built over nearly two decades and used by over four million developers, is difficult to replace.
Meta will also need years to mature its own silicon for high-end model training. While MTIA is effective for inference and ranking workloads, it is not currently designed to replace NVIDIA at the cutting edge of frontier model development.
Future Scenarios for the AI Chip Race
Based on current developments, three outcomes appear most probable:
- Shared leadership: NVIDIA dominates training, while TPUs and Meta AI chips take more of the inference market.
- Hybrid infrastructure becomes standard: Meta and other giants split their workloads across GPUs, TPUs, and custom ASICs.
- Custom chips become the long-term core: Meta’s future generations of MTIA may eventually power most of its internal workloads.
The second scenario is the most likely in the next five years.
Conclusion: Will Meta AI chips win?
Meta AI chips won’t beat NVIDIA alone, but they don’t need to.
The real shift is that Meta is building a flexible infrastructure powered by multiple silicon partners.
With MTIA, Google TPUs, and NVIDIA GPUs all playing distinct roles, Meta is reducing dependence, improving efficiency, and preparing for a future where no single company dominates AI hardware outright.
Meta AI chips are not winning the race today, but they are ensuring that NVIDIA is no longer running it alone.