
Meta, the parent company of Facebook, Instagram, and WhatsApp, has begun testing its first in-house chip for training artificial intelligence (AI) systems, a move aimed at reducing its dependence on external chipmakers such as Nvidia, according to sources familiar with the matter.
Meta has started a small-scale deployment of the chip and plans to ramp up production if the tests are successful. The initiative is part of a long-term strategy to bring the company's AI infrastructure costs under control, as it forecasts total 2025 expenses of up to $119 billion, including $65 billion for AI-related infrastructure.
Meta Enters the AI Hardware Industry
One source said Meta's chip is a dedicated AI accelerator, designed to handle AI-specific tasks more efficiently than the general-purpose GPUs typically used for such workloads. By designing its own silicon, Meta can tune performance and power consumption for its AI workloads. The chip is being manufactured by Taiwan's TSMC, a major player in global semiconductor production.
The test follows Meta's first "tape-out" of the chip, a crucial milestone in semiconductor design in which a completed design is sent to the fab for manufacturing. Tape-outs typically cost millions of dollars and take months to complete, with no guarantee of success. If the test fails, Meta will need to diagnose the problem and repeat the tape-out process.
The chip is the latest in the Meta Training and Inference Accelerator (MTIA) series, a program that has faced setbacks in the past. Last year, however, Meta successfully deployed an MTIA inference chip to power content recommendations on Facebook and Instagram. The company now aims to use its in-house chips to train AI models as well, with an eye on generative AI applications such as its chatbot, Meta AI.
Meta previously scrapped an earlier inference chip after a failed test deployment and went on to spend billions of dollars on Nvidia GPUs instead. While the company remains a major Nvidia customer, its latest move signals a push for greater self-reliance in AI computing.
Source: https://www.reuters.com/technology/artificial-intelligence/meta-begins-testing-its-first-in-house-ai-training-chip-2025-03-11/