Meta launches in-house AI chips weeks after massive deals with Nvidia and AMD


Meta’s 5-gigawatt Hyperion data center under construction in Richland Parish, Louisiana, January 9, 2026.

Courtesy of Meta

Meta on Wednesday unveiled four custom in-house chips designed for AI-related tasks as part of the company’s massive data center expansion plans.

The specialized silicon is part of the Meta Training and Inference Accelerator, or MTIA, family of chips, which it first publicly revealed in 2023 before introducing a second-generation version in 2024.

Meta’s vice president of engineering, Yee Jiun Song, told CNBC that by designing custom chips, which are then manufactured by Taiwan Semiconductor Manufacturing Co., the social media giant can get better price-for-performance across its entire fleet of data centers instead of relying solely on vendors.

“This also gives us greater diversity in terms of silicon supply and, to some extent, protects us from price changes,” Song said. “It gives us a little more leverage.”

The first new chip, MTIA 300, was deployed a few weeks ago and is intended to help train smaller AI models that underpin Meta’s core ranking and recommendation tasks, Song said. These types of tasks include showing people relevant content and online ads within the company’s family of apps, such as Facebook and Instagram.

The upcoming chips, MTIA 400, MTIA 450 and MTIA 500, are aimed at inference tasks related to more advanced generative AI, such as creating images and videos based on people’s written prompts. The chips will not be used to train gigantic language models, Song said.

One Meta data center rack will include 72 Meta internal MTIA 400 chips, optimized to accelerate AI inference. MTIA 400 has completed the testing phase and is expected to be deployed in Meta data centers soon.

Courtesy: Meta

Meta said in a blog post that it had finished testing the MTIA 400 and is “on track to deploy it in our data centers,” while the other two chips will be operational in 2027.

“It’s unusual for any silicon company or team to release a new chip every six months. It’s a very fast cadence,” Song said. “And the main reason for this is that we are building capacity so quickly right now and spending so much on capex that at any given moment we want to have the next-generation chip to deploy.”

Song said the company expects the chips to have a “standard lifespan of more than five years.”

Meta’s AI spending spree includes a giant data center in Louisiana and two others in Ohio and Indiana. Meta is also reportedly looking to lease space at the Stargate site in Texas after OpenAI and Oracle scrapped plans to expand the AI data center site, according to Bloomberg.

Tech giants like Google have been developing their own in-house silicon to fill data centers in recent years, as they look for an alternative to the expensive, supply-constrained GPUs made by Nvidia and AMD.

These hyperscalers have been creating so-called application-specific integrated circuits, or ASICs, which are smaller and cheaper than general-purpose GPUs but can handle only a narrower set of tasks.

Google was first in the ASIC game and launched its first Tensor Processing Unit in 2015. Amazon was next, with its first custom chip announced in 2018. While these tech giants incorporate their AI chips as part of their respective cloud computing platforms so that customers can access them, Meta’s MTIA chips are used exclusively for internal purposes.

Meta’s next-generation MTIA 400 custom accelerator has completed the testing phase and is expected to be deployed soon in Meta’s data centers.

Courtesy: Meta

The upcoming MTIA chips will contain more high-bandwidth memory, or HBM, to help drive GenAI-related inference tasks.

The tech industry’s AI mega push has led to a shortage of memory chips in the broader market, meaning Meta’s ambitious silicon roadmap could face future supply chain constraints.

“We are absolutely worried about HBM’s supply,” Song said. “But we believe we have secured supply for what we plan to build.”

Memory is typically a cyclical business, with chipmakers sourcing product from suppliers such as Samsung, SK Hynix and Micron on short-term contracts.

Song declined to comment on whether the company has signed longer-term contracts with memory suppliers to protect against shortages, but said Meta has a “diversified” approach to its supply chain and silicon strategy.

In recent weeks, Meta has signed deals to fill its data centers with millions of Nvidia GPUs and up to 6 gigawatts of AMD GPUs over several years.

“Workloads are changing so rapidly that we want to make sure we have options,” Song said, referring to chip deals.

Meta’s new internal chips are manufactured by Taiwan Semiconductor Manufacturing Co., which operates primarily out of Taiwan and has a large new chipmaking campus in Arizona.

Meta declined to comment on whether the chips will be manufactured in Arizona.

The majority of Meta’s silicon team, made up of hundreds of engineers, is based in the United States, Song said. Of Meta’s 30 total operational and planned data centers, 26 are in the US.

WATCH: Breaking down AI chips, from Nvidia GPUs to Google and Amazon ASICs.