Meta has introduced the latest iteration of its proprietary chips dedicated to AI tasks. The Meta Training and Inference Accelerator (MTIA) v2 chips, developed internally, provide twice the processing power and memory bandwidth of their predecessors, the v1 chips.
The chips will be deployed across Meta’s data centers to power AI applications, most notably the deep learning recommendation models that drive user engagement on its platforms.
Meta has highlighted that the new chips are adept at handling both low- and high-complexity ranking and recommendation models, which are central to advertising on platforms like Facebook and Instagram. The company asserts that by controlling both the hardware and the software stack, it can achieve greater efficiency than off-the-shelf commercial GPUs.
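To make the workload concrete, below is a minimal, illustrative sketch of the kind of deep learning recommendation scoring these chips accelerate: sparse embedding lookups followed by a small neural ranking head. Every dimension, feature, and weight here is invented for illustration; this is not Meta’s actual model, code, or API.

```python
# Illustrative DLRM-style ranking step (hypothetical, not Meta's code).
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 16        # hypothetical embedding width
NUM_USERS = 1000    # hypothetical vocabulary sizes
NUM_ADS = 500

# Embedding tables for sparse features (user id, ad id). In production these
# tables are enormous, which is why memory bandwidth matters for such chips.
user_emb = rng.normal(0, 0.1, (NUM_USERS, EMB_DIM))
ad_emb = rng.normal(0, 0.1, (NUM_ADS, EMB_DIM))

# A tiny MLP head over the combined dense features.
W1 = rng.normal(0, 0.1, (2 * EMB_DIM + 1, 32))
W2 = rng.normal(0, 0.1, (32, 1))

def score(user_ids, ad_ids):
    """Return a predicted engagement score for each (user, ad) pair."""
    u = user_emb[user_ids]                       # sparse lookup
    a = ad_emb[ad_ids]                           # sparse lookup
    dot = np.sum(u * a, axis=1, keepdims=True)   # pairwise feature interaction
    x = np.concatenate([u, a, dot], axis=1)      # combine dense features
    h = np.maximum(x @ W1, 0)                    # ReLU hidden layer
    return 1 / (1 + np.exp(-(h @ W2)))           # probability-like score

# Rank a handful of candidate ads for one user.
user = np.array([42, 42, 42])
candidates = np.array([7, 99, 250])
print(score(user, candidates).ravel())
```

At scale, inference like this runs over millions of candidates per second, which is why hardware tuned for embedding lookups and matrix math can pay off against general-purpose GPUs.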
“We are already seeing the positive results of this program as it’s allowing us to dedicate and invest in more compute power for our more intensive AI workloads,” Meta wrote in its announcement.
Meta launched its first in-house chip last May, tailored to the company’s particular computational demands. As Meta deepens its focus on AI development, its need for specialized hardware has grown. The company recently detailed the AI infrastructure it uses to train advanced models such as Llama 3; that infrastructure currently relies exclusively on Nvidia hardware.
According to research firm Omdia, Meta was one of Nvidia’s largest customers last year, purchasing a substantial volume of H100 GPUs to train its AI models. Meta has clarified that its custom silicon initiative is meant to complement, not replace, the Nvidia hardware already in its systems.
“Meeting our ambitions for our custom silicon means investing not only in compute silicon but also in memory bandwidth, networking and capacity, as well as other next-generation hardware systems,” Meta stated.
The MTIA chips are slated for further development, with Meta planning to extend the hardware to support generative AI workloads. The MTIA v2 is Meta’s latest foray into custom chip technology and mirrors a broader industry trend of major technology companies designing their own silicon.
“We currently have several programs underway aimed at expanding the scope of MTIA, including support for GenAI workloads. And we’re only at the beginning of this journey.”
-Meta
For instance, just last week Google Cloud launched its first Arm-based CPU at the Google Cloud Next 2024 event. Similarly, Microsoft has developed its in-house Maia AI accelerator and Cobalt CPU, and Amazon uses its AWS-designed Graviton CPUs and Trainium AI chips to power generative AI applications.