Elon Musk Plans Largest-Ever Supercomputer Four Times Larger Than Metas For xAI Startup: Reports

New Delhi: Elon Musk, CEO of Tesla and SpaceX, is planning to give tough competition to Alphabet’s Google and Microsoft-backed OpenAI by building a supercomputer, termed the “gigafactory of compute”. 

The billionaire tech mogul has planned this move to support the development of his artificial intelligence startup xAI, as per reports. 

Elon Musk also told investors that he wants to get the proposed supercomputer running by the fall of 2025 and that he will hold himself personally responsible for delivering it on time. Musk added that xAI may partner with Oracle to develop the massive computer. 

During a presentation to investors this month, Elon Musk said that he will connect groups of GPU chips – Nvidia’s flagship H100. This would make the planned cluster “at least four times the size of the biggest GPU clusters that exist today”, including those utilised by Meta for AI model training. 

To recall, Elon Musk said training the Grok 2 model took about 20,000 Nvidia H100 GPUs, adding that the Grok 3 model and beyond will require 100,000 Nvidia H100 chips.  

Meanwhile, Elon Musk’s artificial intelligence startup xAI has developed the Grok AI chatbot, and the company will reportedly need 100,000 specialised semiconductors to train and run the next version of its conversational AI, Grok. 

Notably, Elon Musk is one of the world’s few investors with deep enough pockets to compete with OpenAI, Google or Meta on AI. The field has become intensely competitive, with major players like Microsoft, Google, and Meta, and startups such as Anthropic and Stability AI vying for dominance after the debut of OpenAI’s generative AI tool, ChatGPT, in 2022.

To recall, Elon Musk co-founded OpenAI in 2015 but left in 2018, later saying he was uncomfortable with the profit-driven direction the company was taking under the stewardship of CEO Sam Altman.