A user on X claimed that large language models are training on one another's data, and Elon Musk admitted this is a sad reality.
On July 1, the user Beff-e/acc commented that models training on each other's data would create a "centipede" effect, a phrase referencing a horror film in which a giant centipede is created by connecting many people together.
In response, Elon Musk said it would take a lot of effort to change this reality, meaning large language models (LLMs) would have to be trained without relying on data from the Internet.
"Grok 2 launched in August will make great improvements in this aspect," the billionaire revealed.
Grok is a large language model developed by xAI, the company Musk founded, which draws on the huge data source of the social network X; its current version is Grok 1.5. A few minutes later, Musk mentioned the next version: "Grok 3 will be released at the end of the year, and after training on 100,000 H100s it will definitely be something special."
This is not the first time the South African-born billionaire has mentioned 100,000 H100 GPUs. In May, The Information cited Elon Musk's meeting with investors, in which he said the xAI startup needed that many specialized graphics cards, connected into a supercomputer, to train the next version of the Grok chatbot.
The repetition of the figure suggests the plan may be close to becoming reality. According to Insider, Musk's intentions also illustrate how costly an LLM project can be. With H100 GPUs currently averaging about $30,000-40,000 each on the market, or less when purchased in bulk, the chips alone could cost $3-4 billion, not counting other expenses.
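For reference, the arithmetic behind that estimate is straightforward; this short Python sketch reproduces it using the article's ballpark unit prices (the figures are the article's estimates, not confirmed pricing):

    # Rough cost of 100,000 H100 GPUs at the article's quoted prices.
    GPU_COUNT = 100_000
    UNIT_PRICE_LOW = 30_000   # USD per GPU, low end of the market price
    UNIT_PRICE_HIGH = 40_000  # USD per GPU, high end

    low = GPU_COUNT * UNIT_PRICE_LOW    # 3,000,000,000 USD
    high = GPU_COUNT * UNIT_PRICE_HIGH  # 4,000,000,000 USD
    print(f"Chips alone: ${low / 1e9:.0f}-{high / 1e9:.0f} billion")
    # Excludes networking, power, cooling, and other infrastructure costs.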
However, this is not the largest figure. In January, Meta co-founder Mark Zuckerberg said the company would buy about 350,000 Nvidia H100 GPUs by the end of 2024, bringing its total to roughly 600,000 chips, including equivalent products from companies other than Nvidia.
As the AI race grows increasingly intense, whichever company owns more specialized GPUs gains the upper hand. From startups to the world's largest technology corporations, all are actively stockpiling AI chips. Nvidia GPUs are currently the most ordered, while names like AMD are also starting to launch similar products.
According to internal documents obtained by Insider, Microsoft plans to triple its stock of GPUs. By the end of the year, the company aims to own 1.8 million AI chips, mostly manufactured by Nvidia, though it may also buy from other partners. Meanwhile, Meta claims to be "ready to build an AI training system at a scale that could be larger than any other single company". Last year, Musk ordered 10,000 H100 chips for xAI. Chinese companies are also looking to buy high-end chips from Nvidia while developing their own specialized chips in parallel, so as not to be left behind in the AI race.