
    A brief look at Elon Musk's Cortex AI supercluster project reveals a heavy reliance on NVIDIA, the world's most profitable chipmaker, for GPUs, along with steep demands for cooling water and power

    By Kevin Okemwa

    6 hours ago


    What you need to know

    • Elon Musk shared the progress of Tesla's supercluster, Cortex AI, in Austin, Texas.
    • The project will ship with 50,000 NVIDIA H100s and an additional 20,000 of the company's custom Dojo AI hardware to foster autonomous driving, energy management, and more.
    • Cortex AI will require up to 130 megawatts (MW) of cooling and power to launch, with projections of 500 MW by 2026.

    Elon Musk recently shared the progress of Tesla's Cortex AI supercluster on X. The project is housed at Tesla's headquarters in Austin, Texas. It's packed with 70,000 AI servers and will require up to 130 megawatts (MW) of cooling and power to launch, with projections of 500 MW by 2026.
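    A rough back-of-envelope calculation puts those figures in perspective. The sketch below assumes the article's numbers (70,000 AI servers, 130 MW at launch, 500 MW projected by 2026); the per-server estimate lumps compute and cooling together, so it is an upper bound on compute draw alone.

    ```python
    # Back-of-envelope estimate from the article's figures:
    # 70,000 AI servers sharing 130 MW of combined cooling and power.
    SERVERS = 70_000
    INITIAL_MW = 130
    PROJECTED_2026_MW = 500

    # Average combined (compute + cooling) budget per server, in watts.
    watts_per_server = INITIAL_MW * 1_000_000 / SERVERS

    # How much the power envelope is projected to grow by 2026.
    growth_factor = PROJECTED_2026_MW / INITIAL_MW

    print(f"{watts_per_server:.0f} W per server")   # ~1857 W per server
    print(f"{growth_factor:.1f}x growth by 2026")   # ~3.8x growth
    ```

    At roughly 1.9 kW per server on average, the numbers are plausible for dense GPU nodes, and the projected 3.8x jump by 2026 underscores why power availability keeps coming up in the article's closing section.
    
    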

    Cortex AI will help train and improve Tesla's AI models, driving growth across the company. For context, Tesla leverages AI for autonomous driving, energy management, and more.

    Tesla's supercluster is arguably the largest training cluster of its kind, packed with 50,000 NVIDIA H100 enterprise GPUs and an additional 20,000 units of the company's custom Dojo AI hardware. However, Musk had previously indicated that Cortex AI would ship with 50,000 units of Tesla's Dojo AI hardware.

    In the interim, Musk is seemingly dedicated to improving Tesla's custom Dojo supercomputer and wants to enhance its capabilities with a targeted training capacity of 8,000 H100 equivalents by the end of the year.

    Per the video shared, it's evident that there's still work to be done before the supercluster becomes fully operational. According to Electrek, the cluster is running on a temporary cooling system, and Tesla still requires more network feeders. All factors considered, the cluster could be ready by October, which incidentally aligns with the much-anticipated launch of the Robotaxi.

    According to a post shared by Musk on X earlier this year, Tesla will spend up to $10 billion this year "in combined training and inference AI." Interestingly, leaked emails between Musk and NVIDIA reveal that the billionaire asked the chipmaker to prioritize shipments of processors to X and xAI ahead of Tesla (via CNBC).

    That request arguably veers off the goal of making Tesla a key player in the AI landscape: letting X skip the line delayed Tesla's shipment of over $500 million worth of processors by months.

    AI projects are becoming a tad expensive


    Cloud servers (Image credit: Microsoft)

    Tesla's Cortex AI project echoes the growing concern around generative AI and its exorbitant resource demands. Amid claims that AI is a fad that has already peaked, with projections that 30% of AI projects will be abandoned by 2025 after proof of concept, investors in the sector have voiced frustration over the high water demand required for cooling (roughly one bottle of water per query).

    This is coupled with high power demand. Projections indicate that despite being on the verge of the biggest technological breakthrough with AI, there won't be enough electricity to power AI advances by 2025. As it stands, Google and Microsoft's electricity consumption surpasses the power usage of over 100 countries.
