    Tom's Hardware

    AMD's biggest AI GPU booster adds support for Nvidia, too — Lamini AI preps memory tuning for Nvidia hardware

    By Anton Shilov

    7 days ago


    Lamini AI was among the first companies to obtain AMD's Instinct MI300X AI chips and has gained fame as an all-AMD AI shop, a rarity in an Nvidia-dominated world. Its CEO has even roasted Nvidia over its GPU shortages, pointing out that Lamini already had AMD GPUs in stock. Still, even an all-AMD shop cannot ignore the market leader that controls the lion's share of the AI hardware market, so this month Lamini AI announced that it would optimize its software stack for Nvidia hardware as well.

    "Really looking forward to collaborating and partnering with Nvidia to further optimize our stack on Nvidia hardware," wrote Sharon Zhou, the founder and chief executive of Lamini AI, in an X post .

    When asked to clarify the company's stance on AMD's hardware, Zhou indicated that while Lamini AI has collaborated with AMD, the partnership is not exclusive, and the company remains open to working with other hardware providers. In fact, Zhou suggested that such collaborations could even benefit AMD, since they might eventually lead end users who already own Nvidia GPUs to migrate to AMD Instinct.

    "We are partnered with AMD — they are incredible — I fully believe in Lisa," the head of Lamini AI responded . "The partnership is deep and not exclusive. It’s hard to see from the outside, but strategically this helps us deploy to more customers who already have Nvidia GPUs and from there help them get onto AMD GPUs without code changes as well. Hope we will be partnering with even more compute providers in the future."

    Lamini AI is an enterprise platform designed to help organizations develop, fine-tune, and deploy large language models (LLMs) efficiently. The platform offers a comprehensive solution for managing the entire lifecycle of LLMs, from model selection and tuning to deployment and inference.

    For example, the company says its memory tuning techniques deliver over 95% factual accuracy, significantly reducing hallucinations and improving the reliability of its models. Lamini AI also claims its platform can handle up to 52 times more queries per second than competing solutions, reducing latency and improving user experience.
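    To make the memory tuning idea concrete, the sketch below mimics the workflow in plain Python. It is a toy illustration only: the MemoryTunedModel class, its methods, and the example facts are hypothetical placeholders, not Lamini AI's actual client API.

        # Toy sketch of a "memory tuning" workflow; all names here are
        # illustrative placeholders, not Lamini AI's real Python client.
        facts = [
            {"input": "Which GPUs does the cluster use?", "output": "AMD Instinct MI300X"},
            {"input": "Who is Lamini AI's CTO?", "output": "Greg Diamos"},
        ]

        class MemoryTunedModel:
            """Stand-in for an LLM fine-tuning client."""

            def __init__(self, base_model: str):
                self.base_model = base_model
                self.memory = {}

            def tune(self, examples):
                # A real platform would train adapter weights; this sketch just
                # memorizes exact question/answer pairs to mimic memory tuning.
                for ex in examples:
                    self.memory[ex["input"]] = ex["output"]

            def generate(self, prompt: str) -> str:
                # Return the memorized fact if present, else a generic fallback.
                return self.memory.get(prompt, f"[{self.base_model}] best-effort answer")

        model = MemoryTunedModel(base_model="an-open-weights-llm")
        model.tune(facts)
        print(model.generate("Which GPUs does the cluster use?"))  # AMD Instinct MI300X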

    Given Lamini AI's specialization, it is hard for the company to ignore Nvidia's hardware, as the majority of its potential clients likely believe that Nvidia's H100 and H200 GPUs are the best fit for their workloads. To that end, the platform supports deployment across on-premise servers, private data centers, and public clouds, and is compatible with both AMD and Nvidia GPUs (although the company has joked about Nvidia's GPUs in the past). It is noteworthy that the company's chief technology officer, Greg Diamos, is a former Nvidia CUDA software architect, so the company already has hands-on experience with CUDA.
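    The "without code changes" point rests on the fact that modern frameworks largely abstract the GPU vendor away. As a rough illustration (not Lamini AI's code), the ROCm build of PyTorch exposes AMD GPUs through the same "cuda" device API as the Nvidia build, so a snippet like the following should run unchanged on an Nvidia H100 or an AMD Instinct MI300X:

        import torch

        # The ROCm build of PyTorch reuses the "cuda" device namespace, so this
        # device selection works on both AMD and Nvidia accelerators.
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

        if torch.version.hip:
            backend = "ROCm (AMD)"
        elif torch.version.cuda:
            backend = "CUDA (Nvidia)"
        else:
            backend = "CPU only"
        print(f"Backend: {backend}, device: {device}")

        # A small matrix multiply lands on whichever accelerator is present.
        a = torch.randn(1024, 1024, device=device)
        b = torch.randn(1024, 1024, device=device)
        print((a @ b).sum().item())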
