    Luma drops Dream Machine 1.5 — here’s what’s new

    By Ryan Morrison

    20 hours ago

    Luma Labs' Dream Machine has had an upgrade, although you might not notice a difference on the surface. The change is to the underlying model, taking it to version 1.5 and offering better realism, motion following, and prompt understanding.

    The startup shook up the AI video landscape when it launched out of stealth in June this year, offering better prompt adherence, more realistic motion, and improved text-to-video photorealism. It quickly became a favorite of AI video creators.

    Since then, and showing just how fast the AI landscape moves, we've seen upgrades from Runway, the launch of Kling, and improvements to Haiper's AI video model and platform. Pika has also updated its image-to-video model.

    With version 1.5, Luma AI is showing that it has no intention of resting on its early success, bringing its text-to-video output up to a similar level of realism as Runway Gen-3 Alpha and Kling AI.

    Text rendering in Dream Machine v1.5

    (Image credit: Luma Labs Dream Machine/Future AI)

    Luma has also improved prompt adherence and text-to-video generation, made human movement more realistic, and given the model better text rendering capabilities.

    This means it could generate logo screens, end boards, or even graphics you could drop into a PowerPoint presentation, all from a simple text prompt.

    Getting legible text out of Dream Machine works the same as it does with Midjourney or any other AI image generator: put the words in double quotes and be descriptive.

    The results can be hit and miss, especially if you try to push it. I asked Dream Machine to generate the phrase "Cats in Space" bouncing on the moon, with cats in space suits on either side.

    It did exactly what I asked, but I wasn't descriptive enough to have it generate the words in a line rather than stacked. It also didn't bounce; it made a weird zoom motion instead. It copied the same motion when I asked it to show my name appearing letter by letter out of the sand.

    Even though the motion and layout of the text weren't exactly what I wanted, the words were fully legible in every test I ran. If you want the text to reflect a specific style, I'd use an image as the prompt.

    Improved quality in Dream Machine v1.5

    (Image credit: Luma Labs Dream Machine/Future AI)

    Before I get to the quality, I should point out that Dream Machine v1.5 is significantly faster than the previous version: it can generate five seconds of video in about two minutes.

    The most noticeable change is the level of realism, in both visual and motion quality. I ran a few different tests, including an old woman underwater, a tiger in the snow and a drone flythrough of a castle. In each case, while there were issues, the results were better than v1.

    The final noticeable upgrade is character consistency across a generation, including through extensions, along with more consistent motion and better adherence to real-world physics.

    One thing to note with Dream Machine: it is very good at enhancing your prompt, but if you give it a long, descriptive prompt, make sure to untick the enhance prompt box, or it can get confused and overcomplicate things.

    Overall, it isn’t an upgrade on the scale of Runway Gen-2 to Runway Gen-3, as that was a massive leap across the board. Still, it is significant enough to be noticeable and help keep Dream Machine in the top ranks of generative AI video platforms.
