
    I tried Haiper 1.5 — the latest Sora-challenging AI video model

    By Ryan Morrison

    2 days ago

    Haiper, the artificial intelligence video lab, has released version 1.5 of its generative model, offering eight-second initial clips and improved visual quality.

    This is the latest update from a growing number of AI video platforms, all chasing the realism, natural movement and clip duration of OpenAI's yet-to-be-released Sora model.

    I put Haiper 1.5 to the test with a series of prompts, and it feels more like an upgrade to the first-generation model than the kind of significant step change we saw between Runway Gen-2 and Gen-3, or with the release of Luma Labs' Dream Machine.

    That isn't to say Haiper isn't an incredibly impressive model. It is, and it offers some of the best value of any AI video platform. It's just that it has yet to reach the motion quality of Runway Gen-3 or solve the morphing and distortion problems found in Haiper 1.0.

    What makes Haiper 1.5 different?

    Haiper is the brainchild of former Google DeepMind researchers Yishu Miao and Ziyu Wang. Based in London, the company is focused on building foundation AI models and working towards Artificial General Intelligence.

    The video model has been designed to be particularly good at understanding motion, so the tool hasn't been built with manual motion controls like Runway or Pika Labs; the AI predicts what movement is needed. I have found it works better if you leave specific motion instructions out of the prompt.

    The startup first emerged from stealth with a ready-to-go model just four months ago and already has 1.5 million users. The previous maximum video length was four seconds, and for most users clips topped out at two seconds, essentially a GIF. The new model can start with eight-second clips.

    It is one of the easiest AI video models to use, with a strong community built around creation. It offers a range of examples and prompt ideas and can generate video from text or animate still images.

    Creating prompts to test Haiper 1.5

    With Haiper 1.5, clips can be up to eight seconds long, although I noticed it occasionally slows down the footage rather than creating more movement.

    You can also now produce up to eight-second clips in high definition, where previously high definition was reserved for very short two-second shots.

    As with Pika Labs, you can upscale or extend any of the videos generated using Haiper. Each extension adds four seconds to the original.

    1. The koi pond


    (Image credit: Haiper/Future AI)

    The first test was to see how well it handles the motion of multiple creatures, and I'd say it did a surprisingly good job. There wasn't too much warping or merging of the fish, although one looks like it is swimming above the pond.

    The prompt: "A serene koi pond in a Japanese garden, with colorful fish swimming beneath floating lotus flowers."

    2. A city street at night


    (Image credit: Haiper/Future AI)

    Next was a test of a complex visual environment, in this case a busy city with bright lights and lots of people, as well as of the degree of animation. The GIF reflects just how slowly the people moved in the final video; you'd have to play it at 2x speed for natural-looking motion.

    This was the simple prompt: "A bustling city street at night, neon signs flickering, and people hurrying past in the rain."

    3. Making sushi


    (Image credit: Haiper/Future AI)

    Hands are a nightmare for AI models and, unfortunately, Haiper is no different. While it initially looks like it has cracked the problem, the five seconds after what is shown in the GIF turn into a weird, nightmarish mush. The full video is on the Haiper website.

    "A close-up of a chef's hands preparing sushi, carefully slicing fish and rolling rice."

    4. Blooming flower


    (Image credit: Haiper/Future AI)

    This was the only outright failure among the test prompts. It may have needed either more specific instructions to capture the movement or even simpler ones; every AI video model works slightly differently, so it's a tough call.

    The prompt I used was: "A time-lapse of a flower blooming, petals unfurling in vibrant colors." I tried the same prompt with Luma Labs, and while the result was more realistic, it also failed to show the time-lapse.

    5. Astronaut in space


    (Image credit: Haiper/Future AI)

    I love using space prompts because they often confuse models when it comes to motion, or the model generates multiple Earths. Haiper did a good job here and even showed the astronaut slowly moving. It's worth viewing the full video.

    I used this prompt: "An astronaut floating in space, with Earth visible in the background and stars twinkling."

    6. Steampunk city (image)


    (Image credit: Haiper/Future AI)

    The next test was of Haiper's image-to-video model rather than straight text-to-video. I started by generating an image of a steampunk city and feeding it to Haiper along with a motion prompt. It did a good job of animating the unusual scene.

    Prompt for Ideogram, the AI image generator: "Steampunk cityscape with airships and clockwork mechanisms". The motion prompt for Haiper alongside the image: "Gears turning, airships slowly moving across the sky."

    7. The northern lights (image)


    (Image credit: Haiper/Future AI)

    Finally, the Northern Lights. This is a useful test for any AI video model, and usually one where you start from text, but I wanted to see how it would animate an image. It did a very good job, and the full eight-second video is worth viewing.

    Prompt for Ideogram, the AI image generator: "Northern lights dancing over a snowy mountain landscape." The motion prompt for Haiper alongside the image: "Aurora borealis shifting and swirling in the night sky."

    Final thoughts

    Haiper 1.5 is a clear improvement on Haiper 1.0, as well as on models like Runway Gen-2 and Pika Labs 1.0, but it is very much an interim upgrade. If the team has achieved this with a 1.5 release, I can't wait to see what Haiper 2.0 is like.

    Clips were sometimes slowed down or suffered from morphing, but overall it was a big improvement in photorealism, movement and consistency. That is due in part to the doubled length of the clips.
