    DeepMind’s new AI teaches robots to tie shoelaces and handle complex tasks

    By Jijo Malayil



    Google DeepMind has launched two new AI systems designed to help robots learn and perform complex tasks that need precise, skillful movements.

    First on the list is ALOHA Unleashed, an upgrade of the ALOHA 2 system. Equipped with two arms, it can be remotely controlled to gather high-quality training data, allowing robots to learn new tasks with fewer demonstrations.

    The second system, DemoStart, uses a reinforcement learning algorithm to learn different behaviors from only a few simulated demonstrations.

    Using these systems, the team tested the robot in simulations and real-world tasks. It successfully tied shoelaces, inserted gears, replaced robot parts, hung shirts, and cleaned a kitchen.

    However, the robot’s shoelace-tying abilities are still developing and require further refinement.

    “By helping robots learn from human demonstrations and translate images to action, these systems are paving the way for robots that can perform a wide variety of helpful tasks,” said Google in a statement.

    [Image: Example of a bi-arm robot laying out a polo shirt on a table, putting it on a clothes hanger, and then hanging it on a rack.]

    Enhanced robotic learning

    Until now, most sophisticated AI robots have been able to use only one arm to pick up and position objects. DeepMind claims that ALOHA Unleashed achieves a high degree of dexterity in bi-arm manipulation.

    The ALOHA Unleashed method builds on the ALOHA 2 platform, which itself was developed from the original ALOHA, a low-cost open-source hardware system for bimanual teleoperation from Stanford University.

    According to the team, ALOHA 2 is more dexterous than earlier systems. It features two hands that can be teleoperated, which makes training and data collection easier and lets robots learn new tasks from fewer demonstrations.

    The latest system also includes improved ergonomic design and an enhanced learning process. Demonstration data was gathered by remotely controlling the robot to perform complex tasks such as tying shoelaces and hanging t-shirts.
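    To make the data-collection step concrete, here is a minimal sketch of how teleoperated demonstrations can be logged as observation-action pairs. The `robot` and `teleop` objects are hypothetical placeholders, not part of DeepMind's ALOHA stack.

```python
# Minimal sketch of logging teleoperated demonstrations as (observation, action)
# pairs. The `robot` and `teleop` objects are hypothetical placeholders.
import pickle

def record_episode(robot, teleop, max_steps=2000):
    """Step the robot under human teleoperation and log each (observation, action) pair."""
    episode = []
    obs = robot.reset()
    for _ in range(max_steps):
        action = teleop.read_command()     # joint targets from the human operator
        next_obs = robot.step(action)      # apply the command on the real robot
        episode.append({"observation": obs, "action": action})
        obs = next_obs
        if teleop.episode_done():
            break
    return episode

def save_dataset(episodes, path="demos.pkl"):
    """Write the collected demonstrations to disk for later policy training."""
    with open(path, "wb") as f:
        pickle.dump(episodes, f)
```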

    A diffusion method, similar to the Imagen model used for generating images, was then applied to predict robot actions from random noise. This method enables the robot to learn from the data and perform the same tasks independently.
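    The toy sketch below illustrates the general idea behind diffusion-style action prediction: a random action sequence is iteratively denoised into a plausible one, conditioned on an observation. It is an illustration of the technique only, not DeepMind's model; the noise-prediction network is a stub and the dimensions are arbitrary.

```python
# Toy sketch of diffusion-style action sampling: start from Gaussian noise and
# iteratively denoise it into an action sequence, conditioned on an observation.
import numpy as np

def predict_noise(noisy_actions, obs, t):
    """Stub for a trained network that predicts the noise present at step t."""
    return np.zeros_like(noisy_actions)  # a real model would be learned from demonstrations

def sample_actions(obs, horizon=16, action_dim=14, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    # Standard linear beta schedule, as in DDPM-style samplers.
    betas = np.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    actions = rng.standard_normal((horizon, action_dim))  # start from pure noise
    for t in reversed(range(steps)):
        eps = predict_noise(actions, obs, t)
        # DDPM mean update: strip out the predicted noise component.
        actions = (actions - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            actions += np.sqrt(betas[t]) * rng.standard_normal(actions.shape)  # sampling noise
    return actions

# Example: sample a 16-step action sequence for a dummy observation.
plan = sample_actions(obs=None)
```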

    Dexterous AI breakthrough

    Controlling a dexterous robotic hand is a complex challenge, especially as more fingers, joints, and sensors are added.

    DemoStart, a new approach described in recent research, addresses this by using reinforcement learning to help robots develop dexterous abilities in simulations. This method is particularly beneficial for robots with complex features like multi-fingered hands.

    DemoStart begins by learning from simple tasks and progressively moves to more difficult ones, mastering each task step by step. Compared with traditional methods that learn from real-world examples, it requires 100 times fewer simulated demonstrations to reach the same level of proficiency.
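    As a rough illustration of this curriculum idea, the simplified sketch below (not the published DemoStart algorithm) starts episodes from states sampled along a recorded demonstration and moves the start point to earlier, harder states as the success rate improves. The `env`, `policy`, and `demo_states` objects are hypothetical placeholders.

```python
# Simplified sketch of a demonstration-driven curriculum: episodes begin from
# states taken along a demo, and the start point moves earlier (further from the
# goal) as the policy becomes more reliable. All objects here are hypothetical.
import random

def run_curriculum(env, policy, demo_states, iterations=1000, target_success=0.8):
    start_idx = len(demo_states) - 1            # the last demo state is the easiest start
    successes, attempts = 0, 0
    for _ in range(iterations):
        # Sample a start state from the currently unlocked portion of the demo.
        state = random.choice(demo_states[start_idx:])
        obs = env.reset_to(state)
        done, success = False, False
        while not done:
            action = policy.act(obs)
            obs, reward, done, info = env.step(action)
            success = info.get("success", False)
        policy.update(success)                  # RL update from the rollout (details omitted)

        successes += int(success)
        attempts += 1
        if attempts >= 50:                      # evaluate the current stage every 50 episodes
            if successes / attempts >= target_success and start_idx > 0:
                start_idx -= 1                  # unlock an earlier, harder start state
            successes, attempts = 0, 0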

    Developed using MuJoCo, an open-source physics simulator, DemoStart’s learned behaviors can be transferred to physical robots with minimal adjustments. Techniques like domain randomization can close the sim-to-real gap.
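    For readers unfamiliar with the technique, the sketch below shows one common form of domain randomization using MuJoCo's open-source Python bindings: physical parameters such as friction and mass are perturbed before each simulated episode so the learned behavior does not overfit to a single simulator configuration. The tiny scene XML and the parameter ranges are illustrative stand-ins, not DeepMind's setup.

```python
import mujoco
import numpy as np

# Minimal stand-in scene: a small box resting on a plane.
XML = """
<mujoco>
  <worldbody>
    <geom name="floor" type="plane" size="1 1 0.1"/>
    <body name="cube" pos="0 0 0.1">
      <freejoint/>
      <geom name="cube_geom" type="box" size="0.02 0.02 0.02" mass="0.05"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)
rng = np.random.default_rng(0)

def randomize(model, rng):
    """Perturb physics parameters so a learned behavior doesn't overfit to one configuration."""
    model.geom_friction[:, 0] = rng.uniform(0.5, 1.5, size=model.ngeom)   # sliding friction
    model.body_mass[1:] *= rng.uniform(0.8, 1.2, size=model.nbody - 1)    # skip the static world body

for episode in range(3):
    randomize(model, rng)
    mujoco.mj_resetData(model, data)   # start the episode from the default state
    for _ in range(100):
        mujoco.mj_step(model, data)    # a policy would normally set controls here
```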

    [Image: The DEX-EE dexterous robotic hand, developed by Shadow Robot in collaboration with the Google DeepMind robotics team.]

    This method saves both time and costs by reducing the need for physical trials. However, designing simulations that successfully translate into real-world performance has been a persistent challenge.

    DemoStart’s reinforcement learning, combined with a few demonstrations, creates a curriculum that effectively bridges this gap. Tested on the three-fingered DEX-EE robotic hand, it achieved a success rate of over 98 percent in simulation on tasks like cube reorientation and nut tightening, and up to 97 percent in real-world tests on similar tasks.

    According to DeepMind, robotics is a unique area of AI research because it shows how well these approaches perform in the real world. For instance, even if a large language model were embedded in a robot, it couldn’t carry out tasks like tying your shoes or tightening a bolt on its own.

    “One day, AI robots will help people with all kinds of tasks at home, in the workplace and more. Dexterity research, including the efficient and general learning approaches we’ve described today, will help make that future possible,” said the team.
