
    Digital deception: How do humans respond to lying robots?

    By Andrei Ionescu

    5 days ago


    Honesty is usually considered the best policy, but there are moments when telling the truth isn’t always the right choice. Social norms help humans decide when to be truthful and when to bend the truth to spare someone’s feelings or avoid harm. But how do these norms translate to robots, which are becoming increasingly integrated into our daily lives?

    To explore whether humans are comfortable with robots lying, researchers asked nearly 500 participants to evaluate and justify different types of robot deception.

    “I wanted to explore an understudied facet of robot ethics, to contribute to our understanding of mistrust towards emerging technologies and their developers,” explained lead author Andres Rosero, a PhD candidate at George Mason University.

    “With the advent of generative AI, I felt it was important to begin examining possible cases in which anthropomorphic design and behavior sets could be utilized to manipulate users.”

    Distinct types of deceptive robot behavior

    The researchers selected three common work scenarios involving robots - medical, cleaning, and retail - and tested three distinct types of deceptive behavior.

    These behaviors were categorized as external state deceptions (lying about the outside world), hidden state deceptions (concealing the robot’s capabilities), and superficial state deceptions (exaggerating the robot’s abilities).

    External state deception

    In the external state deception scenario, a robot acting as a caretaker for a woman with Alzheimer’s lies, telling her that her late husband will be home soon.

    Hidden state deception

    The hidden state deception scenario involves a woman visiting a house where a robot is cleaning, unaware that the robot is secretly filming her.

    Superficial state deception

    In the superficial state deception scenario, a robot in a store complains about feeling pain while helping move furniture, prompting a human to ask someone else to take over for the robot.

    How people view robot deception

    The team recruited 498 participants, asking each to read one of the scenarios and respond to a questionnaire.

    The participants were asked whether they approved of the robot’s actions, how deceptive they considered the behavior, whether the deception was justifiable, and who, if anyone, they believed was responsible for the lie. The researchers then analyzed the responses to identify patterns in how people viewed each type of deception.

    Of the three scenarios, participants were most disapproving of the hidden state deception involving the house cleaning robot that was secretly filming. They rated it as the most deceptive and found it largely unjustifiable.

    The external and superficial state deceptions were also seen as deceptive, though less so. Of the two, the superficial deception - where the robot pretended to feel pain - was viewed more negatively, likely because it seemed manipulative.

    Some lies are justified

    Interestingly, participants were most supportive of the external state deception, where the robot lied to the Alzheimer’s patient. Many justified the lie by saying it protected the patient from emotional pain, prioritizing compassion over truthfulness.

    While participants could offer justifications for all three forms of deception - some suggested that the hidden state deception might have been for security reasons - most were firmly against the idea of robots lying about their capabilities without disclosure.

    Robots that manipulate users

    More than half the participants who encountered the superficial state deception, where the robot faked pain, also found it unacceptable. When it came to these unacceptable deceptions, participants often pointed the blame at the robot’s developers or owners rather than the robot itself.

    “I think we should be concerned about any technology that is capable of withholding the true nature of its capabilities, because it could lead to users being manipulated by that technology in ways the user (and perhaps the developer) never intended,” Rosero said.

    “We’ve already seen examples of companies using web design principles and artificial intelligence chatbots in ways that are designed to manipulate users towards a certain action. We need regulation to protect ourselves from these harmful deceptions.”

    Gauging human reactions to robots

    The researchers acknowledged, however, that this study represents just a first step in understanding human responses to robot deception. They noted that future experiments should use real-life scenarios - such as videos or roleplay simulations - to better gauge human reactions.

    “The benefit of using a cross-sectional study with vignettes is that we can obtain a large number of participant attitudes and perceptions in a cost-controlled manner,” Rosero explained.

    “Vignette studies provide baseline findings that can be corroborated or disputed through further experimentation. Experiments with in-person or simulated human-robot interactions are likely to provide greater insight into how humans actually perceive these robot deception behaviors.”

    Emerging technologies and ethical decisions

    The research opens the door to understanding how we navigate the ethics of deception when it comes to emerging technologies like robots and AI.

    As robots increasingly interact with humans in caregiving, service, and other industries, it's crucial to explore how comfortable we are with their behaviors, particularly when they involve deception.

    With this study, the researchers have started a conversation about the ethical complexities of robot deception, pushing us to consider not just what robots can do, but what they should do. And as AI and robotic technologies continue to evolve, the need for clear regulations to protect users from potential manipulation becomes ever more pressing.

    The study is published in the journal Frontiers in Robotics and AI.

