    Study reveals humans change their own behavior when training AI, creating biases

    By Kapil Kajal, 8 hours ago


    In recent years, people have become more reliant on AI to help them make decisions.

    These models not only help humans but also learn from their behavior. Therefore, it is important to understand how our interactions with AI models influence them.

    Current practice assumes that the human behavior used to train AI models is unbiased.

    However, research from Washington University in St. Louis challenges this assumption. It shows that people change their behavior when they know it is being used to train AI.

    Moreover, this behavior persists days after training has ended.

    These findings highlight a problem with AI development: assumptions of unbiased training data can lead to unintentionally biased models.

    Such a model may then reinforce these habits, causing both humans and AI to deviate from optimal behavior.

    Unexpected psychological phenomenon

    This new cross-disciplinary study by WashU researchers has uncovered an unexpected psychological phenomenon at the intersection of human behavior and artificial intelligence.

    When participants were told they were training AI to play a bargaining game, they actively adjusted their behavior to appear more fair and just, an impulse with potentially important implications for real-world AI developers.

    “The participants seemed to have a motivation to train AI for fairness, which is encouraging, but other people might have different agendas,” said Lauren Treiman, a PhD student in the Division of Computational & Data Sciences and lead author of the study.

    “Developers should know that people will intentionally change their behavior when they know it will be used to train AI.”

    The study, published in PNAS, was supported by a seed grant from the Transdisciplinary Institute in Applied Data Sciences (TRIADS), a signature initiative of the Arts & Sciences Strategic Plan.

    Ultimatum Game

    The study included five experiments, each with roughly 200-300 participants. Subjects were asked to play the “Ultimatum Game,” a challenge that requires them to negotiate small cash payouts (just $1 to $6) with other human players or a computer.
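    For readers who want the game's mechanics spelled out, the short Python sketch below shows its standard payoff rule; it is an illustration of the classic game only, not the code used in the study.

        # Ultimatum Game payoff rule (illustrative sketch, not the study's implementation).
        # One player proposes how to split a small pot; the other accepts or rejects.
        # A rejection leaves both players with nothing.
        def ultimatum_round(pot, offer, responder_accepts):
            """Return (proposer_payout, responder_payout) for one round."""
            if not 0 <= offer <= pot:
                raise ValueError("offer must be between 0 and the pot size")
            if responder_accepts:
                return pot - offer, offer
            return 0.0, 0.0

        # A $6 pot with a $2 offer that is accepted...
        print(ultimatum_round(6.0, 2.0, True))   # (4.0, 2.0)
        # ...and the same offer rejected, which costs both players everything.
        print(ultimatum_round(6.0, 2.0, False))  # (0.0, 0.0)

    The threat of rejection is what makes fairness matter: a responder who turns down a lowball offer gives up money to punish the proposer.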

    Sometimes, they were told their decisions would be used to teach an AI bot how to play the game.

    The players who thought they were training AI were consistently more likely to seek a fair share of the payout, even if such fairness cost them a few bucks.

    Interestingly, that behavior change persisted even after they were told their decisions were no longer being used to train AI, suggesting the experience of shaping technology had a lasting impact on decision-making.

    “As cognitive scientists, we’re interested in habit formation,” Wouter Kool, co-author of the study, said.

    “This is a cool example because the behavior continued even when it was not called for anymore.”

    Still, the impulse behind the behavior isn’t entirely clear.

    Future consequences

    Researchers didn’t ask about specific motivations and strategies, and Kool explained that participants may not have felt a strong obligation to make AI more ethical.

    He said that the experiment possibly brought out their natural tendencies to reject unfair offers.

    “They may not be thinking about the future consequences,” he said. “They could just be taking the easy way out.”

    “The study underscores the important human element in the training of AI,” said Chien-Ju Ho, a computer scientist who studies the relationships between human behaviors and machine learning algorithms.

    “A lot of AI training is based on human decisions,” he said. “If human biases during AI training aren’t taken into account, the resulting AI will also be biased. In the last few years, we’ve seen a lot of issues arising from this sort of mismatch between AI training and deployment.”

    Some facial recognition software, for example, is less accurate at identifying people of color, Ho said. “That’s partly because the data used to train AI is biased and unrepresentative,” he said.
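    Ho’s point about mismatched training and deployment can be made concrete with a toy sketch. The numbers and the simple “learn a rejection threshold” model below are assumptions made purely for illustration, not data or methods from the study.

        # Toy illustration of a training/deployment mismatch (assumed numbers, not study data).
        from statistics import mean

        # Offers (from a $6 pot) that responders rejected in two hypothetical settings:
        natural_rejections = [0.5, 1.0, 1.0, 1.5]          # everyday behavior
        training_rejections = [0.5, 1.0, 1.5, 2.0, 2.5]    # behavior shifted toward fairness

        def learned_threshold(rejected_offers):
            # Toy model: the trained agent rejects any offer at or below the
            # average offer that humans rejected in its training data.
            return mean(rejected_offers)

        print(learned_threshold(natural_rejections))   # 1.0
        print(learned_threshold(training_rejections))  # 1.5
        # An agent trained on the shifted data turns down offers (say, $1.25) that
        # people would ordinarily accept: the bias picked up during training
        # carries over into deployment.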
