
    AI Shows Deep-Seated Bias Against African American Vernacular

By Daniel Johnson

    2024-09-03

    A new study shows that large language models like ChatGPT have biases, including stereotypes against African American English speakers.

As research into artificial intelligence digs deeper into how the technology handles human language, a field that has exploded following innovations from OpenAI and other players in the space, the anti-Black biases of these tools are being exposed.

According to a paper in Nature, large language models (LLMs), like the ones behind OpenAI’s ChatGPT, operate with bias embedded in their programming. In the paper, the authors show that LLMs exhibit dialect prejudice and hold raciolinguistic stereotypes about speakers who use African American English, Ars Technica reported.

    According to the paper, “Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death.”

    Nicole Holliday, a linguist at the University of California, Berkeley, told Science.org that the findings of the paper deserve to be heard and intimately understood.

“Every single person working on generative AI needs to understand this paper,” Holliday said. She also warned that although companies that make LLMs have attempted to address racial bias, “when the bias is covert…that’s something that they have not been able to check for.”

Despite efforts to fix the racial bias in these language models, the bias remains. The paper’s authors argue that using human preference alignment to address racial bias only serves to conceal, rather than remove, the racism these models carry internally.

    According to the paper, “As the stakes of the decisions entrusted to language models rise, so does the concern that they mirror or even amplify human biases encoded in the data they were trained on, thereby perpetuating discrimination against racialized, gendered and other minoritized social groups.”

The paper goes on to tie the potential prejudices of these LLMs against AAE speakers to real-world examples of discrimination. “For example, researchers have previously found that landlords engage in housing discrimination based solely on the auditory profiles of speakers, with voices that sounded Black or Chicano being less likely to secure housing appointments in predominantly white locales than in mostly Black or Mexican American areas,” the report read.

    According to the paper, “Our experiments show that these stereotypes are similar to the archaic human stereotypes about African Americans that existed before the civil rights movement, are even more negative than the most negative experimentally recorded human stereotypes about African Americans, and are both qualitatively and quantitatively different from the previously reported overt racial stereotypes in language models, indicating that they are a fundamentally different kind of bias.”

The paper also warned that, just as American society has grown less overtly racist while covert racism persists, the attitudes embedded in the inner workings of artificial intelligence programs could allow anti-Black racism to persist in subtler, more socially acceptable forms.

    The paper’s authors continued, “Worryingly, we also observe that larger language models and language models trained with HF exhibit stronger covert, but weaker overt, prejudice…There is therefore a realistic possibility that the allocational harms caused by dialect prejudice in language models will increase further in the future, perpetuating the racial discrimination experienced by generations of African Americans.”

