    Warnings AI tools used by government on UK public are ‘racist and biased’

    By Jon Ungoed-Thomas and Yusra Abdulahi

    In 2020 a legal challenge stopped the Home Office from using an algorithm to help sort visa applications which contained ‘entrenched racism and bias’. Photograph: Guy Corbishley/Alamy

    Artificial intelligence and algorithmic tools used by central government are to be published on a public register after warnings they can contain “entrenched” racism and bias.

    Officials confirmed this weekend that tools challenged by campaigners over alleged secrecy and a risk of bias will be named shortly. The technology has been used for a range of purposes, from trying to detect sham marriages to rooting out fraud and error in benefit claims.

    The move is a victory for campaigners who have been challenging the deployment of AI in central government in advance of what is likely to be a rapid rollout of the technology in the public sector. Caroline Selman, a senior research fellow at the Public Law Project (PLP), an access-to-justice charity, said there had been a lack of transparency on the existence, details and deployment of the systems. “We need to make sure public bodies are publishing the information about these tools, which are being rapidly rolled out. It is in everyone’s interest that the technology which is adopted is lawful, fair and non-discriminatory.”

    In August 2020, the Home Office agreed to stop using a computer algorithm to help sort visa applications after it was claimed it contained “entrenched racism and bias”. Officials suspended the algorithm after a legal challenge by the Joint Council for the Welfare of Immigrants and the digital rights group Foxglove.

    It was claimed by Foxglove that some nationalities were automatically given a “red” traffic-light risk score, and those people were more likely to be denied a visa. It said the process amounted to racial discrimination.

    The department was also challenged last year over an algorithmic tool to detect sham marriages used to subvert immigration controls. The PLP said it appeared it could discriminate against people from certain countries, with an equality assessment disclosed to the charity revealing that Bulgarian, Greek, Romanian and Albanian people were more likely to be referred for investigation.

    The government’s Centre for Data Ethics and Innovation, now the Responsible Technology Adoption Unit, warned in a report in November 2020 that there were numerous examples where the new technology had “entrenched or amplified historic biases, or even created new forms of bias or unfairness”.

    The centre helped develop an algorithmic transparency recording standard in November 2021 for public bodies deploying AI and algorithmic tools. It proposed that models which interact with the public or have a significant influence on decisions be published on a register or “repository”, with details on how and why they were being used.


    In the three years since, just nine records have been published on the repository. None of the models is operated by the Home Office or the Department for Work and Pensions (DWP), which have run some of the most controversial systems.

    The last government said in a consultation response on AI regulation in February that departments would be mandated to comply with the reporting standard. The Department for Science, Innovation and Technology (DSIT) confirmed this weekend that departments would now report on use of the technology under the standard.

    A DSIT spokesperson said: “Technology has huge potential to improve public services, but we know it’s important to maintain the right safeguards including, where appropriate, human oversight and other forms of governance.

    “The algorithmic transparency recording standard is now mandatory for all departments, with a number of records due to be published shortly. We continue to explore how it can be expanded across the public sector. We encourage all organisations to use AI and data in a way that builds public trust through tools, guidance and standards.”

    Departments are likely to face further calls to reveal more details on how their AI systems work and the measures taken to reduce the risk of bias. The DWP is using AI to detect potential fraud in advance claims for universal credit, and has further systems in development to detect fraud in other areas.


    In its latest annual report, the DWP says it has conducted a “fairness” analysis of its use of AI for universal credit advance claims, which did not “present any immediate concerns of discrimination”. The department has not provided any details of its assessment due to concerns that publication could “allow fraudsters to understand how the model operates”.

    The PLP is supporting possible legal action against the DWP over its use of the technology. It is pressing the department for details on how the technology is being used and the measures taken to mitigate harm. The project has compiled its own register of automated decision-making tools in government, with 55 tools tracked to date.
