
    What We Know About the New U.K. Government’s Approach to AI

By Harry Booth

    4 days ago


When the U.K. hosted the world’s first AI Safety Summit last November, Rishi Sunak, the then Prime Minister, said the achievements at the event would “tip the balance in favor of humanity.” At the two-day event, held in the cradle of modern computing, Bletchley Park, AI labs committed to share their models with governments before public release, and 29 countries pledged to collaborate on mitigating risks from artificial intelligence. It was part of the Sunak-led Conservative government’s effort to position the U.K. as a leader in artificial intelligence governance, which also involved establishing the world's first AI Safety Institute—a government body tasked with evaluating models for potentially dangerous capabilities. While the U.S. and other allied nations subsequently set up their own similar institutes, the U.K. institute boasts 10 times the funding of its American counterpart.

    Eight months later, on July 5, after a landslide loss to the Labour Party, Sunak left office and the newly elected Prime Minister Keir Starmer began forming his new government. His approach to AI has been described as potentially tougher than Sunak’s.

    Starmer appointed Peter Kyle as science and technology minister, giving the lawmaker oversight of the U.K.’s AI policy at a crucial moment, as governments around the world grapple with how to foster innovation and regulate the rapidly developing technology. Following the election result, Kyle told the BBC that “unlocking the benefits of artificial intelligence is personal,” saying the advanced medical scans now being developed could have helped detect his late mother’s lung cancer before it became fatal.

Alongside the potential benefits of AI, the Labour government will need to balance concerns from the public. An August poll of over 4,000 members of the British public conducted by the Centre for Data Ethics and Innovation found that 45% of respondents believed AI taking people’s jobs represented one of the biggest risks posed by the technology; 34% believed a loss of human creativity and problem-solving was one of the greatest risks.

    Here's what we know so far about Labour's approach to artificial intelligence.

    Regulating AI

One of the key issues for the Labour government to tackle will likely be how to regulate AI companies and AI-generated content. Under the previous Conservative-led administration, the Department for Science, Innovation and Technology (DSIT) held off on implementing rules, saying in a 2024 policy paper on AI regulation that “introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation and prevent people from across the UK from benefiting from AI.” Labour has signaled a different approach, promising in its manifesto to introduce “binding regulation on the handful of companies developing the most powerful AI models,” suggesting a greater willingness to intervene in the rapidly evolving technology’s development.

    Read More: U.S., U.K. Announce Partnership to Safety Test AI Models

Labour has also pledged to ban sexually explicit deepfakes. Unlike proposed legislation in the U.S., which would allow victims to sue those who create non-consensual deepfakes, Labour has considered a proposal by Labour Together, a think-tank with close ties to the current Labour Party, to impose restrictions on developers by outlawing so-called nudification tools.

    While AI developers have made agreements to share information with the AI Safety Institute on a voluntary basis, Kyle said in a February interview with the BBC that Labour would make that information-sharing agreement a “statutory code.”

    Read More: To Stop AI Killing Us All, First Regulate Deepfakes, Says Researcher Connor Leahy

    “We would compel by law, those test data results to be released to the government,” Kyle said in the interview.

    Timing regulation is a careful balancing act, says Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute.

    “The art form is to be right on time with law. That means not too early, not too late,” she says. “The last thing that you want is a hastily thrown together policy that stifles innovation and does not protect human rights.”

Wachter says that striking the right balance on regulation will require the government to be in “constant conversation” with stakeholders, such as those within the tech industry, to ensure it has an inside view of what is happening at the cutting edge of AI development when formulating policy.

Kirsty Innes, director of technology policy at Labour Together, points to the U.K. Online Safety Act, which was signed into law last October, as a cautionary tale of regulation failing to keep pace with technology. The law, which aims to protect children from harmful content online, took six years from the initial proposal to being signed into law.

“During [those six years] people’s experiences online transformed radically. It doesn't make sense for that to be your main way of responding to changes in society brought by technology,” she says. “You've got to be much quicker about it now.”

    Read More: The 3 Most Important AI Policy Milestones of 2023

There may be lessons for the U.K. to learn from the E.U. AI Act, Europe’s comprehensive regulatory framework passed in March, which will come into force on August 1 and become fully applicable to AI developers in 2026, though Innes says that mimicking the E.U. is not Labour’s endgame. The European law outlines a tiered risk classification for AI use cases, banning systems deemed to pose unacceptable risks, such as social scoring systems, while placing obligations on providers of high-risk applications, like those used for critical infrastructure. Systems said to pose limited or minimal risk face fewer requirements. Additionally, it sets out rules for “general-purpose AI,” systems with a wide range of uses, like those underpinning chatbots such as OpenAI’s ChatGPT. General-purpose systems trained on large amounts of computing power—such as GPT-4—are said to pose “systemic risk,” and their developers will be required to perform risk assessments as well as track and report serious incidents.

    “I think there is an opportunity for the U.K. to tread a nuanced middle ground somewhere between a very hands-off U.S. approach and a very regulatory heavy E.U. approach,” says Innes.

    Read More: There’s an AI Lobbying Frenzy in Washington. Big Tech Is Dominating

    In a bid to occupy that middle ground, Labour has pledged to create what it calls the Regulatory Innovation Office, a new government body that will aim to accelerate regulatory decisions.

    A ‘pro-innovation’ approach

In addition to helping the government respond more quickly to the fast-moving technology, Labour says the "pro-innovation" regulatory body will speed up approvals, helping new technologies get licensed faster. The party said in its manifesto that it would use AI in healthcare to “transform the speed and accuracy of diagnostic services, saving potentially thousands of lives.”

Healthcare is just one area where Kyle hopes to use AI. On July 8, he announced a revamp of DSIT, which will bring on AI experts to explore ways to improve public services.

Meanwhile, former Labour Prime Minister Tony Blair has encouraged the new government to embrace AI to improve the country’s welfare system. A July 9 report by his think tank, the Tony Blair Institute for Global Change, concluded that AI could save the U.K. Department for Work and Pensions more than $1 billion annually.

    Blair has emphasized AI's importance. “Leave aside the geopolitics, and war, and America and China, and all the rest of it. This revolution is going to change everything about our society, our economy, the way we live, the way we interact with each other,” Blair said, speaking on the Dwarkesh Podcast in June.

Read More: How a New U.N. Advisory Group Wants to Transform AI Governance

    Modernizing public services is part of Labour’s wider strategy to leverage AI to grow the U.K. tech sector. Other measures include making it easier to set up data centers in the U.K., creating a national data library to bring existing research programs together, and offering decade-long research and development funding cycles to support universities and start-ups.

Speaking to business and tech leaders in London last March, Kyle said he wanted to support “the next 10 DeepMinds to start up and scale up here within the U.K.”

    Workers’ rights

Artificial intelligence-powered tools can be used to monitor worker performance, such as grading call-center employees on how closely they stick to a script. Labour has committed to ensuring that new surveillance technologies won’t find their way into the workplace without consultation with workers. The party has also promised to “protect good jobs” but, beyond committing to engage with workers, has offered few details on how.

    Read More: As Employers Embrace AI, Workers Fret—and Seek Input

“That might sound broad brush, but actually a big failure of the last government's approach was that the voice of the workforce was excluded from discussions,” says Nicola Smith, head of rights at the Trades Union Congress, a federation of trade unions.

While Starmer’s new government has a number of urgent matters to prioritize, from setting out its legislative plan for year one to dealing with overcrowded prisons, the way it handles AI could have far-reaching implications.

    "I'm constantly saying to my own party, the Labour Party [that] 'you've got to focus on this technology revolution. It's not an afterthought,” Blair said on the Dwarkesh Podcast in June. “It's the single biggest thing that's happening in the world today."

Contact us at letters@time.com.
