    MIT just launched a new database tracking the biggest AI risks

    By Nicole Kobie

    2 days ago

    MIT is tracking the potential dangers posed by AI, and has found that most adoption frameworks designed to promote safe use of the technology overlook key risks.

    Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have joined forces with colleagues at the University of Queensland, the Future of Life Institute, KU Leuven, and Harmony Intelligence to create the AI Risk Repository.

    This is a database of more than 700 AI-related risks, identified by examining 43 existing frameworks. From these, the researchers developed taxonomies that classify the risks.

    "The AI Risk Repository is, to our knowledge, the first attempt to rigorously curate, analyze, and extract AI risk frameworks into a publicly accessible, comprehensive, extensible, and categorized risk database," said Dr. Neil Thompson, head of the MIT FutureTech Lab and one of the lead researchers on the project.

    "It is part of a larger effort to understand how we are responding to AI risks and to identify if there are gaps in our current approaches."

    The AI Risk Repository was created after the researchers noticed that people using AI were identifying some, but not all, of its risks. The aim was to pull existing research, analysis, and safety work together into one place for use by other academics, policymakers, and businesses.

    "Since the AI risk literature is scattered across peer-reviewed journals, preprints, and industry reports, and quite varied, I worry that decision-makers may unwittingly consult incomplete overviews, miss important concerns, and develop collective blind spots," said Dr. Peter Slattery, an incoming postdoc at the MIT FutureTech Lab and current project lead.

    The AI Risk Repository team warned that, despite these efforts, the database may still not capture every potential risk, owing to the researchers' own biases, emerging challenges, and risks that are specific to particular domains.

    "We are starting with a comprehensive checklist, to help us understand the breadth of potential risks. We plan to use this to identify shortcomings in organizational responses. For instance, if everyone focuses on one type of risk while overlooking others of similar importance, that's something we should notice and address,” Thompson added.

    How the MIT AI Risk Repository works

    The database breaks down each risk by its cause (how it occurs), its domain (such as misinformation), and its subdomain (such as false or misleading information).
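
    To make that structure concrete, here is a minimal sketch of how a repository entry might be modeled in code. The field names and sample records are hypothetical, chosen only to mirror the cause/domain/subdomain breakdown described above; they do not reflect the repository's actual schema.

```python
from dataclasses import dataclass

# Hypothetical model of one risk entry, mirroring the article's breakdown:
# a cause (who triggers the risk and when) plus a domain taxonomy.
@dataclass
class AIRisk:
    description: str
    entity: str     # cause: "AI" or "Human"
    timing: str     # cause: "Development" or "Deployment"
    domain: str     # e.g. "Misinformation"
    subdomain: str  # e.g. "False or misleading information"

repository = [
    AIRisk("Model confidently generates false claims",
           entity="AI", timing="Deployment",
           domain="Misinformation",
           subdomain="False or misleading information"),
    AIRisk("Training data encodes demographic bias",
           entity="Human", timing="Development",
           domain="Discrimination and toxicity",
           subdomain="Unfair discrimination"),
]

# Filter the way an analyst might: deployment-stage risks caused by AI systems.
for risk in repository:
    if risk.entity == "AI" and risk.timing == "Deployment":
        print(f"{risk.domain} / {risk.subdomain}: {risk.description}")
```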

    Most of the risks analyzed were attributed to AI systems (51%) rather than humans (34%), and most were found to emerge not during development (10%) but after deployment (65%). In other words, it's the machine's fault, and we didn't know it would happen until it did.

    The frameworks were most likely to address risks around system safety, socioeconomic or environmental harms, discrimination and toxicity, privacy and security, and malicious actors and misuse. They were less likely to consider human-computer interaction (41%) or misinformation (44%).

    Overall, the frameworks mentioned an average of just 34% of the 23 risk subdomains, and a quarter of them covered less than a fifth of the potential sources of risk. No single risk assessment document considered all 23 subdomains, and even the most comprehensive covered only 70%.
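
    As an illustration of what that coverage statistic means, the sketch below scores frameworks against a stand-in list of 23 subdomains. The framework data here is invented for the example, and the repository's real methodology may differ.

```python
# Score each framework by the fraction of the 23 subdomains it mentions.
# The subdomain names and framework contents are invented placeholders.
SUBDOMAINS = {f"subdomain_{i}" for i in range(1, 24)}  # 23 stand-in subdomains

frameworks = {
    "framework_A": {"subdomain_1", "subdomain_2", "subdomain_5"},
    "framework_B": {f"subdomain_{i}" for i in range(1, 17)},  # 16/23, ~70%
}

for name, mentioned in frameworks.items():
    coverage = len(mentioned & SUBDOMAINS) / len(SUBDOMAINS)
    print(f"{name}: {coverage:.0%} of subdomains covered")

# Mean coverage across all frameworks (the article reports 34% on average).
mean = sum(len(m & SUBDOMAINS) for m in frameworks.values()) / (
    len(frameworks) * len(SUBDOMAINS))
print(f"mean coverage: {mean:.0%}")
```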

    This means that assessments, frameworks, and other studies of the dangers posed by AI are failing to consider all aspects of risk.

    Soroush Pour, CEO and co-founder of Harmony Intelligence, an AI safety evaluations and red teaming company, said: "It becomes much more likely that we miss something by simply not being aware of it."

    Next, the project will have external experts work through the repository to rank the risks, then apply those rankings to public documents from AI developers and companies. That will, the researchers hope, reveal whether companies are doing enough to address risks in AI development.
