
    Transparency is “vital” in big tech’s new coalition on AI safety, experts suggest

By George Fitzmaurice

    27 days ago


Some of the world's most influential tech companies have banded together to form a coalition on AI security, though experts have told ITPro that certain concerns need to be allayed.

Dubbed the Coalition for Secure AI (CoSAI), the agreement will see the likes of Amazon, Microsoft, Anthropic, OpenAI, and others focus on collaborative efforts for the burgeoning technology. CoSAI has been about a year in the making, according to Google, following on from the tech giant's introduction of the Secure AI Framework (SAIF) in 2023.

    Working towards similar ends, this coalition will focus on three areas - or “workstreams” as Google has termed them - to help support a “collective investment in AI security.”

One such area is security in the AI software supply chain. Google has extended SLSA provenance to AI models to “help identify when AI software is secure” by providing an understanding of how it was created and handled in a supply chain.

    As part of this workstream, CoSAI will look to aid the management of third-party model risks and expand on the “existing efforts” of supply chain frameworks.

    CoSAI will also look to assist security practitioners in “day-to-day” AI governance challenges by creating clearer pathways to “identify investments and mitigation techniques to address the security impact of AI use.”

    Finally, the coalition will work to construct a taxonomy of AI risks and controls, as well as a checklist and scorecard to help guide practitioners in preparedness, management, and monitoring.

    As Peter Wood, CTO at Spectrum Search, pointed out, though, there are concerns around the self-regulatory nature of this body, given that big tech is exercising a degree of control over its own security measures.

“A principal worry is the matter of who's held accountable. When these tech titans join forces to lay down the law on AI security, there's the worry that these guidelines could skew towards their benefit rather than that of the public interest,” Wood told ITPro.

    “This could potentially quash innovation from more modest firms and startups lacking the same resources. The concern is that self-regulation might become a tool for these giants to keep hold of their stronghold, forming a wall against newer contenders in the AI arena,” Wood added.

    CoSAI is a positive move if handled correctly

According to Wood, “transparency in how the coalition operates is vital” as, without it, there could be growing concerns that CoSAI is setting standards for its own ends.


    “Without clear transparency, there's a risk that these self-imposed rules could be less about security and more about managing the story around AI,” he said.

In principle, however, the move is a positive one and “underscores a sector-wide recognition of the importance of AI security,” suggesting a push towards developing standards that might take longer to establish if the onus were placed solely on the government.

    “It needs to be handled with care to ensure it serves the wider interests of society, not just those of the coalition members. Striking the right balance between innovation, regulation, and public interest will be the key to its success,” Wood added.
