
    American tech workers want AI regulation – but they might have to wait a while

By Nicole Kobie

    10 days ago


    American tech workers want strong AI regulation so they can make better use of the technology, according to new research.

A survey commissioned by data intelligence company Collibra, which polled 300 American tech experts working in data management, privacy and AI, found that three-quarters support federal and state-level regulations to oversee how the technology evolves.

    "As we look to the future, we need our governments to set clear and consistent rules while also creating an environment that enables innovation and bolsters data quality and integrity," said Collibra co-founder and CEO Felix Van de Maele.

    The survey found that the biggest challenges facing AI — and necessitating regulation — were privacy and security, both cited as a concern by 64% of respondents, followed by misinformation and ethical use/accountability, both above 50%.

    IP needs protection from AI practices

    Indeed, the survey results suggested accountability around data use was a high priority for those in the industry — perhaps no surprise given the furor around how data was collected by AI companies, with OpenAI, Microsoft and more facing legal challenges over copyright infringement and license agreements.

Eight in ten of those surveyed believed that American regulators should update copyright laws to protect content against data gathering, and to compensate creators whose material is used to train models.

    "While AI innovation continues to advance rapidly, the lack of a regulatory framework puts content owners at risk, and ultimately will hinder the adoption of AI," said Van de Maele.

The survey comes amid growing efforts to regulate the technology without hampering innovation, Van de Maele noted, adding that the US should follow the lead of the European Union (EU).

The US has bits and pieces of legislation at the state level, while President Biden issued an executive order on "safe, secure and trustworthy" artificial intelligence last year.

However, that order largely directed federal bodies, such as the National Institute of Standards and Technology (NIST) and the National Security Council (NSC), to work on fuller standards.

In the EU, the union's flagship AI Act came into effect at the beginning of the month, though the full rollout will take years. The first provisions to apply cover prohibited systems, banning AI that exploits vulnerable users, applies social scoring or predicts future criminality; those rules will be enforced from next year.


Beyond that, the EU AI Act requires high-risk AI systems to have a risk management system, data governance, proper record keeping and other oversight measures to ensure safety, with further rules for general-purpose AI.

    More positively, the survey did suggest IT workers in the US trust their companies to make use of AI in a safe, sensible way.

    According to Collibra, 88% of those asked said they have a lot or a great deal of trust in their own employers' approach to AI.

A further three-quarters said they believe their company correctly prioritizes AI training, with even more positive results at larger companies.
