    Meta won't release advanced AI in the EU due to stronger user data protections

    By Doug Cunningham


    July 18 (UPI) -- Meta said Thursday it won't release Llama, its most advanced artificial intelligence model, in the European Union due to concerns over stronger EU privacy protections and AI regulations.

    Meta said Thursday it has decided not to release its most advanced AI system in the EU due to concerns about the EU's stronger data privacy and AI regulations. File Photo by Terry Schmitt/UPI

    "We will release a multimodal Llama model over the coming months, but not in the EU due to the unpredictable nature of the European regulatory environment ," Meta said in a statement to Axios.

    As a result, European companies won't be able to use the multimodal models, and Meta could also prevent companies outside the EU from offering services built on those models to European customers.

    While that would limit the AI products available to individuals in the EU, Europe's data privacy laws also extend greater protections to users than those in other parts of the world.

    Meta, however, has said that a text-only version of its Llama 3 model will be made available for EU customers and companies.

    Meta has been ordered to stop training AI using Facebook and Instagram user posts in the EU due to privacy concerns.

    Meta's decision not to release the Llama AI system in the EU is related to the General Data Protection Regulation, considered by the EU to be the strongest privacy and security law in the world.

    The GDPR governs how the personal data of individuals in the EU can be processed and transferred.

    According to the European Council, the GDPR defines "individuals' fundamental rights in the digital age, the obligations of those processing data, methods for ensuring compliance and sanctions for those in breach of the rules."

    The EU AI Act, which subjects AI systems deemed "high risk" to stronger regulations, is also set to take effect in August.

    Under that law, AI providers in the EU must "establish a risk management system throughout the AI systems lifecycle and conduct data governance making sure AI training, validation and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose."

    Meta and other AI makers must also design their systems for record-keeping "to enable it to automatically record events relevant for identifying national level risks and substantial modifications throughout the system's lifecycle."

    Human oversight must also be built into the AI systems and they have to be designed "to achieve appropriate levels of accuracy, robustness, and cybersecurity."

    On July 1, the European Commission said Meta had violated the Digital Markets Act and could face massive fines because it does not allow users to exercise their right to freely consent to the use of their data.
