    How trust and safety leaders at top tech companies are approaching the security threat of AI: ‘Trust but verify’

By Leo Schwartz


Safety officers at large companies that have integrated AI tools like ChatGPT into their businesses are issuing the same warning to their colleagues: "Trust, but verify."

Speaking at Fortune's Brainstorm Tech conference on Tuesday, Salesforce chief trust officer Brad Arkin detailed how the company, and its CEO Marc Benioff, balance customer demand for cutting-edge AI services with ensuring those services don't expose customers to vulnerabilities. "Trust is more than just security," Arkin said, adding that the company's key focus is to create new features for users that don't go against their interests.

Against the backdrop of breakneck AI adoption, however, is the reality that AI also makes it easier for criminals to attack potential victims. Malicious actors can now operate across language barriers, for example, and can more easily send massive volumes of social engineering scams like phishing emails.

    Shadow AI

Companies have long dealt with the threat of so-called "shadow IT," the practice of employees using hardware and software that is not managed by a firm's technology department. Shadow AI, the unsanctioned use of AI tools, could create even more vulnerabilities, especially when employees lack proper training. Still, Arkin said that AI should be approached like any other tool: there will always be dangers, but proper instruction can lead to valuable results.

Speaking on the panel, Cisco's chief security and trust officer Anthony Grieco shared advice that he passes on to employees about generative AI platforms like ChatGPT. "If you wouldn't tweet it, if you wouldn't put it on Facebook, if you wouldn't publish it publicly, don't put it into those tools," Grieco said.

Even with proper training, the ubiquity of AI, along with the rise in cybersecurity threats, means that every company has to rethink its approach to IT. A working paper published in October by the nonprofit National Bureau of Economic Research found rapid adoption of AI across the country, especially among the largest firms: more than 60% of companies with more than 10,000 employees are using AI, the group said.

Wendi Whitmore, the senior vice president for the "special forces unit" of the cybersecurity giant Palo Alto Networks, said on Tuesday that cybercriminals have deeply researched how businesses operate, including how they work with vendors and operators. As a result, employees should be trained to scrutinize every piece of communication for phishing and related attacks. "You can be concerned about the technology and put some limitations around it," she said. "But the reality is that attackers don't have any of those limitations."

Despite the novel perils, Accenture global security lead Lisa O'Connor touted the potential of what she called "responsible AI," in which organizations implement a set of governance principles for how they want to adopt the technology. She added that Accenture has long embraced large language models, including working with Fortune on its own custom-trained LLM. "We drank our own champagne," O'Connor said.

    Read more coverage from Brainstorm Tech 2024:

    Experts worry that a U.S.-China cold war could turn hot: ‘Everyone’s waiting for the shoe to drop in Asia’

    Wiz CEO says ‘consolidation in the security market is truly a necessity’ as reports swirl of $23 billion Google acquisition

    Agility Robotics’ humanoid Digit robot is working hard at its first real job—at a Spanx factory

    This story was originally featured on Fortune.com
