
    AI isn’t the cure for AI-led cyber attacks

By Ev Kontsevoy

    15 days ago

Even without factoring in AI, breaches seem bad enough these days. Just about every week, another high-profile data breach gets reported in the press. Sometimes they’re on the very costly side, like the recent breach of UnitedHealth Group’s Change Healthcare unit, which is expected to cost that company up to $1.6 billion.

Now imagine pouring AI onto that fire, and there’s just cause for concern. Cyber attacks have always involved some degree of cleverness and patience from hackers. With AI, though, the danger is a sharp rise in the frequency and scale of the already-plentiful attacks we’re seeing.

The channel, of course, might be tempted to peddle cybersecurity solutions powered by AI – ‘fight AI with AI,’ as they say. But that isn’t a solution that addresses the root cause of breaches. In fact, AI isn’t even the biggest threat enterprises should worry about: social engineering is.

    The rise of AI in social engineering

Social engineering is the leading cause of cyber breaches. For a sense of how prevalent it is, consider that 68% of cyber attacks involve a human element. What do these attacks look like in practice? You get an email from your HR department, asking for your password to a pension provider platform.

Maybe it happens right after your company announces a change of pension providers, too. The timing is certainly convenient, so what’s the harm, right? The twist is that the ‘HR person’ isn’t from the HR department. They’re an impersonator who only found out about the pension provider change because they saw your company post about it on LinkedIn.

These attacks keep succeeding, whether people find that fact silly or not, and generative AI is likely to catapult their scale into stratospheric volumes. Generative AI tools like WormGPT, also known as the ‘hackbot-as-a-service,’ can now enable cyber criminals to design more convincing phishing campaigns and deepfake impersonations while reducing the time and cost required to launch these attacks. A teenager in a basement in Ohio will, in theory, be able to carry out a huge volume of social engineering attacks in a single day.

    Want to stop social engineering? Don’t fragment employees’ identity

The reason these breaches keep happening isn’t the sophistication of the scheme, or some elaborate software vulnerability or exploit. People are the problem – or rather, the secrets they leave behind pave the way for bad actors to breach modern infrastructure and pivot across resources.

Most breaches involve cyber criminals targeting some form of privilege, such as credentials like passwords, browser cookies, or API keys. These credentials are everywhere. They appear in 86% of security breaches related to web-based applications and platforms. Most organizations even have credentials hard-coded into their code base.
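Hard-coded credentials of the kind described above are usually easy to find mechanically. As a hedged illustration – the regex patterns and `scan` helper below are hypothetical sketches, not the ruleset of any particular scanning tool – a minimal secret scanner might look like this:

```python
import re

# Illustrative patterns for likely hard-coded secrets; a real scanner
# ships hundreds of rules plus entropy checks to cut false positives.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[=:]\s*['\"][A-Za-z0-9]{16,}['\"]"),
    "password_assignment": re.compile(r"(?i)password\s*[=:]\s*['\"][^'\"]+['\"]"),
}

def scan(source: str) -> list:
    """Return (rule_name, matched_text) pairs for suspected secrets."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(source):
            hits.append((name, match.group()))
    return hits

# A snippet of the kind of code base the article describes.
snippet = 'db_password = "hunter2"\naws_key = "AKIAABCDEFGHIJKLMNOP"'
for rule, text in scan(snippet):
    print(rule, "->", text)
```

Running a check like this in CI is one cheap way to stop secrets from ever landing in a repository in the first place.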

How much bad actors decide to lean on AI for phishing campaigns is the wrong issue to focus on. The real battle is stopping employees and enterprises from leaving secrets like credentials in places they shouldn’t be. Plain and simple, social engineering will never go away. That’s why the modern-day security imperative has to be eliminating secrets, now littered like plastic in an ocean basin across many disparate layers of the technology stack – Kubernetes, servers, cloud APIs, specialized dashboards, databases, and more.

These layers all manage security in different ways, and the consequence our industry has been left with is a multitude of silos, each opening up new vectors for bad actors to breach. Adding AI to workflows will inevitably create yet another silo, but it doesn’t have to be that way.

    Consolidate AI with the rest of your identity

    For some time, data transparency has been a thorny issue for generative AI. If a leak happens, you want to quickly find out what data the AI agent had access to, and who had access to the agent itself. Not every company governs data the same way, so finding the source of truth is rarely easy.

To reduce friction, enterprises must resist treating AI agents as a separate technology silo. They have to consolidate the identity of their AI agents with all their other resources – servers, laptops, microservices, and so on – into one inventory that provides a single source of truth for identity and access relationships. To further streamline things, these companies should apply the same rules and policies to AI that they do to everything else.
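As a rough sketch of what that consolidation could look like – the `Inventory` class and resource names below are illustrative, not a real product API – a single inventory can answer both “who can access this resource?” and “what can this identity reach?”, for AI agents and servers alike:

```python
from dataclasses import dataclass, field

@dataclass
class Inventory:
    """One inventory for every identity-to-resource relationship."""
    access: dict = field(default_factory=dict)  # resource -> set of identities

    def grant(self, identity: str, resource: str) -> None:
        self.access.setdefault(resource, set()).add(identity)

    def who_can_access(self, resource: str) -> set:
        """The leak-investigation question: who could reach this resource?"""
        return self.access.get(resource, set())

    def reachable_by(self, identity: str) -> set:
        """The blast-radius question: what can this identity touch?"""
        return {r for r, ids in self.access.items() if identity in ids}

inv = Inventory()
inv.grant("alice", "prod-db")
inv.grant("support-agent", "prod-db")  # an AI agent, tracked like any other identity
inv.grant("alice", "laptop-42")

print(inv.who_can_access("prod-db"))
print(inv.reachable_by("alice"))
```

Because the AI agent sits in the same inventory as people and machines, a post-leak audit is one query rather than a hunt across silos.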

Moreover, no business should ever reduce employee identities to mere ‘information’ such as passwords or usernames. It’s time for every enterprise housing modern infrastructure to cryptographically secure identities. This means basing access not on passwords but on physical-world attributes like biometric authentication, and enforcing access with short-lived privileges granted only for the individual task at hand.
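A minimal sketch of such short-lived, task-scoped privileges, assuming a simple HMAC-signed token – the `issue`/`verify` helpers and the signing key are hypothetical, and a real deployment would use certificates from an issuing authority rather than a shared secret:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # illustrative only; never hard-code real keys

def issue(identity: str, task: str, ttl_seconds: int) -> str:
    """Mint a credential scoped to one task that expires on its own."""
    claims = {"sub": identity, "task": task, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify(token: str, task: str) -> bool:
    """Accept only untampered, unexpired tokens scoped to the requested task."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["task"] == task and time.time() < claims["exp"]

token = issue("alice", "restart-prod-db", ttl_seconds=300)
print(verify(token, "restart-prod-db"))  # True: in scope and not expired
print(verify(token, "read-payroll"))     # False: wrong task scope
```

The point of the design is that a stolen token is worth little: it dies within minutes and only opens the one task it was minted for.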

    A cryptographic identity for employees can consist of three key components: the machine identity of the device being used, the employee's biometric marker, and a personal identification number (PIN). The point of this approach is to significantly reduce the attack surface threat actors can exploit with social engineering tactics. If you need a poster child for this security model, it already exists, and it’s called the iPhone. It uses facial recognition for biometric authentication, a PIN code, and a Trusted Platform Module (TPM) chip inside the phone that governs its ‘machine identity.’ This is why you never hear about iPhones getting hacked.
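The three components above can be sketched as a single derived identity. This is purely illustrative – real systems keep the device key inside the TPM and never export the biometric template – but the hashing models how the factors bind together so that no single phished piece of ‘information’ grants access:

```python
import hashlib
import hmac

def derive_identity(device_key: bytes, biometric_marker: bytes, pin: str) -> bytes:
    """Bind machine identity, biometric marker, and PIN into one identity key."""
    material = device_key + biometric_marker + pin.encode()
    return hashlib.sha256(material).digest()

# Enrollment on the trusted device (hypothetical values).
enrolled = derive_identity(b"tpm-device-key", b"face-template-bytes", "4821")

def authenticate(device_key: bytes, biometric_marker: bytes, pin: str) -> bool:
    """All three factors must match; any one alone is useless to an attacker."""
    candidate = derive_identity(device_key, biometric_marker, pin)
    return hmac.compare_digest(candidate, enrolled)

print(authenticate(b"tpm-device-key", b"face-template-bytes", "4821"))  # True
print(authenticate(b"attacker-laptop", b"", "4821"))  # False: phished PIN alone fails
```

A social engineer who tricks an employee out of their PIN still lacks the device key and the biometric, so the composite identity never verifies.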

This doesn’t mean the channel won’t find success in selling AI-powered cybersecurity tools. Obviously, they have their uses for analyzing threat activity and detecting anomalies in infrastructure. But they don’t treat human error. With or without AI, social engineering will always rely on human error. This is where anti-malware and virus remediation tools will fall short, no matter how much AI you throw at them. People are still going to leave their passwords lying around on an unlocked laptop at a cafe. So, businesses can sing the praises of AI, but they need to get rid of the passwords first.
