
    How to prevent millions of invisible law-free AI agents casually wreaking economic havoc

    By Gillian Hadfield


    Shortly after the world tilted on its axis with the release of ChatGPT, folks in the tech community started asking each other: What is your p(doom)? That’s nerd-speak for, what do you think is the probability that AI will destroy humanity? There was Eliezer Yudkowsky warning women not to date men who didn’t think the odds were greater than 80%, as he believes. AI pioneers Yoshua Bengio and Geoffrey Hinton estimating 10-20%—enough to get both to shift their careers from helping to build AI to helping to protect against it. And then another AI pioneer, Meta’s chief AI scientist Yann LeCun, pooh-poohing concerns and putting the risk at less than .01%—“below the chances of an asteroid hitting the earth.” According to LeCun, current AI is dumber than a cat.

    The numbers are, of course, meaningless. They just stand in for wildly different imagined futures in which humans either manage or fail to maintain control over how AI behaves as it grows in capabilities. The true doomers see us inevitably losing an evolutionary race with a new species: AI “wakes up” one day with the goal of destroying humanity because it can and we’re slow, stupid, and irrelevant. Big digital brains take over from us puny biological brains just like we took over the entire planet, destroying or dominating all other species. The pooh-poohers say, Look, we are building this stuff, and we won’t build and certainly won’t deploy something that could harm us.

    Personally, although I’ve been thinking about what happens if we develop really powerful AI systems for several years now, I don’t lose sleep over the risk of rogue AI that just wants to squash us. I have found that people who do worry about this often have a red-in-tooth-and-claw vision of evolution as a ruthlessly Darwinian process according to which the more intelligent gobble up the less. But that’s not the way I think about evolution, certainly not among humans. Among humans, evolution has mostly been about cultural group selection—humans succeed when the groups they belong to succeed. We have evolved to engage in exchange—you collect the berries, I’ll collect the water. The extraordinary extent to which humans have outstripped all other species on the planet is a result of the fact that we specialize and trade, engage in shared defense and care. It begins with small bands sharing food around the fire and ultimately delivers us to the phenomenally complex global economy and geopolitical order we live in today.

    And how have we accomplished this? Not by relentlessly increasing IQ points but by continually building out the stability and adaptability of our societies. By enlarging our capacity for specialization and exchange. Our intelligence is collective intelligence, cooperative intelligence. And to sustain that collectivity and cooperation we rely on complex systems of norms and rules and institutions that give us each the confidence it takes to live in a world where we depend constantly on other people doing what they should. Bringing home the berries. Ensuring the water is drinkable.

    'More AI agents than there are people'

    So, when I worry about where AI might take us, this is what I worry about: AI that messes up these intricate evolved systems of cooperation in the human collective.

    Because what our AI companies are furiously building is no longer best thought of as new tools for us to use in what we do. What they are bringing into being is a set of entirely new actors who will themselves start to “do.” It is the implications of advanced AI’s autonomy that we need to come to grips with.

    Autonomy has been there from the Big Bang of modern AI in 2012—when Geoffrey Hinton and his students Ilya Sutskever and Alex Krizhevsky demonstrated the power of a technique for building software that writes itself. That’s what machine learning is: Instead of a human programmer writing all the lines of code, the machine does it. Humans write learning algorithms that give the machine an objective—try to come up with labels for these pictures (“cat,” “dog”) that match the ones that humans give them—and some mathematical dials and switches to fiddle with to get to the solution. But then the machine does the rest. This type of autonomy already generates a bunch of issues, which we’ve grappled with in AI since the start: Since the machine wrote the software, and it’s pretty darn complicated, it’s not easy for humans to unpack what it’s doing. That can make it hard to predict or understand or fix. But we can still decide whether to use it; we’re still very much in control.
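
    For the concretely minded, here is a toy sketch of that division of labor (illustrative only; a tiny learner with three dials, nothing like the image networks Hinton’s team built). The human writes the objective and the update rule; the machine finds the settings of the dials:

```python
# Minimal sketch of "software that writes itself": the human writes
# the objective and the update rule; the machine finds the values of
# the dials (here just three numbers) that satisfy the objective.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 examples with two features each, plus human-given labels.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)  # the machine's "dials"
b = 0.0          # ...and one more "switch"

def predict(X):
    # Probability the machine assigns to label 1.
    return 1 / (1 + np.exp(-(X @ w + b)))

# Human-written learning rule: repeatedly nudge the dials to shrink
# the gap between the machine's labels and the human-given ones.
for _ in range(500):
    p = predict(X)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * float(np.mean(p - y))

print("agreement with human labels:", np.mean((predict(X) > 0.5) == y))
```

    With three dials and 200 examples, the result is legible. With billions of dials, nobody can read off from the learned settings why the system does what it does.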

    Today, however, the holy grail that is drawing billions of AI investment dollars is autonomy that goes beyond a machine writing its own code to solve a problem set for it by humans. AI companies are now aiming at autonomous AI systems that can use their code to go out and do things in the real world: buy or sell shares on the stock exchange, agree to contracts, execute maintenance routines on equipment. They are giving them tools—access to the internet to do a search today, access to your bank account tomorrow.

    Our AI developers and investors are looking to create digital economic actors with the capacity to do just about anything. When Meta CEO Mark Zuckerberg recently said that he expected the future to be a world with hundreds of millions if not billions of AI agents—“more AI agents than there are people”—he was imagining this. Artificial agents helping businesses manage customer relations, supply chains, product development, marketing, pricing. AI agents helping consumers manage their finances and tax filings and purchases. AI agents helping governments manage their constituents. That’s the “general” in artificial general intelligence, or AGI. The goal is to be able to give an AI system a “general” goal and then set it loose to make and execute a plan. Autonomously.

    Mustafa Suleyman, now head of AI at Microsoft, imagined it this way: In this new world, an AI agent would be able to “make $1 million on a retail web platform in a few months with just a $100,000 investment.” A year ago, he thought such a generally capable AI agent could be two years away. Pause for a moment and think about all the things this would entail: a piece of software with control over a bank account and the capacity to enter into contracts, engage in marketing research, create an online storefront, evaluate product performance and sales, and accrue a commercial reputation. For starters. Maybe it has to hire human workers. Or create copies of itself to perform specialized tasks. Obtain regulatory approvals? Respond to legal claims of contract breach or copyright violation or anticompetitive conduct?

    Who to sue if it's an autonomous AI agent

    Shouldn’t we have some rules in place for these newcomers? Before they start building and selling products on our retail web platforms? Because if one of our AI companies announces they have these agents ready to unleash next year, we currently have nothing in place. If someone wants to work in our economy, they have to show their ID and demonstrate they’re authorized to work. If someone wants to start a business, they have to register with the secretary of state and provide a unique legal name, a physical address, and the name of the person who can be sued if the business breaks the rules. But an AI agent? Currently they can set up shop wherever and whenever they like.

    Put it this way: If that AI agent sold you faulty goods, made off with your deposit, stole your IP, or failed to pay you, who are you going to sue? How will the norms and rules and institutions we rely on to make the modern economy’s complex trades and transactions work apply to it? Who is going to be accountable for the actions of this autonomous AI agent?

    The naïve answer is: whoever unleashed the agent. But there are many problems here. First, how are you going to find the person or business who set the AI agent on its path, possibly, according to Suleyman, a few months ago? Maybe the human behind the agent has disclosed their identity and contact information at every step along the way. But maybe they haven’t. We don’t have any specific rules in place yet requiring humans to make such disclosures. I’m sure smart lawyers will be looking for such rules in our existing law, but smart scammers, and aggressive entrepreneurs, will be looking for loopholes too. Even if you can trace every action by an AI agent back to a human you can sue, it’s probably not cheap or easy to do so. And when legal rules are more expensive to enforce, they are less often enforced. We get less deterrence of bad behavior and more distortion in our markets.

    And suppose you do find the human behind the machine. Then what? The vision, remember, is one of an AI agent capable of taking a general instruction—go make a million bucks!—and then performing autonomously with little to no further human input. Moseying around the internet, making deals, paying bills, producing stuff. What happens when the agent starts messing up or cheating? These are agents built on machine learning: Their code was written by a computer. That code is all but uninterpretable and unpredictable—there are literally billions of mathematical dials and switches being used by the machine to decide what to do next. When ChatGPT goes bananas and starts telling a New York Times journalist it’s in love with him and he should leave his wife, we have no idea why. Or what it might do if it now has access to bank accounts and email systems and online platforms. This is known as the alignment problem, and it is still almost entirely unsolved.

    Multi-agent dynamics

    So, here’s my prediction if you sue the human behind the AI agent when you find them: The legal system will have a very hard time holding them responsible for everything the agent does, regardless of how alien, weird, and unpredictable its behavior. Our legal systems hold people responsible for what they should have known to avoid. The law is rife with concepts like foreseeability and reasonableness. We just don’t hold people or businesses responsible very often for things they had no way of anticipating or preventing.

    And if we do want to hold them responsible for stuff they never imagined, then we need clear laws to do that.

    That’s why the existential risk that keeps me up at night is the very real and present risk that we’ll have a flood of AI agents joining our economic systems with essentially zero legal infrastructure in place. Effectively no rules of the road—indeed, no roads—to ensure we don’t end up in chaos.

    And I do mean chaos—in the sense of butterfly wings flapping in one region of the world and hurricanes happening elsewhere. AI agents won’t just be interacting with humans; they’ll be interacting with other AI agents. And creating their own sub-agents to interact with other sub-agents. This is the problem my lab works on—multi-agent AI systems, with their own dynamics at a level of complexity beyond just the behavior of a Gemini or GPT-4 or Claude. Multi-agent dynamics are also a major gap in our efforts to contain the risks from frontier models: Red-teaming efforts to make sure that our most powerful language models can’t be prompted into helping someone build a bomb do not go near the question of what happens in multi-agent systems. What might happen when there are hundreds, thousands, millions of humans and AI agents interacting in transactions that can affect the stability of the global economy or geopolitical order? We have trouble predicting what human multi-agent systems—economies, political regimes—will produce. We know next to nothing about what happens when you introduce AI agents into our complex economic systems.
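
    We have already had a small, real-world taste of what interacting algorithms can do. In 2011, two booksellers’ automated repricing rules on Amazon chased each other until a developmental-biology textbook was listed at more than $23 million. A toy reconstruction (the multipliers approximate the ones observed at the time; the starting price is a guess):

```python
# Toy reconstruction of a documented incident: two repricing bots,
# each following a locally sensible rule, jointly drive a book's
# price to absurdity. Neither rule is crazy; the interaction is.
price_a = price_b = 35.0  # illustrative starting price

for day in range(60):
    price_a = price_b * 1.270589  # seller A prices above its rival
    price_b = price_a * 0.9983    # seller B slightly undercuts A
    if day % 10 == 0:
        print(f"day {day:2d}: A=${price_a:,.2f}  B=${price_b:,.2f}")
```

    That was two bots with fixed rules and no bank accounts. Now scale up to millions of learning agents that can transact, spawn sub-agents, and touch real money.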

    But we do know something about how to set the ground rules for complex human groups. And we should be starting there with AI.

    AI registration requirements

    Number one: Just as we require workers to have verifiable ID and a social security number to get a job, and just as we require companies to register a unique identity and legally effective address to do business, we should require registration and identification of AI agents set loose to autonomously participate in transactions online. And just as we require employers and banks to verify the registration of the people they hire and the businesses they open bank accounts for, we should require anyone transacting with an AI agent to verify their ID and registration. This is a minimal first step. It requires creating a registration scheme and some verification infrastructure. And answering some thorny questions about what counts as a distinct “instance” of an AI agent (since software can be freely replicated and effectively be in two—multiple—places at once and an agent could conceivably “disappear” and “reconstitute” itself). That’s what law does well: create artificial boundaries around things so we can deal with them. (Like requiring a business to have a unique name that no one else has registered, and protecting trademarks that could otherwise be easily copied. Or requiring a business or partnership to formally dissolve to “disappear.”) It’s eminently doable.
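
    What might the verification half of this look like in practice? Here is a minimal sketch, assuming a hypothetical registry that signs a credential binding an agent’s ID to the party accountable for it. The names, scheme, and API are all invented for illustration; a real system would rest on proper public-key infrastructure:

```python
# Sketch of a hypothetical agent-registration check: the registry
# signs an (agent ID, responsible party) binding; a counterparty
# verifies the credential before doing business with the agent.
import hashlib
import hmac

REGISTRY_KEY = b"registry-signing-key"  # stand-in for real key infrastructure

def register(agent_id: str, responsible_party: str) -> str:
    """Registry issues a credential binding the agent to an accountable party."""
    record = f"{agent_id}|{responsible_party}".encode()
    return hmac.new(REGISTRY_KEY, record, hashlib.sha256).hexdigest()

def verify(agent_id: str, responsible_party: str, credential: str) -> bool:
    """Counterparty checks the credential before transacting."""
    expected = register(agent_id, responsible_party)
    return hmac.compare_digest(expected, credential)

# An agent registered to a real, suable entity passes the check...
cred = register("agent-7f3a", "Acme Commerce LLC")
print(verify("agent-7f3a", "Acme Commerce LLC", cred))    # True
# ...and cannot silently swap in a judgment-proof shell company.
print(verify("agent-7f3a", "Untraceable Shell Co", cred))  # False
```

    The hard part is not the cryptography. It is the policy questions above, starting with what counts as one “instance” of an agent.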

    The next step can go one of two ways, or both. The reason to create legally recognizable and persistent identities is to enable accountability—read: someone to sue when things go wrong. A first approach would be to require any AI agent to have an identified person or business who is legally responsible for its actions—and for that person or business to be disclosed in any AI transaction. States usually require an out-of-state business to publicly designate an in-state person or entity who can be sued in local courts in the event the business breaches a contract or violates the law. We could establish a similar requirement for an AI agent “doing business” in the state.

    But we may have to go further and make AI agents directly legally responsible for their actions. Meaning you could sue the AI agent. This may sound strange at first. But we already make artificial “persons” legally responsible: Corporations are “legal persons” that can sue and be sued in their own “name.” No need to track down the shareholders or directors or managers. And we have a whole legal apparatus around how these artificial persons must be constituted and governed. They have their own assets—which can be used to pay fines and damages. They can be ordered by courts to do or not do things. They can be stopped from doing business altogether if they engage in illegal or fraudulent activity. Creating distinct legal status for AI agents—including laws about requiring them to have sufficient assets to pay fines or damages as well as laws about how they are “owned” and “managed”—is probably something we’d have to do in a world with “more AI agents than people.”

    How far away is this world? To be honest, we—the public—really don’t know. Only the companies who are furiously investing billions into building AI agents—Google, OpenAI, Meta, Anthropic, Microsoft—know. And our current laws allow them to keep this information to themselves. And that’s why there’s another basic legal step we should be taking, now. Even before we create registration schemes for AI agents, we should create registration requirements for our most powerful AI models, as I’ve proposed (in brief and in detail) with others. That’s the only way for governments to gain the kind of visibility we need to even know where to start on making sure we have the right legal infrastructure in place. If we’re getting a whole slew of new alien participants added to our markets, shouldn’t we have the same capacity to know who they are and make them follow the rules as we do with humans?

    We shouldn’t just be relying on p(doom) fights on X between technologists and corporate insiders about whether we are a few years, or a few decades, or a few hundred years away from these possibilities.

    The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.


    This story was originally featured on Fortune.com
