
    Why this Stanford expert isn’t worried about AI starting a nuclear war

    By Troy Wolverton, The Examiner

    (Photo: Olivia Wise/The Examiner)

    Stanford launched its Human-centered Artificial Intelligence institute five years ago, but it wasn’t until recently that the center’s founders tried to narrow down what they actually meant by “human-centered AI.”

    At first, leaving that definition open seemed like a good idea, because it allowed people coming to the institute from different disciplines to explore different directions, said James Landay, a co-founder of Stanford HAI. But more recently, it became clear the institute needed a more standard definition to guide its overall direction and research efforts, said Landay, who is also a computer-science professor at the university.

    What he and his colleagues have come up with is that human-centered AI incorporates not just how individual users interact with the technology, but the notion that it could affect a broader community of users and society at large. Their new mission is to offer companies and AI developers concrete ways to bake in that thinking about the broader effects of AI as they design their products.

    That’s not to say the institute has been sitting idle for the last five years. Since it launched in 2019, Stanford HAI has bulked up its staff to 35 to 40 people; funded the research of some 400 faculty members; signed up a slew of fellows to conduct research and teach classes; convened events that have brought together leading researchers, policymakers and investors; and led training sessions for corporate executives and congressional staffers to help them get up to speed on AI.

    Last month, following Stanford HAI’s fifth-anniversary event on the university’s campus, The Examiner spoke with Landay about how well AI developers in the tech industry are adhering to human-centered thinking, the dangers of not incorporating that kind of mindset into AI development, and the future of the technology. This interview has been edited for length and clarity.

    As you look out over the last five years, how would you rate the tech industry on the way it’s incorporated human-centered thinking into the design of its AI products?

    I don’t see a lot of it. We see a range of results: companies that aren’t really thinking about this and are just pushing out products, whether OpenAI or Google — where they might try, but they’ve had some missteps — to a company like Microsoft, which I actually think has done a good job of thinking about a lot of these things. On the other hand, they push stuff out because they want to be first or lead the market.

    Some of them have the right teams and notions about it. But sometimes the market overtakes the decision making.

    Companies that treat this, responsible AI, like a separate function, [where] you have some team check it near the end and say, “Hey, are we good to go?” — which is often how they think about privacy and security — that doesn’t work. Because at that point, there’s a lot of market pressure to release things, and it’s too late.

    When you can integrate that kind of thinking into the actual teams that have technologists, but also designers and social scientists and other people [with] different skills, some of the problems are caught earlier on. Those people have social capital; they’re part of the team. And they have more influence to change the direction.

    What are the dangers of AI developers not paying attention to the broader social impacts of the technology?

    First of all, I’m not worried about headline-grabbing things like AI taking over the world and launching nuclear weapons or stuff like that.

    Why not?

    Because there’s nothing I’ve seen in the technology, and in knowing what it’s built on, that [leads me to believe] that just throwing more data and processing at it leads to some super-intelligence. It’s really not capable of that. It would require new architectures that include a lot more things that these models just cannot do.

    It’s not to say that will never happen. But I’m talking about that 30-, 50-, 100-year never kind of thing. It will require big scientific leaps that have not occurred and are not being worked on by most of these companies. A lot of that talk is purely science fiction and is, in the worst case, meant to distract people from the real harms that are occurring right now, which I often refer to as the “triple D.”

    So, disinformation — this is a thing these models can very easily be trained to do. Another area that we know is already going on is the second D — deepfakes. We see that, whether it was [President] Biden supposedly calling people and telling them not to vote, or really harmful things like fake porn being used against young girls. And then, finally, discrimination and bias — models being misused to make certain decisions, whether it’s hiring, housing or finance, where we know the models can have problems.

    These are real problems today, with the models that are out in the world today. And those are the things that a lot of researchers, government and journalists should be focused on.

    You mentioned three D’s. One of the things you didn’t mention is the impact on employment.

    One of the reasons I don’t push that is that I feel we actually don’t know. Those three D’s we know are already happening. Displacement of jobs — I think it will happen, but we don’t really have a lot of good data on what jobs and how much.

    I definitely think we need policymakers coming up with plans if we see displacement. What are we going to do to make sure that people [are] able to pay their mortgage, pay their rent, send their kids to college? We don’t want to do what happened in globalization, where a lot of people really were negatively impacted. That was bad for our country and probably for other countries that saw similar problems.

    Given that you’re often looking years down the line, if HAI has an event at the 10-year mark, what do you think we’ll be talking about then?

    We’re really going to see AI behind the scenes in a lot of applications we use.

    I work a lot on interfaces, how we interact with computing. I don’t see speech and typing as the be-all, end-all interface. There are many places where gesturing at something, pointing, using my body, drawing — these other modalities are better. So what I call a multimodal interface, like how we communicate with people, is the interaction we’re going to see a lot more of in our day-to-day computing.

    I think with applications like education and health, you’re going to see AI involved in a lot of that. My wife just had some diagnostics, and they clearly were using some AI algorithm to help make some decision on one of her tests. That’s going to be the norm in 10 years. AI algorithms are going to be involved in a lot of how our health data is interpreted.

    And then education — we are going to have personalized tutors for kids, or for people out in the work world, that better understand the context of what you’re struggling with or what you need to learn, and are able to adapt it and make it more interesting to you based on your interests. Do we replace teachers? No, I hope not. We need humans to understand humans in a lot of ways.

    I see this [as an] augmentation tool. All of us will be using them in our jobs, and it’s just a matter of companies and designers figuring out good ways to help you do your job better.

    Hopefully, we’re also upskilling people to do more fulfilling things that require their human skills, and they use AI to help them do better and spend more time on things where we really need people. That’s my hope for 10 years.

    But I don’t think it will happen by itself. I think we actually have to talk about it, and we have to say this is what we want. And we have to give people tools to think about how to design it that way. Because technologists, I think, will default to, “How do I just replace the person?” And many companies will default to, “How do I replace that person and just make more money?” And so I think we need to shape that future we want or else it might not happen.

    If you have a tip about tech, startups or the venture industry, contact Troy Wolverton at twolverton@sfexaminer.com or via text or Signal at 415.515.5594.
