
    Readers Respond: AI should be labeled, limited to quell misinformation

By Jules Rogers, Your Oregon News

March 4, 2024


Your Oregon News asked readers where they stand on government regulation of artificial intelligence.

    About 83% of Your Oregon News readers said they think AI should be regulated by government, with 10% responding no and 10% responding maybe.

    This comes after Oregon's first-ever bill to regulate AI passed in the Senate in late February.

Many readers said AI can spread misinformation and that products using AI should carry a label. Some said AI is impossible to regulate, and that trying to do so would cause the U.S. to fall behind countries that are not creating AI policies.

Of reader respondents, 73% said they don’t use AI at all, while 15% use it for play and 10% use it for work. About 13% said they use AI in general.

    Here’s what our readers think about AI being regulated by government:

Reader: “I don't think it can be done effectively enough to prevent misuse. The government can outlaw it altogether, but then there are First Amendment issues. AI abuse will destroy the Internet. No one will believe anything if they are smart. Back to print!”

    Reader: “Any misinformation from corporations should be regulated. Also, the use of personal likenesses without written permission.”

    Reader: “This isn’t a terminator movie. AI will improve all our lives if we get out of the way. China and Russia also have AI and aren’t constraining it. What do you think the world looks like if their AI accelerates and ours is constrained in its infancy?”

    Reader: “Free speech and manipulated speech are not the same.”

    Reader: “Scary that it might be smarter than our politicians.”

Reader: “I worry that the government will control everything we do with AI; I want laws for job security and privacy rights for AI.”

    Reader: “AI has the capacity to deceive, use art without attribution and to take away jobs. It will greatly impact our society and our government should step in to minimize negative impacts.”

Reader: “I see AI as a Pandora's box of problems, many of which are already too late to deal with. We should have been regulating this technology 10 years ago.”

    Reader: “It's still in its infancy. We need to make sure it's not used to make important decisions. It cannot be a substitute for human work. It needs to be limited and not seen as more than it is.”

    Reader: “One of the biggest immediate threats that AI presents is its ability to present fantasy as fact. Requiring those who use this tool to disclose its use is sensible.”

    Reader: “AI can and will be used to further special interests to spread lies and misinformation to gain power. This power will be used to mislead people and further erode our already crumbling democracy.”

    Reader: “AI has too many potentially dangerous applications, and it should be regulated so all AI-produced media are clearly labeled.”

    Reader: “It is already being used in ways that are not appropriate.”

    Reader: “While AI has an upside, its downside is far larger and potentially more damaging to society. It has already been used by the unscrupulous to create false and misleading information. As our society leans more toward digital information, the potential for abuse far exceeds the potential for beneficial uses as it may sway public opinion and public policy under false pretenses. Much like the current environment for hackers and their nefarious deeds, AI will create a situation where those seeking to contain it for beneficial uses will always be challenged by those with ill intent. AI will make this an exceedingly dangerous dynamic with little real control.”

Reader: “AI is powerful enough that in the wrong hands it has the potential for irreversible damage. Further, it is more difficult (or impossible) to undo behavioral patterns than to slowly monitor introduction of use cases as they are researched for potential damage. There are many risks, from automation that puts public safety at risk, to data mining that destroys personal privacy, to information credibility itself disintegrating with the many uses currently being explored. All these have massive global implications.”

    Reader: “Someone has to regulate it. If not, AI can get out of hand.”

    Reader: “It’s a dangerous technology.”

Reader: “AI is too intrusive, too new, and too dangerous to be unregulated—like nuclear weapons or other things that are frightening to a host of Americans. And it's been shown that tech can't self-regulate, so that leaves the government.”

    Reader: “It's hard to spot something that's fake. Plus, it's dangerous.”

Reader: “In general, government (i.e., the federal U.S. government or the EU) is the only entity large enough to try to prevent new technology from doing harm, both in terms of unanticipated consequences of the technology itself and to prevent the loss of employment for millions of people.”

    Reader: “Misinformation is just dangerous. It has wonderful uses, but must be watched carefully.”

    Reader: “AI robs photographers and artists of their income and provides fake information as if it was proven fact. It's negatively affecting all parts of life from politics to the news, technology, geology and all of the other sciences as well as hobbies like birding.”

    Reader: “The potential for spread of misinformation is too great; too easy to manipulate.”

    Reader: “People can’t be trusted to use it wisely, unfortunately.”

    Reader: “It has the potential to control individual content and choice, to displace workers, and increase the power of those already in power while making the weak more powerless and superfluous.”
