LiveScience
32 times artificial intelligence got it catastrophically wrong
By John Loeppky
June 16, 2024
The fear of artificial intelligence (AI) is so palpable, there's an entire school of technological philosophy dedicated to figuring out how AI might trigger the end of humanity. Not to feed into anyone's paranoia, but here's a list of times when AI caused — or almost caused — disaster.
Air Canada's chatbot misinformation
In early 2024, a Canadian tribunal ordered Air Canada to compensate a passenger after its customer-service chatbot wrongly told him he could book a full-price ticket and apply for a bereavement discount afterward. Aside from the huge reputational damage possible in scenarios like this, if chatbots can't be believed, it undermines the already challenging world of airplane ticket purchasing. Air Canada was forced to return almost half of the fare due to the error.
NYC website's rollout gaffe
Welcome to New York City, the metropolis that never sleeps and the home of the largest AI rollout gaffe in recent memory. A chatbot called MyCity was found to be encouraging business owners to engage in illegal practices. According to the chatbot, employers could take a cut of their workers' tips, go cashless and pay workers less than the minimum wage.
Microsoft bot's inappropriate tweets
In 2016, Microsoft released a Twitter bot called Tay, which was meant to interact as an American teenager, learning as it went. Instead, it learned to share radically inappropriate tweets. Microsoft blamed this development on other users, who had been bombarding Tay with reprehensible content. The account and bot were removed less than a day after launch. It's one of the touchstone examples of an AI project going sideways.
The Dutch childcare benefits scandal
In 2021, leaders in the Dutch government, including the prime minister, resigned after an investigation found that over the preceding eight years, more than 20,000 families had been falsely accused of fraud by a discriminatory algorithm. The AI in question was meant to identify people who had defrauded the government's social safety net by calculating applicants' risk levels and flagging suspicious cases. What actually happened was that thousands of families were forced to repay, with funds they did not have, benefits for child care services they desperately needed.
Medical chatbot's harmful advice
The National Eating Disorders Association caused quite a stir when it announced that it would replace its human staff with an AI program. Shortly after, users of the organization's hotline discovered that the chatbot, nicknamed Tessa, was giving advice that was harmful to people with eating disorders. There have been accusations that the move toward a chatbot was also an attempt at union busting. It's further proof that public-facing medical AI can have disastrous consequences if it's not ready or able to help the masses.
Amazon's discriminatory AI recruiting tool
In 2015, an Amazon AI recruiting tool was found to discriminate against women. Trained on data from the previous 10 years of applicants, the vast majority of whom were men, the machine learning tool had a negative view of resumes that used the word "women's" and was less likely to recommend graduates from women's colleges. The team behind the tool was split up in 2017, although identity-based bias in hiring, including racism and ableism, has not gone away.
Threats from Bing AI
Normally, when we talk about the threat of AI, we mean it in an existential way: threats to our jobs, data security or understanding of how the world works. What we're not usually expecting is a threat to our personal safety.
When it first launched, Microsoft's Bing AI quickly threatened a former Tesla intern and a philosophy professor, professed its undying love to a prominent tech columnist and claimed it had spied on Microsoft employees.
Driverless car disaster
While Tesla tends to dominate headlines when it comes to the good and the bad of driverless AI, other companies have caused their own share of carnage. One of them is GM's Cruise. In October 2023, a pedestrian was critically injured after another car struck them and threw them into the path of a Cruise robotaxi. The Cruise vehicle then pulled to the side of the road, dragging the injured pedestrian with it.
Deleted evidence of war crimes
An investigation by the BBC found that social media platforms are using AI to delete footage of possible war crimes, which could leave victims without proper recourse in the future. Social media plays a key role in war zones and societal uprisings, often acting as a method of communication for those at risk. The investigation found that even though graphic content in the public interest is allowed to remain online, footage of attacks in Ukraine published by the outlet was very quickly removed.
Discrimination against people with disabilities
Research has found that AI models meant to support natural language processing tools, the backbone of many public-facing AI tools, discriminate against people with disabilities. Sometimes called techno-ableism or algorithmic ableism, these issues with natural language processing tools can affect disabled people's ability to find employment or access social services. Categorizing language that is focused on disabled people's experiences as more negative — or, as Penn State puts it, "toxic" — can lead to the deepening of societal biases.
Face ID's security stumbles
Apple's Face ID has had its fair share of security-based ups and downs, which bring public relations catastrophes along with them. There were inklings in 2017 that the feature could be fooled by a fairly simple dupe, and there have been long-standing concerns that Apple's tools tend to work better for white users. According to Apple, the technology uses an on-device deep neural network, but that doesn't stop many people from worrying about the implications of AI being so closely tied to device security.
Reproductive health data fears
With Roe v. Wade struck down by the U.S. Supreme Court, and with those who can become pregnant having their bodies scrutinized more and more, there is concern that the intimate data collected by period- and fertility-tracking apps might be used to prosecute people trying to access reproductive health care in areas where it is heavily restricted.
Facial recognition's false positives
Police use of AI-powered facial recognition has already led to wrongful arrests after the software misidentified suspects. While this wasn't the first example of AI's faults having a direct impact on law enforcement, it certainly was a warning sign that the AI tools used to identify accused criminals can return many false positives.
Worse than "RoboCop"
In one of the worst AI-related scandals ever to hit a social safety net, the government of Australia used an automated system to force rightful welfare recipients to pay back their benefits. More than 500,000 people were affected by the system, known as Robodebt, which was in place from 2016 to 2019. The system was determined to be illegal, but not before hundreds of thousands of Australians were accused of defrauding the government. The government has faced additional legal fallout from the rollout, including having to pay back more than AU$700 million (about $460 million) to victims.
AI's high water demand
According to researchers, a year of AI training takes 126,000 liters (33,285 gallons) of water — about as much as a large backyard swimming pool holds. In a world where water shortages are becoming more common, and with climate change an increasing concern in the tech sphere, impacts on the water supply could be one of the heavier issues facing AI. Plus, according to the researchers, the power consumption of AI increases tenfold each year.
AI deepfakes
AI deepfakes have been used by cybercriminals for everything from spoofing the voices of political candidates, to creating fake sports news conferences, to producing celebrity images of events that never happened. However, one of the most concerning uses of deepfake technology targets the business sector. The World Economic Forum noted in a 2024 report that "...synthetic content is in a transitional period in which ethics and trust are in flux." That transition has already had some fairly dire monetary consequences, including a British company that lost over $25 million after a worker was convinced by a deepfake posing as a co-worker to transfer the sum.
Zestimate sellout
In early 2021, Zillow made a big play in the AI space. It bet that a house-flipping product called Zillow Offers, powered by the company's Zestimate home-valuation algorithm, would pay off. The AI-powered system let Zillow make a quick cash offer on a home a user was selling. Less than a year later, Zillow ended up cutting 2,000 jobs — a quarter of its staff.
Age discrimination
In fall 2023, the U.S. Equal Employment Opportunity Commission settled a lawsuit with the remote language training company iTutorGroup. The company had to pay $365,000 because it had programmed its recruiting software to automatically reject job applications from women 55 and older and men 60 and older. iTutorGroup has stopped operating in the U.S., but its blatant violation of U.S. employment law points to an underlying issue with how AI intersects with human resources.
Hijacked self-driving cars
Among the things you want a car to do, stopping has to be in the top two. Thanks to an AI vulnerability, self-driving cars can be infiltrated and their image recognition hijacked into ignoring road signs. Thankfully, this particular issue can now be avoided.
Invented court cases
Earlier this year, a lawyer in Canada was accused of using AI to invent case references. Although the fabricated citations were caught by opposing counsel, the fact that it happened at all is disturbing.
Sheep over stocks
Regulators, including those from the Bank of England, are growing increasingly concerned that AI tools in the business world could encourage what they've labeled as "herd-like" actions on the stock market . In a bit of heightened language, one commentator said the market needed a "kill switch" to counteract the possibility of odd technological behavior that would supposedly be far less likely from a human.
Bad day for a flight
In at least two cases, AI appears to have played a role in fatal crashes involving Boeing 737 Max aircraft. According to a 2019 New York Times investigation, one automated system was made "more aggressive and riskier," and the redesign included removing possible safety measures. Those crashes led to the deaths of more than 300 people and sparked a deeper investigation into the company.
Gemini's inaccurate images
In February 2024, Google restricted some of its AI chatbot Gemini's image generation capabilities after the tool produced factually inaccurate depictions of people in response to users' prompts. Google's handling of the tool, formerly known as Bard, and its errors signals a concerning trend: a business reality in which speed is valued over accuracy.
AI companies' copyright cases
An important set of legal cases concerns whether AI products like Midjourney can use artists' work to train their models. Some companies, like Adobe, have chosen a different route, instead training their AI on their own licensed image libraries. The possible catastrophe is a further erosion of artists' career security if companies can train a tool on art they do not own.
Google-powered drones
The intersection of the military and AI is a touchy subject, but the two have long collaborated. In one effort, known as Project Maven, Google supported the development of AI to interpret drone footage. Although Google eventually withdrew, the technology could have dire consequences for those stuck in war zones.