
Ten Bizarre AI Blunders

by Benjamin Thomas
fact checked by Darci Heikkinen

Artificial intelligence is transforming the modern world. Algorithmic technology is constantly evolving, which means slip-ups are inevitable. And when they occur, some of them are completely off the wall.

From images of racially diverse Nazi soldiers to mistaking a cat for guacamole, AI is responsible for making some spectacular faux pas. But these mistakes are also an excellent learning opportunity for developers. So join us as we dive into ten of the most baffling yet intriguing glitches from the wild world of AI.

Related: 10 Realistic Robots That Will Freak You Out

10 Scientific Journal Publishes Rat with Enormous Genitals

AI Art Ruined Science

In February 2024, a respected scientific journal made headlines after publishing an extraordinary image of a rat with a giant penis.

Frontiers in Cell and Developmental Biology released the wildly inaccurate diagram alongside a paper on sperm stem cell research. The figure of the well-endowed rodent was created with the AI image generator Midjourney. The illustration was supposed to show how scientists extract stem cells from rat testes. Instead, readers were treated to a bemused rat staring up at its towering schlong. The poor creature was also weighed down by four obscenely large testes.

The journal soon took down the image. It stated that the paper failed to meet “the standards of editorial and scientific rigor for Frontiers in Cell and Developmental Biology; therefore, the article has been retracted.”[1]

9 Google’s Inception System Mistakes Cat for Guacamole

Fooling Image Recognition with Adversarial Examples

Researchers at MIT developed an algorithm to fool image recognition systems. The team tricked Google’s Inception AI into thinking that a model turtle was a rifle. They also got the software to label a baseball as an espresso and a cat as guacamole. Only tiny modifications were needed to throw the AI wildly off target.

In their 2017 paper, the scientists pointed to the model turtle as a striking example. At first, the group showed a normal toy turtle to Inception, and the system had no problem identifying it. But when they used 3D printing to subtly alter the shell’s texture, the AI classified it as a weapon.

It might sound like an odd experiment, but it shows the dangers of using machines to identify objects. Cameras on self-driving cars use similar software to read signs and scan their surroundings. If slight alterations can trick one AI into thinking that a turtle is a rifle, it makes you wonder how reliable smart devices really are.[2]
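
The MIT team’s actual method was more elaborate than a single tweak: they optimized a texture that stays misclassified across viewpoints and lighting, then 3D-printed it. But the core idea, nudging an image just enough to flip a classifier’s answer, can be sketched with the much simpler fast gradient sign method. In the snippet below, the model choice, file name, and perturbation size are illustrative assumptions, not the researchers’ setup.

```python
# A minimal adversarial-example sketch using the fast gradient sign method
# (FGSM), a simpler cousin of the attack in the MIT paper. It nudges each
# pixel slightly in the direction that increases the classifier's loss,
# so a near-invisible change can flip the predicted label.
# Model choice, file path, and epsilon are illustrative assumptions.

import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Off-the-shelf classifier (not the Inception model Google used).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Normalization is skipped to keep the 0-1 pixel clamp simple.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def fgsm_attack(image_path: str, epsilon: float = 0.03) -> torch.Tensor:
    """Return a slightly perturbed copy of the image that the model is
    more likely to misclassify."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    logits = model(x)
    original_label = logits.argmax(dim=1)

    # Gradient of the loss for the current prediction, w.r.t. the pixels.
    loss = F.cross_entropy(logits, original_label)
    loss.backward()

    # Step every pixel by +/- epsilon along the gradient sign.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

    print("label before:", original_label.item(),
          "label after:", model(x_adv).argmax(dim=1).item())
    return x_adv

# Example use (hypothetical file): fgsm_attack("toy_turtle.jpg")
```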


8 Air Canada Chatbot Hands Out Dodgy Advice

Air Canada found liable for chatbot’s bad advice on plane tickets

Technology is evolving at a rapid pace. Companies are rushing to automate their services as much as possible. But an unreliable interface can leave you with problems, as Air Canada learned recently.

In February 2024, the airline was ordered to compensate a customer after its chatbot gave him faulty advice. Jake Moffat needed to fly to Toronto for a funeral, and Air Canada’s AI wrongly told him he could book a full-price ticket and claim the airline’s bereavement discount afterward.

The company told Mr. Moffat it would update the chatbot but refused to refund him, arguing instead that the bot was a “separate legal entity” and “responsible for its own actions.” Mr. Moffat took the airline to a tribunal, which ordered it to pay him C$650.88 plus interest and fees.[3]

7 Scientists in Atlanta Build Racist Robot as a Warning about AI

WION Fineprint | AI-powered robots make racist & sexist decisions

Sloppy algorithms can lead to AI systems taking on the same prejudices that we humans hold. In 2022, scientists at Georgia Tech showed how easily robots can pick up racist and sexist views when guided by poorly designed AI.

The Georgia Tech team found that when AI is fed biased data, it tends to amplify those biases. And sadly, the internet is swimming with prejudiced datasets. For example, when the researchers asked their robot to select a criminal, it chose black men 10% more often than white men. But when it came to picking out doctors, the robot was significantly less likely to choose a woman over a man.
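
To get a feel for the mechanism the researchers describe, here is a deliberately tiny sketch of “biased data in, biased predictions out.” The groups, rates, and counting-based “model” are all invented for illustration and have nothing to do with the actual Georgia Tech experiment.

```python
# Toy sketch: feed a model labels that are skewed against one group,
# and the trained model hands the skew right back.
# All numbers and groups here are invented purely for illustration.

import random

random.seed(0)

def biased_training_data(n=20_000):
    """Simulate labels where group "B" is flagged twice as often as "A"
    for no legitimate reason -- the kind of skew found in scraped data."""
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        flagged = random.random() < (0.10 if group == "B" else 0.05)
        rows.append((group, flagged))
    return rows

def train_flag_rate_model(rows):
    """The simplest possible "model": learn each group's flag frequency."""
    rates = {}
    for group in ("A", "B"):
        labels = [flagged for g, flagged in rows if g == group]
        rates[group] = sum(labels) / len(labels)
    return rates

model = train_flag_rate_model(biased_training_data())
print(model)  # roughly {'A': 0.05, 'B': 0.10} -- the skew survives training
```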

“The robot has learned toxic stereotypes through these flawed neural network models,” argued study author Andrew Hundt. “We’re at risk of creating a generation of racist and sexist robots, but people and organizations have decided it’s OK to create these products without addressing the issues.”[4]


6 Google’s AI Reveal Undermined by Telescope Blunder

Google AI chatbot Bard flubs an answer in ad

In February 2023, Google unveiled its highly anticipated AI chatbot Bard as a rival to ChatGPT. But the grand reveal failed to live up to the hype after the AI made a mistaken claim about one of NASA’s telescopes. The chatbot claimed that the James Webb Space Telescope took the very first pictures of a planet outside our solar system. Bard’s developers were left red-faced when astronomers pointed out that the first such image was actually captured by the European Southern Observatory’s Very Large Telescope in 2004.

The blunder featured in an advert that was supposed to show off Bard’s capabilities. Instead, it exposed flaws in the system.[5]

5 Bing Chatbot Gets Lippy with Users

Bing ChatBot (Sydney) Is Scary And Unhinged! – Lies, Manipulation, Threats!

Microsoft’s Bing chatbot quickly gained a reputation for the way it spoke to users. Early testers found that the interface could be argumentative and defensive, often refusing to accept its mistakes.

Tech enthusiasts were keen to try out the new chatbot. But they soon took to Reddit with reports that the Bing AI had come up with incorrect facts, like insisting that the year was still 2022. Others found instances of inappropriate advice and racist jokes. The chatbot even told one user that they had “not been a good user” after they disputed its supposed wisdom.

Microsoft, in collaboration with OpenAI, launched the Bing chatbot in 2023. A spokesperson claimed that the bot’s feisty behavior was due to it being an early preview. “As we continue to learn from these interactions,” they explained, “we are adjusting its responses to create coherent, relevant, and positive answers.”[6]


4 Gemini Creates Images of Ethnically Diverse Nazis

Google SHUTS DOWN After Woke AI Images

Another embarrassing moment for Google now. In February 2024, the company had to pause part of its Gemini AI platform, stopping the model from generating images of people after a string of errors involving race and gender.

Users reported that Gemini had made several odd gaffes, including depicting Third Reich soldiers and Vikings as people of color. The model also struggled to get the race and gender right for historical figures like the US founding fathers or popes.

Gemini’s blunders have caused some users to question the accuracy and bias of the AI system. As one former Google employee put it, it is “hard to get Google Gemini to acknowledge that white people exist.”[7]

3 Elon Musk’s Grok Invents Iranian Attack on Israel

Musk’s X publishes fake news headline on Iran-Israel generated by its own AI chatbot • FRANCE 24

In April 2024, social media platform X was abuzz with reports of escalating conflict in the Middle East. A headline in the official trending news section told users, “Iran Strikes Tel Aviv with Heavy Missiles.” The only issue? The headline was fake. It had been invented by Grok, X’s official AI chatbot.

Tech experts believe that the blunder came about after a wave of verified accounts shared the false story. Grok’s algorithms picked up on the spike in posts about an Iranian assault and cooked up a headline of its own, based on the fake reports that spread across X before being debunked.[8]


2 Writers Fired Due to AI Detection Error

The Truth About AI Content Detectors (And What I Would Do)

AI detectors are designed to flag text that has been written by an algorithm. But they often go awry, falsely flagging human-written work as machine-generated. This can cause major problems for freelance journalists when the authenticity of their work is called into question. In some cases, it has even cost writers their jobs and a significant amount of income.

AI detection companies claim their tools are exceptionally accurate, but some tech experts say they’re selling snake oil. Bars Juhasz is the co-founder of Undetectable AI, an online tool that makes AI-generated text seem more human.

“This technology doesn’t work the way people are advertising it,” Juhasz explained. “We have a lot of concerns around the reliability of the training process these AI detectors use. These guys are claiming they have 99% accuracy, and based on our work, I think that’s impossible. But even if it’s true, that still means for every 100 people, there’s going to be one false flag. We’re talking about people’s livelihoods and their reputations.”[9]
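
Juhasz’s last point is simple arithmetic. Even taking the 99% figure at face value, a detector screening human writing at scale still flags a steady stream of innocent authors. A rough sketch, with an invented volume figure:

```python
# Back-of-the-envelope version of Juhasz's point: a 99%-accurate detector
# still wrongly flags 1 in every 100 human-written pieces it screens.
# The submission volume below is invented purely for illustration.

false_positive_rate = 0.01   # 1% of genuine human writing gets flagged
pieces_screened = 50_000     # hypothetical monthly volume across clients

wrongly_flagged = pieces_screened * false_positive_rate
print(f"Writers wrongly flagged per month: {wrongly_flagged:.0f}")  # 500
```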

1 Sports Camera Confuses Soccer Ball with Official’s Bald Head

AI Camera Ruins Soccer Game For Fans After Mistaking Referee’s Bald Head For Ball

Scottish soccer fans were treated to a brilliant AI blunder after automated cameras confused the ball with an official’s bald head.

Inverness Caledonian Thistle FC boasted about plans to switch from human-operated cameras to a new AI system. They told supporters that HD footage of all home matches would be available, captured by “cameras with in-built, AI, ball-tracking technology.”

But the high-tech innovation didn’t go to plan. At a match in October 2020, the camera kept mistaking a linesman’s bald head for the ball, missing much of the action as it panned to the sideline to record footage of the official’s hairless noggin.

Fans couldn’t enter the stadium due to COVID restrictions at the time. They had to sit at home trying to catch glimpses of the game against Ayr United between repeated shots of the bald man. Some even took to social media and asked the club to offer him a wig.[10]

