AmadorValleyToday
The student news site of Amador Valley High School
Gemini AI takedown: How can inaccuracies appear in AI?

Leo He
AIs that can generate life-like images are a relatively new and untested technology.

On February 23, 2024, Google took down the image generation feature of its Gemini conversational app. Gemini is a generative AI model created by Google. The feature was pulled after accusations that the AI was generating ahistorical and otherwise inaccurate images.

The use of AI to create images has become very popular lately. Many people use it, whether as a source of inspiration or simply for entertainment. Though controversies surround AI-generated images, some teachers have used the technology as an activity for class projects.

“Using generative AI like either DALL·E or Adobe Express — there are many free options out there that would entice somebody to consider that career,” said Kevin Kiyoi, who teaches Advanced Computer Science at Amador.

Problems surfaced, however, with Google’s generative Gemini model. When prompted to create images of people, Gemini is designed to show a diverse range of ethnicities, but two issues eventually appeared.

For one, the AI’s tendency to depict ethnic diversity can lead to historical inaccuracies in images of historical events, introducing anachronistic diversity. For another, the AI’s avoidance of sensitive topics was overtuned, leading to a refusal to answer some innocuous prompts. To understand why this happened, it’s necessary to know how an AI is trained in the first place. 
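To see how one blunt rule could produce both problems at once, consider a deliberately simplified sketch in Python. This is not Google’s actual system; the blocklist, the appended instruction, and the rewrite_prompt function are all invented for illustration.

```python
# Hypothetical sketch, not Gemini's real pipeline: a single over-broad
# post-processing rule producing both failure modes described above.

SENSITIVE_TERMS = {"war", "weapon", "violence"}  # deliberately too broad

def rewrite_prompt(prompt):
    """Return a rewritten prompt, or None if the request is refused."""
    words = {w.strip(".,").lower() for w in prompt.split()}

    # Over-tuned caution: refuse anything touching the blocklist,
    # even an innocuous request like a museum exhibit about a war.
    if words & SENSITIVE_TERMS:
        return None

    # Blanket diversity instruction appended to every prompt, including
    # historical scenes where it can introduce anachronisms.
    return prompt + ", showing people of diverse ethnicities"

print(rewrite_prompt("soldiers in a 1940s photograph"))  # anachronism risk
print(rewrite_prompt("a museum exhibit about the war"))  # refused: None
```

The toy example only shows that a single rule applied everywhere, with no sense of context, is enough to explain both kinds of complaints.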

How training can cause mistakes

“To train a well-behaved large language model, you have to have a lot of relevant information, relevant data, and to prepare the data set of the problem you’re trying to solve. It takes a lot of time and money and sometimes, we just don’t have that amount of data to train the model,” said Jinming He, a Google Software Engineer. 

Issues with AIs usually arise in the training process. For Gemini, the diversity-focused training failed to account for historical accuracy.

“I assume it was a problem with the alignment process, that’s the second step to make the model follow human instruction. But there may be some problem with the data set that was used to train the model, so it does not behave like what most people say it should do,” said He.
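He’s point about gaps in the data can be pictured with a toy example. The sketch below is an assumption made for illustration, not Gemini’s training setup: a tiny feedback set rewards the same behavior in every example and contains no historical scenes, so the learned rule gets applied everywhere.

```python
from collections import Counter

# Toy illustration (an assumption, not Gemini's data): every feedback
# example rewards the same behavior, and none shows a case where that
# behavior would be wrong.
alignment_examples = [
    ("generic portrait", "add_diversity"),
    ("office scene", "add_diversity"),
    ("city street", "add_diversity"),
    # No historical scenes appear, so nothing teaches the exception.
]

# The simplest possible "policy": always do whatever was rewarded most often.
policy = Counter(b for _, b in alignment_examples).most_common(1)[0][0]

for scene in ["generic portrait", "19th-century historical scene"]:
    print(scene, "->", policy)  # both come out as "add_diversity"
```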

If nothing else, Gemini’s case is a reminder that even the most advanced AIs are still prone to making mistakes. It’s important to catch errors that the model may have overlooked.

“You can’t just say ‘Okay, well there’s the answer,’ and then go on without sitting here critically thinking and saying, ‘that doesn’t look right, how do I change the prompt to give a more accurate or specific image,’” said Kiyoi.
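The habit Kiyoi describes, looking at the result and adjusting the prompt, can be written out as a short loop. The generate_image function below is only a placeholder standing in for whatever image service a class might use; the point is that a person judges each result and refines the prompt before trying again.

```python
def generate_image(prompt):
    # Placeholder: a real version would call an image-generation service.
    return f"<image for: {prompt}>"

prompt = "a Roman senator giving a speech"
for attempt in range(1, 4):
    print(f"Attempt {attempt}: {generate_image(prompt)}")

    verdict = input("Does this look accurate? (y/n) ").strip().lower()
    if verdict == "y":
        break

    # The person, not the model, decides what was wrong and sharpens the prompt.
    fix = input("What detail should change? ")
    prompt = f"{prompt}, {fix}"
```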

AI is a powerful tool, but there are still improvements to be made, and Gemini’s over-diversified image generation is one example. Since AIs can’t check the accuracy of their own data, it’s up to people to fact-check.

“One of the difficulties is that AIs cannot use tools. And humans can. As humans, we can use browsers and all kinds of software on our computer. But AIs are not connected to these kinds of tools yet,” said He.
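The kind of outside check He describes can be sketched in a few lines. The reference_source dictionary and the check_claim function below are invented for this example; the dictionary stands in for the browser or database a human fact-checker would actually consult.

```python
# Hypothetical sketch of a fact-checking step, not a real tool integration.
reference_source = {
    "gemini image generation paused": "confirmed by Google, February 2024",
}

def check_claim(claim):
    note = reference_source.get(claim.lower())
    if note is None:
        return f"Unverified: '{claim}' still needs a human to check it."
    return f"Verified: '{claim}' ({note})."

print(check_claim("Gemini image generation paused"))
print(check_claim("The pause affected text answers too"))
```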
