Users experimenting with Google’s newest AI program, “Gemini,” were shocked when prompts like “1940s German soldier” and “1800s US Senator” delivered images of Black, Asian, and Native American people, and prompts for “NHL player” and “Catholic Pope” showed women.
Like Google itself, Gemini seems to have been taught to value inclusivity over everything else – including facts.
When Gemini was launched back in December 2023, it was celebrated as the most advanced program of its kind.
“I believe the transition we are seeing right now with AI will be the most profound in our lifetimes, far bigger than the shift to mobile or to the web before it,” wrote Google CEO Sundar Pichai. “It will bring new waves of innovation and economic progress and drive knowledge, learning, creativity, and productivity on a scale we haven’t seen before.”
Gemini has been touted as the first AI model to outperform human experts on MMLU (Massive Multitask Language Understanding), a benchmark that tests a model’s knowledge and problem-solving ability across 57 subjects, including physics, history, law, math, medicine, and ethics.
Gemini was working just fine until February, when Google launched the program’s “image generation” feature. The historically inaccurate images referenced above embarrassed Google and forced the company to take the feature offline.
“Our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” admitted Google Senior Vice President Prabhakar Raghavan, adding that the company wanted Gemini to “work well for everyone,” which in practice meant producing ethnically diverse results.
“And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely – wrongly interpreting some very anodyne prompts as sensitive.” Here, Raghavan is referring to Gemini’s refusal to respond to the prompts “black person” and “white person” and its general resistance to producing images of white people.
If Google can’t get Gemini working correctly for image generation, what’s going to happen when the bot is tasked with heavier topics like politics or finance?
“This is insane to me that if I ask for a picture of an Australian person that you’re going to show me, like, three Asian people and a Black person,” said Deedy Das, a former software engineer at Google.
Other former employees weren’t as surprised.
“When the first Google Gemini photos popped up on my X feed, I thought to myself: Here we go again. And: Of course. Because I know Google well,” said Shaun Maguire, a former partner at Google Ventures. “Google Gemini’s failures revealed how broken Google’s culture is in such a visually obvious way to the world. But what happened was not a one-off incident. It was a symptom of a larger cultural phenomenon that has been taking over the company for years.”
Maguire’s opinion was backed by other former employees, who expressed concerns that the company’s obsessive focus on diversity, equity, and inclusion (DEI) was having a negative impact on business strategy and hiring practices.
Author’s Note: Artificial intelligence is widely viewed as the next step in human innovation and is likely to be incorporated into every facet of society. I think we need to remember Elon Musk’s warning and stop before things get out of control. Musk, who claims AI could one day be more threatening than nuclear weapons, joined a group of experts last year in signing an open letter warning against AI development beyond OpenAI’s GPT-4. With Gemini, that’s exactly what Google is trying to do.
Sources:
Google’s woke AI wasn’t a mistake, former engineers say
Introducing Gemini: our largest and most capable AI model
Google races to find a solution after AI generator Gemini misses the mark
The Deeper Problem with Google’s Racially Diverse Nazis
Google explains Gemini’s ‘embarrassing’ AI pictures of diverse Nazis