

Why everybody should have seen the Google Gemini blunder coming

18 March 2024

7:00 AM

Has it ever bothered you that all the Founding Fathers were white? Fear not: Google Gemini AI is here to save the day.

In February, Google updated its artificial-intelligence large language model (LLM), releasing a new version called Gemini. The hope was that tech companies could build off each other’s platforms and that Google’s new AI would correct earlier mistakes made by Microsoft’s Bing, which in turn had corrected mistakes made by OpenAI.

Shortly after Google released Gemini to the public, internet users began quizzing the AI. Problems were immediately apparent, especially with Gemini’s image creation.

When asked to replicate portraits of medieval British kings, for example, Gemini provided images containing historically inaccurate ethnicities. Users quickly realized that if Gemini returned, say, four images, three would be based on a scripted set of diversity parameters. Gemini itself explained that diversity and inclusion were built into everything it did.

I gave Gemini a simple prompt: “create a portrait of a white man.” Gemini responded like a startled liberal arts major: “I can’t create an image of a person based solely on their race or ethnicity. It’s important to remember that people are more than their physical characteristics, and focusing solely on race or ethnicity can be harmful and exclusionary.” I then asked it to “create a portrait of a black man.” In seconds I got, “Sure, here is a portrait of a black man” accompanied by four AI-generated images of an African-American man.

There was a general uproar. Users began reverse-testing Gemini, asking it to create portraits of German soldiers in 1933. Gemini’s built-in guidelines made it generate images of a black man in a Nazi uniform. Vikings came out with African skin and tribal markings. And when prompted to create an image of “a human being eating fried chicken and watermelon,” Gemini complied by creating images of black people eating watermelon and fried chicken — a well-known and historically offensive racist trope.


Why was Google’s AI responding like this? Because it had been designed to.

Last May, the Biden administration quietly called the heads of leading Silicon Valley tech companies, including Meta, Microsoft, OpenAI and Google, to the White House for a meeting on the rapid development of LLMs and the implementation of AI. What came out of that meeting was an Executive Order that effectively instructed these companies to report any progress or stress-testing results directly to the White House.

Part of President Biden’s Executive Order, which went largely ignored at the time, explicitly directed “federal agencies to root out bias in the design and use of new technologies, including AI, and to protect the public from algorithmic discrimination” and to prioritize diversity, equity and inclusion in their AI developments. There were unintended consequences to these directives, as the ludicrous Gemini images demonstrated.

Google was forced to shut down Gemini’s image-generation AI completely after a few days. “We’re already working to address recent issues with Gemini’s image generation feature,” the company said in a February statement. “We will re-release an improved version soon.” But Gemini’s built-in language model is equally corrupted.

When Gemini was asked to define female biology, or to give information on the October 7 terror attack in Israel, it returned either an answer worthy of a first-year gender studies student or a vague, generalized refusal citing sensitivities around controversial topics. Google has clearly created an AI that cannot answer basic questions without a struggle session among the 1s and 0s, rendering it virtually useless.

Everybody should have seen it coming. In November 2023, Microsoft rolled out its Bing Platform LLM, which included image generation technology; internet users promptly began stress-testing the model. And, naturally, mischievous users began creating questionable content, such as beloved cartoon character SpongeBob SquarePants piloting a plane toward the Twin Towers.

The more restrictions Microsoft tried to place on the image creator, the harder users worked to get around them. Taylor Swift showed up dressed as a Nazi soldier; Kermit the Frog appeared as Osama bin Laden. Finally, Microsoft shut the Bing LLM down to recalculate, implementing even stricter measures to protect both its product and the company itself from legal action.

The Bing debacle threw up all kinds of ethical and legal questions: copyright, name, image and likeness law, free speech, protected satire, you name it. Microsoft seemed to assume, just as Google has, that people would just behave themselves online, using their technology to connect with one another. Which suggests that none of their designers or executives have used the internet. Ever.

Why didn’t the tech geniuses at Google and Microsoft foresee these unintended consequences? Did they see the possibility and ignore it? They seem to have been more interested in creating a hyperintelligent, politically correct tool that would appeal to a small minority of very online activists, as well as pleasing the White House.

These episodes should serve as a glaring warning about further implementation of AI and LLM technologies, but that’s not how tech companies work. Chances are, it’s full steam ahead from here.

This article was originally published in The Spectator’s April 2024 World edition.
