Flat White

Dystopian Google Gemini shows why misinformation laws are flawed

16 March 2024

11:46 AM
Sometimes a picture is worth a thousand words. No matter how many revelations came to light of the rampant political bias of social media companies – the collusion with the Biden administration to censor alternative viewpoints exposed by the ‘Twitter files’, the statistics as to which political parties their employees donate to, the blocking and shadow banning of the ‘No’ case during the Voice to Parliament referendum – many mainstream social media users simply did not believe it or care.

And then along came Google’s Gemini AI. It revealed, in a way a thousand exposé articles couldn’t, the gross bias of our big tech overlords. In black and white (well, in black at least) for all to see. The image generation software simply could not bring itself to depict a white man, no matter how absurd the result. The American founding fathers were depicted as black and Native American, the Pope as a black woman, and the Vikings, apparently, were Asian.

This was no software glitch. Skewing search results to ‘nudge’ societal change was a deliberate choice of the program’s designers. To the elite in Silicon Valley, the depiction of white men is ‘problematic’, and so search results must depict more diverse outcomes, even if it means the truth is sacrificed at the altar of identity politics.

Any façade of neutrality by big tech giants like Google was stripped bare by the comical image of a Nazi soldier depicted as a black man.

The images generated by Woke AI are beyond parody, but the consequences for free speech are deadly serious. Under the federal government’s proposed misinformation laws, it will be social media companies like Google that act as the government’s censorship enforcement arm. Companies that cannot honestly and accurately depict the Pope as a white man will be tasked with policing ‘misinformation’.

Under the proposed laws, a government agency, the Australian Communications and Media Authority, will have the power to impose massive fines on social media companies unless the companies adopt and enforce misinformation codes of conduct. These codes will require the tech giants to identify and censor misinformation, even on the say-so of the Communications Minister.

The very concept of censoring misinformation is deeply flawed. It is grounded in the premise that we can know what the absolute truth is, in advance. But truth is only established after a process of trial and error, of debate and argument. And something that seems to be an established truth can be revealed to be an error by subsequent inquiry and evidence. By effectively pre-censoring viewpoints thought to be false, not only might perfectly true and reasonable opinions be silenced, but we are robbed of the process of better establishing exactly what the truth is.

And these issues would apply even if the organisation tasked with censoring misinformation was recognisably unbiased and neutral, and honestly sought only to block objectively untrue information (assuming such an organisation even exists). But the issues with censoring misinformation compound exponentially when those doing the censoring are already biased and pushing an ideological agenda.

Australians can have no confidence that these misinformation laws will not be wielded as just one more weapon in the culture wars.

It is entirely plausible that social media companies will use artificial intelligence to comply with their obligations under the government’s internet censorship laws: to monitor content on their platforms, identify misinformation, and block that information from being seen. And it will be the same designers who felt the need to mislead the public by depicting white men as black, Asian, or female who will oversee the programs that monitor and police misinformation on the internet. The results will be all too predictable.

Any criticisms of official views on climate change will be deemed ‘false’. Any rejection of the radical theory that gender is purely a matter of self-identification will likewise be banned. Any suggestion that the process of European colonisation was anything other than genocidal land theft will be scrubbed from the internet. Indeed, any defence of the West, its institutions and history will likely be placed in the same ‘problematic’ basket as depicting America’s founding fathers as white men.

Describing laws that empower some of the most biased institutions on the planet to determine what is or is not true as ‘flawed’ is an understatement. Allowing an artificial intelligence program to impose internet censorship, despite it clearly being designed with an unhealthy dislike of one group based only on their skin colour and gender, is evil. Although, on the flip side, if AI becomes self-aware like ‘SkyNet’ in the Terminator movies, white men can rest easy, knowing that the robot killers hunting down humanity apparently have no idea what they look like.

John Storey is the Director of Law and Policy at the Institute of Public Affairs
