Flat White

Protecting victims of explicit AI content online

29 November 2025

10:51 AM


This week ACT Independent Senator David Pocock introduced a Bill to ‘protect identity in [a] deepfake future’.

The Bill, titled the Online Safety and Other Legislation Amendment (My Face, My Rights) Bill 2025, seeks to amend the Online Safety Act 2021.

The Bill was introduced mere weeks before the ‘social media ban’ commences on 10 December 2025, which will attempt to prohibit Australian children under the age of 16 from having a social media account. The key word is ‘attempt’: the age-assurance technology is already on track to fail because it significantly misidentifies ages, having in one instance identified a 16-year-old as a 37-year-old.

This past weekend I delivered an academic paper at the Australians for Science and Freedom 2025 Conference on the efficacy of the ‘social media ban’ in the context of privacy. Among my concerns is the sensitive data that will be collected when Australians must upload facial scans to verify their identity in order to access social media platforms.

Although many Australians will not have to biometrically verify their age via facial scanning technology due to the age of their social media accounts, there will be many who will need to do so in order to interact on these platforms.

For example, young Australians will likely have to upload a facial scan to a social media company’s age-assurance technology to verify their age. Although the image must be deleted under the legislation to avoid penalties under the Privacy Act, there have been many instances where companies have failed to destroy sensitive data, which has then leaked onto the internet for bad actors to exploit. Now, let’s say this happens to a 17-year-old child and their facial data is leaked for anyone to access. That data can be used as the foundation for deepfake content, and Australian children could subsequently appear on adult websites, portrayed as participating in explicit content they never created.

Even setting aside the extreme psychological impact this could have on children, there are valid concerns about AI technology being harnessed to produce unwanted material. For example, it can be used to place someone’s face onto a bad actor for scamming or identity-theft purposes. There is also the creation of explicit or other content that can have a significant impact on the psyche of Australians.

In October 2025, I wrote about a Federal Court case in which an Australian resident was prosecuted for sharing deepfake images, both moving and still, of women online. In its decision, the court considered images that depicted, without the women’s consent, their genitals and breasts while they were engaged in a sexual act ‘of a kind not ordinarily done in public in circumstances in which an ordinary reasonable person would reasonably expect to be afforded privacy’.

Thus, it is not a matter of if it will happen, it is a matter of when it will happen again.

The My Face, My Rights Bill aims to achieve several objectives. Among them are:

– establishing a complaints system for the non-consensual sharing of deepfake material;

– strengthening the eSafety Commissioner’s powers to respond to harmful AI generated content; and

– providing remedies that victims can pursue.

Although this Bill has noble intentions, there are real questions about its necessity and efficacy.

For instance, my October 2025 article discussed above shows that the eSafety Commissioner and the Online Safety Act already provide strong protections against the posting of AI-generated deepfake images. Existing provisions could therefore be amended to achieve the Bill’s desired result without major legislative change. It is hard to argue that the eSafety Commissioner needs further legislated powers when minor amendments to the current powers under the Online Safety Act could achieve the objectives of the My Face, My Rights Bill.

For example, Senator Pocock could rely upon Section 75(1) of the Online Safety Act and propose an amendment inserting the words ‘or video’ into the provision. This would avoid extending further powers to the legislation and its regulators. Instead, the Bill could make minor alterations to legislation that has already passed, rather than adding further Sections or Schedules that may impinge upon the rights of Australians who have no sinister intentions when using AI technology.

As a strong advocate for privacy rights and for protecting Australians, especially children, from having their sensitive biometric data used in deepfake content, I support in principle the objectives of the Bill proposed by Senator Pocock.

Of course, the Bill is yet to be drafted, and I cannot endorse draft legislation that does not exist. However, its objectives are a clear step toward regulating a technology that, as it continues to develop, can significantly harm Australian citizens.
