The ‘social media ban’ scheduled to commence on December 10, 2025, has recently come under scrutiny over doubts about its efficacy.
There have been several reports of the facial recognition software failing, with a professor from the University of Melbourne stating:
‘Every age assurance vendor that we tested had one bypass that was easily accomplished with things that you could buy at your local $2 shop.’
Essentially, this refers to the ability to purchase a Halloween mask and use it to bypass the age assurance technology.
This was witnessed in the UK, where the equivalent measures under the Online Safety Act were being circumvented by users who scanned video game characters’ faces, used VPNs, or used AI tools to make their faces look older.
It seems the technology Australia is relying on to enforce the ban will face a similar fate: a recent report by the Senate Standing Committees on Environment and Communications has recommended that the commencement of the legislation be delayed until June 10, 2026.
The proposed six-month delay is a clear signal of doubt about the infrastructure implementing the social media ban and its effectiveness in achieving the legislation’s objectives.
Amidst the recommendation to delay the ban’s commencement and the media traction it has garnered, the real value of the report lies in its other recommendations, which have flown under the radar.
One of these recommendations proposes an accountability mechanism for social media companies that harvest and exploit their users’ information. The concept, referred to as a ‘digital duty of care’, is provisionally defined in the report as:
‘A digital duty of care requires platform design to not only inherently minimise content or function-related harm but also severely restrict personal data extraction and prohibit algorithmic manipulation.’
Although the legislation empowering the social media ban already places the onus of age verification on social media companies, it ultimately fails to hold those companies accountable for exploiting the sensitive personal data harvested from users of their platforms.
A few years ago, The Economist declared that data had replaced oil as the ‘world’s most valuable resource’, and the phrase ‘information economy’ has since become synonymous with the profitability of data. Within this economy, data is used for personalised marketing, sold to the highest bidder, or targeted by hackers looking to profit from what big-tech companies harvest from their users.
The ‘digital duty of care’ shifts the entire framework underpinning the legislation that empowers the social media ban: it places the onus on social media companies to take proactive steps to reduce the harms of an information economy that prioritises profits over the wellbeing and protection of people online. As the legislation has already passed, it may as well be relied upon to increase the accountability of social media companies, rather than merely restricting under-16s’ access to platforms and regulating their online discourse.
Implementing a digital duty of care would be an important step toward actually protecting Australian children from the manipulative practices of big-tech social media platforms that profit from their interactions and use their data against them to drive engagement.