Every day in university mathematics courses, tutors wax poetic about derivatives and integrals whilst their students scroll TikTok.
When the time comes for the students to solve problems, they flick from TikTok to their AI platform, take a photo of the math question, and copy an answer straight from the AI’s output.
Whilst this seems like a scene from Brave New World, my best friend lives this reality every day in his job as a university mathematics tutor. These are some of the state’s brightest school leavers – future engineers and economists – paying good money to get a degree.
But why pay attention in the tutorial when AI can do the job for me?!
Whilst AI is undoubtedly a generational efficiency-boosting tool for the way we work, the revolution it heralds poses unprecedented dangers to our development, the way we learn, and how we interact with one another.
Just as social media has come to prey on our vanity and envy, all around me I see AI beginning to prey on our laziness. I see a culture emerging where many young people see AI as the all-seeing, all-powerful machine, as opposed to a useful partner in furthering their own ideas.
Governments around the world have acted decisively to regulate the risks AI poses to global supply chains and national security. The EU has implemented a comprehensive regulatory framework to categorise AI systems based on the risk that they could exploit people’s vulnerabilities or manipulate their decisions. Finland and Denmark have been praised for the way they have incorporated AI into government operations. However, with all the focus on exploitation and cyber-risk, a neglected area of policymaking has been the more cultural and existential issue of young people seeing AI as a shortcut through life.
We risk losing our work ethic. We risk losing our creativity. And we risk losing the human element central to the way we work, play, and live.
These concerns are not unfounded: MIT’s Media Lab recently published a study that should set alarm bells ringing. The study divided subjects aged between 18 and 39 into three groups and asked them to write several SAT essays. One group could use an AI platform, one a search engine, and one nothing at all. The group using the AI platform showed the lowest brain engagement and, over the course of several months, grew lazier with each subsequent essay.
An early randomised controlled trial measuring the impact of AI on the productivity of experienced open-source developers produced fascinating results. The trial found that when developers used AI tools, ‘they expected AI to speed them up by 24 per cent’. In reality, they were 20 per cent slower. The conclusion to draw is not that AI is useless – far from it – but that blind use of AI can distort our sense of our own productivity.
Plutarch in his Morals noted, ‘No one ever wetted clay and then left it, thinking it would become bricks by chance and fortune.’
We do not read Shakespeare in Grade 10 to perfectly recite every quote in King Lear. We do not learn complex algebra in Grade 12 to employ this knowledge around the house. We learn it because it teaches us how to work hard over a prolonged period of time, and because it teaches our brains how to think critically – the skills necessary to actually be able to employ AI platforms in a useful and meaningful way.
In the same way that a junior lawyer gains knowledge from reading every page of a contract, or an apprentice brickie hones his skills by laying course after course of bricks, there is utility in working hard to become masters of our craft. This is because only when we have mastered a craft can we strive for innovative ideas – something that AI cannot currently do. In its current state, it is a statistical gadget that, given your input, outputs the most likely continuation.
We must ensure the human mind remains behind the wheel.
The Rudd government’s Digital Education Revolution policy platform largely got the balance right in recommending how to integrate new technologies into the way we learn and study. In the same way that Rudd’s scheme provided support mechanisms to assist schools in ICT learning and deployment, schools today must be equipped with the knowledge to guide students in how to use AI as an enhancer, as opposed to a cheat code.
The Albanese government’s current response to this cultural challenge is the seven-page Australian Framework for Generative AI in Schools. As yet, there is no national approach that balances keeping up with technical AI developments with addressing the cultural question of how we, as a society, want to use AI.
This is a hard question. And it is a question lost in the current discourse centred around productivity and innovation.
It is incumbent not only on policymakers, but on schools, parents, and even peers to hold each other accountable to not simply view AI as a shortcut to life.
Fundamentally, it is crucial that when students turn AI systems on, they do not turn their brains off in the process.