Artificial intelligence (AI) has been the buzzword since the introduction of ChatGPT in late 2022.
Among the concerns raised about the future of the human workforce and the rapid development of the technology itself are the ethical elements of AI usage. These are often overlooked or disregarded because of the benefits of producing content that can substitute for human work.
However, issues are beginning to arise in legal systems around the globe with regard to ethical AI usage, and the pressures of being a lawyer can act as a catalyst for conditions that encourage reliance on AI.
Recently, the United States Senate Committee on the Judiciary commenced an inquiry into two US District judges regarding their alleged use of AI in their decisions. The committee cited several inconsistencies upon reviewing the judges' orders, noting that alleged reliance on AI may have produced misstated quotes and facts from the case, named people who were not parties to the case, and misquoted statutory text.
The AI trend has found its way to the Australian courts, with a notable decision in August 2025 in which the Chief Justice of the Supreme Court of New South Wales included a warning about AI usage within the judgment. The warning described the need for judicial vigilance with regard to the use of AI in courtrooms, with particular emphasis on self-represented litigants, who may find themselves relying on AI when trying to interpret the complexities of the Australian legal system.
Even high-ranking Australian lawyers are succumbing to the allure of AI. In August 2025, a senior lawyer had to apologise to a Victorian judge in a murder case for filing submissions that included errors generated by AI.
In September 2025, Lawyers Weekly reported on the first Australian lawyer to be sanctioned for AI use in family law proceedings. This followed the Court being unable to verify the citations used by the lawyer, who eventually admitted to using AI in the submissions and to failing to review the document before filing.
Amid this reliance on AI within legal systems, both in Australia and overseas, it is not far-fetched to assume that judicial decision-makers may be next in line to frequently utilise AI when handing down judgments. A 2024 survey across 96 countries conducted by the United Nations Educational, Scientific and Cultural Organisation revealed that 44 per cent of judicial operators, including judges, were using AI tools in their work. More concerning still, the survey revealed that only 9 per cent of the legal profession had been given proper instruction and guidelines on the use of AI.
In the UK, on which Australia's legal system is based, one Lord Justice referred to ChatGPT as 'jolly useful' after relying on it to summarise an area of law for a judgment.
It would be naive to assume that AI will not continue to be relied upon within legal systems, including Australia's. In my opinion, there must be clear guidelines for legal practitioners, as well as legislation, with the primary objective of protecting the administration of justice while also allowing AI to be used to increase access to justice (for example, cheaper legal fees as work can be done more quickly).
Guidelines for legal practitioners already exist in Australia, featured in the practice note from the Supreme Court of NSW and commentary from the Federal Court of Australia. However, both were published in early 2025, and instances of unethical AI usage within the legal system have continued since then.
There is a threat of losing the human touch within the legal system and the key components of advocacy that cannot be replicated by AI. How will AI read emotions in a courtroom? Even if it eventually could, how would it ensure that emotions are interpreted correctly, or that the right degree of empathy is shown, while also abiding by the principles of justice that practitioners must adhere to in navigating the complexities of the court system?
Alternatively, AI may one day do a better job of unbiased decision-making and legal representation, as it does not rely on human emotions to make judgments.
It could be that we will inevitably face a system in which both lawyers and judges rely on AI to make legally binding decisions that impact Australians. Critical thinking, empathy and advocacy in the defence of clients, who have the right to the presumption of innocence and to legal representation, are essential components of ethical lawyering. It is therefore concerning to consider where the Australian legal system may end up without effective AI legislation and governance to prevent its abuse.
Although AI may deliver some positive outcomes, such as greater access to justice (if used as a tool and not abused) and cheaper legal representation, it has far greater potential to damage the principles that form the cornerstones of the legal profession, and with them the administration of justice.


















