
Lessons from Australia and the United States: Self-representation, Anonymous Second Opinions, AI and Unregulated Legal Services

Introduction

As I was catching up on recent international decisions in the developing field of AI and the law, I found myself drawn to two in particular, one from the United States and one from Australia. I initially considered writing about each decision separately. In the end, I felt that reading them together offered a more helpful way of exploring three issues that I have been reflecting on for some time and which I hope will also be of practical interest to readers navigating similar questions.

Bezzina v Transport for NSW (Australia)

[2025] NSWCA 277 (Hearing 18 December 2025)

In Bezzina v DPP [2025] NSWCA 276 (Hearing 16 December 2025) the Court dealt with a judicial review arising from a speed‑enforcement stop. Two days later, in Bezzina v Transport for NSW [2025] NSWCA 277, a separate judicial review arising from a speed camera matter was listed before the same bench. The judgments are worth reading together for full context.

At the hearing, the applicant presented additional materials. The court explained that, aside from those materials, the application was materially identical to the one made two days earlier. The applicant maintained that she was entitled to a “guarantee” from each member of the court that her human rights would be protected, and that without such a guarantee she was not prepared to proceed.

She was unable to identify any authority supporting this position and acknowledged that she had not sought such a guarantee in earlier proceedings, explaining that she had only recently become aware of the supposed entitlement.

By her amended summons, the applicant sought judicial review of a District Court judge’s refusal to grant her an appeal hearing. She advanced grounds alleging abuse of process and perversion of justice by multiple judicial officers and prosecutors. She further asserted that the refusal of an appeal hearing denied her natural justice.

The court described these allegations as serious and lacking proper foundation, noting that the applicant remained obliged to establish them. The allegations also extended beyond the scope of the summons. The court concluded that the summons would be dismissed with costs. However, a number of observations made by the court are particularly noteworthy. The court considered it possible that the applicant had formulated some or all of her submissions herself, but noted that at one stage she was recorded as saying: “I have been talking with my constitutional expert”.

The court observed:

“If there is a ‘constitutional expert’ advising [the applicant] who is a natural person, it may be as well to say, with no disrespect whatsoever to [the applicant], that the constitutional submissions she has been prevailed upon to advance are legal nonsense. If he or she is charging a fee and is not a legal practitioner, he or she is committing a serious offence.”

The court went on to note that if such a person were a legal practitioner, there would be a sound basis for reporting them to the relevant regulatory body. The court then added:

“Alternatively, if the ‘constitutional expert’ is a large language model such as ChatGPT, then [the applicant] would be well advised to ask a lawyer, including at a community legal centre, for a second opinion.”

In concluding, the court observed that the applicant repeatedly referred to an intention to apply to the High Court, and noted that she was free to do so. However, it respectfully suggested that before taking that course, she should ask a lawyer for advice on the merits of her case and the likely consequences of further applications in a dispute that began with a speeding fine of $123.

Gardner v. Nationstar Mortgage LLC (United States)

No. 2:25-cv-02828-PHX-MTL (D. Ariz. Jan. 9, 2026)

I have not been able to locate a publicly accessible copy of the judgment itself, but I became aware of the case through a LinkedIn post by James Smith, who drew attention to the court’s reasoning and concerns.

The case appears to involve a widow who faced the loss of her home following the death of her husband. During that period of vulnerability, she engaged a document preparation service which, according to the court, consisted of non-party document preparers who seemingly used artificial intelligence platforms to draft the documents used in the litigation on her behalf. In responding to those documents, the defendant identified more than sixty instances of inapplicable law, non-existent cases or legal principles, and misunderstood authorities or quotations.

When the court issued an order requiring an explanation, the plaintiff was able to set out what had happened. A lawyer friend later assisted her in addressing the situation, but by that stage it seems she had already paid $1,000 a month (likely totalling $18,000) for the document preparation services. As James observes:

“…The unauthorized practice of law is beyond the Court’s purview, so it referred the matter to the State Bar of Arizona (which handles such issues). The Court also referred the matter to the Arizona Attorney General to investigate possible consumer fraud. In an odd twist, the misuse of AI likely uncovered the problem, helped the Court understand it, and helped Plaintiff. Without Defendant flagging the hallucinated citations and such, the case may have dragged on.

To be clear, certified legal document preparers in Arizona perform valuable services to many Arizona residents who can’t afford lawyers…” but the company concerned here “…doesn’t appear to be such a certified legal document preparer. Kudos to the Court for identifying this issue and notifying authorities, and to Plaintiff’s lawyer friend.”

Conclusion

Neither case addresses these questions head-on, but both engage with aspects of them, and reading them together prompted me to reflect on three recurring issues that are increasingly being raised with me in practice.

The first relates to the likely growth of businesses offering assistance with legal problems as an alternative to traditional legal services. Some of these services may be well intentioned. However, the increased availability of large language models makes it easier to generate convincing-looking documents at scale, including material that is not tailored to an individual’s circumstances or that rests on uncertain legal foundations. This can expose people to real risk, particularly where money, housing, or vulnerability are involved. The Gardner case provides a sobering illustration of how high the stakes can be.

The second issue is what I often describe as the “anonymous second opinion” on legal advice. I am not referring to the entirely proper step of instructing another regulated lawyer to review existing advice. Rather, I mean situations where a litigant, having received advice or representation from a regulated lawyer, explains that a friend, a former lawyer, or some other claimed expert has reviewed that advice and taken a different view.

That different view may involve correcting elements of the advice or suggesting an alternative way of approaching the case. In some instances, however, it amounts to a wholesale rejection of the advice as wrong or misconceived. In one conversation, I was asked to consider a scenario in which a client presents their lawyer with a list of cross-examination questions generated by ChatGPT and asks that these be pursued in place of the lawyer’s own preparation and professional judgment.

In a world where an expert may, in reality, be an AI tool, that familiar dynamic takes on added significance. It also raises a sensitive but very practical issue in everyday practice. If a client is seeking an AI-driven second opinion alongside their lawyer’s advice, it can become unclear whose guidance they are treating as decisive. Where clients feel able to be open about this, it is usually far easier to address constructively. Lawyers can explain the risks, test the advice, check sources, and resolve misunderstandings before significant issues arise.

The third issue concerns individuals who come before the courts without legal representation, whether because they choose not to instruct lawyers or because they are unable to do so. This group faces a particular and predictable difficulty. I have previously reflected on the remarks of Lord Justice Baker, who observed that it is entirely understandable for someone without legal training to turn to artificial intelligence for assistance. Used carefully, it may help people to organise their case or to understand unfamiliar legal concepts. It is not, however, a reliable source of legal authority. Errors or fabricated material can mislead the court and may increase costs. The responsibility to rely only on genuine legal sources applies to everyone, represented or not.

This leads to a broader question about how such situations can be navigated justly. Without legal training, and without access to the research tools that lawyers routinely use to search for and verify precedent, an unrepresented person may simply not know how to test whether a legal proposition is supported by real authority. Many generative AI systems also express themselves with confidence even when they are wrong, which can make errors particularly difficult to identify. These features create real challenges for those attempting to engage with the justice system in good faith.

 

Tags

admin and public law, housing and social welfare