| 2 minute read

Updated Artificial Intelligence (AI) Guidance for Judicial Office Holders

On 31 October 2025, the Courts and Tribunals Judiciary updated the “Artificial Intelligence (AI) Guidance for Judicial Office Holders”.

The update contains some interesting changes. For lawyers:

“Some kinds of AI tools have been used by legal professionals for a significant time without difficulty. For example, TAR is now part of the landscape of approaches to electronic disclosure. Leaving aside the law in particular, many aspects of AI are already in general use: for example, in search engines to auto-fill questions, in social media to select content to be delivered, and in image recognition and predictive text.

All legal representatives are responsible for the material they put before the court/tribunal and have a professional obligation to ensure it is accurate and appropriate. Provided AI is used responsibly, there is no reason why a legal representative ought to refer to its use, but this is dependent upon context.

Until the legal profession becomes familiar with these new technologies, however, it may be necessary at times to remind individual lawyers of their obligations and confirm that they have independently verified the accuracy of any research or case citations that have been generated with the assistance of an AI chatbot.”

For litigants in person:

“AI chatbots are now being used by unrepresented litigants. They may be the only source of advice or assistance some litigants receive. Litigants rarely have the skills independently to verify legal information provided by AI chatbots and may not be aware that they are prone to error. If it appears an AI chatbot may have been used to prepare submissions or other documents, it is appropriate to inquire about this, ask what checks for accuracy have been undertaken (if any), and inform the litigant that they are responsible for what they put to the court/tribunal. Examples of indications that text has been produced this way are shown below.”

Deep Fakes:

“AI tools are now being used to produce fake material, including text, images and video. Courts and tribunals have always had to handle forgeries, and allegations of forgery, involving varying levels of sophistication. Judges should be aware of this new possibility and potential challenges posed by deepfake technology.”

White Text:

“Another form of fake material of which you must be aware is so called “white text”, which consists of hidden prompts or concealed text inserted into a document so as to be visible to the computer or system but not to the human reader. This possibility underscores the importance of judicial office holders’ personal responsibility for anything produced in their name.”

My full legal article on this new guidance can be read here.