Practice Direction: Use of artificial intelligence (AI) in Tribunal proceedings

The Canadian Human Rights Tribunal is committed to the fair, just and efficient resolution of proceedings before it. This Practice Direction provides guidance to participants in Tribunal proceedings. It is not a rule within the meaning of the Tribunal’s Rules of Procedure.

Use of AI by Tribunal Members

Adjudication is a human responsibility. Tribunal members hear cases and make decisions based on the evidence and submissions. They do not use AI to write decisions or analyze evidence. Tribunal members are fully accountable for their decision-making.

AI and Tribunal Proceedings

AI can be a helpful tool for litigants, but it is not perfect. If you rely on AI for research or to prepare documents for the Tribunal, you must do so carefully. Keep these key points in mind:

1. Be Cautious 

  • AI results can be wrong. If you use AI to find legal sources or analyze information, double-check the results carefully.

2. Use Reliable Sources

  • AI might give you incorrect or made-up legal sources. Always verify the information by going directly to trusted sources, such as court websites, official publishers, or recognized legal databases like CanLII for case law.

3. Human Responsibility 

  • You are responsible for the accuracy of your written and oral submissions, even if AI helped prepare them. Always cross-check the information against reliable databases to ensure it is accurate and trustworthy. This protects the integrity of our justice system.