16-year-old wants to study law and wonders what will happen with the AI takeover
A law student speaks before a panel of judges at the moot court.
5 December 2025
Dilemma for future lawyers: Can AI deliver justice?
Imagine yourself as a lawyer a decade from now: AI drafts your legal arguments, predicts outcomes and even suggests negotiation strategies. Does it make your job easier, strengthen justice or simply replace human judgement with an algorithm?
As teenagers, we can feel a lot of pressure to choose a profession based on passion or quality of life. Now there is a new factor to consider: the impact of AI on our chosen fields.
I’ve always wanted to study law, and yet I keep hearing reasons why not to, and AI is one of the key reasons. Whether it is fearmongering or genuine cause for concern, AI is changing how many of us think about our future jobs.
Is AI a threat or a tool?
Even before reaching the courtroom, AI is changing the way we study. It can make studying easier by generating flashcards and quizzes, and students can now produce summaries of cases or legal arguments in seconds. This convenience can make both studying and working more efficient.
Legal education might have to adapt soon: not just teaching constitutional principles and tort law, but also AI literacy and digital ethics. Knowing how to use AI responsibly could become just as important as knowing how to argue a case.
AI does have advantages in the field of law. In fact, it may make legal work more efficient, especially in understaffed organisations or for lawyers handling many cases at once, or for tedious, repetitive tasks. Language models such as ChatGPT can assist with legal research and process large volumes of documents in a shorter time.
But the dangers of using AI are not only hypothetical. Blind trust in these systems can lead to professional embarrassment and legal consequences. In 2023, an American lawyer in the Mata v Avianca case submitted a brief citing fictitious cases generated by ChatGPT and had to apologise to the court. The incident showed how easily AI can mislead even professionals, raising a question: what would happen if judges started relying on it for rulings?
This hypothetical isn’t as far-fetched as it seems. Some courts already use algorithms to assess bail or parole risks. But when justice is guided by data, who is held accountable when that data is wrong?
Between fear and excitement
As a student, I find myself caught between excitement and anxiety. On the one hand, I’m amazed by the potential of AI to democratise access to justice.
It could make legal advice more affordable, faster and available to those who could never afford a lawyer.
Imagine an AI assistant that can instantly translate complex legal jargon into plain language for anyone seeking help. That vision excites me because it could make the law more inclusive than ever before.
However, then comes the fear. AI may be efficient and make jobs easier but there is still the question of accountability. If an AI system provides incorrect legal advice, drafts a misleading argument or recommends an unjust outcome, who is responsible? The lawyer who used it, the law firm that implemented it, or the company that built the algorithm?
Law depends on responsibility and traceability, yet AI often functions as a “black box”: we can see the input and the output, but not how the decision was made. This produces results without clear reasoning or accountability.
Another serious concern is data privacy. Lawyers deal with highly sensitive personal information, and AI tools often rely on processing large amounts of data. If client data is entered into an AI platform, is it secure? Could it be stored, leaked or even used to train future models without consent? Mishandling such data could breach attorney-client privilege.
An even bigger question is: if AI becomes even more powerful and accessible to everyone, would lawyers even be needed in the same way? No one knows the answer yet. Some might argue that AI will create new roles – such as legal AI ethics officers, legal prompt engineers or legal bias anthropologists – rather than destroy existing ones. In this view, the legal world will not disappear; it will evolve.
