
Harbingers’ Magazine is a weekly online current affairs magazine written and edited by teenagers worldwide.

harbinger | noun

har·​bin·​ger | \ˈhär-bən-jər\

1. one that initiates a major change: a person or thing that originates or helps open up a new activity, method, or technology; pioneer.

2. something that foreshadows a future event: something that gives an anticipatory sign of what is to come.



A law student speaks before a panel of judges at the moot court.


Dilemma for future lawyers: Can AI deliver justice?

Helena Bruździak in Warsaw, Poland

16-year-old Helena wants to study law and wonders what the rise of AI will mean for the profession

Imagine yourself as a lawyer a decade from now: AI drafts your legal arguments, predicts outcomes and even suggests negotiation strategies. Does it make your job easier, strengthen justice or simply replace human judgement with an algorithm?

As teenagers, we can feel a lot of pressure to choose a profession based on passion or quality of life. Now there is a new factor to consider: the impact of AI on our chosen fields.

I’ve always wanted to study law, and yet I keep hearing reasons why not to, and AI is one of the key reasons. Whether it is fearmongering or genuine cause for concern, AI is changing how many of us think about our future jobs.

Is AI a threat or a tool?

Even before reaching the courtroom, AI is changing the way we study. It can generate flashcards and quizzes, and students can now produce summaries of cases or legal arguments in seconds. This is very convenient and can make studying and working more efficient.

Legal education might have to adapt soon: not just teaching constitutional principles and tort law, but also AI literacy and digital ethics. Knowing how to use AI responsibly could become just as important as knowing how to argue a case.

AI does have advantages in the field of law. It may make legal work more efficient, especially in understaffed organisations, for lawyers handling many cases at once, or for tedious, repetitive tasks. Language models such as ChatGPT can assist with legal research and process large volumes of documents in a fraction of the time.

But the dangers of using AI are not only hypothetical. Blind trust in these systems can lead to professional embarrassment and legal consequences. In 2023, an American lawyer in the Mata v Avianca case submitted a brief citing fictitious cases generated by ChatGPT, leading to sanctions and an apology to the court. The incident showed how easily AI can mislead even professionals, raising a question: what would happen if judges started relying on it for rulings?

This hypothetical isn’t as far-fetched as it seems. Some courts already use algorithms to assess bail or parole risks. But when justice is guided by data, who is held accountable when that data is wrong?

Between fear and excitement

As a student, I find myself caught between excitement and anxiety. On the one hand, I’m amazed by the potential of AI to democratise access to justice.

It could make legal advice more affordable, faster and available to those who could never afford a lawyer.

Imagine an AI assistant that can instantly translate complex legal jargon into plain language for anyone seeking help. That vision excites me because it could make the law more inclusive than ever before.

However, then comes the fear. AI may be efficient and make jobs easier but there is still the question of accountability. If an AI system provides incorrect legal advice, drafts a misleading argument or recommends an unjust outcome, who is responsible? The lawyer who used it, the law firm that implemented it, or the company that built the algorithm?

Law depends on responsibility and traceability, yet AI often functions as a “black box” – we can see the input and the output, but not how the decision was made. It produces results without clear reasoning or accountability.

Another serious concern is data privacy. Lawyers deal with highly sensitive personal information, and AI tools often rely on processing large amounts of data. If client data is entered into an AI platform, is it secure? Could it be stored, leaked or even used to train future models without consent? Such misuse could breach attorney-client privilege.

An even bigger question is: if AI becomes even more powerful and accessible to everyone, would lawyers even be needed in the same way? No one knows the answer yet. Some might argue that AI will help to create new roles – such as legal AI ethical officers, legal prompt engineers, legal bias anthropologists – rather than destroy them. In this view, the legal world will not disappear, it will evolve.

Call to action

Of course, it’s very daunting to think that AI could ‘take over your job’ before your career has even begun. The speed of technological advancement makes it easy to feel uncertain or even irrelevant, but it is important to stay open-minded about what’s to come.

Warnings about AI can sound apocalyptic and dramatic, but they exist for a reason. They remind us that innovation without ethics can spiral into harm. Instead of rejecting AI we should study it, question it and learn how to use it responsibly.

I don’t think AI marks the end of law; it marks its transformation. The challenge for our generation should not be to fight off AI but to learn to work with it, ensuring that technology does not replace justice but helps to serve it.

Written by:


Helena Bruździak

Writer

Warsaw, Poland

Helena Bruździak was born in 2009 in Warsaw, Poland. She is passionate about writing, with a particular interest in history and English at school, and aspires to study law in the future. In March 2025, she launched a human rights subsection for the magazine called Crisis Zones alongside her peer, Kexin Shi, where they aim to raise awareness among young people about the challenges refugees and displaced people face.

In her free time, she enjoys listening to music, playing the piano, and reading poetry.

Helena speaks English and Polish, and is currently learning French.


Edited by:


Charlotte Wejchert

Human Rights Section Editor 2025

Warsaw, Poland

AI & tech
