
Harbingers’ Magazine is a weekly online current affairs magazine written and edited by teenagers worldwide.




Sam Altman, CEO of OpenAI, creator of the very popular AI chatbot ChatGPT.

Picture by: Aflo Co. Ltd. | Alamy


Do not let AI think for you, it’s not there yet

16-year-old Charlotte examines the role of AI in our daily lives and offers advice on how to use it wisely

We all use AI chatbots in our daily lives. But what do we actually use them for, and how seriously do we take their responses?

It feels like everyone now uses AI for everything, sometimes even when it’s unnecessary. To be fair, chatbots are incredibly useful. They can save us time, help us brainstorm, draft emails and even teach us things. But there’s a catch: we cannot believe everything they tell us and, more importantly, we cannot outsource our thinking to them.

The rise of everyday AI

Many people assume that AI chatbots such as ChatGPT, DeepSeek, Grok or Gemini are just ‘better’ versions of Google. They are not.

Search engines fetch information from the web. Chatbots, built on Large Language Models (LLMs), generate responses by predicting patterns in language. The difference is subtle but crucial: search engines anchor you to sources you can verify, while LLMs produce text that sounds right, whether it is or not.

Even Sam Altman’s OpenAI, the company behind ChatGPT, admits that its models can “hallucinate” – a polite way of saying they make things up – with an “error rate” as high as 75%. OpenAI describe hallucinations as “instances where a model confidently generates an answer that isn’t true”.

That confusion between fluency and truth isn’t just a minor detail – it has real-world consequences.

Systemic risks

I’m sure many of us have heard these stories: a teen killed himself after “months of encouragement from ChatGPT”; a man ended up in hospital after ChatGPT assured him he was fine despite clear symptoms of a manic episode; and a New York lawyer submitted fake legal cases generated by AI into a court filing.

And you probably think this will never be you, because you have common sense, awareness and the ability to tell fact from fiction. But that’s exactly the trap. These examples show how easily convincing-sounding words can override your judgement, especially when they come from something we have come to treat as authoritative.

Health experts warn the stakes are even higher when misinformation touches healthcare. Researchers at the University of South Australia cautioned that inaccurate medical claims, if widely shared, could have “serious health consequences.”

These are not glitches. They are systemic risks.

It’s not critical – it’s convincing

Here’s a hard truth: AI chatbots are not built to be critical. They are built to be convincing.

That’s because of how they are trained. LLMs absorb billions of pages of internet text, which means they also absorb the internet’s biases – political, cultural, emotional. When you ask a leading question, the model doesn’t push back; it mirrors your premise and expands on it.

This is confirmation bias at scale: machines amplifying the same human tendency to seek out information that supports what we already believe.

Sometimes, that bias tips into something darker. Earlier this year, Grok (created by Elon Musk’s company xAI) was caught spitting out far-right talking points after being fed provocative prompts. Instead of resisting, the bot leaned in, amplifying the ideology. This was not a malfunction – it was the logical consequence of a system designed to echo back the patterns it has seen.

This dynamic is especially concerning for Gen Z: the first generation to come of age immersed in algorithmic feeds, and now the one most likely to lean on AI for advice, learning or even emotional support.

Researchers warn that the risks are layered: for a generation already caught in algorithm-driven “filter bubbles,” chatbots risk creating a new kind of echo chamber – one that feels more persuasive precisely because it sounds conversational.

‘ChatGPTing’ vs ‘Googling’

To be clear, search engines are not perfect. They can still deliver spammy, SEO-optimised content. But their architecture is different: they retrieve sources you can check.

Even as Google integrates AI-powered summaries through Gemini, those answers remain tied to underlying sources. Chatbots, by contrast, often can’t – or won’t – show their work.

If you are looking for a brainstorm, a draft or a thought-starter, chatbots are excellent. But if you are looking for facts, a search engine is still the safer bet.

There’s also a longer-term risk: what happens when we let AI ‘think’ for us?

A Duke University study warns that relying too heavily on AI tools can dull problem-solving and reflective thinking. Researchers at Cornell add that AI can subtly shape decision-making by nudging users toward answers they might not otherwise choose.

In other words, it’s not just our knowledge at stake – it’s our agency.

The solution is not to abandon AI, but to use it wisely.

AI can absolutely make our lives easier. But if we don’t stay aware of its limits, it could also make us intellectually lazier and more vulnerable to misinformation.

By all means, you can use AI to help write your essays or emails if you want to, but come up with the ideas yourself. Think for yourself. Don’t expect AI to do the thinking or the fact-checking for you – it’s not there yet.

Written by:


Charlotte Wejchert

Human Rights Section Editor 2025

Monaco

Born in 2008 in Zurich, Switzerland, and raised in Warsaw, Poland, Charlotte has studied in Monaco for the last eight years. She is interested in the humanities and plans to study History and English.

Charlotte joined Harbingers’ Magazine in August 2024 as a contributor. She took part in a reporting trip to Yerevan, Armenia, covering the refugee crisis in the aftermath of the Nagorno-Karabakh (Artsakh) war and collaborating with students from the Harbingers’ Armenian Newsroom. The trip resulted in several thought-provoking articles, earning her a regular spot at the magazine.

In the autumn of 2024, after completing the Essential Journalism Course, Charlotte became a writer focusing on social affairs, human rights, politics, and culture. Her exceptional writing skills and dedication to the magazine led to her appointment as Human Rights Section Editor in March 2025. She will simultaneously serve as the Armenian Newsroom Editor.

In her free time, Charlotte loves painting and photography. She won the International King’s College art competition in 2023 and was a runner-up in 2024. She also takes on leadership and public-speaking roles, having served in her school’s student senate for the last three years and attended conferences at UN headquarters, primarily on human rights and the climate.

Charlotte speaks Polish, English, French and Italian.

Edited by:


Arnav Maheshwari

Economics Section Editor 2025

Georgia, United States

AI & tech
