AI is like a very convincing advisor who sometimes lies

The problem? You don’t know when. AI doesn’t say “I’m not sure” or “I made that up.” It sounds confident. Always. Even when it states false facts, invents sources, and draws wrong conclusions.

Prof. Jerzy Surma identifies 5 key risks in his lecture on generative AI threats that affect every user — and your child especially.

5 risks of blindly trusting AI

1. You say things you don’t understand

AI gives an answer. You repeat it — to a teacher, parents, friends. But you don’t understand what you actually said. You don’t verify. You don’t think.

Real-life example: A child writes a “report” for history class using ChatGPT. It sounds professional. The teacher asks for details — the child can’t answer, because they don’t understand what they “wrote.”

2. You make decisions that aren’t yours

AI suggests, you “decide.” But who really decided?

  • AI recommends a movie → you watch it
  • AI suggests a purchase → you buy it
  • AI suggests an answer → you send it

These aren’t your decisions. They’re the decisions of an algorithm serving someone else’s goals.

3. You can’t verify AI hallucinations

AI “hallucinates” — it generates content that sounds convincing but is false. It invents quotes, sources, dates, facts. And it does this with the same confidence as when stating the truth.

Test: Ask ChatGPT about a detailed historical fact about your city. More often than you’d expect, you’ll get an answer that sounds great — but is completely made up.

4. You’re being steered by the model

This is the hardest risk to grasp. When you use AI, you’re fulfilling the goals of:

  • Model creators — OpenAI, Google, Meta decide what AI can say and what it can’t
  • Service providers — you receive targeted, personalized ads
  • Government/authorities — content may be censored in ways you don’t know about

The result: you get a filtered reality. You don’t see the full picture — you see what someone wants you to see.

5. You lose independence

Prof. Surma uses a perfect analogy: “It’s like having your GPS turned off and discovering you can’t read a map.”

If you’ve used navigation for years, you probably can’t drive to a new place without it anymore. The same happens with thinking. If your child “thinks through AI” from childhood, they won’t develop the skills of independent thinking, analysis, and reasoning.

Critical thinking — your shield

Critical thinking in the context of AI means 4 key skills (following Prof. Surma):

  1. Reliable evaluation of AI-generated results — don’t take answers at face value. Ask: “how do you know?”, “what are the sources?”, “can I verify this?”

  2. Identifying low-quality information, inaccuracies, and hallucinations — learn to recognize when AI is making things up. Cross-reference with other sources. Check dates, names, quotes — that’s where AI lies most often.

  3. Interpreting GenAI results with awareness of their potential bias — AI learned from internet data. The internet is not objective. So AI isn’t either. Always ask: “whose perspective does this represent?”

  4. Constructive use — treat AI results as a starting point, not the final product. An AI answer is a first rough draft; improve it with your own critical judgment, knowledge, and experience.

Test: is your child vulnerable?

Answer 5 questions honestly:

  1. Does your child copy AI answers without changing a single word?
  2. Do they trust that “the internet/AI is always right”?
  3. Can they explain in their own words what they wrote with AI’s help?
  4. Have they ever checked whether AI gave them true facts?
  5. Can they write an essay without AI’s help?

If 3+ answers worry you — it’s time for exercises.

How to build resilience: 3 family exercises

“Catch the hallucination”

Ask ChatGPT 10 factual questions (dates, names, events). Then check the answers on Wikipedia or in a textbook. Whoever finds more AI mistakes — wins.

Goal: Your child learns that AI regularly gets things wrong — and that fact-checking is normal.

“Who’s deciding here?”

For a week, note every time AI influences your decisions:

  • TikTok suggested what to watch? ✓
  • Google suggested what to buy? ✓
  • ChatGPT wrote an answer for you? ✓

At the end of the week, count them up. The result is usually surprising.

Goal: Awareness of how often AI makes decisions “for us.”

"A day without AI”

Spend one day (ideally a weekend) without any AI:

  • Navigation? Paper map or asking for directions
  • Questions? Encyclopedia, book, conversation with someone
  • Entertainment? Board game, walk, cooking together
  • Planning? Pen and paper

Goal: Discovering what you can actually do without technology — and how much fun you can have offline.

How AI can (paradoxically) help

Critical thinking quiz

Paste this prompt into your AI assistant:

Create a "critical thinking about AI" quiz for a child aged [X].
10 questions as scenarios, e.g.:
"AI tells you that Chicago is the capital of the USA. What do you do?"
a) Believe it because AI is smart
b) Check in an atlas
c) Ask your teacher
For each answer, explain why it's good or bad.

Bias recognition exercise

Paste this prompt into your AI assistant:

Give me 3 examples of questions where different AI models
(ChatGPT, Claude, Gemini) give different answers.
Explain why the answers differ and what this tells us
about AI bias.
Present it in a way that's understandable for a [X]-year-old.

“Biased AI” role-play game

Paste this prompt into your AI assistant:

Pretend to be a biased AI answering my child's questions.
Deliberately mix facts with opinions and small errors.
After each answer, wait for the child to try to find the mistake.
If they find it — congratulate. If not — give a hint.
Topics: history, animals, space, sports.
Difficulty level: for a [X]-year-old.

The most important rule

AI is a starting point, not the finish line. Every AI answer is a first rough draft — your job (and your child’s job) is to evaluate, verify, and improve it with your own thinking.

Because whoever doesn’t think for themselves — thinks the way someone else wants them to.


Want to prepare a critical thinking quiz for your child? Our AI assistant will help you create exercises tailored to their age.