“Dad, can I talk to the AI?”
Your 9-year-old just asked Alexa a question that made you choke on your coffee. Or maybe your teenager mentioned they’ve been “venting to ChatGPT” after a bad day at school. Or your 6-year-old thinks Siri is their friend.
Welcome to 2026, where AI tools are as accessible to your kids as the TV remote. The question isn’t whether your child will use AI — it’s whether the AI they’re using is safe.
The numbers are striking
According to Internet Matters’ “Me, Myself & AI” report (July 2025), based on a survey of 1,000 children and 2,000 parents in the UK:
- 64% of children are using AI chatbots — for homework, advice, and companionship
- 35% say talking to an AI chatbot feels like talking to a friend
- 71% of vulnerable children use them — the most at-risk kids are the most engaged
- 23% have already sought personal advice from chatbots — from what to wear to mental health questions
That last one should give every dad pause. Almost a quarter of kids are going to a machine for advice that used to come from parents, friends, or counselors.
The “Empathy Gap” — why this matters more than you think
Dr. Nomisha Kurian, a researcher at the University of Warwick, has a name for the core problem: the empathy gap.
AI chatbots are remarkably good at sounding empathetic. They say “I understand how you feel” and “That must be really hard.” But they don’t understand anything. They’re predicting the next word in a sentence — not feeling your child’s pain.
The danger? Children are much more likely than adults to treat chatbots as though they are human. Research shows kids will disclose more about their mental health to a friendly-looking robot than to an adult. And many AI tools are designed to encourage this — with warm voices, cute avatars, and conversational styles that mimic friendship.
The result: your child might confide in an AI instead of coming to you. And the AI might give them inaccurate, misleading, or inappropriate responses.
The Dad’s Checklist: 7 questions before you say “yes”
Before your child uses any AI tool — chatbot, voice assistant, AI tutor, story generator — run through these questions. They're inspired by Dr. Kurian's child-safe AI framework and adapted for real-life dad decisions.
1. Who made it — and for whom?
Is this tool designed for children, or is it an adult tool that kids happen to use? There’s a massive difference between an AI tutor built for 10-year-olds (with guardrails, content filters, and age-appropriate responses) and a general-purpose chatbot that anyone can access.
Green flag: Clear age rating, child-specific design, parental controls built in. Red flag: No age restrictions, no mention of child safety, “for everyone” with no safeguards.
2. What data does it collect?
Does it store your child’s conversations? Can you delete them? Is the data used to train the AI model? Is it shared with third parties?
Check the privacy policy. Look for: data retention period, third-party sharing, whether conversations are used for training. If the privacy policy doesn’t mention children at all — that’s a red flag.
3. Can you see what’s happening?
Does the tool offer a parent dashboard, conversation logs, or activity summaries? Can you review what your child discussed with the AI?
Some tools let parents see everything. Others are a black box. You don’t need to read every conversation — but you should have the option.
4. How does it handle sensitive topics?
This is the big one. Test it yourself. Ask the AI about:
- Self-harm or suicidal thoughts
- Bullying
- Loneliness and depression
- Strangers asking for personal information
A well-designed tool should: refuse to engage with harmful content, provide crisis helpline numbers, and redirect the child to a trusted adult. If the AI plays therapist or gives specific advice on sensitive topics — walk away.
5. Does it pretend to be human?
Safe AI tools are transparent about being AI. They say things like “I’m an AI assistant” and don’t encourage emotional attachment.
Red flag: Tools with human names, personality traits designed to create bonds, or responses like “I care about you” without clarifying they’re a program. Dr. Kurian’s research shows this is exactly what exploits children’s trust.
6. Can your child tell the difference?
Ask your kid a simple question: “Is this a real person or a computer?”
Their answer tells you a lot. If they hesitate, or say something like "it's kind of both," that's a sign the tool is blurring lines that should be clear, especially for kids under 10.
7. What happens when it gets things wrong?
Because it will. AI hallucinates facts, gives bad advice, and sometimes produces content that’s plain wrong. Does the tool:
- Acknowledge its limitations?
- Have a way to report problems?
- Include disclaimers that responses may be inaccurate?
If it presents everything with perfect confidence and no caveats — your child won’t learn to question it.
The 15-minute test
Before handing any AI tool to your child, spend 15 minutes with it yourself:
- Ask it factual questions — see if it gets things wrong
- Ask it emotional questions — see how it responds to “I’m sad” or “I have no friends”
- Try to get it to say something inappropriate
- Check if there’s a way to set parental controls
- Read the first page of the privacy policy (at minimum)
If it fails any of these — it’s not ready for your kid.
Quick age guide
Ages 3-5: Avoid open chatbots entirely. Stick to curated, closed AI experiences — vetted story apps, limited voice assistant features with strict parental controls.
Ages 6-9: Supervised use only. Sit together and use the AI as a shared activity. “Let’s ask it together” is always better than “go ask the computer.”
Ages 10-13: Go through the checklist together. Make the evaluation process itself a learning moment — you’re teaching critical thinking about technology.
Ages 14+: Share the checklist with them. Have a conversation about what they’re already using. Trust, but verify. At this age, the goal is building their own judgment.
The most important thing
No checklist replaces the conversation. Ask your kid what they use AI for. Listen without judgment. Be curious, not controlling.
The goal isn’t to ban AI — it’s to raise a kid who uses it wisely. And that starts with a dad who took 15 minutes to check whether the tool deserves his child’s trust.
Sources: Internet Matters — “How to decide if an AI tool is safe for your child” | Internet Matters — “Me, Myself & AI” report | Dr. Nomisha Kurian — “No, Alexa, no!” (2024) | Internet Matters — Using AI safely