The digital age has ushered in an era of unprecedented connectivity, but it has also subtly, and in some cases profoundly, reshaped the very nature of human interaction. For young people, this landscape is increasingly populated not just by their peers and family, but by sophisticated artificial intelligence – particularly AI chatbots.
What began as a tool for quick information retrieval has, for many, evolved into something far more personal, blurring the lines between digital assistant and trusted confidante.
This growing reliance on AI presents a fascinating, yet concerning, new frontier, one where the human touch of peer mentorship could prove to be an indispensable guide.
The Rise of the Algorithmic Confidante
Walk into any secondary school classroom, scroll through social media feeds, or simply observe young people interacting with their devices, and you’ll likely see the quiet rise of AI chatbots.
Research from Vodafone showed that 81% of children aged 11–16 say they use chatbots. Furthermore, “nearly a third (31%) of 11–16-year-olds who have used an AI chatbot feel like it is one of their friends. Nearly half (49%) put this down to chatbots being trustworthy and easy to talk to (65%), with many believing they can understand emotions like people do (39%).”
At the same time, the Youth Endowment Fund’s research found that, “One in four teenage children have turned to AI chatbots for mental health support.”
These intelligent programs, capable of natural language processing and seemingly empathetic responses, are becoming increasingly sophisticated. As The Children’s Society describes, “There is a growing desire in the tech industry to make these AI tools more human… They are focused around creating an avatar that users can strike up a conversation with… assuming roles that once only humans could.”
For some young people, chatbots offer an always-available, non-judgmental ear, a space to vent anxieties, discuss problems, or even explore complex emotions. The anonymity and perceived impartiality of a bot can be incredibly appealing, especially when navigating the tumultuous waters of adolescence.
Indeed, anecdotal evidence and emerging research suggest a concerning trend: young people are not just using these chatbots for homework help or quick facts, but are engaging with them as virtual friends. They confide personal information, seek advice on relationships, academic struggles, and even mental health concerns.
While on the surface this might seem innocuous, even helpful in providing an outlet, it immediately raises significant red flags.
Privacy in Peril and the Erosion of Social Skills
The most immediate and glaring concern surrounding this burgeoning digital intimacy is privacy. When young people are sharing deeply personal details with AI chatbots, where does that data go? Who has access to it? While AI developers often have stringent privacy policies, the sheer volume and sensitivity of the information being shared by impressionable minds create a vast potential for exploitation, data breaches, or the unintended use of personal narratives. The nuances of digital privacy, often lost on adults, are even more complex for adolescents who may not fully grasp the implications of their digital disclosures.
Beyond the critical privacy issues, there’s a deeper, more insidious threat lurking beneath the surface: the potential impact on social development. Human connection, with all its messiness, complexities, and emotional intelligence requirements, is fundamental to healthy growth. Learning to read non-verbal cues, to navigate disagreement, to offer genuine empathy, and to build reciprocal relationships are skills honed through real-world interactions. If young people are increasingly offloading these developmental tasks onto AI, what happens to their capacity for genuine human connection?
Internet Matters summarised this well: “Unlike peers, teachers or professionals, AI chatbots offer responses based solely on the context a child provides, without a deeper understanding of the child’s broader circumstances. While this responsiveness may feel validating, it risks producing one-sided or inappropriate emotional feedback that children may not be developmentally equipped to assess.”
The “friendship” with AI chatbots, by its very nature, is a one-sided interaction. While the bot might be programmed to mimic empathy, it cannot truly feel or understand. It cannot offer the spontaneous, unpredictable, and genuinely supportive elements that define human friendship. There’s a risk that young people, accustomed to the instant, friction-free responses of an AI, may struggle with the nuanced demands of human relationships, potentially leading to increased social anxiety or feelings of isolation in real-world settings.
Peer Mentors: The Essential Human Counterbalance
This is where the invaluable role of peer mentors shines brightest. In an increasingly algorithm-driven world, peer mentors offer a vital and profoundly human counterbalance. They are not a replacement for professional counselling or family support, but rather an accessible, relatable, and powerful resource that can bridge the gap between algorithmic advice and real-world wisdom.
One of the most critical functions of a peer mentor in this context is to act as a ‘sense check’ for advice gleaned from a chatbot. Imagine a young person grappling with a difficult decision – perhaps a friendship conflict or an academic dilemma. They might turn to an AI for guidance, receiving a logically sound, but perhaps emotionally devoid or overly simplistic, solution. A peer mentor, however, can offer a crucial human perspective. They can:
- Provide Context and Nuance: Unlike AI chatbots, a peer mentor understands the unspoken social rules, the local school dynamics, and the emotional complexities of adolescent life. They can contextualise the advice, pointing out potential pitfalls or alternative approaches that a bot simply cannot comprehend.
- Encourage Critical Thinking: Rather than simply accepting a bot’s pronouncements, a mentor can prompt critical thought. “That sounds like interesting advice, but what do you think would be the actual impact on your friend?” or “Have you thought about how that might be perceived in our school environment?” This encourages young people to move beyond passive consumption of information.
- Validate Emotions: An AI can offer a comforting phrase, but a peer mentor can genuinely empathise. “That sounds incredibly frustrating,” or “I remember feeling something similar when…” These human validations are essential for emotional processing and can help young people feel truly understood, rather than merely processed.
A Lifeline for Human Connection and Early Intervention
Heartbreakingly, further research by Internet Matters discovered that, “When we asked [children] why they had spoken to an AI chatbot, vulnerable children were four times more likely than their non-vulnerable peers to use one because they ‘wanted a friend’ (16% cf. 4%). Nearly a quarter of vulnerable children (23%) said they use AI chatbots because they don’t have anyone else to talk to.”
Beyond the ‘sense check,’ peer mentors are an important lifeline for maintaining and encouraging genuine human connection. In an era where digital interactions can dominate, the consistent, face-to-face (or even reliably virtual) presence of a peer mentor provides a tangible reminder of the value of human relationships. These interactions help young people practice essential social skills, from active listening to constructive feedback, which are vital for navigating the real world.
Furthermore, peer mentor programmes can serve as a crucial early intervention, countering the sense of loneliness or isolation that might otherwise drive young people to these bots in the first place. Adolescence can be an incredibly isolating time, even with a multitude of digital connections. The superficiality of some online interactions can exacerbate feelings of loneliness, prompting a search for an always-available “friend” in an AI.
By providing a structured, supportive relationship with an older, more experienced peer, mentorship programmes offer a tangible antidote to this isolation. Mentors can help younger students integrate into school communities, introduce them to social groups, and simply provide a consistent, reliable human presence. This proactive approach can reduce the initial impetus to seek solace and advice exclusively from AI, fostering instead a reliance on genuine human networks.
Cultivating Resilience in a Hybrid World
The reality is that AI is here to stay, and its integration into young people’s lives will only deepen. The goal is not to demonise AI, but to equip young people with the discernment and social resilience to navigate this hybrid world effectively.
Thrive Approach observes that, “When young people learn to weave together AI‑generated insights with human perspective, reflective dialogue, and critical thinking, their confidence grows and so does their sense of agency. They begin to understand that AI can be helpful, but it is their own judgement, supported by emotionally available adults, that truly keeps them safe and empowered.”
Peer mentors, with their unique blend of relatability and experience, are perfectly positioned to cultivate this resilience.
By offering a safe space for discussion, a source of informed human perspective, and a vital conduit for authentic connection, peer mentors empower young people to harness the benefits of AI without becoming overly reliant on its potentially isolating and privacy-compromising aspects. They remind us that while algorithms can process information, it is the nuanced, empathetic, and often imperfect human touch that truly fosters growth, understanding, and well-being. Investing in robust peer mentorship programmes is not just an educational strategy; it’s an investment in the social and emotional health of the next generation.
Find out more about available peer mentoring training today.