AI of the Tiger
When Eye of the Tiger hit the airwaves, it became an anthem of grit and perseverance. Today, AI enters the ring with speed and stamina no human can match, but it still lacks what defines the human journey: resilience forged through struggle. As we stand at a pivotal moment in society and education, keeping humans at the centre has never been more essential.
Artificial intelligence now sits beside young people in their everyday learning - drafting stories, solving equations, even offering advice late at night. A recent Alan Turing Institute study found that more than one in five children aged 8-12 already uses generative tools every week, mainly for creativity and research. An OECD review reports that almost every 15-year-old owns a smartphone and that many spend more than 50 hours a week online, with clear links to anxiety and broken sleep. Researchers agree on three priorities: teach all learners to question what a system produces, protect their wellbeing through strong relationships and make sure every child has fair access to the opportunities AI brings. Learning communities therefore carry a new mandate - keep human connection at the centre while helping every learner treat AI as a thoughtful teammate, creative spark and careful checker of facts.
Generative AI has moved seemingly overnight from experiment to everyday habit. Children across the globe can create a draft, design or dataset in seconds; what they still need are the habits that decide whether those outputs are useful, fair or even real. Yet while enthusiasm runs high, almost half of the Turing study’s young respondents admitted they do not know where an LLM’s words come from. Two kinds of literacy are needed: knowing how to prompt and knowing how to probe. When learners check a model’s sources in history, test its number sense in mathematics, or debate its moral frame in humanities, they practise genuine agency rather than passive consumption.
Yet research also reminds us that relationship - not processing power - remains the strongest predictor of wellbeing. The OECD links excessive, unstructured screen time to rising anxiety, especially for adolescents already navigating identity pressures. The answer is not withdrawal but deliberate balance. Purpose-driven learning communities build rhythms that move between online and offline work: hands-on making, outdoor inquiry, quiet reflection circles. These steady practices keep relationships at the centre, so AI becomes a member of the creative team rather than its replacement.
Friendship brings another layer of complexity. The report Me, Myself & AI shows a third of 9- to 17-year-olds describe chatbot conversations as “like talking to a friend,” with the keenest users often those who feel vulnerable. Mentors should therefore invite learners to compare a chatbot’s “empathy” with a peer’s, exploring where machine responsiveness ends and genuine understanding begins. The goal is not to blame the technology but to grow discernment - what has been created by an algorithm, what comes from human experience and why the difference matters.
Every study raises equity concerns, pointing to gaps in AI access and confidence that follow income, gender and geography. Universal design is helpful here: multilingual interfaces, low-bandwidth options, shared devices and funding for passion projects all assist. Peer mentoring offers a clear route ahead; when early adopters teach their peers, differences in experience shift from a gap into a shared advantage.
Teachers and guides often feel they are catching up. In the Turing workshops, many adults said they were not sure how to use AI ethically. Learning communities can respond with open “sandbox time” - regular sessions where guides and learners experiment side-by-side, publish reflections and refine shared ground rules. Such cycles mirror the inquiry we expect of young people: practical, collegial and intentionally evolving.
Regulatory oversight needs to advance along the same step-by-step pathway. The OECD calls for safety-by-design - age assurance, clear explanations and independent tests built in before a tool is released and used. At the local level, that principle should guide every technology choice: no application is adopted unless its data flows are transparent and its biases openly discussed and understood. Young people should sit on the review groups, making real the Turing Institute’s insistence that their voices belong at the centre of AI policy.
Equally important is the shift from seeing AI as a rival mind to viewing it as a partner. When AI draws a comic that students turn into a short film, or drafts an essay that the group edits together, the machine extends imagination without dictating it. This partnership mirrors the collaboration now common in research labs and start-ups - spaces where humans bring purpose and judgement, and algorithms bring speed and pattern-finding.
Checking for truth completes the picture. As deepfakes and false references spread, every learner needs a practical toolkit: source-tracing searches, evidence triangulation, close reading. A capstone project might have teams build “fake-spotter” guides, turning careful verification into a demonstrable skill - critical thinking in action. Research shows why this matters: children who cannot separate reliable from unreliable media risk both misinformation and growing distrust.
Final Thoughts
The evidence from the Alan Turing Institute, the OECD and other studies points to a balanced imperative: nurture human connection and ethical judgement while embracing AI as an inevitable, even inspiring, component of the learning ecosystem. By weaving technical inquiry across areas of learning, setting healthy rhythms for relationships, supporting fairness and giving young people a seat at decision tables, learning communities can turn a potentially disruptive tool into an engine for shared purpose. The algorithms will keep advancing; our task is to grow curiosity, compassion and the courage to ask: “Does this help? Is it true? Does it help everyone thrive?”
References
Alan Turing Institute (UK)
Understanding the Impacts of Generative AI Use on Children (May 2025). https://www.turing.ac.uk/sites/default/files/2025-05/understanding_the_impacts_of_generative_ai_use_on_children_-_wp2_report.pdf
Children’s Manifesto for the Future of AI – outcome paper from the inaugural Children’s AI Summit (February 2025). https://www.turing.ac.uk/sites/default/files/2025-02/childrens_manifesto_for_the_future_of_ai_0.pdf
Other studies
Children and Generative AI in Australia: The Big Challenges – ARC Centre of Excellence for the Digital Child (June 2025). https://issuu.com/digitalchild/docs/children_and_generative_artificial_intelligence_g
Me, Myself & AI: Understanding and Safeguarding Children’s Use of AI Chatbots – Internet Matters (UK, July 2025). https://www.internetmatters.org/wp-content/uploads/2025/07/Me-Myself-AI-Report.pdf
The State of the World’s Children 2024: The Future of Childhood in a Changing World – UNICEF flagship report (November 2024). https://www.unicef.org/media/165156/file/SOWC-2024-full-report-EN.pdf
How’s Life for Children in the Digital Age? – OECD Policy Insights paper (June 2025). https://www.oecd.org/content/dam/oecd/en/publications/reports/2025/05/how-s-life-for-children-in-the-digital-age_c4a22655/0854b900-en.pdf