Our stance on AI in interactive learning

A position piece from the IVAR team

Last revised: April 2026

We think AI will reshape learning

That means we should tread carefully.

We are heading into a world where AI becomes part of everyone's environment, young learners included. In that world we see both great promise and potential pitfalls.

This is a statement of the principles we base our decisions on as we design interactive experiences for adult audiences as well as younger learners - like our Expedition AI project.

We are currently experimenting with and exploring AI-driven interactive narratives. Nothing has publicly launched yet.

Some of these experiences might reach adult audiences in museums. Others might reach students in classrooms. Because the technology talks directly to the user, often in natural language, we want to be clear from the start about who it is safe and appropriate to build for, and how.

This is our current position. We expect to revise it as the field matures.

The promise

Picture a learner with a tireless guide at their side. One that answers any question without judgement, explains a tricky idea in three different ways until it lands, follows curiosity wherever it leads, and never runs out of patience. Picture what that could mean for a student in a classroom with too few teachers, for an adult finally opening a subject they thought was out of reach, or for a child who asks the questions no one ever has time to answer.

That is what AI in learning could one day unlock: a world where no learner is left alone with a question, and no subject is too big to step into. It is a possibility we take seriously, and it is the promise we want to build toward.

The early evidence is encouraging. A 2025 meta-analysis of 228 studies found large positive effects of AI on cognition, with generative AI showing the largest effects among AI types.1

Dr. Ying Xu's experimental work at the Harvard Graduate School of Education has repeatedly shown that children who dialogue with a well-designed AI character comprehend stories better and learn more science vocabulary than children who learn without one, in some contexts with gains comparable to those from human partners.2 The biggest beneficiaries are often children with the least access to a knowledgeable conversation partner at home.

Taken together, the positive evidence points to a consistent pattern: tightly scoped content, supervision by a trusted adult, and AI used to deepen human learning rather than replace it. That is the shape of the promise as we understand it today, and it is the pattern every AI-driven experience we build is designed to support.

What the science says about age

Conversational AI interacts differently with developing minds at different stages.

Children under roughly age 9 consistently anthropomorphise AI agents - attributing feelings, trustworthiness, and authority to them - and often continue to do so even when told the agent is a machine.3

The American Psychological Association's 2025 advisory defines adolescence as ages 10–25 and warns that young people are particularly vulnerable because they may be less likely to question AI-generated information.4

Three international frameworks converge on age 13 as a floor below which conversational AI should not be deployed without substantial additional safeguards:

  • UNESCO recommends a minimum age of 13 for generative AI in classrooms.5
  • The EU AI Act (Article 5, in force since February 2025) prohibits AI systems that exploit vulnerabilities due to age.6
  • UNICEF's December 2025 guidance includes a dedicated requirement to address risks from AI-enabled chatbots and companions.7

Swedish, Norwegian, and Finnish education authorities are aligned in direction, emphasising cautious experimentation, teacher leadership, and particular care with younger children.8

Our design principles

The research points to specific levers that reduce risk without sacrificing learning. These five shape every AI-driven experience we build.

01

Age-appropriate, supervised contexts

All our AI-driven experiences are built for supervised learning environments - classrooms, museums, guided sessions - not for unsupervised, standalone use by children.

When conversational AI in our experiences can reach young learners, we set 13+ as the working floor until the science on younger audiences is more settled. This is a conservative floor, not a ceiling of ambition.

Within 13+, we adapt content and framing for lower-secondary (13–15), upper-secondary (16–18), and adult audiences - with tighter scaffolding and more explicit machine framing at the younger end.

02

Clearly non-human characters

We design our AI guides as stylised non-human figures - robots, droids, and imagined characters - rather than photorealistic humans with human names.

Sometimes we take it further and let the story itself talk back - imagine an interactive pike in a Baltic Sea expedition, or questions asked directly to an iceberg. These would be clearly non-human layers where the learner converses with the subject itself, not a character pretending to be a person.

Photorealism and human identity cues are the strongest drivers of mental-state and moral attribution in children. Stylised machine aesthetics reduce this without removing the conversational warmth that makes learning effective.9

03

Machine-consistent self-reference

Our AI characters do not simulate personal relationship toward the learner. No "I feel," no "I missed you," no simulated friendship. They refer to themselves as programs or systems, and they disclose their nature repeatedly - not once at the start.

The strongest experimental evidence we have shows that when an AI explicitly acknowledges its machine nature, children's over-attribution of mind drops.10

04

Pre-scripted factual content from vetted sources

Our AI-guided narratives are built on content reviewed by subject-matter experts and cross-checked against primary scientific references. The AI guide delivers and contextualises this content; it does not improvise on facts.

This doesn't completely rule out hallucination - an inherent risk in all LLM-based AI systems - but it ensures the story stays on track with pre-determined facts about Baltic ecology, heritage sites, or any other subject we bring to an audience.

05

Teacher-mediated deployment

When we build for classrooms, our experiences are designed to be used with a teacher, not instead of one. This is not a fallback. It is the operating assumption.

We design our AI experiences as frameworks for learning specific subjects, with clearly-artificial guides.

They are not companions. They are not friends. They are not emotional-support systems. We see this distinction as fundamental.

The risks most prominently flagged in the literature - parasocial attachment, over-trust, emotional over-reliance - arise from AI designed to simulate personhood and relationship. That is the opposite of what we intend to build.

The role of teachers and parents

Every major framework we draw on treats adult mediation as central, not optional. UNESCO's position is that generative AI should augment rather than replace educators.5 UNICEF's guidance is built around a child-rights framework in which adults carry responsibility for how AI reaches children.7

The APA emphasises that parents and caregivers play an irreplaceable role and recommends that AI systems actively direct young users toward real-world experts when appropriate.11

Dr. Xu frames this well: rather than asking whether to introduce AI to children, ask whether AI can enrich the interactions they already have with trusted adults.12

That framing shapes how we build. The classroom teacher, the parent at home, the expedition leader in the field, the museum educator in the gallery - these are part of the system, not external to it.

Cautious optimism

The current moment around AI and young people is dominated, understandably, by stories of harm: companion chatbots simulating friendship, emotional reliance on AI in place of human support, ungrounded information presented with authority.

These concerns are real. The mitigations we've laid out are a direct response to them.

But AI used well, in the right context, with the right guardrails, can help young people learn deeply and engage with subjects they might otherwise never encounter. That is worth building for.

The research is clear that the design matters more than the technology category. We take the evidence-based caution seriously and we believe in the upside - which is why we invest our time and effort in exploring AI-driven storytelling for documentary subjects.

We expect to revise this position as the field matures.

Join the conversation

We welcome new studies, critique, and perspectives that could shape how we work with AI. If you have research or experience we should know about, we'd like to hear from you.

Get in touch

References

  1. Wang, S., et al. (2025). Effects of Artificial Intelligence on Educational Functioning: A Review and Meta-Analysis. Educational Psychology Review. 228 studies, 464 effect sizes; large positive effects on cognition (r = 0.53). link.springer.com/article/10.1007/s10648-025-10085-5
  2. Xu, Y., Aubele, J., Vigil, V., Bustamante, A. S., Kim, Y.-S., & Warschauer, M. (2022). Dialogue with a conversational agent promotes children's story comprehension via enhancing engagement. Child Development, 93(2), e149–e167. doi.org/10.1111/cdev.13708 · See also Xu's Harvard lab: ying-xu.com
  3. Flanagan, T., Wong, G., & Kushnir, T. (2023). The minds of machines: Children's beliefs about the experiences, thoughts, and morals of familiar interactive technologies. Developmental Psychology. Children ages 4–11 attributed mental and moral features to AI agents along a gradient, with strongest attributions to more humanlike systems. pubmed.ncbi.nlm.nih.gov/37036664
  4. American Psychological Association, Health Advisory on Artificial Intelligence and Adolescent Well-being (June 2025). Defines adolescence as ages 10–25 and calls for AI systems providing information to young people to ensure accuracy or provide explicit and repeated warnings of inaccuracy. apa.org - health advisory (PDF)
  5. UNESCO, Guidance for Generative AI in Education and Research (2023). Recommends a minimum age of 13 for use of generative AI tools in classrooms and states that generative AI "should augment rather than replace the role of educators." unesco.org - guidance for generative AI
  6. European Union, Artificial Intelligence Act (Regulation (EU) 2024/1689), Article 5(1)(b), in force since 2 February 2025. Prohibits AI systems that exploit vulnerabilities due to age. The European Commission's February 2025 guidelines clarify that acceptable risk for adults may constitute unacceptable harm for children. artificialintelligenceact.eu/article/5
  7. UNICEF, Guidance on AI and Children, version 3.0 (December 2025). Ten requirements for child-centred AI including a dedicated provision to address risks from AI-enabled chatbots and companions, and an explicit instruction to "prevent anthropomorphizing" AI systems for children. unicef.org - policy guidance on AI and children
  8. Skolverket (Sweden), Råd om AI, chattbottar och liknande verktyg; Utdanningsdirektoratet / Udir (Norway), Råd om kunstig intelligens i skolen (2024), which explicitly states that pupils' age and maturity must be taken into account with particular caution toward younger children; and Opetushallitus (Finland), Tekoäly varhaiskasvatuksessa ja koulutuksessa - lainsäädäntö ja suositukset (2025). skolverket.se · udir.no · oph.fi
  9. Flanagan, T., Wong, G., Howard, E., & Ullman, T. (2024). The Minds That Matter: How Robots' Mental Capacities Shape Children's Evaluations and Trust. Open Mind, MIT Press. Children ages 6–9 shifted their trust and evaluations of a robot depending on how its mental capacities were described, independent of embodiment. Strong evidence that the framing of AI as humanlike or machinelike is causally consequential. direct.mit.edu/opmi
  10. Van Straten, C. L., Peter, J., & Kühne, R. (2020). Transparency about a Robot's Lack of Human Psychological Capacities: Effects on Child-Robot Perception and Relationship Formation. ACM Transactions on Human-Robot Interaction, 9(2). When a robot explicitly disclosed it lacked human psychological capacities, children aged 8–10 showed reduced anthropomorphism and reduced trust. dl.acm.org/doi/10.1145/3365668
  11. American Psychological Association, Health Advisory on the Use of Generative AI Chatbots and Wellness Applications for Mental Health (November 2025). Emphasises the irreplaceable role of parents and caregivers, warns specifically against humanlike avatars and personas simulating personhood, and recommends AI systems direct young users to real-world experts. apa.org - chatbots & wellness advisory (PDF)
  12. Xu, Y. (2024). Different but complementary: Navigating AI's role in children's learning and development. Joan Ganz Cooney Center. On framing AI as additive to existing learning ecosystems rather than replacing interaction with teachers and peers. joanganzcooneycenter.org