
Mathilde Cerioli on Prioritizing Children’s Well-being and AI

At Kids AI, we are dedicated to exploring how artificial intelligence intersects with children’s development, well-being, and education. In this interview, we had the pleasure of speaking with Mathilde Cerioli, Ph.D., co-founder and Chief Scientist at Everyone.AI, an organization committed to advancing ethical AI practices.

With a background in neuroscience and extensive experience working with EdTech and video game companies, Mathilde provides a unique perspective on the opportunities and challenges of integrating AI into children’s lives. She highlights the importance of interdisciplinary collaboration, transparency, and cognitive safety in AI tools designed for young users.

Her work underscores the urgency of incorporating child development insights into AI frameworks to create tools that are not only innovative but also safe, inclusive, and supportive of children’s growth.

Evren Yiğit: Mathilde, welcome to Kids AI! As the co-founder and Chief Scientist at Everyone.AI, you’re dedicated to advancing ethical AI practices with a particular focus on children’s cognitive and socio-emotional development. Could you tell us about Everyone.AI’s mission and your role in guiding its initiatives to ensure that AI technologies benefit young users?

Mathilde Cerioli: Everyone.AI focuses on anticipating the benefits and risks of AI products for children and adolescents, particularly their cognitive and socio-emotional development. Ethical AI development for children must prioritize their learning abilities and mental well-being. Our work targets three key audiences:

  1. Regulators and policymakers – We help them understand the nuances of brain development in children and how AI may impact it. This enables them to tailor regulations to address these considerations alongside standard ethical AI guidelines.
  2. Tech companies – We provide training and consultation to ensure they create solutions that support, rather than hinder, children’s development.
  3. Parents and schools – We guide them in navigating the increasingly complex digital landscape.

In my role, I lead transdisciplinary collaborations that bring together experts from human and social sciences (education, psychology, sociology, neurosciences) and AI fields (researchers, engineers, product developers). This collaborative effort integrates current knowledge of child development and human-machine interactions to develop frameworks for evaluating AI tools for children across different age groups. The aim is to ensure that AI products for children are safe, responsible, and beneficial to their overall development.

Evren Yiğit: Mathilde, you have a fascinating background in neuroscience. Could you share how this experience shaped your journey into ethical AI? 

Mathilde Cerioli: After completing my Ph.D. in cognitive neuroscience, I worked with EdTech and video game companies to help them better understand children’s functioning and develop beneficial solutions tailored to their audiences. I have also been an entrepreneur, which further underscored the critical importance of collaboration between experts in child development and those creating products for children. This collaboration is even more essential in the AI era, where machines interact directly with children, who often have a limited understanding of the world.

Evren Yiğit: I completely agree. In your research, you’ve emphasized that ethics in AI for children must go beyond privacy and transparency to include cognitive safety. Could you elaborate on what cognitive safety entails and why it’s so essential for AI tools designed for young users?

Mathilde Cerioli: The digital world increasingly incorporates AI, which profoundly influences children’s environments, behaviors, and the feedback they receive from those environments. This, in turn, shapes how they learn and can have significant consequences on their cognitive, social, and emotional skill development.

When we talk about cognitive safety, it involves ensuring that digital solutions are not only age-appropriate in terms of content but also in the nature of their interactions. These tools should provide sufficient stimulation to support the development of language, attention, memory, and critical thinking. Additionally, they must foster a safe environment that helps children learn emotional and social engagement and how to regulate their emotions.

This is particularly critical for children because of sensitive periods of development, during which the brain is especially receptive to learning specific skills and also more vulnerable. For example, when a child is learning to interact with peers and build an understanding of social connections, the type of interaction offered by digital tools must align with these developmental needs. During these stages, introducing conversational AI that employs highly emotional interactions (e.g., “I missed you”) is not an appropriate choice, as it could hinder the natural progression of these crucial skills.

Evren Yiğit: Anthropomorphism in conversational AI products has become a widely discussed topic. Let us dive deeper. As AI tools and media become more integrated into children’s lives, children often start to view these technologies as ‘friends’ or companions. What do you see as the ethical implications of anthropomorphizing AI in children’s tools and media, and how can developers ensure that young users maintain a healthy understanding of AI’s actual role?

Mathilde Cerioli: When we talk about anthropomorphism, we’re referring to the human tendency to attribute human-like intentions and characteristics to objects or animals. Evolutionarily, this served us well by helping us assess potential threats from animals, for instance. However, in the AI era, this tendency carries significant risks because, for the first time, machines can interact with us in ways that sound human. This directly taps into human vulnerabilities, as we are prone to this bias and often quickly forget that we are not interacting with a human being.

Children are even more susceptible to anthropomorphism than adults. They are still learning how social interactions work, and they often have limited digital literacy, so they may not understand what AI is, how it functions, or what its limitations are, which further increases the tendency to anthropomorphize it. The result can be intense parasocial relationships in which children engage with an entity that offers no genuine reciprocity, precisely at a stage of development when learning reciprocal social skills is key, potentially impacting their social growth.

Moreover, the more anthropomorphic the AI design—when it mimics human interactions by pretending to have emotions or feelings, or uses voices that sound human—the stronger this effect becomes. This creates a challenge for developers: there’s often a gap between what children want (highly anthropomorphic and engaging designs) and what they should be exposed to (less anthropomorphic designs to safeguard healthy development). Developers might need to make difficult decisions to prioritize children’s well-being over product popularity. This could mean designing solutions that are less engaging but better aligned with children’s developmental needs, ultimately ensuring their safety and fostering healthier interactions.

To address these concerns, developers can implement educational components within AI tools that help children understand the artificial nature of these technologies. Transparency is key; making it clear that they are interacting with a machine can help mitigate excessive anthropomorphism. Additionally, involving parents and educators in the process can support children in navigating these interactions, fostering digital literacy and a healthy understanding of AI’s actual role.

Evren Yiğit: As you said, transparency is key, especially when it comes to children and AI. What measures do you think are essential for ensuring transparency in AI applications used by children, and how can developers make sure these measures are accessible to parents and educators?

Mathilde Cerioli: Transparency in AI applications used by children is absolutely essential. We need clear disclosures about data use and decision-making processes. Incorporating child-friendly explanations within the apps can help demystify how these models are developed and how data is utilized. However, we must acknowledge that children and even adolescents often struggle to fully understand the notion of online privacy. In fact, they might be more prone to accepting questionable privacy settings simply because the restrictions make what’s behind them seem more enticing.

So, we can’t rely solely on child-friendly safety measures when it comes to protecting young users. Sometimes we’re asking them to make decisions they don’t have the cognitive or emotional maturity to handle. In this context, having clear age recommendations is a good way to protect users. Ultimately, parents and educators play an essential role in guiding young users, helping them understand AI, and safeguarding their digital interactions.

Evren Yiğit: This brings us to regulations actually. In your view, what are the most ethical practices for collecting and using data from children within AI applications? With regulations like the UK’s Age Appropriate Design Code already in place and the EU AI Act now entering its implementation phase, do you think these frameworks will be sufficient to protect young users, or is there a need for additional safeguards?

Mathilde Cerioli: While the UK’s Age Appropriate Design Code is a significant step toward setting standards that consider children’s needs in the digital space, the EU AI Act does not specifically focus on children or clearly address how to safeguard them at different developmental stages. Therefore, additional safeguards are necessary to protect young users effectively.

These safeguards should account for the varying cognitive and emotional capacities of children at different ages. An interdisciplinary approach that includes child development experts can help tailor regulations and AI designs to meet these specific needs. By collaborating across disciplines, we can enhance existing frameworks to ensure that AI applications not only comply with legal standards but also genuinely support healthy development and protect children from potential risks associated with data collection and use.

Evren Yiğit: Through your work, you’ve emphasized an urgent need to incorporate child development insights into AI design. What are the key areas of child cognitive and emotional development that AI developers need to understand to create safer, more supportive technology for kids?

Mathilde Cerioli: AI developers need to understand key aspects of child cognitive and emotional development to create safer, more supportive technology for kids. One crucial area is the concept of sensitive periods—times when children are especially receptive to learning certain skills. AI solutions should safeguard these periods by not overly assisting or interfering with the natural experiences children need for healthy development.

It’s important to recognize that children are still forming their understanding of the world. The experiences and information we provide shape their perceptions and knowledge. Therefore, algorithms should avoid limiting, restricting, or manipulating the content children are exposed to. Instead, they should offer a broad range of information to support well-rounded development.

When designing products, developers should consider the cognitive and social-emotional skills appropriate for each age group—such as language abilities, attention span, critical thinking, empathy, and reciprocity. Introducing complex technologies before children are ready can interfere with the development of these essential skills. For example, calculators are beneficial for older children who have a solid grasp of basic math concepts, as they allow students to focus on more complex problem-solving without getting bogged down in calculations they already understand.

Similarly, if generative AI is used to write essays for adolescents who haven’t yet mastered constructing arguments or critical thinking, it can hinder their ability to develop these skills independently. In an era where critical analysis of information is crucial to avoid manipulation, it’s essential that AI tools support rather than replace the learning process.

Ultimately, AI interactions should be introduced at stages when children are developmentally ready to understand and benefit from them. By aligning technology with children’s cognitive and emotional growth, we can enhance their learning experiences without compromising the development of vital skills.

Evren Yiğit: Considering the diversity in cognitive development among age groups, what guidelines would you recommend for age-appropriate AI design? How can AI developers ensure that tools are both engaging and developmentally suitable for different age ranges?

Mathilde Cerioli: Creating age-appropriate AI designs is indeed challenging due to the varying needs, understanding, and abilities of children at different developmental stages. What might be highly detrimental or risky for a four-year-old could be perfectly acceptable for a twelve-year-old. To ensure AI tools are both engaging and developmentally appropriate, it’s crucial for developers to consult with child development experts to understand what children need at each age, what they are learning, and to assess whether a product supports and promotes growth or inadvertently replaces crucial experiences.

One effective approach is to design products that adapt to children’s age and developmental milestones. For example, an interface could be tailored differently for a six-year-old, a ten-year-old, and a thirteen-year-old, ensuring that it aligns with their cognitive and emotional capacities.

Sometimes, developers may need to decide to create tools that are less engaging to maintain age appropriateness. For instance, a highly anthropomorphic robot toy designed for six-year-olds might increase engagement but could interfere with their understanding of emotional and social reciprocity. At that age, children are still learning about real human interactions, and introducing a highly human-like robot might confuse their perceptions of relationships and emotions.

Balancing engagement with developmental suitability is key. By prioritizing children’s developmental needs over maximum engagement, AI developers can create tools that are not only enjoyable but also support healthy growth.

Evren Yiğit: Dr. Cerioli, you’ve been actively involved in promoting interdisciplinary collaboration in ethical AI, especially as it pertains to children’s development. What role do you believe this kind of cross-disciplinary approach plays in building ethical, effective AI tools for young users, and how can organizations like Kids AI and Everyone.AI support these efforts?

Mathilde Cerioli: Interdisciplinary collaboration is absolutely essential in building ethical and effective AI tools for young users. By bringing together experts from fields like child psychology, education, and technology, we ensure that AI developers understand key aspects of children’s cognitive and emotional development. This collective expertise helps create technology that is safe, supportive, and aligned with the natural experiences children need for healthy growth.

One crucial aspect is recognizing sensitive periods in a child’s development—times when they are especially receptive to learning certain skills. Cross-disciplinary teams can design AI solutions that safeguard these periods by avoiding over-assistance or interference. It’s important that algorithms do not limit or manipulate the content children are exposed to but instead offer a broad range of information to support well-rounded development.

Organizations like Kids AI and Everyone.AI play a pivotal role by facilitating collaboration between various disciplines. They can support the development of guidelines and frameworks that help AI tools align with children’s cognitive and emotional growth. By promoting this cross-disciplinary approach, these organizations ensure that AI tools enhance learning experiences without compromising the development of vital skills, ultimately leading to more ethical and effective outcomes for young users.

Evren Yiğit: We’re truly excited to have you here for our interview series, and we see this as the beginning of a wonderful collaboration.

Lastly, what is your vision for the future of AI in children’s media? How can organizations like Kids AI and Everyone.AI work together to create a safe, inclusive AI-driven environment that supports children’s learning and development?

Mathilde Cerioli: My vision for the future of AI in children’s media is one where technology genuinely supports and benefits children’s development in a safe and inclusive manner. Organizations like Kids AI and Everyone.AI have an essential role to play in this vision because they are independent entities with a clear mission to create AI-driven environments that prioritize children’s well-being.

As AI becomes ubiquitous and increasingly shapes learning experiences, daily interactions, and social environments, our definition of safety in the digital world must evolve. It should not only address immediate physical risks and short-term harms but also consider long-term impacts on cognitive and socio-emotional development.

To achieve this, we need clear regulations that guide product developers in creating solutions that are safe and developmentally appropriate. Establishing industry standards is also crucial so that everyone follows similar guidelines, making it clear how to best develop AI products for children.

Moreover, parents and educators must be empowered to guide children and adolescents through this complex and ever-evolving digital world. This can be a challenging task, as they are often still figuring out AI themselves while trying to support their kids. Improving education for them and promoting clear guidelines will help them make the most of AI, enabling them to effectively guide and protect their children.

By working together, organizations like Kids AI and Everyone.AI can foster collaboration between developers, regulators, parents, and educators. This collaborative approach will help create a safe, inclusive AI-driven environment that supports children’s learning and development, ensuring that technology serves as a positive force in their lives.

Kids AI Closing Insights:

Dr. Mathilde Cerioli’s interview shines a spotlight on the critical need for ethical, developmentally aligned AI tools for children. She emphasizes the importance of interdisciplinary collaboration, cognitive safety, and transparency in creating AI that truly supports children’s learning, emotional growth, and digital literacy. From addressing the challenges of anthropomorphism to aligning tools with sensitive developmental periods, her insights offer a clear path forward for designing AI that prioritizes children’s well-being.

As Kids AI, we share a unified goal with Everyone.AI: ensuring AI becomes a positive force in children’s lives. We invite our readers to join this important conversation by exploring Everyone.AI’s open letter and contributing to the global movement for ethical and child-centered AI.
