
KidsAI Interview Series: Jeffrey Kluge on AI Ethics and Child-Centric Design

In this edition of the KidsAI Interview Series, our co-founder Evren Yiğit sits down with Jeffrey Kluge, an AI Policy Advisor and AI Ethics Consultant with deep expertise in AI governance, compliance, and child safety. Jeffrey has built that expertise through his extensive work operationalizing AI ethics, helping businesses navigate the challenges of transparency and accountability in child-centric AI design. Today, we explore how businesses can put these principles into practice to build responsible AI systems for young users.

Evren Yiğit: Jeffrey, it’s great to have you with us and to welcome you as part of our consultants’ hub here at Kids AI. You have an impressive background in AI governance and ethics. Can you tell us what sparked your interest in this field and how you became an expert in AI ethics, especially in areas related to children’s safety?

Jeffrey Kluge: I think my origin story helps explain my resolve around ethical technology for kids.

My parents brought home an Apple II computer when I was 12. I had a few games, and being curious about how things worked, I taught myself how to reprogram them. Unfortunately, my worldview was not as broad then as it is today, and I followed the path people recommended at the time, which was going into finance. I gravitated toward building a great business serving founders and senior executives. Along their journeys, I became an ad-hoc mentor and advisor, seeing firsthand their struggles and successes while also hearing the other side of the story from their VCs.

Evren Yiğit: Your journey from technology to finance and then to AI ethics is fascinating! At Kids AI, we emphasize the importance of translating complex legal guidelines into practical steps that ensure both compliance and child safety. Can you share your approach to helping businesses achieve this balance?

Jeffrey Kluge: As a Fellow at ForHumanity, I’ve helped draft certification schemes that businesses can use to demonstrate compliance with global regulations. My approach focuses on translating the legal principles of global data protection, privacy, safety, and kids’ codes into actionable business steps. For instance, we guide ethics committees to understand how their decisions affect the Best Interests of the Child. We simplify complex requirements into binary criteria, making it easier for top management to grasp their roles and responsibilities in ensuring a duty of care. Ultimately, my goal is to operationalize compliance through design, ensuring that protecting children becomes an integral part of the business process.

Evren Yiğit: That’s a practical approach. Transparency and accountability are central to our mission at Kids AI, especially when it comes to training large language models. What steps can businesses take to ensure ethical data collection and use, particularly when engaging with children?

Jeffrey Kluge: The first step businesses must take is an honest assessment, asking, “Who is our user, customer, or the person interacting with our online product or service?” While legacy laws often define a child as under 13, new regulations recognize the gap for those aged 13-18. It’s crucial to consider how we treat this group within the same ethical framework. Are we respecting the requirements placed on us? Do we have clear, informed consent, and was it obtained in a way that’s easy for this age group to understand? Too often, privacy policies are written by attorneys for attorneys, which fails to meet the needs of younger users.

Evren Yiğit: Absolutely, ensuring transparency in a way that young users can actually understand is critical. You’ve often highlighted that designing for child safety requires more than just legal compliance. How can businesses move beyond a compliance-driven approach to a genuinely ethical, child-centric design framework for their AI products?

Jeffrey Kluge: Moving beyond compliance requires businesses to know the age of their users, allowing design and interfaces to naturally align with their needs. For example, if you know your user is 16, then you must consider how you’re delivering recommendations, moderating their spaces, and deciding who they’re connected with. All of this should circle back to their Best Interests. The key is recognizing that users aged 13-18 are not simply younger adults but their own segment, with unique needs and concerns. When you design with their interests at the forefront, ethical and child-centric design emerges naturally, and compliance becomes a byproduct rather than the primary focus.

Evren Yiğit: That’s an excellent point, and it’s something we focus on heavily at Kids AI: understanding users’ needs first. You’ve pointed out the accountability gap for children aged 13-18 in many AI systems. This age group often falls into a gray area where they are treated neither as children nor as fully autonomous adults, leaving them particularly vulnerable to ethical risks like data misuse, privacy violations, and inappropriate content. How can businesses better address the unique needs of this age group, ensuring their AI systems provide both transparency and ethical protections while promoting safe, age-appropriate engagement?

Jeffrey Kluge: The foundation of addressing the needs of 13-18-year-olds lies in platforms being honest about users’ ages and implementing robust age verification practices that align with the risks of the AI, algorithmic, or autonomous (AAA) systems they interact with. I’ve long advocated for a child-centric Data Protection Impact Assessment (DPIA), which I liken to a middle school math problem—if you do your homework and show your work, you’ll get significant credit. This means clearly documenting the tensions, tradeoffs, and decisions made by the business along the way.

If you don’t have this DPIA completed, and a regulator requests it, you’re unlikely to receive much leniency. From my perspective, if a company has done the work, transparently shown its reasoning, and presented this to a regulator, it’s difficult to see how they wouldn’t accept it—unless, of course, the company ignored basic safety protocols. Transparency and preparedness are key to ensuring this age group is protected while engaging safely and appropriately with AI systems.

Evren Yiğit: Exactly, transparency and preparedness are essential. At Kids AI, we advocate for Child-Centric Data Protection Impact Assessments (DPIAs) as a way to understand and mitigate risks. How do DPIAs help businesses ensure that their AI systems ethically engage with children and align with best practices for safety and transparency?

Jeffrey Kluge: I touched on this earlier, but let me expand. A child-centric Data Protection Impact Assessment (DPIA) helps businesses by asking the right questions from the outset—whether their product or service is likely to be accessed by a child. It’s straightforward, but essential to revisit. From there, the DPIA dives into design-related issues: What data are you collecting? How is it being used? How are recommendations made? Who are users being connected to? All of these questions must be answered through the lens of the Best Interests of the Child.

The beauty of a child-centric DPIA is that it takes the high-level standards championed by many organizations and breaks them down into actionable steps. When designing for the 13-18 age group, businesses have to move beyond traditional hierarchies and silos, encouraging cross-team collaboration to understand how decisions will impact both the business and the children they’re serving.

Evren Yiğit: Cross-team collaboration is vital, especially when decisions impact young users. The regulatory landscape around AI and children’s safety is evolving rapidly. What role do you see global governments or institutions playing, and how can businesses stay ahead of these changes while maintaining ethical standards?

Jeffrey Kluge: The key to staying ahead of the evolving regulatory landscape is independent third-party audits of AI, Algorithmic, and Autonomous (AAA) Systems. These audits provide a safeguard, ensuring businesses have a clear process for remedying unknown or previous violations while also promoting transparency through public disclosure. This closes the loop on accountability. If a business is found lacking in certain areas, a grace period could allow them to address these gaps. Conversely, if the business meets all standards, achieving certification should be seen as a mark of excellence.

For non-BigTech businesses, this could offer a significant competitive advantage. If a small or medium enterprise (SME) successfully designs around criteria like the Digital Services Act’s Article 28 Online Protections for Children, why wouldn’t they highlight it as a public commitment to protecting kids on their platforms? In today’s market, that kind of certification could be a powerful differentiator.

Evren Yiğit: That’s very interesting. You’ve noted that big tech has historically resisted regulation, but there’s a growing shift towards ethical AI. How can businesses align themselves with this movement, and what competitive advantages can come from leading in transparent and ethical innovation?

Jeffrey Kluge: The key is to embrace ethical AI early and get there first. While BigTech generates tens of billions each quarter, they often delay taking meaningful steps toward ethics, seeing it as potentially reducing profits. But the reality is, kids are highly attuned to how these platforms make them feel, and they don’t like it when their experiences become stressful due to poor oversight of User-Generated Content or neglected codes of conduct. When businesses fail to create safe environments, kids leave.

For smaller businesses, aligning with ethical AI and transparent innovation not only builds trust but provides a competitive advantage. By leading with integrity, companies can differentiate themselves from BigTech, fostering loyalty and creating a safer, more enjoyable experience for younger users.

Evren Yiğit: Cross-functional collaboration is something we strongly advocate for at Kids AI. How can companies ensure that their teams work together effectively to create AI products that not only comply with regulations but also align with a shared moral framework that prioritizes children’s safety and well-being?

Jeffrey Kluge: The first step is ensuring the business operates from a clear moral and ethical framework. Many companies have Codes of Conduct or Ethics, but these are typically focused on employee behavior, not on how users should behave. For example, one of the largest platforms advocates for open-source AI, but without a code of conduct for users, people are left to interpret the rules on their own. This lack of structure makes it difficult to hold the community accountable, which could ultimately work against the Best Interests of the Child.

To foster effective cross-functional collaboration, businesses need to establish a shared moral framework that prioritizes children’s safety and well-being. Teams from different departments—legal, product development, ethics, and design—must come together, guided by this framework, to create AI products that not only comply with regulations but also ensure the ethical engagement of children on their platforms.

Evren Yiğit: Having that shared moral framework is indeed fundamental. Generative AI is evolving quickly, and you’ve emphasized the need for continuous access to expertise. How can businesses balance the fast pace of innovation in generative AI while maintaining ethical standards, particularly for products aimed at children?

Jeffrey Kluge: I see many businesses trying to solve this problem using outdated approaches. Just the other day, I noticed a large gaming company hiring another legal expert for a specific line of business, when allocating those resources to AI Ethics and kids’ codes would have been far more impactful. The status quo has shifted—there’s now a growing demand for ethical and responsible systems designed specifically with children in mind.

To balance the fast pace of innovation in generative AI with maintaining ethical standards, businesses need to embed expertise in AI Ethics and children’s safety into their teams. Rather than just focusing on legal compliance, they must prioritize building ethical frameworks that evolve alongside their technology. This not only ensures products are safe for children but also aligns with the broader societal push for responsible innovation.


Kids AI Closing Insights

Our conversation with Jeffrey Kluge highlights the critical importance of merging AI innovation with child-centric ethical design. At Kids AI, we firmly believe that creating safe, ethical, and transparent digital environments for children goes beyond regulatory compliance: it’s a moral imperative. Jeffrey’s insights emphasize the need for businesses to move beyond a compliance-driven approach, building ethical frameworks that truly protect young users. Cross-functional collaboration, transparency, and a shared moral framework are essential for ensuring AI systems are designed with the Best Interests of the Child in mind.

By prioritizing these principles, companies can build trust, foster loyalty, and lead the way in responsible AI development. As regulations continue to evolve, those who proactively embrace transparency and accountability will not only gain a competitive advantage but also contribute to a safer digital future for the next generation. At Kids AI, we are committed to advocating for AI systems that go beyond compliance, ensuring they genuinely protect and empower children.
