Can we make AI safe for children?

I’m working with the IEEE Standards Association to create a global standard for Artificial Intelligence (AI) systems impacting children. The IEEE (Institute of Electrical and Electronics Engineers) was founded in 1963 as an international professional association for electrical and electronics engineers. Today it aims to provide a non-political platform enabling diverse communities to contribute to responsible, sustainable and ethical standards development. It develops standards for worldwide use, driving technological innovation that works for all.

It’s great to be building on my experience with Design Club, the Department for Education and Women Leading in AI‘s education working group. I hope I can combine this with ethical approaches from digital anthropology to help create a standard that is useful and universal. I spent 15 years in social media, watching business models being increasingly honed to extract maximum profit. So I’m concerned about the risks new technologies can bring, especially for younger users.

One in three people online is under 18. And they’re using technology nearly always designed for adults. The current debate about social media bans for under 16s shows that concern for children’s wellbeing is at an all-time high. And last week’s twin rulings against Meta and YouTube regarding social media addiction and misleading the public over child safety have only inflamed anxieties.

Many argue that a ban will backfire, creating a less safe environment for everyone, so it’s good the UK government has launched a broad consultation. It’s also starting a series of trials restricting children’s use of digital technology across different areas.

Children’s rights are human rights

The opening keynote at this month’s AI Standards Summit was from the UN’s Peggy Hicks. She talked about the importance of human rights when setting standards around AI. Cindy Parokkil (ISO) echoed this later, saying that society and technology are intrinsically linked and must be considered holistically. Cindy referenced the Seoul Statement which was agreed at last year’s summit.

The Seoul Statement outlines four key commitments to be prioritised by AI developers:

  1. Actively incorporate socio-technical dimensions in standards development.
  2. Deepen the understanding of the interplay between international standards and human rights, recognising both their importance and universality.
  3. Strengthen an inclusive, dynamic multi-stakeholder community to develop and apply international standards for the design, deployment, and governance of AI.
  4. Enhance public-private collaboration on AI capacity building.

The UN Convention on the Rights of the Child gives children the same fundamental rights as adults, with additional rights considering their vulnerability. But these are seldom considered by software engineers and developers.

Meanwhile…things break

Technology moves fast while regulation struggles to keep up. And this struggle is about more than just the length of time it takes for lawmakers to agree a way forward. It’s also an issue of legacy systems. For example, it’s hard for a 125-year-old organisation like the British Standards Institution to move faster when it’s still publishing standards in PDF format, as one of BSI’s directors, David Cuckow, pointed out at the summit.

This is why last week’s rulings against Meta and YouTube were helpful – because they pave the way for courts to act decisively and immediately when needed.

At a recent LSE event on edtech, Jen Persson (Defend Digital Me) said classroom technologies are creating an environment where children are “being tasked like Amazon workers”. She added that digital apps were asking children increasingly intrusive questions. In theory these features are designed to help with wellbeing assessment – in reality they invade privacy.

It’s clear children need additional protections online, but what more can be done?

Beyond tech exceptionalism

Beeban Kidron says the phrase “tech exceptionalism” sums up everything that’s wrong with technological innovation. “Why should tech live outside the normal boundaries and values of all other aspects of society?” she asked at the recent DFC Research Insights Day.

The 5Rights Foundation (founded by Kidron) and the Digital Futures for Children Centre have already made considerable progress regarding children’s safety online. Five years ago they published General Comment No.25 (GC25). This important UN document explains how children’s rights should work in a digital environment.

The Children’s Parliament in Scotland recently completed a trail-blazing project on AI literacy and children’s rights. They’ve published a report series exploring children’s rights, a film explaining AI by children and a useful teaching pack.

Some other AI and pedagogy/edtech examples:

  • Philosopher Tom Chatfield is developing a “Socratic AI” at City St George’s University.
  • Sonia Livingstone’s 4Cs framework has been widely cited. This asks adults to consider four things (content, contact, conduct and contract) when using AI with children. The framework has been updated to include a fifth “C” – context – which underscores each of the other four points.
  • Last September, Brazil was the first country to pass a specific law protecting all children while using digital tools.

Setting the standard

The IEEE working group will base the new standard on existing principles of age-appropriate design (defined in IEEE Standard 2089). The new standard will specify a full lifecycle process for the design, development, and deployment of all AI systems that may affect children. It will take the Children & AI Code as a starting point.

This is a fascinating area and I’m looking forward to some interesting discussions. I’ll report back on how the working group progresses.

Photo: Jacob Wackerhausen


Jemima Gibbons

Ethnography, user research and digital strategy for purpose-led organisations. Author of Monkeys with Typewriters, featured by BBC Radio 5 and the London Evening Standard.


