Can we make AI safe for children?
I’m working with the IEEE Standards Association to create a global standard for AI systems impacting children. The IEEE (Institute of Electrical and Electronics Engineers) was founded in 1963 as an international professional association for electrical and electronics engineers. Today the organisation develops standards for worldwide use, aiming to provide a non-political platform that enables diverse communities to contribute to responsible, sustainable and ethical standards development, driving technological innovation that works for all.
It’s great to be building on my experience with Design Club, the Department for Education and Women Leading in AI’s education working group, and combining this with ethical approaches from digital anthropology, to help create a standard that is both useful and universal. After 15 years working in social media and witnessing how business models were honed for maximum profit, I’m deeply concerned about the risks new technologies can bring – especially for younger users.
One in every three people online is under 18, and they’re using technology that is nearly always designed for adults. The current debate about social media bans for under-16s shows that concern for children’s safety is at an all-time high. And last week’s twin rulings against Meta and YouTube, covering social media addiction and misleading the public over child safety, have only inflamed anxieties.
While many argue that a blanket ban will backfire, creating a less safe environment for everyone, the UK government is pressing ahead not only with a consultation into social media, gaming and chatbots but also with a series of trials restricting children’s use of digital technology across different areas.
Children’s rights are human rights
The opening keynote at this month’s AI Standards Summit was given by the UN’s Peggy Hicks, who spoke about the importance of human rights considerations in setting standards around AI. Cindy Parokkil (ISO) echoed this later in the day, saying there’s a growing recognition that standards require a socio-technical approach: society and technology are intrinsically linked and must be considered holistically. Cindy referenced the Seoul Statement, which was agreed at last year’s summit.
The Seoul Statement outlines four key commitments to be prioritised by AI developers:
- Actively incorporate socio-technical dimensions in standards development.
- Deepen the understanding of the interplay between international standards and human rights, recognising both their importance and universality.
- Strengthen an inclusive, dynamic multi-stakeholder community to develop and apply international standards for the design, deployment, and governance of AI.
- Enhance public-private collaboration on AI capacity building.
Meanwhile…things break
But technology moves fast while regulation struggles to keep up. And this struggle is about more than the length of time it takes for diverse groups of law-makers to democratically agree a way forward. There’s also the issue of legacy systems: it’s hard for a 125-year-old organisation like the British Standards Institution to move quickly when it’s still publishing standards in PDF format, as BSI Digital ICT director David Cuckow pointed out at the summit. This is why last week’s rulings against Meta and YouTube were helpful – they pave the way for courts to act decisively and immediately when needed.
At a recent LSE event on edtech, Jen Persson (founder of Defend Digital Me) said that robotic classroom technologies were creating an environment where “children are being tasked like Amazon workers”, while digital apps were asking children increasingly intrusive questions. In theory these questions are designed to help with wellbeing assessment and monitoring; in reality they invade privacy.
It’s clear children need additional protections online, but what more can be done?
Work done so far
The United Nations Convention on the Rights of the Child stresses that children are entitled to the same fundamental human rights as adults, but with additional rights specifically addressed to their particular vulnerability.
The 5Rights Foundation, founded by Beeban Kidron, and the Digital Futures for Children Centre, based at LSE, have already made considerable progress in this area. Five years ago they helped create General Comment No. 25 (GC25) – a UN document which explains how children’s rights apply in the digital environment.
The Children’s Parliament in Scotland has also done some great work around AI literacy and children’s rights.
Some other AI and pedagogy/edtech examples:
- Philosopher Tom Chatfield is developing a “Socratic AI” at City St George’s, University of London.
- Sonia Livingstone’s 4Cs framework asks adults to consider four categories of risk (content, contact, conduct and contract) when using AI with children. The framework has since been updated to include a fifth “C” – context – which cuts across the other four.
- Last September, Brazil became the first (and so far only) country to pass a specific law protecting children in their use of digital tools.
It’s a fascinating area and I look forward to learning more. I’ll be reporting back on how the IEEE working group progresses!
Photo: Jacob Wackerhausen
Jemima Gibbons
Ethnography, user research and digital strategy for purpose-led organisations. Author of Monkeys with Typewriters, featured by BBC Radio 5 and the London Evening Standard.