As you may know, there’s a huge AI impact summit in India this week. More than 25,000 people have gathered in New Delhi to discuss AI growth, governance and sustainability. All the big tech CEOs are there, as is the UK’s Deputy Prime Minister, along with former Prime Minister Rishi Sunak (now an advisor to Microsoft), ex-chancellor George Osborne (now with OpenAI) and a whole bunch of UK trade delegates.
But you may not have heard about PAIRS (the Participatory AI Research & Practice Symposium). PAIRS is a fringe event at the summit, run both online and in person. It was set up last year (alongside the Paris AI impact summit) to create a space to explore participatory and community approaches to AI, focusing on practical solutions and aimed at civil society and public sector organisations. The steering group includes members from Connected by Data and Mozilla Foundation.
I joined PAIRS on Tuesday for its online programme. It was a full day (and 10 hours staring at a laptop is enough to make you think in a slightly jaundiced way about human-computer interaction) – but deeply interesting.
Here are 5 things I learned:
1. Practising participatory AI is like being an activist
This work is like activism, said Maria Luce Lupetti (PARJAI) in the opening talk: it exacts a heavy toll on individuals in terms of emotion and labour. We’re challenging prevailing narratives and working against the grain. Her words were echoed in the final session: we’re in a time when there’s a need for activism and contestation, said Anna Colom (The Data Tank). We can’t always work pleasantly with government and institutions, agreed Renee Sieber (McGill University): we have to challenge things when needed.
2. The “participatory turn” in AI is a thing
The ontological turn in anthropology was a moment in the early 2000s when different worlds came to be seen as equally valid, so I was interested to hear that there’s been a recent “participatory turn” in AI. This was raised by Ye Ha Kim (UCL), who is promoting a “Community in the Loop” (CITL) framework for AI governance. (For example, one way of better supporting communities could be to include dataset “nutrition labels” on different AI programmes.) Participation has “become a real buzzword in AI ethics”, agreed Sanjay Sharma (University of Warwick) – but participation is not the same as empowerment. It is often used by companies to legitimise existing or pre-made decisions. Stephanie Camarena (Source Transitions) cited the participatory model of consult > include > collaborate > own (Delgado et al 2023) and noted that many initiatives tend to get stuck around the consult/include stages.
3. Enchanted determinism is a risk
Enchanted determinism is a term coined by researchers Alexander Campolo and Kate Crawford (2020) and cited by Joshua Green (UAL): a narrative of enchantment protects AI’s creators from responsibility. How can we be sold something that is magical, outside the scope of current understanding, yet reliable and trustworthy enough to be used for unprecedented decision-making? The prevailing narrative is of a heady, opulent future where problems are solved without friction. Is this even possible? And, as Omer Bilgin (University of Oxford) asked: are we pursuing utopias that people don’t actually want? In fact, the real magic is in collaboration: we’re seeing a global epidemic of poor mental health and loneliness, said Pierre Noro (Sciences Po), and participatory approaches are fun, joyful and creative – a shared experience in which people share values and goals: “We should never lose this as our north star”.
4. Power must change sides
We’re tired because we’re always challenging AI’s “epistemic apartheid”, said Maria Luce Lupetti, referring to a prejudicial knowledge system in which some views are excluded or not considered important. As researchers, it’s important to avoid performative and extractive engagements. Governments love AI because all other growth is lacklustre, said Pierre Noro: AI development is one thing that contributes positively to GDP. We need to “flip the power structure” – participatory AI can save governments a lot of money (for example, when the Dutch government had enough grassroots feedback to cancel its discriminatory welfare pilot). However, we must also be wary of “dark participation”, said Renee Sieber (McGill University): in polarised countries like the US, this is a real risk – we need “conflict sensitive participatory design” while still making space for conflicting opinions as much as we can.
5. Ethnography makes a difference
I watched around 30 presentations throughout the day. Over ten hours, it was easy for many of them to blur into one: certain key words (privacy, data, governance, empowerment, inclusion, etc) were repeated constantly. It made me realise and appreciate the importance of ethnographic and culturally specific material. For example, one talk that stood out was Viktor Bedö (Royal College of Art) describing his three-year project with SMUC Kitchen – a rescued-food redistribution service in Basel, Switzerland. The project focused on “machine teaching”: individuals from the community trained the algorithm that would then drive the AI-powered delivery service. Video footage of distinctive Basel streets, along with photos and quotes from participants, brought the project alive.
Join the community
According to the Guardian’s Robert Booth, we’re seeing “a battle between a new kind of AI colonialism from the US tech firms and an alternative ‘techno-Gandhism’, in which AI is used for social justice and to benefit marginalised people.” If that’s true, it sounds like participatory AI is more than just a fringe conversation.
If you’re interested, you can join it too: PAIRS is an ongoing community of practice for anyone working or interested in the field of participatory AI. The community hangs out on Discord and you can sign up here.
Photo: Kathryn Conrad / Datafication / Licenced by CC-BY 4.0
Jemima Gibbons
Ethnography, user research and digital strategy for purpose-led organisations. Author of Monkeys with Typewriters, featured by BBC Radio 5 and the London Evening Standard.