While the UK’s Online Safety Bill is held up by continual political infighting, what are technology companies doing to address the thorny issue of better protection for users?
Recently, I’ve seen three new approaches to policing the internet:
- Community notes
- Subscription focus
- Undercover moderators
Here are more details on each approach:
1. Community notes
Towards the end of James Clayton’s interview with Elon Musk (entertaining and excruciating in equal measure – worth a listen on BBC Sounds), Twitter’s billionaire owner sings the praises of “Community Notes” – a crowdsourced fact-checking system which enables users to add notes to tweets, warning about incorrect content or disinformation.
The notes feature, says Musk, “is extremely powerful for addressing misinformation”.
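As a rough illustration of the idea, here’s a toy sketch in Python of how crowdsourced note scoring can work: a note is only surfaced if raters from different viewpoints agree it’s helpful. The cluster labels and threshold are invented for the example – Twitter’s actual scoring algorithm is considerably more sophisticated.

```python
# Toy sketch of "bridging"-style note scoring: a note is surfaced only when
# raters from more than one viewpoint cluster find it helpful.
# Illustration only - Twitter's real algorithm is more sophisticated.

from collections import defaultdict

def note_is_shown(ratings, threshold=0.7):
    """ratings: list of (viewpoint_cluster, rated_helpful) tuples."""
    votes_by_cluster = defaultdict(list)
    for cluster, helpful in ratings:
        votes_by_cluster[cluster].append(helpful)
    if len(votes_by_cluster) < 2:  # require cross-viewpoint agreement
        return False
    return all(sum(votes) / len(votes) >= threshold
               for votes in votes_by_cluster.values())

ratings = [("left", True), ("left", True), ("right", True), ("right", True)]
print(note_is_shown(ratings))  # True - both clusters rated the note helpful
```

The point of the cross-cluster check is that a note can’t be pushed onto a tweet by one partisan group acting alone – which is also what makes any owner-level override so corrosive to the system.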
Musk likes to style himself as a free-speech absolutist. He has been widely criticised for allowing previously banned or suspended users back onto Twitter while, at the same time, apparently bowing to pressure from national governments (such as India’s) to block or deprioritise content criticising their policies.
Community Notes was previously known as Birdwatch, and was available only in the US until last December, when Twitter announced plans to roll the feature out to international users.
But have any non-US users actually managed to sign up to be a Community Notes contributor? For some reason I can’t seem to verify my phone number – even though I’ve had the same number ever since I set up my first Twitter account.
In the interview, Clayton points out that community note-makers have indeed been active – to the extent of highlighting errors in Musk’s own tweets (here’s Clayton sharing that “corrected” tweet about Musk’s father owning – or not – an emerald mine). Musk clearly isn’t amused, which may be why, after the initial fanfare, the roll-out of Community Notes has been slow.
Verdict: I think Community Notes is fine as a concept: crowdsourced fact-checking works for Wikipedia. But Elon Musk’s “Animal Farm” approach (all moderators are equal, but some are more equal than others) makes it ultimately unworkable.
2. Subscription focus
Rather than tinkering around with new features, maybe a whole new business model is needed? In a recent episode of her podcast, Kara Swisher interviewed Chris Best and Hamish McKenzie – co-founders of the newsletter platform Substack.
Best and McKenzie claim that Substack’s subscription-focused business model avoids the worst excesses of advertising-based social media: first, they say, because the content is long-form and therefore deeper (longer, more complex engagement is prioritised); and second, because the incentivisation model is different.
A long-standing feature of Substack has been the recommendation function, which encourages newsletter writers to share links to other newsletters they like on the platform. This feature is now augmented by Notes, which enables users to share not only newsletters but pretty much any type of content – e.g. posts, quotes and comments.
Notes is essentially a social newsfeed and has been criticised for looking much like Twitter. But Substack’s co-founders are keen to emphasise the difference. As Best says, long-form content on a traditional social media platform will “tank your metrics” – because it diverts attention away from the constant excitement and dopamine hits of the main (ad-filled) timeline. Instead, on Substack:
“If you’re going into Notes and [you find] a post that you love and you want to go and read 5,000 words – we’re winning. We love that. That helps our business model. We want to encourage deep engagement, because that means that you might actually go and give your email to that person. You might go on to pay to subscribe to them.”
Verdict: my newsletter is on Substack and it’s been a smooth experience so far. LinkedIn also follows a subscription-based model, and it’s the least toxic social media platform around. I can see how business models with a subscription focus, rather than prioritising inflammatory (and often hate-filled) content, are able to encourage more thoughtful interactions.
3. Undercover moderators
While it’s clear that we’ve yet to resolve online safety issues on the traditional social web, it’s unfortunate that the methods we have developed don’t translate well into the three-dimensional space of virtual reality.
Last year, a reporter for Channel 4’s Dispatches programme visited the metaverse. She found users at risk of racism, sexually explicit behaviour and sexual assault – and little evidence that Meta was in control of the situation. As virtual worlds become ever more sophisticated, safety problems will only intensify.
In an article for MIT Technology Review, Tate Ryan-Mosley interviewed Ravi Yekkanti – one of a new group of workers patrolling the metaverse as undercover moderators. Yekkanti poses as an “average user” in order to maximise his chances of directly witnessing antisocial behaviour. He and his colleagues (who work for a company called WebPurify) are trained to catch and report rule violations, and then respond appropriately – by muting, reporting or removing offenders.
As Ryan-Mosley points out:
“Traditional moderation tools, such as AI-enabled filters on certain words, don’t translate well to real-time immersive environments [so] mods like Yekkanti are the primary way to ensure safety”.
“The immersive nature of the metaverse means that rule-breaking behavior is quite literally multi-dimensional and generally needs to be caught in real time. Only a fraction of the issues are reported by users, and not everything that takes place in the real-time environment is captured and saved.”
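To make the contrast concrete, here’s a minimal sketch (my own illustrative Python, not WebPurify’s tooling) of the kind of word filter that works on text posts – and that has nothing to scan when abuse arrives as live voice chat or gestures in VR.

```python
# Minimal keyword filter of the kind used on text posts (illustrative only).
# In VR, abuse often arrives as live voice and gesture, so there is no text
# string to scan - one reason human moderators are still needed there.

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms

def flag_post(text: str) -> bool:
    """Return True if the post contains a blocklisted word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

print(flag_post("An example post containing slur1."))  # True
print(flag_post("A perfectly polite post."))           # False
```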
Verdict: like real-world police, undercover moderators can no doubt help detect and prevent online extremism, abuse and hate. But I imagine they are expensive to recruit and retain, and not particularly scalable. Undercover moderators also need to track everything they experience, including other players’ overheard conversations – creating a privacy issue. This feels like a short-term fix: in the longer term, the problem is more likely to be solved by AI.
Photo by King’s Church International on Unsplash
Jemima Gibbons
Social media consultant and author of Monkeys with Typewriters (featured by BBC Radio 5 and the London Evening Standard). Get your social marketing up and running with my Social Media Launch Pack!