“Soon, you will be free to build!” tweeted Elon Musk early on 6th November, once it was clear Donald Trump was winning the 2024 US Presidential election. Musk has been given a key ‘efficiency’ role in Trump’s incoming administration. And a bonfire of regulations is promised. But innovation needs to go hand in hand with social responsibility: not all regulation is bad.

For example, regulation is great when it’s protecting children from the darker side of social media apps. Or ensuring public health standards. Post-Brexit, we in the UK may have to choose between a compromising trade deal with the US or a worse-than-before trade deal with the EU. Chlorinated chicken and hormone-injected beef from America are highly innovative – but that doesn’t mean we want to eat them.

Last year, Musk seemed more in favour of regulation. He was one of many tech leaders who signed a letter expressing fears that the race to develop artificial intelligence (AI) was “out of control” and asking for a six-month pause in large AI experiments. Needless to say, this didn’t happen (although the leading AI labs got some nice free publicity).

Unfettered tech – good or bad?

In an ideal world, innovation and responsibility would go hand in hand – but in practice, are the two mutually exclusive?

This question was asked at the Tech for Good: Responsible AI event I attended two weeks ago. The guest speaker was Andrew Strait, Associate Director at the Ada Lovelace Institute. Strait says research shows that public opinion is overwhelmingly in favour of regulating AI. (See the report from Ada Lovelace Institute’s 2022 survey on public attitudes to AI). He likens a desire for ethical AI to the same desire people have to walk into a supermarket and not have to worry that the apples might be poisoned.

There’s massive pressure on businesses to harness AI, says Strait. But the reality is that many businesses and organisations are struggling to adopt AI products and processes in the workplace. The stakes are particularly high in fields like healthcare or education – lives and welfare are at risk if things go wrong. Strait recommends that sectors foster a “ground-up approach” to complement any regulation from above – this means organisations within the same industry talking to each other more about the issues, putting ethics and responsible innovation at the heart of their discussions.

“The big question is how do we create more of a demand for responsible AI? People aren’t using Google Gemini because it’s more ethical, they’re using it because it fits other requirements.”

So, for example, some organisations are using ChatGPT to sort CVs, says Strait, but that’s not what the tool is designed for. It’s clear people need more education around AI, but the pace of innovation is moving so fast, it’s difficult for people in the industry to keep up – let alone everyday users.

Please build responsibly

Strait notes a push towards AI safety in UK policy circles. An AI white paper was published by Rishi Sunak’s Conservative government in 2023, with a follow-up consultation and government response in March 2024. Although the white paper was headlined “A pro-innovation approach”, this recent summary seems to suggest the Starmer government will take a more cautious path. The authors cite “an opportunity for the UK to become a global regulatory leader”.

Given all the uncertainty around AI, it’s hardly surprising that digital teams in government are working on developing their own AI tools – Redbox was launched recently by the Department of Business and Trade (how generative AI is accelerating outcomes in DBT) while Government Digital Service is trialling an AI chatbot to help people set up small businesses.

While AI is particularly topical right now, there’s a host of other technological innovations at play. And, sadly, we humans can’t always be trusted to do what’s right. Just look at the state of our environment. It’s clear from history that relatively unfettered growth has led to our current unhappy reality. I mean, the world may not feel dystopian if you’re floating in your waterfront pool in Mar-a-Lago. But pretty ropey if you try swimming regularly anywhere off the English coast. Or walking down the street in Paiporta, Valencia, come to that.

The big picture

The planet and its people are not infinite resources to be extracted. But try telling that to an incoming US president who’s reclaimed the slogan “Drill baby, drill”! It’s clear that Trump’s plans for economic growth aren’t going to be hampered by social responsibility.

Last week, digital strategist Ann Longley ran an experiment. She asked Claude AI to analyse Trump’s communication style and use the same patterns to generate an alternative, greener vision for the future.

One snippet:

“We’re going to have the most incredible organic farms, urban gardens – food growing EVERYWHERE! Fresh, healthy, local – no more of this processed junk! Our farmers? They’re going to be the envy of the world! And they’re going to be using methods that actually HELP the environment – it’s genius!”

Wouldn’t it be nice if we could harness populist rhetoric to get people excited about responsible innovation? As this alternative vision suggests, environmental sustainability could be at the heart of new methodologies. Technological development could take place in an ethical environment with green values front and centre.

The digital realm and outer space are both new frontiers for humans. The potential to colonise, to develop, to build in these spaces is infinite. Unsurprisingly, these spaces are inspirational to many – especially billionaires. Because they are ripe for innovation. They are today’s wild west.

But… what about the old frontier – our existing back garden? We already have wealth beyond imagination in the natural world around us: clean air, water, trees, fruit and vegetables are all free. Or at least, they used to be.

We need to find a way to make responsibility great again. I’d love to know if you have any ideas!