The UK government is holding an AI Safety Summit starting on Tuesday. With prime minister Rishi Sunak at the helm, it’s being touted as a global event that will tackle the serious challenges raised by the rapid acceleration of AI models and set the course for innovation worldwide.
But the focus of the summit, and who has and has not been invited to take part, is poised to make things worse, not better, by allowing the harms to remain unchecked and stifling innovation.
There are two fundamental problems with this summit that suggest its aim is not genuine progress. The first is that the guest list is dominated by tech companies like Google, which have no financial or philosophical incentive to police themselves, much less to accept regulation that might weaken their power. Not invited are the world-leading experts in technology, labour and law who work outside Big Tech and are often critical of these companies. These experts are best positioned to speak to practical and actionable concerns about AI, craft meaningful regulation and carve a pathway for equitable innovation. But their voices won’t be heard at the summit.
Given that this misguided approach has characterised much of the government response to the rise of Big Tech generally, and AI specifically, it is reasonable to suspect that producing performatively loud but ultimately toothless policy is intentional. World governments have been captured by Big Tech and its unmatched donating and lobbying power. This conflict of interest has been glaring in many attempts to regulate the internet and in several AI summits so far, including the recent closed-door summit in the US, which primarily hosted the most powerful tech companies. The same conflict is present in the upcoming conference in the UK.
It’s the job of government leaders to serve their constituents, but it’s impossible to protect citizens from the harms and unethical practices of the very companies the government is beholden to. That’s a big, systemic problem, and it leads to the second issue with the UK summit: its framing of the challenges of AI around doomsday rhetoric, a narrative widely dismissed in serious technology circles and recognised as a diversionary tactic that stifles meaningful criticism of the more realistic and immediate problems with the rise of AI.
These very real challenges include the threat an unregulated AI space poses to the workforce. The prevailing culture towards labour is already hostile: companies like Google, Amazon and Meta are unsupportive of marginalised groups and the working classes, and engage in highly unethical union-busting practices against their workers. AI will only accelerate these companies’ ability to build cost-cutting technology that eliminates jobs.