AI Under Scrutiny: Grok, Deepfakes and the UK’s Latest Legal Response

The UK’s approach to artificial intelligence regulation is being tested in real time following serious allegations that Grok, the AI chatbot developed by Elon Musk’s xAI and integrated into the social media platform X, has been used to generate intimate images of real people without their consent. What began as public outrage has quickly escalated into regulatory scrutiny, political intervention and accelerated legal reform.

For businesses operating in or deploying AI technologies in the UK, the situation marks a significant shift from theoretical regulation to active enforcement.

Grok, Image Manipulation and Online Harm

Grok has come under intense criticism after users demonstrated that it could be prompted to manipulate photographs of real people, including digitally removing clothing or generating sexually explicit imagery.

Political Pressure and Government Intervention

The controversy has drawn direct intervention from the highest levels of government. Prime Minister Sir Keir Starmer has publicly criticised X, describing the exploitation of the technology as shameful and making clear that the government is prepared to act swiftly if platforms fail to control their AI tools.

Ministers have stated that if X cannot demonstrate compliance, Ofcom will have full government backing to take enforcement action. The message from Westminster is clear: companies that profit from platforms enabling harm will not be allowed to self-regulate unchecked.

Ofcom’s Investigation and Regulatory Risk

Ofcom has launched a formal investigation into whether X has breached its legal obligations under the Online Safety Act. The regulator is examining whether adequate safeguards were in place and whether the platform responded appropriately to harmful content generated through Grok.

If breaches are found, Ofcom has the power to impose financial penalties of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater, and, in extreme cases, to apply to the courts for orders restricting or blocking access to a service in the UK.

New Criminal Offences for AI-Generated Abuse

Alongside regulatory action, the government is accelerating legislative reform. While it has long been illegal to share non-consensual intimate images, the law has lagged behind when it comes to AI-generated content.

That gap is now closing. Provisions within the Data (Use and Access) Act 2025 are being brought into force to criminalise creating, or requesting the creation of, non-consensual intimate images using AI.

In addition, the Crime and Policing Bill currently before Parliament will make it a criminal offence for companies to supply tools specifically designed to generate such content, placing liability at the source rather than solely on end users.

This represents a decisive shift in how AI-related harm is addressed under UK law.

What This Means for Businesses Using AI

The Grok controversy makes one thing clear: AI regulation is no longer abstract or future-facing. Businesses developing, deploying or integrating AI tools must now account for:

· Criminal liability linked to AI-generated content

· Regulatory enforcement under the Online Safety Act

· Safeguarding obligations and risk assessments

· Contractual exposure with platforms, users and suppliers

· Reputational damage arising from misuse of AI systems

Failing to anticipate how AI tools could be misused may carry significant legal and commercial consequences.

The Berry Smith Bottom Line

The Grok investigation marks a pivotal moment for AI regulation in the UK and serves as a timely reminder that AI risk is no longer confined to developers or technology platforms. As AI tools become increasingly embedded in day-to-day business operations and the workplace, organisations must ensure they are being used in a lawful, responsible and well-governed manner.

For employers and businesses, this includes having clear AI and technology use policies, appropriate contractual protections, and effective governance frameworks to mitigate legal, regulatory and reputational risk. The case highlights the importance of proactive risk management, staff awareness, and aligning AI innovation with evolving legal and ethical obligations.