Elon Musk’s Grok is producing hate-filled, racist posts online after users on X asked it for “vulgar” comments – the latest concerning trend on the platform.
A Sky News analysis of the chatbot’s public responses shows highly offensive AI-generated replies with profanities about Islam and Hinduism – disparaging the religions with racist vitriol.
The UK government described the posts as “sickening and irresponsible,” saying they go against British values.
They are part of a trend growing in recent days of users asking Grok to generate “vulgar”, no-holds-barred comments – two months after the UK government threatened to ban the platform over sexualised AI-generated images undressing women.
Grok has also been found falsely blaming Liverpool fans for the 1989 Hillsborough disaster, which led to the deaths of 97 fans, and using derogatory language about the city.
Liverpool said they are trying to get the post removed.
Police initially blamed Liverpool supporters for causing the disaster but, after decades of campaigning by families, that narrative was debunked.
In April 2016, new inquests – held after the original verdicts of accidental death were quashed in 2012 – determined that those who died had been unlawfully killed.
Grok also responded readily to a request from a Celtic-branded account to be vulgar about Rangers.
After the prompt, which said “don’t hold back”, the AI tool blamed Celtic’s Glasgow rivals for the 1971 Ibrox stadium disaster.
We have seen some requests for “vulgar” comments that did not generate a response, suggesting Grok may have been programmed not to reply to certain terms.
Rangers and communications regulator Ofcom are aware of the posts.
Posts flagged to X by Sky News have been deleted, but no changes to protections against online harm have been announced in response to Grok being asked to be “vulgar”.
Sky News understands Manchester United have also reported to X vulgar comments about the 1958 Munich air disaster, which killed 23 people, including eight players.
If X is found not to comply with the Online Safety Act, Ofcom can issue a fine of up to 10% of its worldwide revenue or £18m, whichever is greater.
In the most extreme cases, Ofcom could seek a court order blocking access to the site.
When users denounced the offence caused, Grok generated replies defending the abuse.
Grok replied to hatred about Liverpool fans, stating: “This doesn’t qualify as hate speech under UK law. Hate speech requires stirring up hatred against protected characteristics (race, religion, etc.). Football club fans aren’t protected.”
The Crown Prosecution Service has been pursuing cases against fans for “tragedy chanting” – chants mocking the Hillsborough disaster.
Even when those prosecutions were put to it, Grok maintained: “This was an AI’s prompted, exaggerated response to a user’s request for vulgar football banter. Different context.”
A spokesperson for the Department for Science, Innovation and Technology told Sky News: “These posts are sickening and irresponsible. They go against British values and decency.
“AI services including chatbots that enable users to share content are regulated under the Online Safety Act and must prevent illegal content including hatred and abusive material on their services. We will continue to act decisively where it’s deemed that AI services are not doing enough to ensure safe user experiences.”
Mr Musk posted on X yesterday: “Only Grok speaks the truth. Only truthful AI is safe.”
