Thursday, April 30, 2026
Technology

What's with all the goblins? OpenAI tells ChatGPT to stop talking about mythical creatures

Liv McMahon
Technology reporter

Getty Images
The ChatGPT developer discovered the "strange affinity for goblins" while testing tools powered by newer systems, such as its coding agent Codex

ChatGPT-maker OpenAI has had to instruct some of its AI tools to stop talking about goblins, after finding the term had randomly crept into responses.

In a blog post on Thursday, the company said it spotted increased mentions of the mythological creatures, as well as gremlins, in ChatGPT, powered by its latest flagship model, GPT-5.

After the issue was flagged by users and employees, OpenAI took steps to mitigate it, including telling its coding tool Codex not to refer to goblins unless relevant.

The episode highlights the challenges AI firms face in preventing training processes from rewarding and reinforcing errors such as language quirks.

OpenAI said it first noticed increased mentions of goblins, gremlins and other creatures after the launch of GPT-5.1 in November.

"Users complained about the model being oddly overfamiliar in conversation, which prompted an investigation into specific verbal tics," the company wrote in its blog post on Thursday.

It added that after a researcher who had noticed a few "goblin" mentions asked for the matter to be investigated, developers found the term's appearance in ChatGPT responses had risen by 175% since GPT-5.1's launch.

They meanwhile found that mentions of "gremlin" rose by 52%.

The increases, while large, may account for only a small share of responses overall.

According to OpenAI, "a single 'little goblin' in an answer could be harmless, even charming," but the uptick in their appearance across output warranted investigation.

Ahead of OpenAI's blog post detailing the issue, some social media users flagged a strange detail among lines of code instructing the company's coding assistant Codex how to behave in user interactions.

Alongside telling it to avoid platitudes, it said Codex should "never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query".

A Reddit user who posted about it in the r/ChatGPT subreddit called it "genuinely insane".

"Why does GPT 5.5 have a restraining order against 'Raccoons,' 'Goblins,' and 'Pigeons'?" they asked.

While some users elsewhere on social media speculated the instruction may have been designed to create hype around OpenAI's AI tools, a company researcher denied this, writing "it really isn't a marketing gimmick" in a reply to a user on X on Wednesday.

OpenAI said in its blog post it added the instruction to curb Codex and its underlying model's "strange affinity for goblins".

The core issue, it explained, seemingly arose while training its models to communicate in the style of particular personalities - in this case with its "nerdy personality".

It found this system had been unwittingly incentivised to mention goblins, gremlins and other creatures more in metaphors.

While the personality has since been retired, OpenAI said its testing found it was responsible for 66.7% of all "goblin" mentions in ChatGPT.

This so-called tic could seep into wider model training if rewarded in one instance and reinforced elsewhere.

The move comes amid a broader industry shift towards making AI chatbots more personality-driven and chatty in a bid to boost user engagement.

As chatbots become chattier, however, experts have warned their potential to make things up - or "hallucinate", as the industry describes it - could intensify.

A recent study by the Oxford Internet Institute found fine-tuning models to have a more warm and friendly personality could result in an "accuracy trade-off", whereby systems make more mistakes or re-affirm a user's false beliefs.

Experts have also cautioned users about taking chatbots' often matter-of-fact statements at face value, particularly when it comes to health and medical advice.

But, like OpenAI's goblin quirk, generative AI mistakes can sometimes be more bizarre and innocuous.

In May 2024, Google's AI-generated search summaries were widely mocked for telling users it was okay to eat rocks and to put glue on pizza.

Read original at BBC News
