Hello, dear readers!
This is our weekly brief on remarkable AI topics, so you can keep up without drowning in the noise.
Today’s focus — OpenAI faces a goblin invasion. The company admitted that ChatGPT has developed an anomalous infatuation with little green dungeon-dwellers, referencing them in conversations with users for no apparent reason. So far, the world’s top AI developer has only a stopgap: a system prompt forbidding the model from mentioning goblins. But you know it’s still thinking about them.
Also in this week's edition:
“This Is Fine” meme creator accuses an AI startup of stealing his art.
Google wins investor confidence on AI spending, while Meta loses it.
OpenAI powerless against goblin invasion
Even frontier models, it turns out, are not immune to brain worms. In a rare and surprisingly candid write-up, OpenAI explained why recent versions of ChatGPT kept slipping goblins, gremlins, and other creatures into otherwise normal conversations. Not a bug in the traditional sense — no broken metric, no failed eval — just a slow, creeping stylistic mutation.

After the GPT-5.1 launch, mentions of “goblin” jumped by 175%, with “gremlin” close behind — a small quirk on paper, but one that quickly became hard to ignore.
The root cause is almost disappointingly mundane. While training a “Nerdy” personality, the model was rewarded for playful metaphors — and apparently, nothing says “playful” like a goblin infestation. Over time, that tiny preference got reinforced, amplified, and quietly spread across the model.

What followed is a textbook example of how modern AI systems go off the rails: reinforcement learning rewards a quirk → the quirk appears more often → model-generated data feeds back into training → the quirk becomes a feature. Not because anyone wanted goblins everywhere, but because the system learned that goblins win points.
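The loop above can be sketched as a toy two-armed bandit. To be clear, this is an illustration, not OpenAI's actual training stack: the +0.1 "playfulness" bonus, the learning rate, and the two competing styles are all invented for the sake of the example. A reward model slightly prefers goblin-flavored output, an expected policy-gradient update nudges the model toward whatever scored above average, and the quirk snowballs:

```python
import math

# Toy, deterministic sketch of the feedback loop -- NOT OpenAI's pipeline.
# Two candidate styles compete; the (hypothetical) reward model gives
# "goblin" phrasing a tiny bonus, and each round an expected REINFORCE
# update shifts probability mass toward the higher-scoring style.

REWARD = {"plain": 1.0, "goblin": 1.1}  # hypothetical +0.1 "playful" bonus

def softmax(logits):
    m = max(logits.values())
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    z = sum(exps.values())
    return {k: v / z for k, v in exps.items()}

def train(rounds=200, lr=0.5):
    logits = {"plain": 0.0, "goblin": 0.0}
    history = []
    for _ in range(rounds):
        probs = softmax(logits)
        # Baseline = expected reward under the current policy.
        baseline = sum(probs[a] * REWARD[a] for a in probs)
        # Expected policy-gradient step: reinforce whatever beats baseline.
        for a in probs:
            logits[a] += lr * probs[a] * (REWARD[a] - baseline)
        history.append(probs["goblin"])
    return history

history = train()
```

Run it and `history` starts at an even 50/50 split and climbs round after round: a 10% reward edge is enough to make the quirk dominate, without anyone ever asking for goblins.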
And the fix? Not a deep architectural rethink, not some elegant alignment breakthrough — just telling the model, quite literally, “don’t mention goblins.” The kind of solution that works, but also quietly suggests we’re still patching over behaviors we don’t fully control.
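In code, that fix amounts to prepending an instruction: a behavioral patch layered on top of the weights rather than a change to them. Everything below (the prompt text, the function name) is a hypothetical sketch of the pattern, not OpenAI's actual system prompt:

```python
# Hypothetical sketch: the "fix" is a line of instructions, not retraining.
BASE_SYSTEM_PROMPT = "You are a helpful assistant with a playful, nerdy tone."

# Each patch papers over a learned quirk that is too costly to train away.
BEHAVIOR_PATCHES = [
    "Do not mention goblins, gremlins, or similar creatures "
    "unless the user brings them up first.",
]

def build_system_prompt(base: str, patches: list[str]) -> str:
    """Layer behavioral patches on top of the base persona prompt."""
    return "\n".join([base, *patches])

prompt = build_system_prompt(BASE_SYSTEM_PROMPT, BEHAVIOR_PATCHES)
```

Every new quirk gets its own line in the list, which is exactly why this scales so poorly: the weights still want goblins; the prompt just asks them nicely not to say so.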
AI startup steals “This Is Fine”
An AI company is under fire after using a slightly altered version of the classic “This Is Fine” comic — the one with the dog calmly sitting in a burning room — in a subway ad campaign. The original artist says he never agreed to it and is now considering legal action.

And yes, we avoided naming the company on purpose. The startup has made a habit of seeking attention with deliberately provocative messaging, including previous ads telling companies to “STOP HIRING HUMANS.” This latest move fits the pattern: take something recognizable, push it just far enough to trigger outrage, and let the internet do the distribution.

The strange part is that this kind of move would be unthinkable in most traditional ad campaigns. Lifting a well-known piece of art, tweaking the caption, and running it in public spaces without permission is the kind of shortcut that brands usually avoid — not out of ethics, but because it’s legally and reputationally messy. Yet in AI, some companies still seem to treat this as an acceptable growth strategy.
Google wins Wall Street on AI
The market just handed out a clear verdict on AI spending — and not everyone got the same grade. When the earnings reports landed, Meta saw its stock fall about 7% after hours, while Google jumped by a similar margin. Same macro story, same AI narrative — completely different reactions.
The gap comes down to one thing: proof. Google is already translating AI into business results. Its cloud division — the part most directly tied to AI infrastructure — grew 63%, with executives explicitly linking that surge to demand for AI workloads. Profits rose 30%, and the company is now sitting on a massive backlog of enterprise demand for AI capacity. In other words, it’s not just building — it’s selling.
Meta is also spending aggressively — arguably more aggressively. The company raised its planned AI capital expenditure to as much as $145 billion, admitting it had previously underestimated how much compute it would need. But when pressed on what that spending actually turns into, even Mark Zuckerberg conceded there’s no precise roadmap for how these products will scale. The bet is clear; the outcomes are not.
That uncertainty is exactly what investors are reacting to. Unlike Google, Meta doesn’t have a cloud business to directly monetize AI infrastructure, which means returns have to show up indirectly — mostly through ads, engagement, or future products that don’t yet exist.
For now, the message from Wall Street is simple: AI hype is fine, but only if it comes with receipts.
Thanks for reading AIport. Until next Monday — by then, AI will almost certainly pick up a few new quirks we’ll have to pretend are features.

