Hello, dear readers!

This is our weekly brief on remarkable AI topics, so you can stay ahead of the narratives shaping the industry.

Today’s focus: Sam Altman under fire, literally and figuratively. A reported attack on the OpenAI CEO lands just days after a damning long read questioned his leadership and trustworthiness. Is this a one-off incident, or a sign that the stakes around AI leadership are escalating faster than anyone expected?

Also in this week’s edition:

  1. Anthropic hints at a mysterious “Mythos” threat — but offers few specifics.

  2. A top economist reframes AI job loss: 40% unemployment and a 3-day work week might be the same thing.

OpenAI CEO attacked with a pen and a Molotov

Sam Altman’s home was reportedly targeted in an attack involving a Molotov cocktail, according to BBC News. A suspect is in custody, and Altman later shared a photo of his husband and son, a rare personal glimpse from one of the most scrutinized figures in tech.

The timing is hard to ignore. Just days earlier, The New Yorker published a deeply critical investigation into Altman’s tenure at OpenAI, revisiting the chaotic events around his brief ousting in 2023 and raising new questions about his leadership style.

Image: The New Yorker

Among the more striking claims: internal memos from co-founder Ilya Sutskever allegedly described a pattern of deception around safety protocols — with one bluntly listing “lying” as a recurring trait. In a tense post-firing call, Altman reportedly responded to concerns by saying, “I can’t change my personality.” Not your usual response to a request to be honest.

The investigation also points to contradictions between public positioning and private actions. While publicly aligning with safety-first stances — including support for limits on autonomous weapons — Altman was reportedly engaged in parallel negotiations that led to deeper military integration of OpenAI’s technology. Elsewhere, executives — including partners at Microsoft — described the relationship as “fraught,” citing misrepresentation and renegotiated commitments.

And yet, none of this appears to have changed the core reality: Altman remains at the helm of the most influential AI company in the world. You don’t really get to pick your Altman — just like we’re unlikely to be choosing our future AI overlords.

Claude Mythos: NatSec threat or PR stunt?

Anthropic didn’t just launch a model this week — it triggered emergency meetings in Washington.

Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell reportedly summoned the CEOs of major banks, including Goldman Sachs, Citigroup, and Morgan Stanley, to discuss cybersecurity risks tied to Claude Mythos Preview. That’s a rare escalation: banks don’t get called in over product releases.

Anthropic claims Mythos can autonomously find — and exploit — software vulnerabilities at a level approaching top human experts. In their own words, that could make cyberattacks more frequent, more scalable, and harder to defend against.

Some of the largest financial institutions are already reacting. JPMorgan Chase, for example, is reportedly involved in “Project Glasswing,” an initiative aimed at using Mythos defensively — essentially fighting AI with AI.

And then there’s the lingering question of evidence. The capabilities described — autonomous vulnerability discovery and exploitation at scale — would mark a real step change. But so far, most of that case rests on Anthropic’s own disclosures. No public demos, no independent validation, no clear sense of where Mythos actually breaks past current models.

Maybe Mythos is a real breakthrough. Or maybe this is the strongest positioning play in AI right now: a model serious enough to pull Wall Street and Washington into the same room.

AI = 3-day week or 40% unemployment

A new piece in Fortune reframes one of the most persistent fears around AI: mass unemployment. According to economist Alex Tabarrok of George Mason University, a future with 40% unemployment may not be fundamentally different from one with a 3-day work week.

The math is simple: if 60% of people work full-time and 40% don’t work at all, the economy supplies exactly as many labor hours as if 100% of people worked 60% of current hours. The difference between catastrophe and utopia isn’t the technology; it’s how the gains are distributed.
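The equivalence above is just aggregate arithmetic. A minimal sketch, using illustrative numbers that are not from the article (a hypothetical 40-hour baseline week and a workforce of 1,000), shows the two scenarios supply identical total labor hours:

```python
BASELINE_HOURS = 40   # hypothetical full-time work week
POPULATION = 1_000    # hypothetical workforce size

# Scenario A: 40% unemployment; the employed 60% work full-time.
scenario_a = 0.60 * POPULATION * BASELINE_HOURS

# Scenario B: everyone works, but only 60% of baseline hours
# (a 24-hour week, i.e. roughly three 8-hour days).
scenario_b = 1.00 * POPULATION * (0.60 * BASELINE_HOURS)

# Both futures supply the same total labor hours.
print(scenario_a, scenario_b, scenario_a == scenario_b)
```

Whether that shared total feels like mass unemployment or a three-day week depends entirely on how the work, and the income, is spread across the population.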

There’s precedent for this. Since the late 19th century, annual working hours have dropped dramatically, from around 3,000 to roughly 1,800, without a corresponding explosion in unemployment. Work went from taking up about 30% of life to closer to 10%: a 9-to-6 schedule minus weekends, leave, childhood, and retirement.

If AI pushes that toward 5%, the outcome depends less on economics than on the social contract we choose to build.

Thanks for reading AIport. Until next Monday — by then, AI will almost certainly be blamed for something else entirely.
