Hello, dear readers!
This is our weekly brief on remarkable AI topics, so you can get a taste of one of the last human-written newsletters around.
Today's focus — Anthropic's increasingly aggressive attempts to rein in the "harnesses", including OpenClaw. Reports suggest the company may start charging extra for third-party tool usage, arguing that "subscriptions weren't built for these usage patterns." Will OpenClaw survive a forced price hike — and Anthropic's earlier attempts to absorb its core use cases?
Also in this week's edition:
AI infrastructure is now scaling like the energy sector — and running into the same limits.
Space-based data centers are back in the conversation (yes, really).
Anthropic to rein in "harnesses"
Anthropic seems to be losing patience with third-party "harnesses" that sit on top of Claude. According to reports circulating on Hacker News, the company is preparing to effectively "tax" users of tools like OpenClaw.

Apparently, some customers have already been notified by email. The message: Claude subscriptions will no longer cover usage through third-party tools, because those subscriptions "weren't designed for these usage patterns." Users who want to continue using a harness may be billed separately, at rates that haven't been disclosed.
If confirmed, this looks like a continuation of a broader strategy. In March, Anthropic introduced Claude Channels — a more official way to integrate Claude into workflows. It's not hard to see this as an attempt to internalize what tools like OpenClaw were already doing on the outside.
Anthropic, for its part, is pushing back on the idea that this is an attack on open-source extensions. TechCrunch quotes Boris Cherny, head of Claude Code, saying the company is "big fans of open source" and that the changes are driven by engineering constraints.
Still, one question hangs in the air: why would structured prompts from tools like OpenClaw meaningfully increase compute cost per token compared to human input? And is this really about infrastructure, or about control?
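For readers who haven't touched one of these tools: a harness is, at its core, a loop that wraps the Claude API in a large structured system prompt and keeps re-sending the growing conversation. Below is a minimal sketch of that pattern using the public anthropic Python SDK. To be clear, OpenClaw's actual internals aren't documented in these reports, so the prompt, the loop shape, and the model ID here are illustrative assumptions.

```python
# Hypothetical sketch of what a "harness" does: wrap the Claude API
# in a large structured system prompt and an automated loop.
# Uses the public anthropic Python SDK; the harness details are assumed.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an autonomous coding agent. Follow the plan step by step, "
    "emit tool calls as JSON, and do not stop until the task is complete."
)

def run_step(task: str, history: list[dict]) -> str:
    """One turn of the harness loop: resend accumulated context, get a reply."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # model ID is illustrative
        max_tokens=4096,
        system=SYSTEM_PROMPT,
        messages=history + [{"role": "user", "content": task}],
    )
    return response.content[0].text
```

The detail that matters for the pricing question: each turn resends the full history, so a busy harness multiplies token volume even though every individual token costs the same to serve. That makes the "usage patterns" framing at least plausible, without settling the control question.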
AI hits the power wall
AI infrastructure has officially crossed into a different league. According to Rystad Energy, data center investment hit $770 billion in 2025, already surpassing upstream oil and gas. This year, it's expected to match the entire energy sector — renewables included.
That comparison is telling. AI is no longer scaling like software. It's scaling like energy infrastructure.

A large share of spending still goes into chips and servers. But nearly as much now flows into the less visible layer: cooling, power distribution, grid connections — the parts that don't improve models, but determine whether they run at all.
The regional split makes the imbalance clearer. In the US, data center investment ($355B) far exceeds spending on renewables ($78B): compute is being prioritized over new energy supply. China shows the opposite pattern, spending far more on renewables ($409B) and building energy and compute in parallel. The result is a divergence: the US is scaling AI first, China is scaling both, and that may decide who hits limits sooner.

And those limits are already showing. In some regions, data centers are pushing past 10% of total power demand. Access to electricity, land, and grid infrastructure is becoming the real bottleneck.
That reframes the AI race again. It's no longer just about better models, but about who can secure power, deploy capacity fastest, and keep scaling without hitting the grid ceiling.
Data centers… in space?
Space-based data centers are back — at least as a narrative.
SpaceX is floating the idea as part of its broader pitch to investors. The logic is clean: near-unlimited solar energy, natural cooling, and fewer terrestrial constraints. In a world where AI is running into power limits, orbit starts to sound like an elegant escape hatch. It also happens to be a convenient story when you're raising money.

The reality, as laid out by MIT Technology Review, is far less forgiving. Putting data centers in space would require solving a stack of hard problems all at once:
launch costs at an entirely different scale
maintaining and upgrading hardware in orbit
latency constraints for real-world applications (rough numbers in the sketch below)
radiation and reliability issues
and the basic question of how you operate infrastructure that you can't physically access
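On the latency point, a quick back-of-the-envelope helps. This is a sketch that assumes only speed-of-light delay to typical LEO and GEO altitudes; real links add ground-network hops and processing time on top.

```python
# Back-of-the-envelope: speed-of-light round-trip time to a data center
# in low Earth orbit (LEO) vs geostationary orbit (GEO).
C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

for name, altitude_km in [("LEO (~550 km)", 550), ("GEO (~35,786 km)", 35_786)]:
    rtt_ms = 2 * altitude_km / C_KM_PER_S * 1000
    print(f"{name}: ~{rtt_ms:.1f} ms round trip, before real-world overhead")

# LEO (~550 km): ~3.7 ms round trip, before real-world overhead
# GEO (~35,786 km): ~238.7 ms round trip, before real-world overhead
```

LEO looks tolerable for many workloads; GEO is already a non-starter for anything interactive. And that's before counting the maintenance and radiation problems above.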
Even optimistic scenarios depend on major breakthroughs — and on vehicles like Starship actually delivering at scale.
For now, orbital data centers look less like a near-term solution and more like a signal. When serious companies start pointing to space as the next step, it's usually because the constraints on Earth are getting harder to ignore.
Thanks for reading AIport. Until next Monday — by then, AI will almost certainly run into another bottleneck.

