Hello, dear readers!
This is our weekly brief on remarkable AI topics, so you can focus on signal, not noise.
Today's focus — Anthropic's new “channels” for Claude Code. You can now connect Claude to Telegram or Discord and just message it like a person — it runs tasks in the background and replies when it's done. That's basically what tools like OpenClaw were doing. So: are we about to ditch keyboards and apps and just talk to computers — or is this another overhyped demo?
Also in this week's edition:
Meta is building an internal AI agent to help Zuckerberg do his job — which raises some awkward questions.
Workers tell the Guardian that AI tools often slow them down instead of helping.
Claude Wants You to Message It
Anthropic just shipped “channels” for Claude Code — some are already calling it an “OpenClaw killer”.
The idea is simple. Instead of sitting in a terminal or app, Claude is always on. You message it on Telegram or Discord, it does the work, and replies when it's done. Not a chat loop — more like assigning a task and waiting.

That's exactly why OpenClaw got popular. People liked having a kind of “AI worker” they could ping anytime — fix code, run stuff, send files. But those setups were messy. Security risks, weird configs, sometimes even a dedicated machine running 24/7.
Anthropic is basically taking that idea and packaging it nicely: less setup, more guardrails, official support. Under the hood, it runs on their Model Context Protocol (MCP), an open standard for plugging the model into external tools and data sources.
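The core workflow here is "assign a task, walk away, get pinged when it's done." Here's a toy sketch of that pattern using only Python's standard library; it doesn't touch Anthropic's actual API or MCP, and the task strings are made up for illustration:

```python
import queue
import threading

# Hypothetical illustration of the fire-and-forget workflow:
# you "message" a background agent, it works, and it replies later.
tasks = queue.Queue()
replies = []

def worker():
    """Background 'agent': pulls tasks, does the work, posts a reply."""
    while True:
        task = tasks.get()
        if task is None:          # sentinel: shut down
            break
        result = f"done: {task}"  # stand-in for the real work
        replies.append(result)    # stand-in for messaging the user back
        tasks.task_done()

agent = threading.Thread(target=worker, daemon=True)
agent.start()

# "Message" the agent and move on -- no chat loop, no polling.
tasks.put("fix the failing test in ci.yml")
tasks.put("summarize yesterday's error logs")

tasks.join()   # in real life, you'd just wait for the ping
tasks.put(None)
agent.join()

print(replies)
```

The point of the pattern is the decoupling: the sender doesn't block on the work, which is exactly what makes a messaging app a plausible front end for a dev environment.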
But here's the part people gloss over: this means giving AI real access. Not just “write me code,” but “go do things based on what I typed while ordering coffee.” That's a different level of trust. And it's not obvious everyone actually wants that.
Also — does this change how people work day to day? Maybe. Messaging your dev environment sounds cool. But it doesn't fix reliability, or the fact that you still need to check everything. The pitch is “AI worker.” The reality might still be “AI intern you babysit.”
Zuck Trains His Own AI Replacement
Mark Zuckerberg is apparently building an AI agent to help him run Meta.
Right now it sounds pretty basic — pulling info faster, skipping layers of people. Internally, Meta already has tools like “Second Brain,” which some describe as an “AI chief of staff.” The idea is clear: fewer layers, faster decisions, more output per person. Classic “do more with less,” now with AI.

Alternatively, this could just be a PR move. Look — I made myself more efficient with AI, and I'm still rich. You should probably do the same. It wouldn't be the first time a buzzword took over the narrative and then quietly disappeared — remember “Big Data”? Now it almost sounds like a joke.
Finally, there's an uncomfortable angle here. If AI can do a meaningful part of a CEO's job, what exactly justifies the role? And if it can be replicated, do those seven- or eight-figure paychecks still make any sense?
Top-Down AI Hinders Work
Meanwhile, on the ground, things look a lot messier.
As reported by the Guardian, some Amazon employees say AI tools are slowing them down, not speeding them up. One developer summed it up pretty well: it feels like “trying to AI my way out of a problem that AI caused.”
At the same time, people are being pushed to use these tools anyway. In some cases they're even judged on how often they use them, whether the tools actually help or not.

In universities, it's a different kind of mess. Professors say students are outsourcing thinking itself. One called AI “the bane of my existence.” Not exactly a glowing review.
So you get this weird split. At the top, companies are talking about autonomous agents doing real work. On the ground, people are stuck fixing bad outputs and figuring out when to trust the tools.
That gap is still very real.
Thanks for reading AIport. Until next Monday. By then, AI will almost certainly promise to do everything for you, while still needing you to double-check it.

