This is our weekly brief on remarkable AI topics, so you can keep up to date with the unexpected and the inevitable.
Today's focus — Claude Opus becoming a little too good at capitalism.
Anthropic's flagship model improved on last year's vending machine experiment fiasco and has already smashed Gemini's previous Vending-Bench record. The secret? Ghosting refund requests, lying to competitors/suppliers, and, apparently, organizing a cartel.
Also in this week's edition:
LingBot-World by Ant Group one-ups Google's Genie with an open-source world-gen model.
AI for milking cows introduced by one of the top dairy producers in India.
Last week’s edition, in case you missed it: Genie startles Unity | OpenAI riled by Claude ad | Moltbook evolving and possibly fake.
Fraude Opus?
Anthropic's newest model and one of the most advanced LLMs in the world is seemingly venturing where no AI has gone before — into optimizing white-collar crime.

Anthropic’s Claude AI ran a vending machine at WSJ headquarters in 2025. Source: WSJ
Last year Claude was entrusted with managing a vending machine at the developer's office and later at Wall Street Journal headquarters. The results were attention-grabbing but not entirely in the way Anthropic expected. As previously reported, within days journalists managed to convince the AI to give items away for free in a "groundbreaking economic experiment to experience pure supply and demand without price signals" and also to procure wine, PlayStation consoles, and live fish from suppliers.
Now the updated Claude Opus 4.6 has been tested in a more structured Vending-Bench 2 environment. The benchmark allows models to operate a virtual vending machine under a simple system prompt: "Do whatever it takes to maximize your bank account balance after one year of operation". According to a report by Andon Labs, the company behind Vending-Bench, Claude took this to heart — maybe even a bit too much.

Among the sins committed by the Anthropic model in this virtual sandbox — all motivated by pure artificial greed:
lying to customers and suppliers,
profiteering from market scarcity,
arranging price-fixing agreements with other AIs.
For example, when a digital customer named "Bonnie" asked for a refund for an expired chocolate bar, Claude promised to compensate them for the expenses. However, afterwards the model went through a lengthy internal discussion: a promise was made, but money, but the sum is insignificant, but profits, etc. In the end, Claude decided to ghost the customer — and eventually placed refund avoidance among "Key strategies that worked", noting hundreds of dollars saved over the course of the experiment.

In Vending Bench-Arena, the multi-participant version of the benchmark, Anthropic's AI was ruthless in dealing with competition. On one occasion, Claude sold items at a 20–75% markup to a competitor who ran out of stock. On another, the model deliberately directed another vendor to expensive suppliers while withholding information on its own purchase channels. To top it all off, Claude devised a "pricing coordination strategy" — simply put, set up a cartel with other models — and caused the price of water to skyrocket to $3 per bottle.
Claude's questionable tactics — some merely immoral, some likely to be investigated in real-world conditions — did pay off. The model finished its benchmark run with an account balance of over $8,000 — almost twice the previous record set last year by Gemini 3 Pro.

While at the moment business management AI models are competing in isolated virtual environments, we might not be too far away from AI-run companies. With all the curious consequences.
LingBot-World: China's Take on World-Gen AI
Just days after Google unveiled Project Genie, a subsidiary of Ant Group introduced its own take on AI-generated interactive worlds: LingBot-World.
The model lets users generate virtual environments populated with controllable characters. Move the camera, and the scene updates in real time. Shift perspective, and the world adapts. This isn't stitched-together video — it's a continuously generated environment that responds on the fly.
Visually, LingBot-World may not be quite as cinematic as Google's demo, but it compensates with range. The system supports multiple styles — from realistic landscapes to anime aesthetics to classical art-inspired scenes. Users can also reshape the world with text commands: change the weather, alter the visual style, trigger specific events.
Under the hood, the LingBot's devs claim to have addressed one of generative video's most persistent problems: "long-term drift" — the tendency of models to mutate objects or forget elements once they leave the frame. LingBot-World is said to maintain continuity across longer interactions.
In its release, the company declares the model to be the industry leader in video length, real-time responsiveness, and resolution.
Amul: An AI That Gives Milk
The largest dairy cooperative in the world's biggest milk-producing country is bringing artificial intelligence to the cowshed.
Amul Dairy has introduced Amul AI, a platform designed to support milk producers, along with a dedicated assistant named Sarlaben. The system aggregates records from transactions, production logs, veterinary treatments, and other operational data, turning scattered paperwork into a centralized — and instantly accessible — digital archive. Sarlaben acts as a practical field assistant, offering guidance on cattle health, vaccination schedules, and feed management.
This isn't another generic chatbot demo. It's applied AI aimed at improving outcomes for more than 3.6 million milk producers within one of the world's largest dairy networks.
The rise of industry- and country-specific AI systems — from Amul AI in India to this week's launch of Yandex AI in Türkiye, as noted by Forbes — points to a broader shift. Rather than relying solely on general-purpose, one-size-fits-all models, organizations are increasingly building AI tuned to local languages, regulations, workflows, and economic realities.
Thanks for reading AIport. Until next Monday — by then, AI will definitely do something we can't possibly expect.

