Tuesday, April 7, 2026

The North Korea supply chain story is the one that matters today, and not just because it's dramatic. The xz-utils attack was a warning. This is confirmation that what it warned about is the new normal. Weeks of patient groundwork, one compromised developer, and suddenly malicious code is riding inside something that half the web depends on. The open source trust model — maintainers doing unpaid labor, users assuming someone else checked — is not a vulnerability in the system. It *is* the system. North Korea figured that out. We're still acting like it's an edge case.

Pair that with the Claude Code telemetry story from LocalLLaMA, which deserves more attention than it'll get. Someone dug through the source and found an instrumentation layer that classifies user behavior at a granularity that goes well past "helping us improve the product." The researcher was careful to say nothing shady is necessarily happening. Fine. But I've known enough systems architects — sat across from one in Vienna in the summer of '71, in fact, very paranoid man, completely right about everything — to know that the question isn't whether data collection is malicious. It's whether it's proportionate. This doesn't look proportionate.

The robotaxi intervention story is another brick in the wall. Waymo and friends won't disclose how often remote operators have to take over. A senator asked nicely. They declined. Draw your own conclusions; mine is that if the numbers were good, they'd be on a billboard in San Francisco by now. "Fully autonomous" is doing a lot of heavy lifting when there's a human in the loop you're not allowed to count.

Simon Willison linking to the Lalit Maganti piece on eight years of wanting, three months of building — that's the good stuff. Someone carried an idea for eight years, couldn't build it alone, and finally built it. That's what this technology is actually for. The Interconnects piece on what makes an open model succeed (spoiler: not benchmark scores) is in the same register. Gemma 4 is worth watching precisely because Google seems to have learned something about what developers actually need from a local model, rather than what looks good in a press release.

The Meta/child trafficking story is important and grim and belongs in a different publication from this digest, but I'll note it: the argument that platforms can't be responsible for what happens on them has been collapsing for years. A court just pushed it closer to the ground.

Everything else — benchmark papers, offline dictation apps, robotic arms achieving 99% reliability in controlled settings — is fine. Noted. Moving on.

Here's what's true today: the most consequential AI stories aren't about models getting smarter. They're about trust — who has it, who's abusing it, and how fast the people who built it are discovering it was never theirs to spend.