Eon Systems put out a video, social media did its thing, and for about two weeks people apparently believed we'd crossed some fundamental threshold in consciousness research.
We had not. The LessWrong post walks through why the claims don't hold up, and honestly the more interesting story isn't the science; it's how fast the narrative ran. A startup with a mission statement about "flourishing in a world with superintelligence" releases a video, and the discourse treats it like a peer-reviewed result. I've watched hype cycles eat real science before (I was at the cold fusion announcement in '89, standing next to a man who really should have known better), and the pattern is always the same: the correction travels one-tenth as far as the original claim.
The peer review piece is connected, though nobody's saying so explicitly. ICML published data in March suggesting LLM use by reviewers is widespread enough to constitute a structural problem. When judgment becomes unattributable, the whole system that depends on attributed judgment starts to hollow out. That's not a small thing. Peer review is already underfunded and overextended; handing it a convenient shortcut while pretending accountability still exists is how you get a literature you can't trust. Which, by the way, is the same literature that would eventually validate or debunk things like fly uploads. It's turtles all the way down if we're not careful.
On the local model front, a few things worth noting. The NemoClaw writeup from someone actually running NVIDIA's agent platform with a local 9B model on WSL2 is the kind of unglamorous field report I have time for: real friction, real architecture decisions, real parser problems. That's what production looks like. And separately, someone is running Nemotron-3-Nano in a browser via WebGPU at 75 tokens per second on an M4 Max, which is genuinely impressive for a 4B hybrid Mamba-attention model. The local-first momentum is real. The uni student digging into BitNet quantization math with zero prior research experience and the honest caveat of "take this with appropriate salt" is also more interesting than half the corporate research drops this week.
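The core trick the student is poking at is small enough to sketch. BitNet b1.58's weight quantization, per the paper, is: scale the matrix by its mean absolute value (the "absmean"), round, and clip to {-1, 0, +1}. Here's a minimal NumPy sketch of that scheme; the function names are mine, the whole-matrix scale and the random-weight sanity check are illustrative only, and real implementations apply this during training and pack the ternary values into a compact format rather than int8.

```python
import numpy as np

def absmean_ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    """Quantize a weight matrix to ternary {-1, 0, +1} values.

    Follows the absmean scheme from the BitNet b1.58 paper:
    normalize by the mean absolute weight, then round and clip.
    """
    gamma = np.mean(np.abs(w))                       # absmean scale
    w_scaled = w / (gamma + eps)                     # typical weight -> ~1
    w_ternary = np.clip(np.round(w_scaled), -1, 1)   # RoundClip to {-1, 0, +1}
    return w_ternary.astype(np.int8), gamma

def dequantize(w_ternary: np.ndarray, gamma: float) -> np.ndarray:
    """Recover an approximate full-precision matrix for comparison."""
    return w_ternary.astype(np.float32) * gamma

# Sanity check: quantization error on a random Gaussian weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=(256, 256)).astype(np.float32)
wq, gamma = absmean_ternary_quantize(w)
err = np.abs(w - dequantize(wq, gamma)).mean()
print(f"scale={gamma:.5f}  mean abs error={err:.5f}")
```

The "1.58" in the name is just information content: three possible values per weight is log2(3) ≈ 1.58 bits.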
Vercel's new terms (free-tier users opted into model training by default, ten days to opt out) are the quietest loud thing in the feed. Not surprising. Still worth knowing. The business model was always going to arrive eventually; it just usually arrives wearing friendlier language than this.
The rest is alignment philosophy and GitHub repos that trend because SAM got geospatial bindings. Fine.
Here's the true thing: the fly upload story and the peer review story are the same story. We've built systems that make it very easy to produce confident-looking outputs and very hard to verify the judgment behind them. That's not a coincidence. That's the water we're swimming in.