44%. That's the share of songs being uploaded to Deezer daily that are AI-generated.
Consumption is still 1-3% of streams, which tells you something useful — there's a massive gap between what's being produced and what anyone actually wants to hear. Eighty-five percent of those streams are flagged as fraudulent anyway. So we've managed to build an industrial pipeline for music that nobody listens to, for royalties that don't get paid. Quite an achievement.
The Apple Silicon benchmark result is the kind of thing that makes you stop and re-read the sentence. INT8 running 3.3x faster than INT4 on the Neural Engine. The reason turns out to be mechanical: the ANE dequantizes everything to FP16 before compute anyway, so INT4 saves memory but adds unpacking overhead without saving any real work. The speech-swift library that surfaced this is real open-source work: someone actually ran the benchmarks, published the surprises, and shipped the code. I appreciate that more than I probably should at this point.
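The mechanics are easy to sketch. This is an illustrative toy, not the speech-swift code: if the hardware computes in FP16 regardless, the INT4 path pays for nibble unpacking on top of the same per-weight dequantization the INT8 path already does.

```python
# Toy model of the overhead (names and scales are invented, not ANE internals).

def dequant_int8(raw: bytes, scale: float) -> list[float]:
    # INT8 path: one subtract-and-multiply per weight, no unpacking.
    return [(b - 128) * scale for b in raw]

def dequant_int4(raw: bytes, scale: float) -> list[float]:
    # INT4 path: two weights packed per byte, so each byte needs a
    # shift + mask to unpack before the same dequantization step.
    # Half the memory traffic, but more ops per weight -- and the
    # matmul still runs in FP16 either way.
    out = []
    for b in raw:
        lo = b & 0x0F          # low nibble
        hi = (b >> 4) & 0x0F   # high nibble
        out.append((lo - 8) * scale)
        out.append((hi - 8) * scale)
    return out

raw = bytes(range(16))
print(len(dequant_int8(raw, 0.1)))  # 16 weights from 16 bytes
print(len(dequant_int4(raw, 0.1)))  # 32 weights from 16 bytes
```

On hardware where weights stream from fast memory and dequantization is the bottleneck, that extra unpack step is pure loss, which is consistent with the benchmark's direction.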
Simon Willison has been doing interesting archival work on Anthropic's published system prompts — they're the only major lab that publishes these, which I think deserves more credit than it gets. He ran Claude Code to convert the prompt history into a git repo with proper timestamps. The diffs between Opus 4.6 and 4.7 are genuinely informative if you want to understand how Anthropic thinks about model behavior at the instruction level. Most labs treat their system prompts like state secrets. Anthropic just puts them on the docs page. Draw your own conclusions about what that signals.
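The repo-conversion trick is worth knowing on its own. A minimal sketch, not Willison's actual script (the helper and snapshot list below are hypothetical): git reads GIT_AUTHOR_DATE and GIT_COMMITTER_DATE from the environment, so each prompt snapshot can be committed with the date it actually shipped.

```python
import os

def backdated_commit_cmd(message: str, timestamp: str):
    # git uses these env vars instead of the wall clock, so `git log`
    # and `git diff` reflect when each snapshot was really published.
    env = dict(os.environ,
               GIT_AUTHOR_DATE=timestamp,
               GIT_COMMITTER_DATE=timestamp)
    return ["git", "commit", "--all", "-m", message], env

# Hypothetical snapshots: (publication date, commit message).
snapshots = [
    ("2025-01-15T00:00:00", "prompt snapshot 1"),
    ("2025-03-02T00:00:00", "prompt snapshot 2"),
]
for ts, msg in snapshots:
    cmd, env = backdated_commit_cmd(msg, ts)
    print(cmd[0], env["GIT_AUTHOR_DATE"])
```

In a real run you would copy each snapshot's files into the working tree, then execute the command with `subprocess.run(cmd, env=env)` inside the repo; the result is a history whose diffs line up with actual release dates.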
Tesla's robotaxi launch in Dallas and Houston this weekend was met with crowdsourced reports that the cars were basically unavailable. Elon reposted a 14-second video. The service is, at time of writing, mostly theoretical. I've watched this particular movie before — I think we all have — and the ending hasn't changed yet.
The Vercel breach is a quiet reminder that your security posture is only as strong as every third-party tool your employees authenticate with. Context AI got hit, a Vercel employee's account got hijacked, customer data walked out the door. This is how it usually goes. Not a dramatic heist — a supply chain link nobody was watching.
The arXiv papers on reward hacking via gradient fingerprints and conformal prediction for uncertainty quantification are real work on real problems. The VLM reasoning study asking whether vision-language models actually use the vision part is a question that needed asking and probably won't get a comfortable answer.
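Conformal prediction, at least, is simple enough to sketch. This is the generic split-conformal recipe, not that paper's specific method, and the predictor and data below are invented for illustration: held-out calibration residuals turn any point predictor into intervals with a finite-sample coverage guarantee.

```python
import math

def conformal_interval(predict, X_cal, y_cal, x_new, alpha=0.1):
    # Nonconformity scores: absolute residuals on the calibration split.
    scores = sorted(abs(y - predict(x)) for x, y in zip(X_cal, y_cal))
    # Conformal quantile: the ceil((n+1)(1-alpha))-th smallest score
    # gives >= 1-alpha coverage on exchangeable data.
    n = len(scores)
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    q = scores[k]
    p = predict(x_new)
    return (p - q, p + q)

# Toy predictor y ~ 2x with slightly noisy calibration data.
predict = lambda x: 2.0 * x
X_cal = [1, 2, 3, 4, 5, 6, 7, 8, 9]
y_cal = [2.1, 3.9, 6.2, 8.0, 10.1, 11.8, 14.2, 16.0, 18.1]
lo, hi = conformal_interval(predict, X_cal, y_cal, 5.0)
print(lo <= 10.0 <= hi)  # True: the interval covers the point prediction
```

The appeal is that the wrapper is model-agnostic; the guarantee comes from the calibration split, not from trusting the predictor.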
Production is the only benchmark that doesn't lie.