An SSG benchmark across five React frameworks, from one thousand pages …

You're building a marketplace. Or a documentation site. A wiki. How long does a full static build take at scale? Five minutes. Ten. Twenty. Maybe an hour. Maybe a stack trace. You don't know in advance — and the public benchmarks won't tell you.

So I built a benchmark for the gap. Five frameworks in a pnpm workspace, each rendering one dynamic /posts/[id] route from a shared deterministic d…
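The shared dataset is what makes the comparison fair: if every framework renders identical posts from the same seed, build-time differences are attributable to the framework rather than the data. A minimal sketch of such a generator, assuming a seeded PRNG (the names `makePosts` and `mulberry32` and all details here are my illustration, not the benchmark's actual code):

```typescript
type Post = { id: string; title: string; body: string };

// mulberry32: a tiny seeded PRNG, so every run produces the identical sequence.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Generate `count` posts deterministically: same seed in, same posts out,
// regardless of which framework consumes them.
function makePosts(count: number, seed = 42): Post[] {
  const rand = mulberry32(seed);
  return Array.from({ length: count }, (_, i) => ({
    id: String(i + 1),
    title: `Post ${i + 1}`,
    body: "word ".repeat(50 + Math.floor(rand() * 200)).trim(),
  }));
}
```

Each framework's /posts/[id] page would then import the same `makePosts(1000)` output, keeping the workload identical across all five builds.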
You have seen the shape of this incident before. A 500 lands in production. The frontend says "checkout failed". The Hono service that owns /checkout called the pricing …
Every observability vendor has bolted "AI" to their landing page. Half of those features are genuine improvements. The other half are autocomplete in a costume. After a few years of running these tools across enterprise estates, here is where AI-augmented SRE actually pays off, where it doesn't, and what we'd advise teams adopting it today.

The single most defensible use case: a medium-sized estate …
Iris v0.4.0 ships today. It's the release where protocol-native eval crosses from "deterministic rules" into "semantic scoring" — without giving up any of what made the deterministic layer work. Three headline features plus a lot of infrastructure work that quietly compounds. I'll go through each, why it matters, and how it fits the thesis.

Heuristic rules catch a lot: length, keyword overlap, PII …
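The deterministic layer named above — length, keyword overlap, PII — can be sketched as a handful of pure rule functions. Everything here (the `evalHeuristics` name, the thresholds, the regex patterns) is my own illustration of the idea, not Iris's actual rule set or API:

```typescript
type RuleResult = { rule: string; pass: boolean };

// Run a few cheap, deterministic checks over a model output.
function evalHeuristics(output: string, expectedKeywords: string[]): RuleResult[] {
  const words = new Set(output.toLowerCase().split(/\W+/));
  const overlap = expectedKeywords.filter((k) => words.has(k.toLowerCase())).length;
  return [
    // length window: too short is a non-answer, too long is likely rambling
    { rule: "length", pass: output.length >= 20 && output.length <= 2000 },
    // at least half of the expected keywords should appear
    {
      rule: "keyword-overlap",
      pass: overlap / Math.max(expectedKeywords.length, 1) >= 0.5,
    },
    // naive PII screen: email addresses and US-style SSN patterns
    {
      rule: "no-pii",
      pass: !/[\w.+-]+@[\w-]+\.\w+|\b\d{3}-\d{2}-\d{4}\b/.test(output),
    },
  ];
}
```

Rules like these are fast and perfectly reproducible, which is exactly why a semantic-scoring layer can sit on top of them without replacing them.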