The email arrived on a Tuesday morning: "Your cloud bill for last month: $2.4 million." The CFO's response was immediate: "That's 3x our budget. What the hell are we running?" The answer? Nothing special. Just a standard data analytics workload that happened to cross availability zones. A lot. Turns out, 80% of that bill, nearly $2 million, was data egress fees. Not compute. Not storage. Just the price of moving data between zones.
OK, let's talk about Microsoft's new Fairwater "AI factory" (the quotes are doing a lot of work here… do we REALLY need a new name for this? It's so dumb). They're calling it the world's most powerful AI datacenter. Cool. Millions of GPUs. Liquid cooling. Storage stretching five football fields. Here's what they're NOT telling you: the math on utilization is going to be BRUTAL.
I finished a series in English on how I think ordinary people can start using AI for real work. The point is not to become an AI expert first. The point is to have one place where you can say what you want, give the tool access to the right folder, and check the result. Anything important still needs a human pause: publishing, deleting, paying, or authorizing. My preferred starting point is simple.
In the previous post, I walked through the compensation logic in each service. The code looks clean on paper. But sagas have a lot of moving parts, and bugs tend to hide in the transitions between services, not inside a single service. This post covers how I test the saga system: unit tests for each service, orchestrator routing tests, and the edge cases that caught me off guard.
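To make the "orchestrator routing test" idea concrete, here is a minimal sketch in Vitest. The orchestrator, the event shape, and the service names are placeholders for illustration, not the actual code from this series.

```ts
import { describe, it, expect, vi } from "vitest";

// Stand-in orchestrator: routes each event type to the compensations that
// should run, and nothing else. (Placeholder for the real orchestrator.)
function makeOrchestrator(services: {
  inventory: { release: (orderId: string) => Promise<void> };
  shipping: { cancel: (orderId: string) => Promise<void> };
}) {
  return {
    async handle(event: { type: string; orderId: string }) {
      if (event.type === "PaymentFailed") {
        // Payment fails after inventory was reserved but before shipping,
        // so only the inventory reservation needs to be undone.
        await services.inventory.release(event.orderId);
      }
    },
  };
}

describe("saga orchestrator routing", () => {
  it("compensates inventory on PaymentFailed, and nothing else", async () => {
    const inventory = { release: vi.fn(async (_id: string) => {}) };
    const shipping = { cancel: vi.fn(async (_id: string) => {}) };
    const saga = makeOrchestrator({ inventory, shipping });

    await saga.handle({ type: "PaymentFailed", orderId: "order-42" });

    expect(inventory.release).toHaveBeenCalledWith("order-42");
    expect(shipping.cancel).not.toHaveBeenCalled();
  });
});
```

The negative assertion is the important part: the bug that hides in a transition is usually a compensation that fires when it shouldn't, or doesn't fire at all.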
I've been a frontend dev for a few years now, and there's a pattern I kept seeing across almost every small team I worked with. New feature ships. Everyone's happy. Then three days later something completely unrelated breaks and nobody caught it. The problem was always the same: automating those checks required Playwright or Selenium, and that was "a dev thing". And the devs were busy shipping the next feature.
The Model Context Protocol (MCP) has become the default standard for connecting AI agents to external tools and APIs. Governed by the Linux Foundation since early 2025 and adopted by OpenAI, Anthropic, Microsoft, and Vercel, MCP is the USB-C port of the AI ecosystem: one protocol that lets any LLM application talk to any tool server. But there's a gap between reading the spec and building something that works.
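For a sense of how small the server side can be, here is a sketch of a stdio tool server using the official TypeScript SDK (@modelcontextprotocol/sdk). The "echo" tool, its name, and the version strings are made-up examples, and the exact registration API can differ between SDK releases, so treat this as a shape rather than a recipe.

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// One server, one tool. The host (Claude Desktop, an IDE, etc.) spawns this
// process and speaks JSON-RPC with it over stdin/stdout.
const server = new McpServer({ name: "demo-server", version: "0.1.0" });

// The zod schema advertises the tool's arguments to the client; the handler
// returns MCP content blocks.
server.tool("echo", { message: z.string() }, async ({ message }) => ({
  content: [{ type: "text", text: `You said: ${message}` }],
}));

await server.connect(new StdioServerTransport());
```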
If you have spent any real time with Claude Code, you have probably noticed the same problem I did. You write the same instructions in the prompt every other day. "Use four-space indentation here." "Always run the linter after edits." "Format commit messages this way." After the third or fourth repeat, it stops feeling like a prompt and starts feeling like missing config. Skills are how Claude Code solves this.
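For concreteness, here is a sketch of what one of those repeated instructions looks like once it becomes a skill: a SKILL.md file with YAML frontmatter, checked into the repo (for project skills, under .claude/skills/<skill-name>/). The skill name, description, and rules below are made-up examples.

```markdown
---
name: commit-style
description: Formatting rules for commit messages in this repo. Use when writing or amending commits.
---

When writing commit messages:

- Use the imperative mood in the subject line ("Add", not "Added").
- Keep the subject under 72 characters; put the "why" in the body.
- Reference the ticket ID at the end of the body when one exists.
```

The description line matters most: it is what gets read when deciding whether the skill applies, so write it like a trigger condition rather than a title.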
You have unit tests in Vitest (or Jest). You have E2E tests in Playwright. CI runs both. Coverage works for each, until you try to look at a single number. Then it gets weird. Unit tests run in Node, instrumented by V8 or Istanbul. Playwright runs your real app in a real browser. Each produces its own coverage data. Stitching them together usually means nyc merge (or a custom step) combining coverage files from both runs into one report.
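If you go the custom-step route, the merge itself is small. Here is a sketch using istanbul-lib-coverage (the library nyc builds on); the file paths are assumptions, and the Playwright report is assumed to already be converted to Istanbul-format JSON rather than raw V8 output.

```ts
import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
import { createCoverageMap } from "istanbul-lib-coverage";

// Assumed locations of the two Istanbul-format JSON reports.
const inputs = [
  "coverage-unit/coverage-final.json", // from Vitest/Jest
  "coverage-e2e/coverage-final.json",  // from the Playwright run
];

// merge() unions file entries and sums hit counts per statement/branch/function.
const merged = createCoverageMap({});
for (const file of inputs) {
  merged.merge(JSON.parse(readFileSync(file, "utf8")));
}

// Write where nyc expects raw coverage, then `npx nyc report --reporter=text
// --reporter=lcov` produces the single combined number.
mkdirSync(".nyc_output", { recursive: true });
writeFileSync(".nyc_output/merged.json", JSON.stringify(merged.toJSON()));
```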