Testing Firefox Extensions with Playwright: End-to-End Testing Guide

Extension testing is one of those things everyone knows they should do but few actually do. I've been using Playwright for end-to-end tests on the Weather & Clock Dashboard extension, and it's changed how I think about extension quality. Unit tests don't cover the biggest failure modes: does the extension actually load in Firefox?
Why I built another Ruby test runner inspired by Playwright Test

Ruby already has great testing tools. If you are building Rails applications today, you probably use one of these combinations:

- RSpec + Capybara
- Minitest + Capybara
- Rails system tests
- Maybe Selenium, Cuprite, Ferrum, or Playwright through Ruby bindings

These tools are mature, battle-tested, and widely used. So the natural question is: why build another one?
I wanted to test my web app. That's it. A Next.js portfolio and a SaaS chat: run some accessibility checks, catch console errors, verify nothing's broken on mobile. The kind of thing you do before pushing to production. I opened Claude Code, connected Playwright MCP, typed "test the app", and watched it burn through tokens like there was no tomorrow. Then /compact fired at 18% context.
We had ArgoCD running perfectly. Every deployment was reconciled from Git. Drift detection worked. Rollbacks were one-click. Our GitOps setup was clean. Developers still couldn't provision a staging environment without pinging the platform team. That gap, between "GitOps in place" and "developers can actually self-serve," is where most platform engineering teams get stuck. GitOps solves a real problem.
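To make the self-serve gap concrete, here is a minimal sketch of the kind of bridge teams usually reach for: an Argo CD ApplicationSet with a Git directory generator, so that committing a new directory creates a staging environment without a platform-team ticket. The repo URL, paths, and project name are hypothetical, not from the original setup.

```yaml
# Hypothetical: every directory under envs/staging/ in the environments
# repo becomes its own Argo CD Application, so a developer self-serves a
# staging environment by committing a directory, not by pinging platform.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: staging-environments
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://example.com/platform/environments.git  # hypothetical repo
        revision: main
        directories:
          - path: envs/staging/*
  template:
    metadata:
      name: 'staging-{{path.basename}}'
    spec:
      project: staging
      source:
        repoURL: https://example.com/platform/environments.git
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: 'staging-{{path.basename}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
```

This keeps the GitOps invariant (everything reconciled from Git) while removing the human bottleneck: the platform team owns the template once, and developers own the directories.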
Anthropic now ships at least three different memory models inside the Claude product family, and they don't behave the same way. Claude.ai has a chat memory feature for Pro, Max, Team, and Enterprise users that summarizes prior conversations and injects that summary into new chats. Claude Code has CLAUDE.md files plus a separate "auto memory" directory the model writes to itself, both loaded at session start.
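For context on the Claude Code side: a CLAUDE.md file is just free-form markdown checked into the repo, loaded into context when a session starts. A minimal, entirely hypothetical example of what one might contain:

```markdown
# CLAUDE.md

## Build & test
- `npm run build` compiles the project
- `npm test` runs the unit suite

## Conventions
- TypeScript strict mode everywhere
- Never commit directly to main; open a PR instead
```

Because it lives in version control, this kind of memory is shared, reviewable, and deliberate, which is exactly how it differs from the auto-memory directory the model maintains on its own.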
Part 2 of 5 in The New Engineering Contract: what it means to lead engineers when AI is doing more of the coding.

Stripe never skipped the boring stuff. They ship 1,300 AI PRs a week. Amazon skipped it. Their storefront went down for six hours. Kent Beck wrote the answer in Extreme Programming Explained in 1999. We read it. Then we chose velocity anyway. A friend of mine leads engineering at a funded startup.