More rules should mean better output. That's the intuition. I spent weeks building a comprehensive CLAUDE.md — 200 lines covering naming conventions, security rules, error handling, architectural patterns, import ordering, type safety requirements, and more. I was proud of it. I'd thought through every scenario. Then I scored the output: 79.0/100. My carefully crafted documentation was actively making the output worse.
The Problem

Most engineers deploy to Kubernetes by clicking buttons in a UI. I built Archnet — a fully automated Internal Developer Platform.

What is an Internal Developer Platform?

An IDP is the infrastructure layer that sits between your code and the infrastructure it runs on. It decides:

- How code gets deployed
- How secrets are managed
- How the system monitors itself
- How failures get detected and fixed

Most companies pay for Humanitec or run Backstage to get this.
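To make those four responsibilities concrete, here is a minimal sketch of the surface area such a layer exposes. Archnet's real API isn't shown here, so every name below (ProvisionRequest, Platform, and the method names) is a hypothetical illustration, not the actual interface.

```typescript
// Hypothetical sketch of an IDP's surface area. These names are
// illustrative; they are not Archnet's actual API.
interface ProvisionRequest {
  service: string;                       // which service to deploy
  environment: "staging" | "production"; // where it should run
  gitRef: string;                        // commit or tag to roll out
}

interface Platform {
  // How code gets deployed
  deploy(req: ProvisionRequest): Promise<{ url: string }>;
  // How secrets are managed
  setSecret(service: string, key: string, value: string): Promise<void>;
  // How the system monitors itself
  health(service: string): Promise<"healthy" | "degraded" | "down">;
  // How failures get detected and fixed
  rollback(service: string, toGitRef: string): Promise<void>;
}
```

The point of paying for (or building) an IDP is that developers call this surface directly instead of filing tickets for each of the four concerns.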
I'm a software engineer in Japan. I've been using AI coding assistants — Claude Code, Cursor, Copilot — for about a year now. At some point I started keeping informal notes on how many prompt revisions it took to get production-quality output. After a few months, a pattern was hard to ignore. For tasks I described in Japanese: 4–6 revisions on average. For tasks I described in English: 1–3. Same AI. Same model. Roughly similar tasks.
We had ArgoCD running perfectly. Every deployment was reconciled from Git. Drift detection worked. Rollbacks were one click. Our GitOps setup was clean. Developers still couldn't provision a staging environment without pinging the platform team. That gap — between "GitOps in place" and "developers can actually self-serve" — is where most platform engineering teams get stuck. GitOps solves a real problem; it just isn't the self-service problem.
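To show where the gap sits, here is a sketch of a hypothetical helper that writes the ArgoCD Application a developer would otherwise have to request from the platform team. The field names follow ArgoCD's Application spec; the repo URL, project, and paths are placeholders, not ours.

```typescript
// Hypothetical helper: given a service and a branch, emit the ArgoCD
// Application a developer would otherwise ask the platform team to write.
// The repoURL, project, and paths below are placeholders.
function stagingApplication(service: string, branch: string) {
  return {
    apiVersion: "argoproj.io/v1alpha1",
    kind: "Application",
    metadata: { name: `${service}-staging`, namespace: "argocd" },
    spec: {
      project: "default",
      source: {
        repoURL: "https://github.com/example/deployments.git",
        targetRevision: branch,
        path: `services/${service}/staging`,
      },
      destination: {
        server: "https://kubernetes.default.svc",
        namespace: `${service}-staging`,
      },
      // Let ArgoCD reconcile automatically and clean up drift.
      syncPolicy: { automated: { prune: true, selfHeal: true } },
    },
  };
}

// Example: JSON.stringify(stagingApplication("billing", "feature/rate-limits"), null, 2)
```

Generating that manifest is trivial. Getting it committed to the GitOps repo still means a pull request someone has to open and approve, and that is the part GitOps alone doesn't automate.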
"Write a function to fetch the list of users." — same prompt, same codebase. Yesterday: getUsers(). Today: fetchUserList(). Tomorrow: loadAllUsers(). Six months of AI-assisted coding and I kept hitting this wall. My initial reaction was "maybe I need to write better prompts." I wrote better prompts. The functions got slightly better. New inconsistencies appeared elsewhere. The problem wasn't the A