More rules should mean better output. That's the intuition. I spent weeks building a comprehensive CLAUDE.md: 200 lines covering naming conventions, security rules, error handling, architectural patterns, import ordering, type safety requirements, and more. I was proud of it. I'd thought through every scenario. Then I scored the output: 79.0/100. My carefully crafted documentation was actively making the output worse.
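For a sense of what that density looked like, here is a hypothetical fragment in the spirit of that file. This is a reconstruction for illustration only; the section names and rule wording are mine, not quotes from the actual document:

```markdown
<!-- Hypothetical CLAUDE.md excerpt, for illustration only -->
## Naming
- Data-reading functions start with `get`; never use `fetch` or `load` synonyms.

## Error handling
- Every async function wraps awaits in try/catch and rethrows a typed error.

## Imports
- Order: Node built-ins, third-party packages, internal modules; alphabetize within each group.
```

Multiply that by a dozen more sections and you have the shape of it: every rule individually reasonable, all of them competing for the model's attention at once.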
I'm a software engineer in Japan. I've been using AI coding assistants (Claude Code, Cursor, Copilot) for about a year now. At some point I started keeping informal notes on how many prompt revisions it took to get production-quality output. After a few months, a pattern was hard to ignore. For tasks I described in Japanese: 4–6 revisions on average. For tasks in English: 1–3. Same AI. Same model. Roughly similar tasks.
For years, I called myself a web designer. Then a developer. Then a digital consultant. None of those titles ever felt quite right. Because clients weren't just asking me to build things. They were asking me to solve problems: slow sites, broken checkouts, confusing navigation, teams that couldn't figure out how to update their own content. That's when I realized what a technology solutions professional actually is.
"Write a function to fetch the list of users." — same prompt, same codebase. Yesterday: getUsers(). Today: fetchUserList(). Tomorrow: loadAllUsers(). Six months of AI-assisted coding and I kept hitting this wall. My initial reaction was "maybe I need to write better prompts." I wrote better prompts. The functions got slightly better. New inconsistencies appeared elsewhere. The problem wasn't the A