More rules should mean better output. That's the intuition. I spent weeks building a comprehensive CLAUDE.md — 200 lines covering naming conventions, security rules, error handling, architectural patterns, import ordering, type safety requirements, and more. I was proud of it. I'd thought through every scenario. Then I scored the output: 79.0 / 100. My carefully crafted documentation was actively making the output worse.
I'm a software engineer in Japan. I've been using AI coding assistants — Claude Code, Cursor, Copilot — for about a year now. At some point I started keeping informal notes on how many prompt revisions it took to get production-quality output. After a few months, a pattern was hard to ignore. For tasks I described in Japanese: 4–6 revisions on average. For the same kinds of tasks described in English: 1–3. Same AI. Same model. Roughly similar task complexity.
Power BI is a business analytics service from Microsoft that lets users visualise data and share interactive dashboards across their organisation. While Power BI can pull data from many sources, its real potential shows when it is connected to robust data sources like SQL databases. SQL databases—such as PostgreSQL, MySQL, and SQL Server—are the industry standard for storing and querying structured business data.
Some time ago, I was building a chat application using AWS WebSocket API Gateway. Things were going smoothly. I created a WebSocket API Gateway and added $connect, $disconnect, and sendMessage/addGroup routes. From the frontend (React) side, everything was fire-and-forget. You send a message, and the onMessageHandler takes care of it 💪🏼 But then a new requirement came in: uploading files using S3 signed URLs.
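To make that setup concrete, here is a minimal sketch of the fire-and-forget frontend side, assuming the native browser WebSocket API; the endpoint URL and every payload field other than the sendMessage action are made up for illustration.

```typescript
// A minimal sketch of the fire-and-forget pattern described above, using the
// browser WebSocket API. The endpoint URL and payload shape are illustrative
// assumptions; only the sendMessage route name comes from the setup above.
const socket = new WebSocket("wss://your-api-id.execute-api.us-east-1.amazonaws.com/prod");

// Every message the server pushes lands in this one handler; nothing ties a
// response back to the particular send that triggered it.
socket.onmessage = (event: MessageEvent) => {
  const payload = JSON.parse(event.data);
  console.log("received:", payload);
};

// Sending is one-way: invoke the sendMessage route and move on.
function sendChatMessage(groupId: string, text: string): void {
  socket.send(JSON.stringify({ action: "sendMessage", groupId, text }));
}
```

API Gateway WebSocket APIs typically pick the route from the action field in the message body, which is why the payload carries action: "sendMessage". This pattern is comfortable for chat, where you never wait on a specific reply; the friction starts when a single send, like requesting a signed S3 URL, needs its own matching response.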
"Write a function to fetch the list of users." — same prompt, same codebase. Yesterday: getUsers(). Today: fetchUserList(). Tomorrow: loadAllUsers(). Six months of AI-assisted coding and I kept hitting this wall. My initial reaction was "maybe I need to write better prompts." I wrote better prompts. The functions got slightly better. New inconsistencies appeared elsewhere. The problem wasn't the A