Reaching an annual salary of ¥8,000,000 is often seen as a major milestone for software engineers in Japan in 2026. On paper, it sounds like a ticket to a comfortable, upper-middle-class life in Tokyo. But is 8 million yen really a good salary in Tokyo? If you are coming from abroad, or if you've only looked at the gross figure on your offer letter, you might be walking into a "logic bug" that
Background

A nasty surprise

Last summer, while trying to deliver a feature for one of our customers, I ran into a nasty situation. The software we were developing depended on a production-grade license of Gurobi. Everyone was on vacation except my team and some unrelated staff, so developing the feature was, in principle, blocked. As I had learnt from some other situations, research
Technical debt and AI: is it gone? Lorenzo Battilocchi May 4 #ai #programming #management #technology
SOFTWARE ARCHITECTURE & REFACTORING

3 Domain-Centric Architectures Every Software Architect Should Know

The first concern of the architect is to make sure that the house is usable; it is not to ensure that the house is made of brick. — Uncle Bob

The term "domain" has appeared in software bibles for a very long time now and is heavily discussed in the book Domain-Driven
If you have 30 seconds. Versioned memory in a Claude Code workflow has a side effect nobody warns you about: a memorized rule that fits the symptom plausibly short-circuits verification, even when it doesn't apply to the specific counter you're staring at. I cost myself twenty minutes of SQL exploration last week because a rule shaped like the bug — without being the bug — let me skip reading the
We tried to make everything perfect. Strict validation looked good on paper. In reality, the system was correct, but unusable. So we changed approach and allowed partial data. The system became less perfect, but it started working better. In real systems, perfection creates friction. This shows up often in BrainPack deployments. When multiple systems are connected, trying to make everything perfect upfront us
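The strict-versus-partial tradeoff above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical record schema with `id`, `name`, `email`, and `region` fields (none of these names come from the original):

```python
from typing import Optional

def validate_strict(record: dict) -> dict:
    """Reject any record missing a field: 'correct but unusable'."""
    required = {"id", "name", "email", "region"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return record

def validate_partial(record: dict) -> Optional[dict]:
    """Accept partial data: require only the identity field and fill
    the rest with explicit None so downstream code can cope."""
    if "id" not in record:
        return None  # truly unusable without an identity
    defaults = {"name": None, "email": None, "region": None}
    return {**defaults, **record}

# A record with only an id and a name is rejected by the strict path
# but flows through the lenient one with the gaps made explicit.
rec = {"id": 42, "name": "Ada"}
print(validate_partial(rec))
```

The design point is that the lenient path does not silently accept garbage; it makes the missing fields explicit, which is what lets the connected systems keep moving.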
AI Prompting techniques: Zero-shot, One-shot, Few-shot After using ChatGPT and other AI tools, I used to think prompts were just simple text inputs that AI models magically processed. But as I mentioned in my Day 1 post: AI models are just next-word predictors, not thinkers. They predict based on training data (though modern ones now use real-time search and tool calling for better results). The