The Wall Street Journal ran a piece yesterday on JustPaid, a 9-person Mountain View startup. They used OpenClaw and Claude Code to stand up seven AI agents that write code, review it, and run QA around the clock. In one month: 10 major features shipped, each of which would have taken a human engineer a month or more. This story is getting passed around as proof that the autonomous engineering team is here.
More rules should mean better output. That's the intuition. I spent weeks building a comprehensive CLAUDE.md: 200 lines covering naming conventions, security rules, error handling, architectural patterns, import ordering, type safety requirements, and more. I was proud of it. I'd thought through every scenario. Then I scored the output: 79.0/100. My carefully crafted documentation was actively making things worse.
I went into a bunch of OpenClaw discussions expecting the usual advice about subagents: better prompts, cleaner folders, maybe some heroic config. What I found was more interesting. The OpenClaw setups that actually seem to hold up are not just "one agent with more prompts." They are separate services with separate trust zones. The pattern that keeps showing up looks like this: a librarian agent a
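One minimal way to sketch the "separate services with separate trust zones" idea (the agent names and capability sets here are hypothetical illustrations, not taken from any specific OpenClaw setup): each agent gets an explicit tool allow-list, and a broker refuses any call outside that agent's zone.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentService:
    name: str
    allowed_tools: frozenset  # this agent's trust zone

class ToolBroker:
    """Routes tool calls, enforcing each agent's allow-list."""
    def __init__(self, tools):
        self._tools = tools  # tool name -> callable

    def call(self, agent: AgentService, tool: str, *args):
        if tool not in agent.allowed_tools:
            raise PermissionError(f"{agent.name} may not use {tool}")
        return self._tools[tool](*args)

# Hypothetical zones: a read-only "librarian" vs. a write-capable "coder".
librarian = AgentService("librarian", frozenset({"read_file"}))
coder = AgentService("coder", frozenset({"read_file", "write_file"}))

broker = ToolBroker({
    "read_file": lambda path: f"<contents of {path}>",
    "write_file": lambda path, data: f"wrote {len(data)} bytes to {path}",
})

print(broker.call(librarian, "read_file", "README.md"))
# broker.call(librarian, "write_file", ...) raises PermissionError
```

The point of the pattern is that the boundary is enforced by the broker, not by the agent's prompt, so a misbehaving agent simply cannot reach tools outside its zone.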
Hey, folks! Lately I've noticed a huge buzz around a tool that has been getting a lot of attention, and for good reason: OpenClaw. Since I live immersed in this world of AI and automation, I recently recorded a video (it's up on my channel; watch it on YouTube) precisely to demystify the whole thing. And today I came here to Dev.to so we can talk a bit more about the
Have you ever looked at code you wrote six months ago and thought, "Who wrote this monster?" Relax, it happens to all of us. In software engineering, writing code that a machine understands is the easy part. The real challenge is writing code that other humans (including your future self) can understand, maintain, and scale. This is exactly where Software Design Principles come into play. In this
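As a quick, hypothetical illustration of the kind of principle such a series covers (the function names are mine, not from the article), a Single Responsibility refactor might look like:

```python
# Before: one function mixes parsing, grading, and formatting,
# so a change to any one concern risks breaking the others.
def report(raw):
    parts = raw.split(",")
    if len(parts) != 2:
        raise ValueError("expected 'name,score'")
    name, score = parts[0].strip(), int(parts[1])
    return f"{name}: {'PASS' if score >= 50 else 'FAIL'}"

# After: each function has a single reason to change.
def parse_record(raw):
    """Turn 'name,score' into a (name, score) tuple."""
    parts = raw.split(",")
    if len(parts) != 2:
        raise ValueError("expected 'name,score'")
    return parts[0].strip(), int(parts[1])

def grade(score):
    """Grading policy lives in exactly one place."""
    return "PASS" if score >= 50 else "FAIL"

def format_report(name, score):
    """Presentation concern, separate from parsing and policy."""
    return f"{name}: {grade(score)}"

print(format_report(*parse_record("Ada, 91")))  # → Ada: PASS
```

Both versions produce the same output; the second is the one your future self can change safely, because the pass threshold, the input format, and the report layout can each evolve independently.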
Part 1 of 5 in The New Engineering Contract — what it means to lead engineers when AI is doing more of the coding. SWE-CI tested 18 AI models across 71 consecutive commits. Most broke something on commit 47 they'd already broken on commit 1. That's not an intelligence problem. That's a learning system that isn't learning. A paper made me uncomfortable this month. Not because of what it found about