The blog you're reading right now was built in a single conversation with Claude Code, Anthropic's CLI tool, in about 30 minutes. No all-nighter, no purchased template, no WordPress. One working session in the terminal. Here's exactly how it went: real code, real commands, and what almost went wrong.

My portfolio, web-developpeur.com, does its job: showcasing my background and projects. But a static sit
When you have five unrelated questions, should you pack them into one message to the LLM, or send five requests simultaneously? Which is faster? Splitting them into multiple independent parallel requests is almost always faster. This isn't a gut feeling: it follows from the underlying inference mechanism of LLMs. Let's walk through the reasoning from first principles.

To understand this problem, you firs
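The intuition can be sketched with a small simulation. LLM decoding is sequential, so a response's latency grows roughly with the number of output tokens; modern inference servers, meanwhile, batch concurrent requests, so five parallel streams decode side by side. The snippet below is a minimal sketch under those assumptions: `fake_llm` is a hypothetical stand-in for an API call, with a sleep proportional to the answer length in place of real decoding.

```python
import asyncio
import time

TOKEN_LATENCY = 0.01  # assumed ~10 ms per decoded token (illustrative only)

async def fake_llm(prompt: str, answer_tokens: int) -> str:
    # Hypothetical LLM call: latency scales with output length,
    # mimicking sequential token-by-token decoding.
    await asyncio.sleep(answer_tokens * TOKEN_LATENCY)
    return f"answer to {prompt!r}"

async def batched(questions: list[str]) -> str:
    # One combined request: the model must decode all five answers
    # in a single sequential stream (~5x the output tokens).
    combined = " ".join(questions)
    return await fake_llm(combined, answer_tokens=len(questions) * 20)

async def parallel(questions: list[str]) -> list[str]:
    # Five independent requests: the server batches them, so each
    # ~20-token answer decodes concurrently with the others.
    return await asyncio.gather(*(fake_llm(q, answer_tokens=20) for q in questions))

async def main() -> tuple[float, float]:
    qs = [f"question {i}" for i in range(5)]
    t0 = time.perf_counter()
    await batched(qs)
    t_batch = time.perf_counter() - t0
    t0 = time.perf_counter()
    await parallel(qs)
    t_par = time.perf_counter() - t0
    print(f"batched: {t_batch:.2f}s, parallel: {t_par:.2f}s")
    return t_batch, t_par

t_batch, t_par = asyncio.run(main())
```

With these toy numbers the batched call takes roughly five times as long as the parallel fan-out, which is the effect the rest of this post explains from the mechanics of inference.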