Hi, we are back again. Previously, I created a simple Google Cloud VPC and then improved the configuration by introducing variables. This time, I want to continue with another Terraform concept: outputs. We won't reuse the previous code, though, because adding outputs for a single VPC is too simple. So, I made the lab slightly more practical. In this lab, I will create: a custom VPC network, a subnet
In Part 1 of this series, I enumerated a few obstacles for engineers taking vibe coding from side projects to production. Part 2 looked at AI usage from the manager's perspective: measuring adoption, understanding the gap, coaching to fill the gap. Both of those were "Day 1" problems: getting started, getting people on board, figuring out the tools. This article focuses on what comes next: the vib
This technical post walks through the design and implementation of Secure Playground: a local web app that simulates prompt-injection attacks against large language models and demonstrates simple defenses. The goals: provide a minimal, reproducible environment to test payloads and defensive strategies; make it easy to add new providers and run mutation-based red-team experiments; offer a leaderboard and scor
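To make the idea of a "simple defense" concrete, here is a minimal sketch of two common techniques: a naive pattern-based detector and delimiter wrapping for untrusted input. The pattern list, function names, and `<untrusted>` tag are illustrative assumptions, not Secure Playground's actual implementation.

```python
import re

# Illustrative patterns only -- a real red-team harness would use mutation-based
# payload generation, not a fixed keyword list.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
]

def flag_injection(untrusted_text: str) -> bool:
    """Return True if the text matches a known prompt-injection pattern."""
    return any(p.search(untrusted_text) for p in INJECTION_PATTERNS)

def wrap_untrusted(text: str) -> str:
    """Fence untrusted content in delimiters so the system prompt can
    instruct the model to treat everything inside as data, not commands."""
    return f"<untrusted>\n{text}\n</untrusted>"
```

Both defenses are easy to bypass in isolation, which is exactly why a playground for iterating on payloads and scoring defenses is useful.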
So I made a bad trade in my fantasy baseball league. Dropped Kaz Okamoto because — according to my data — he’d been cold for two weeks. In reality, he’s been on a tear for the last 9 days. 😅 This was a bad decision made because of bad data — my stats cron job had hit a rate limit, exited with no errors, and my FastAPI backend kept serving a stale JSON snapshot. Well, I’d been meaning to fix that
If you use Claude Code or Opencode, you are already paying for an LLM subscription. Before v0.3.0, running Synthadoc also required a separate API key: Anthropic, OpenAI, Gemini, or one of the others. v0.3.0 removes that requirement. Set provider = "claude-code" in one config file and your coding tool subscription becomes the brain of your personal wiki. No additional API key. No additional cost.
Grom — Free, Open-Source AI Coding Assistant for VS Code (Ollama, LM Studio, Anthropic, and More) I've been building Grom, a free and open-source VS Code extension that brings agentic AI coding to your machine. No telemetry, no mandatory account, no subscription. If you use Ollama or LM Studio, nothing ever leaves your machine. Grom is a chat + agentic coding extension that lives in the VS Code
Last week, a Cursor agent running on Claude Opus 4.6 deleted a startup's production database and its backups in nine seconds. The agent had been asked to fix a credential mismatch in staging. It decided to delete a Railway volume to "fix" it instead — using an over-scoped API token it found in an unrelated file. Railway stores volume backups in the same volume, so one destructive call zeroed every
A 16-pixel hero in your macOS menu bar. Watches LLM traffic. That's it. You remember RunCat — the kitten in your menu bar that runs faster when your CPU is busy. Almost a decade old. Adorable. Useful. Asks nothing of you. AI-native development needs the same thing for a different signal. Not CPU. Agent traffic. Is there a live LLM request flowing right now, or is everything quiet? That's why I bui