Some time ago I shipped a desktop app to generate LLM fine-tuning datasets. It worked: my Qwen2.5-Coder-7B fine-tune jumped from 55.5% → 72.3% on HumanEval. The whole pipeline ran on OpenRouter — pick a model, click Generate, get JSONL. v1.0.3-beta ships multi-provider LLM support — Ollama, LM Studio, llama.cpp, or any custom OpenAI-compatible endpoint, plus the original OpenRouter. Mix and match: g
I needed to coordinate background scripts running across different machines. The obvious answer was Redis. Everyone uses Redis for this. The tutorials all use Redis. The Stack Overflow answers all say "just use Redis." So I looked at what deploying Redis would actually cost me:

- A running Redis server I had to maintain
- A broker to connect workers to it
- Celery or RQ on top of that
- Memory-based stora
A production-grade embedded system enabling communication across speech, text, Morse, and haptic signals within a single unified pipeline. Official Project Page: https://anandps.in/projects/unified-assistive-communication-system GitHub Repository: https://github.com/anand-ps/unified-assistive-communication-system

Problem: Assistive communication systems are fragmented. Most tools so
I've been coding for 25 years, and am not new to the "X is now dead; long live X" arguments. The latest fad seems to be the claim that "SaaS is dead" because anyone can "create their own software; so why rely on others?" Let's dissect this. There are two important points to note. Even with AI: Software has not disappeared (and we all agree with that easily). Code has not disappeared. It's just tha
Building AI calling agents shouldn't require a commercial license or massive per-minute markups. If you are a Python developer, you should be able to spin up a sub-500ms-latency voice agent on your own machine.

Prerequisites:
- Python 3.10+
- A Twilio or Telnyx SIP Trunk
- LiveKit Credentials
- An OpenAI API Key

First, clone the Siphon repository and install the requirements: pip install siphon-ai. Next, c
I Published My First Python Package to PyPI — A CLI Tool for Docker Compose

I did it. I published my first package to PyPI. It's called fast-dcp, and honestly, it started as a personal annoyance. I kept typing docker compose up --build and docker compose exec app bash dozens of times a day. My fingers got tired. So I built something to fix it. fast-dcp is a CLI tool that provides shorthand aliase
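The excerpt cuts off before fast-dcp's actual command table, but the shape of such a tool is simple: map short subcommands to full docker compose invocations and pass through any extra arguments. A hypothetical sketch (the shorthands `up` and `sh` are my illustration, not fast-dcp's real interface):

```python
import subprocess
import sys

# Hypothetical alias table -- fast-dcp's real shorthands may differ.
ALIASES = {
    "up": ["docker", "compose", "up", "--build"],
    "sh": ["docker", "compose", "exec", "app", "bash"],
}

def expand(argv: list[str]) -> list[str]:
    """Translate a shorthand invocation into the full docker compose command."""
    cmd, rest = argv[0], argv[1:]
    return ALIASES[cmd] + rest  # extra args pass through unchanged

if __name__ == "__main__":
    subprocess.run(expand(sys.argv[1:]), check=False)
```

Packaging that as a console-script entry point is all it takes to turn a dictionary of aliases into a pip-installable CLI.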
A while back I got curious about whether you could tell the difference between a company paying Cloudflare serious money versus one that signed up for the free plan and forgot about it. First thing I tried: response headers. Don’t bother. Cloudflare returns identical header names whether you’re running on Workers, sitting on an Enterprise contract, or using the free tier. That’s intentional – Clou
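If you want to reproduce the header check yourself, it's a few lines: collect the Cloudflare-set headers from a response and compare across sites on different plans. A small sketch — the header names below (cf-ray, server, cf-cache-status) are ones Cloudflare commonly returns, but treat the exact set as an assumption:

```python
# Extract the headers Cloudflare typically sets from a response's header dict.
# The punchline from the text above: none of these encode the plan tier, so a
# free-plan site and an Enterprise one look identical here.
CF_HEADER_NAMES = ("cf-ray", "server", "cf-cache-status")

def cloudflare_fingerprint(headers: dict[str, str]) -> dict[str, str]:
    """Return the Cloudflare-relevant headers, case-insensitively."""
    lower = {k.lower(): v for k, v in headers.items()}
    return {name: lower[name] for name in CF_HEADER_NAMES if name in lower}

# Live usage would feed in e.g.:
#   dict(urllib.request.urlopen("https://example.com").headers)
```

Running this against a handful of known free-tier and Enterprise sites is the quickest way to convince yourself the headers are a dead end.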
The code came back. It looks right. Do you ship it? Post 2 gave you structured prompts that produce better AI output. But "better" isn't "perfect." Even a well-constrained prompt will occasionally slip a database call into an application service, or sneak a business rule into a route handler. You still have to review the diff. Reviewing AI-generated code the same way you'd review a colleague's PR
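The two slip-ups named above — a database call inside an application service, a business rule inside a route handler — are exactly the kind of thing a mechanical first pass can flag before a human reads the diff. A rough sketch of that idea, assuming a layout where files under services/ must not import the db module directly (the module names and layout are my illustration, not from the post):

```python
import re

# Hypothetical layering rule: service-layer files must not import the db
# layer directly. Module names ("app.db", "/services/") are illustrative.
FORBIDDEN_IN_SERVICES = re.compile(r"^\s*(from|import)\s+(app\.)?db\b", re.MULTILINE)

def flag_layer_violations(path: str, source: str) -> list[str]:
    """Return warnings for service files that reach into the database layer."""
    warnings: list[str] = []
    if "/services/" in path and FORBIDDEN_IN_SERVICES.search(source):
        warnings.append(f"{path}: service imports the db layer directly")
    return warnings
```

A check like this doesn't replace the review; it just guarantees the reviewer's attention goes to the judgment calls, not the pattern-matchable mistakes.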