Ollama makes it easy to run open-weight models locally, but it does not ship an MCP client. MCP is handled at the client layer, not inside the LLM itself. To use MCP servers with a local Ollama model, you need a bridge that speaks MCP on one side and the Ollama API on the other. MCPFind indexes 832 servers in the ai-ml category, averaging 114.27 stars per server, the highest average of any category.
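Concretely, the bridge is a small loop: connect to the MCP server, translate its tool list into the function-calling schema Ollama expects, let the model pick a tool, then forward the call back over MCP. Here is a minimal sketch, assuming the official mcp Python SDK and the ollama Python client; the server command ("npx -y some-mcp-server") is a hypothetical stand-in, and the response-object shapes assume a recent ollama-python release.

```python
# Minimal MCP <-> Ollama bridge sketch.
# Assumptions: official `mcp` Python SDK, `ollama` Python client (0.4+),
# and a hypothetical MCP server command standing in for a real one.
import asyncio

import ollama
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical server invocation; replace with a real MCP server command.
SERVER = StdioServerParameters(command="npx", args=["-y", "some-mcp-server"])


async def main() -> None:
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Translate MCP tool descriptions into the schema Ollama expects.
            listed = await session.list_tools()
            tools = [
                {
                    "type": "function",
                    "function": {
                        "name": t.name,
                        "description": t.description or "",
                        "parameters": t.inputSchema,
                    },
                }
                for t in listed.tools
            ]

            messages = [{"role": "user", "content": "What tools do you have? Use one."}]
            response = ollama.chat(model="llama3.1", messages=messages, tools=tools)

            # If the model asked for a tool, forward the call over MCP.
            for call in response.message.tool_calls or []:
                result = await session.call_tool(
                    call.function.name, dict(call.function.arguments)
                )
                print(result.content)


asyncio.run(main())
```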
The Model Context Protocol (MCP) has become the de facto standard for connecting AI agents to external tools and APIs. Governed by the Linux Foundation since early 2025 and adopted by OpenAI, Anthropic, Microsoft, and Vercel, MCP is the USB-C port of the AI ecosystem: one protocol that lets any LLM application talk to any tool server. But there's a gap between reading the spec and building something that works.
If you have spent any real time with Claude Code, you have probably noticed the same problem I did. You write the same instructions in the prompt every other day. "Use four-space indentation here." "Always run the linter after edits." "Format commit messages this way." After the third or fourth repeat, it stops feeling like a prompt and starts feeling like missing config. Skills are how Claude Code turns those repeated instructions into configuration you write once.
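For a sense of what that config looks like, here is a minimal sketch of a skill file, assuming the SKILL.md-with-YAML-frontmatter layout Claude Code uses for skills. The skill name, the path (.claude/skills/project-conventions/SKILL.md), and the rules below are illustrative, lifted from the examples above rather than from any real project.

```markdown
---
name: project-conventions
description: Formatting and workflow rules for this repository. Use when editing code or writing commit messages.
---

When working in this repository:

- Use four-space indentation.
- Run the linter after every edit.
- Write commit messages as a one-line imperative summary with no trailing period.
```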
I'm a software engineer and musician, and I got tired of every metronome app out there feeling like it was designed in 2005. So I built my own. Yames (Yet Another Metronome Everyone Skips) is a free, open-source desktop metronome built with Rust and Tauri. Sub-millisecond timing precision, 10+ handcrafted themes, and it's designed to get out of your way so you can focus on practice. I'm looking for feedback.
Adding email and calendar tools to an AI agent is mostly an exercise in restraint. Give it 50 commands and the agent gets confused. Give it 5 carefully chosen ones and it punches above its weight. After running agents against the Nylas CLI for a few months, these are the five I keep coming back to. Each gets exposed via MCP (nylas mcp install) so the agent can call them directly. First on the list: nylas email send.
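As a sketch of what that restraint looks like on the agent side: one tight schema per allowed command, five in total. The parameter names below are illustrative JSON Schema fields, not the Nylas CLI's actual flags and not the schema its MCP server publishes.

```python
# A deliberately small tool surface: one schema per command the agent may call.
# Parameter names are illustrative, not the Nylas CLI's actual flags.
SEND_EMAIL_TOOL = {
    "type": "function",
    "function": {
        "name": "email_send",
        "description": "Send a single email from the connected account.",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {"type": "string", "description": "Recipient address"},
                "subject": {"type": "string"},
                "body": {"type": "string", "description": "Plain-text body"},
            },
            "required": ["to", "subject", "body"],
        },
    },
}

# Five of these, not fifty: the entire tool list the agent ever sees.
AGENT_TOOLS = [SEND_EMAIL_TOOL]  # plus the other four definitions
```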
You SSH'd into a fresh Linux box and you need to send an email. Maybe a backup completed. Maybe a deploy succeeded. Maybe a process crashed and you want a stack trace in your inbox. The traditional path: install Postfix, edit main.cf, configure a smart relay, generate SASL credentials, restart the daemon, and pray nothing else on the box uses port 25. That is the 30-minute path. The 60-second path skips all of that.
Your password-reset flow needs an inbox to test against. Your invitation flow too. Your email-verification gate too. The classic setup is a test alias on a shared mailbox, polling Gmail's API, hoping nothing else lands while the test runs. It is fragile, it leaks state across PRs, and your credentials live in CI. A managed agent account flips this. Each PR gets a fresh inbox.
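The CI-side pattern is a short poll-and-assert loop against that per-PR inbox. A minimal sketch follows; fetch_messages and the inbox_client fixture stand in for whatever client the managed agent account exposes, and the subject line and timeout values are illustrative.

```python
# Poll the per-PR inbox until the expected message lands, then assert on it.
# `fetch_messages` and `inbox_client` are placeholders for whatever client the
# managed agent account exposes; subject and timeouts are illustrative.
import time
from typing import Callable


def wait_for_message(
    fetch_messages: Callable[[], list[dict]],
    subject_contains: str,
    timeout_s: float = 30.0,
    poll_every_s: float = 2.0,
) -> dict:
    """Poll a fresh test inbox until a matching message arrives."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        for msg in fetch_messages():
            if subject_contains in msg.get("subject", ""):
                return msg
        time.sleep(poll_every_s)
    raise AssertionError(f"No message matching {subject_contains!r} within {timeout_s}s")


def test_password_reset_email(inbox_client):
    # inbox_client is a per-PR fixture; nothing shared, nothing leaking across runs.
    msg = wait_for_message(inbox_client.list_messages, subject_contains="Reset your password")
    assert "reset" in msg["body"].lower()
```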
My inbox averages 200 messages a workday. Half are noise. A quarter need a fast acknowledgement. The remainder need real work. The split is mostly stable, so the triage rules are mostly stable, so it is a good fit for an LLM. I wired Aider to it. Aider is the AI pair-programming CLI: it has a shell, it can call commands, and it speaks Python natively. Pairing it with the Nylas CLI gives a triage assistant that can act on the inbox directly.
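Because the split is stable, most of the triage never needs the model at all. Here is a sketch of that rule layer, independent of any particular mail API; the buckets mirror the split above, and the matching heuristics are illustrative, not the actual ruleset.

```python
# The triage layer: cheap deterministic rules first, the LLM only for what's left.
# Buckets mirror the split above (noise / quick ack / real work); the matching
# heuristics are illustrative.
from dataclasses import dataclass
from enum import Enum, auto


class Bucket(Enum):
    NOISE = auto()       # archive without reading
    QUICK_ACK = auto()   # needs a one-line reply today
    REAL_WORK = auto()   # needs a task, not just a reply


@dataclass
class Message:
    sender: str
    subject: str
    body: str


NOISE_SENDERS = ("noreply@", "notifications@", "newsletter@")
ACK_KEYWORDS = ("fyi", "heads up", "confirming", "scheduled")


def triage(msg: Message) -> Bucket:
    """Deterministic rules; anything ambiguous falls through to REAL_WORK
    so the agent (or a human) looks at it instead of silently dropping it."""
    if msg.sender.startswith(NOISE_SENDERS):
        return Bucket.NOISE
    if any(k in msg.subject.lower() for k in ACK_KEYWORDS):
        return Bucket.QUICK_ACK
    return Bucket.REAL_WORK
```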