This blog was originally published on Descope.

OpenAI's Custom GPTs offer a powerful way to create AI agents that interact directly with your APIs through natural-language conversation. Imagine you have a deployed FastAPI application that exposes DevOps tools: triggering CI/CD workflows, fetching deployment logs, pulling usage analytics, and other operational tasks. While integrating your API…
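The tool-routing layer such an API performs can be sketched in plain Python. This is a minimal sketch, not the app from the post: the handler names (`trigger_workflow`, `get_deployment_logs`) and payload shapes are made up, and a real deployment would expose them as FastAPI routes behind the Custom GPT action.

```python
# Minimal sketch of the dispatch layer a Custom GPT action would call.
# Handler names and payload shapes are illustrative, not from the original app.

def trigger_workflow(payload):
    # In a real app this would call the CI system's API.
    return {"status": "queued", "workflow": payload["name"]}

def get_deployment_logs(payload):
    # In a real app this would query the deployment backend.
    return {"service": payload["service"], "lines": ["deploy ok"]}

TOOLS = {
    "trigger_workflow": trigger_workflow,
    "get_deployment_logs": get_deployment_logs,
}

def dispatch(tool_name, payload):
    # The GPT sends a tool name plus a JSON body; route it to a handler.
    handler = TOOLS.get(tool_name)
    if handler is None:
        return {"error": f"unknown tool: {tool_name}"}
    return handler(payload)
```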
The cost incident that started this

Three weeks after we put our chatbot into production, I opened the OpenAI billing dashboard on a Monday morning and stopped breathing for a second. One session — not one user, one session — had burned through roughly four times the daily budget for the entire app, over a single afternoon. The session wasn't malicious. It was a test account someone forgot to lo…
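A guard against exactly this failure mode is cheap to sketch: cap spend per session, not just per account. The cap and per-token price below are assumptions for illustration, not the author's numbers.

```python
# Sketch of a per-session spend cap. The cap and the per-1k-token price
# are made-up numbers; plug in your model's real pricing.

class SessionBudget:
    def __init__(self, max_usd_per_session=2.0, usd_per_1k_tokens=0.01):
        self.max_usd = max_usd_per_session
        self.rate = usd_per_1k_tokens
        self.spent = {}  # session_id -> USD spent so far

    def record(self, session_id, tokens):
        # Call after each model response with the tokens it consumed.
        cost = tokens / 1000 * self.rate
        self.spent[session_id] = self.spent.get(session_id, 0.0) + cost

    def allowed(self, session_id):
        # Refuse further model calls once a session exceeds its cap.
        return self.spent.get(session_id, 0.0) < self.max_usd
```

A runaway test session then gets cut off after its cap instead of burning four days of budget unnoticed.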
An opinionated list of Python frameworks, libraries, tools, and resources
We’ve been running a series of experiments using ChatGPT 5.4 integrated into a website chatbot across different environments:
🌐 a main website

🎯 Goal: simulate realistic user behavior and observe how the model responds over time.

⚙️ Test setup
The chatbot is designed to (no self promo here, just context):
📌 answer strictly based on website content (RAG-like approach)

Over time, we intentionally…
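The "answer strictly based on website content" constraint is typically enforced by retrieving matching passages first and refusing when nothing matches. A toy keyword-overlap sketch of that gate — real setups use embeddings, and this is not the authors' actual pipeline:

```python
# Toy sketch of the "answer only from site content" gate.
# Real RAG setups use embedding similarity; keyword overlap stands in here.

SITE_PASSAGES = [
    "Our support team is available weekdays from 9am to 5pm.",
    "Shipping within the EU takes three to five business days.",
]

def retrieve(question, passages=SITE_PASSAGES, min_overlap=2):
    q_words = set(question.lower().split())
    best, best_score = None, 0
    for p in passages:
        score = len(q_words & set(p.lower().split()))
        if score > best_score:
            best, best_score = p, score
    # Refuse when the question doesn't match site content well enough.
    return best if best_score >= min_overlap else None

def answer(question):
    passage = retrieve(question)
    if passage is None:
        return "I can only answer questions about this website."
    return f"Based on the site: {passage}"
```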
How I added LLM fallback to my OpenAI app in 10 minutes

You're running a production app on OpenAI. One Tuesday morning it goes down. Your app returns 500s. You spend an hour refreshing status.openai.com. There's a better setup. Here's how to add provider fallback to any OpenAI-SDK app without rewriting anything. When you call OpenAI directly, you have one point of failure:

from openai import OpenAI
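The fallback pattern itself is provider-agnostic. A minimal sketch, assuming each provider is wrapped in a callable — in an OpenAI-SDK app these callables would wrap clients constructed with different `base_url` values, and you would catch the SDK's specific error types rather than bare `Exception`:

```python
# Minimal provider-fallback sketch. Each "provider" is a callable that
# either returns a response or raises; real ones would wrap SDK clients.

def call_with_fallback(providers, prompt):
    errors = []
    for name, call in providers:
        try:
            return call(prompt)
        except Exception as exc:  # in practice, catch the SDK's error types
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# Stand-ins for a down primary and a healthy backup:
def flaky_primary(prompt):
    raise ConnectionError("primary is down")

def backup(prompt):
    return f"backup answered: {prompt}"
```

Usage: `call_with_fallback([("primary", flaky_primary), ("backup", backup)], "hi")` returns the backup's answer instead of surfacing a 500.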
Some time ago, I was building a chat application using the AWS WebSocket API Gateway. Things were going smoothly. I created a WebSocket API Gateway and added $connect, $disconnect, and sendMessage/addGroup routes. From the frontend (React) side, everything was fire-and-forget: you send a message, and the onMessageHandler takes care of it 💪🏼 But then a new requirement arrived: uploading files using S3 signed URLs…
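Conceptually, a signed upload URL is just a URL carrying an expiry and an HMAC signature the server can verify statelessly. A stdlib-only sketch of that idea — real S3 presigned URLs are generated with boto3's `generate_presigned_url` and AWS Signature V4, not this toy scheme:

```python
import hashlib
import hmac
import time

# Conceptual sketch only: a URL that carries an expiry and a signature the
# server can verify without a database. Real S3 presigning uses AWS SigV4.

SECRET = b"server-side-secret"  # hypothetical shared secret

def sign_upload_url(bucket, key, expires_in=3600, now=None):
    expiry = int(now if now is not None else time.time()) + expires_in
    msg = f"{bucket}/{key}?expires={expiry}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"https://{bucket}.example.com/{key}?expires={expiry}&sig={sig}"

def verify_upload_url(url, now=None):
    base, sig = url.rsplit("&sig=", 1)
    host_path, expires = base.split("?expires=")
    bucket = host_path.split("//")[1].split(".example.com/")[0]
    key = host_path.split(".example.com/")[1]
    msg = f"{bucket}/{key}?expires={expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    fresh = (now if now is not None else time.time()) < int(expires)
    return fresh and hmac.compare_digest(sig, expected)
```

The client uploads directly to the URL; the storage service only needs the secret and a clock to accept or reject it.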
OpenAI revenue is still the number people reach for when they want a leaderboard. But the cleaner frame is different: Anthropic appears to be building a different kind of AI business, one centered on enterprise customers, safety positioning, and less dependence on mass-market fame. That distinction matters because public discussion keeps collapsing three separate things into one scorecard: revenue…
LLM Foundry: the boring stack that makes an LLM actually useful

Most AI projects are built backwards. People start with the model and only later discover they needed a memory system, semantic retrieval, tool use, tests, and a fallback plan for when one provider decides to nap for no visible reason. That is the part I care about now. LLM Foundry is the workshop around an LLM, not the model itself…
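That "workshop around the model" framing can be sketched as a thin pipeline whose parts are swappable. The names below are illustrative, not LLM Foundry's actual API: the model call and the retriever are injected callables, so either can be replaced by a fallback chain or a real vector store.

```python
# Illustrative skeleton of the stack around the model: memory, retrieval,
# and the model call composed into one entry point. Names are made up.

class Assistant:
    def __init__(self, model_call, retrieve):
        self.model_call = model_call  # the LLM (or a fallback chain)
        self.retrieve = retrieve      # semantic-retrieval stand-in
        self.memory = []              # conversation memory

    def ask(self, question):
        context = self.retrieve(question)
        self.memory.append(("user", question))
        reply = self.model_call(question=question, context=context,
                                history=list(self.memory))
        self.memory.append(("assistant", reply))
        return reply
```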