Hey DEV community 👋 I recently built and deployed a full-stack AI system that predicts medical specialties from clinical text using ClinicalBERT, and I wanted to share the full journey from training to deployment. This is part of my project under GradienNinja / Astrolabsoft. Link: https://astrolab-medical-ai.netlify.app/

I built an AI system that:
- Takes clinical notes as input
- Predicts the most l
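The post's pipeline fine-tunes ClinicalBERT for specialty classification; as a rough sketch of the serving side, here is the shape of the predict function such a system needs, with the model forward pass stubbed out by keyword counts (the specialty labels and scoring are illustrative placeholders, not the project's actual model or label set):

```python
import math

# Illustrative specialty labels -- placeholders, not the project's real label set.
SPECIALTIES = ["Cardiology", "Dermatology", "Neurology"]

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fake_model_logits(text):
    """Stand-in for the fine-tuned ClinicalBERT forward pass:
    scores each specialty by naive keyword counts."""
    keywords = {
        "Cardiology": ["chest", "heart", "ecg"],
        "Dermatology": ["rash", "skin", "lesion"],
        "Neurology": ["headache", "seizure", "numbness"],
    }
    lowered = text.lower()
    return [sum(lowered.count(k) for k in keywords[s]) for s in SPECIALTIES]

def predict_specialty(note):
    """Return the top specialty and its probability for a clinical note."""
    probs = softmax(fake_model_logits(note))
    best = max(range(len(SPECIALTIES)), key=lambda i: probs[i])
    return SPECIALTIES[best], probs[best]

label, p = predict_specialty("Patient reports chest pain and abnormal ECG.")
print(label)  # -> Cardiology
```

In the real system the `fake_model_logits` stub would be replaced by a tokeniser plus model call; the softmax-then-argmax serving logic stays the same.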
🧠 I Built an AI Assistant with Multi-Model Fallback, Voice Chat & a Personal Data Analyst — Here's How What happens when your AI goes down mid-conversation? You lose users. I built Hero's AI to make sure that never happens — and added a whole lot more along the way. Live Demo Have you ever used an AI tool that just... stopped working? Maybe it hit a quota limit, the API went down, or the mod
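The headline feature here is multi-model fallback: when one provider errors out, the next one in the chain answers. A minimal sketch of that pattern, with hypothetical provider functions standing in for real API clients:

```python
# Minimal multi-model fallback: try providers in priority order and return
# the first successful answer. Provider functions here are hypothetical stubs.
def primary_model(prompt):
    raise RuntimeError("quota exceeded")  # simulate an outage

def backup_model(prompt):
    return f"backup answer to: {prompt}"

def ask_with_fallback(prompt, providers):
    errors = []
    for name, call in providers:
        try:
            return call(prompt)  # first provider that answers wins
        except Exception as exc:
            errors.append((name, exc))  # record the failure and fall through
    raise RuntimeError(f"all providers failed: {errors}")

chain = [("primary", primary_model), ("backup", backup_model)]
print(ask_with_fallback("hello", chain))  # -> backup answer to: hello
```

A production version would add per-provider timeouts and retry budgets, but the control flow is exactly this: ordered list, try, catch, fall through.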
An opinionated list of Python frameworks, libraries, tools, and resources
Some time ago, I was building a chat application using the AWS WebSocket API Gateway. Things were going smoothly. I created a WebSocket API Gateway and added $connect, $disconnect, and sendMessage/addGroup routes. From the frontend (React) side, everything was fire-and-forget: you send a message, and the onMessageHandler takes care of it 💪🏼 But then came a new requirement: uploading files using S3 signed
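For context on how those routes work: API Gateway reads a route key from each incoming WebSocket frame ($connect, $disconnect, or a custom action like sendMessage) and invokes the matching handler. A simplified Python sketch of that dispatch (the event shape and handler names here are simplifications, not AWS's exact Lambda event format):

```python
# Simplified sketch of API Gateway WebSocket routing: each frame carries a
# route key that selects a handler. Event shape is a simplification of the
# real Lambda proxy event.
connections = set()

def on_connect(event):
    connections.add(event["connectionId"])
    return {"statusCode": 200}

def on_disconnect(event):
    connections.discard(event["connectionId"])
    return {"statusCode": 200}

def on_send_message(event):
    # Fire-and-forget: the server pushes to peers; the sender gets no reply body.
    return {"statusCode": 200}

ROUTES = {
    "$connect": on_connect,
    "$disconnect": on_disconnect,
    "sendMessage": on_send_message,
}

def dispatch(event):
    route = event["requestContext"]["routeKey"]
    handler = ROUTES.get(route)
    if handler is None:
        return {"statusCode": 400, "body": f"unknown route {route}"}
    return handler(event)

dispatch({"requestContext": {"routeKey": "$connect"}, "connectionId": "abc"})
print(connections)  # -> {'abc'}
```

The fire-and-forget shape of `on_send_message` is exactly why a request/response flow like fetching a signed URL doesn't fit this model cleanly.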
Reason for selection
Paper: https://arxiv.org/abs/2512.01020
[Social issue]
[Data design and the limits of prior approaches]
The cases are converted into an Issue Tree (a tree of legal issues), so that rubric criteria can be applied to the leaf nodes. The authors built a dataset of roughly 24,000 instances that organizes the claims of the plaintiff, the defendant, and the court into tree structures. Evaluation runs along two axes: issue coverage and accuracy. A sample:

[Plaintiff's claim] The defendant shall pay 5.4 million yen
└─ [Plaintiff] There is an obligation to pay the insurance money
   ├─ [Plaintiff] The death was a sudden, accidental event
   │  └─ [Plaintiff] Choking to death on mochi = injury from an external cause
   │     └─ [Defendant] A pre-existing condition is the more likely cause of death
   └─ [Court's conclusion] Recognized as a sudden accident; however, death by choking was not sufficiently proven

This
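The sample tree above can be modeled as nested (party, claim, children) nodes, with the leaves being the points that rubric criteria attach to. A toy sketch of that representation and a naive coverage score (the node fields and the metric here are my own simplification, not the paper's actual schema):

```python
# Toy issue-tree node: (party, claim, children). A simplified stand-in for
# the paper's Issue Tree; the field layout is my own, not the dataset's.
tree = ("plaintiff", "The defendant shall pay 5.4 million yen", [
    ("plaintiff", "There is an obligation to pay the insurance money", [
        ("plaintiff", "The death was a sudden, accidental event", [
            ("plaintiff", "Choking on mochi = injury from an external cause", [
                ("defendant", "A pre-existing condition is the more likely cause of death", []),
            ]),
        ]),
        ("court", "Recognized as a sudden accident, but death by choking was not sufficiently proven", []),
    ]),
])

def leaves(node):
    """Collect leaf claims -- the nodes that rubric criteria attach to."""
    party, claim, children = node
    if not children:
        return [claim]
    out = []
    for child in children:
        out.extend(leaves(child))
    return out

def issue_coverage(node, answered):
    """Fraction of leaf issues an answer addresses (one of the two axes)."""
    leaf_claims = leaves(node)
    return sum(1 for c in leaf_claims if c in answered) / len(leaf_claims)

print(len(leaves(tree)))  # -> 2
```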
Introduction To understand knowledge graphs, you first need to grasp three core concepts: entities, relations, and triples. Imagine a knowledge graph as a network that models the real world using nodes and connections. In this network, an entity is any distinct thing or object, such as a person, city, or company. For example, “Sreeni”, “Plano”, and “Caterpillar” are all entities. A relation descr
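These three concepts compose into triples of the form (subject, relation, object), and a knowledge graph is just a collection of them. A minimal sketch using the entities from the text (the relation names `lives_in` and `works_at` are illustrative, not from the article):

```python
# A knowledge graph as a set of (subject, relation, object) triples.
# Entities come from the text; the relation names are illustrative.
triples = {
    ("Sreeni", "lives_in", "Plano"),
    ("Sreeni", "works_at", "Caterpillar"),
}

def objects(subject, relation):
    """Query: which objects does `subject` connect to via `relation`?"""
    return {o for s, r, o in triples if s == subject and r == relation}

print(objects("Sreeni", "lives_in"))  # -> {'Plano'}
```

Even this tiny set shows the key property: the same entity ("Sreeni") can be the subject of many triples, which is what turns flat facts into a graph.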
This is Day 2 of my journey learning AI fundamentals, where I will be covering the following concepts:
- Vector Embeddings
- How Tokenisation and Vector Embeddings relate to each other

Vector embedding is the process of turning each token id (generated during tokenisation) into a high-dimensional vector, where semantic similarity results in geometric closeness. Think of it like this: dog is closer to puppy, al
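That "semantic similarity = geometric closeness" idea can be shown with toy vectors and cosine similarity (the 3-dimensional embeddings below are hand-picked for illustration; real models learn vectors with hundreds of dimensions):

```python
import math

# Toy embedding table: token -> hand-picked 3-d vector.
# Real embeddings are learned from data and have hundreds of dimensions.
embeddings = {
    "dog":   [1.0, 1.0, 0.0],
    "puppy": [0.9, 1.0, 0.1],
    "car":   [0.0, 0.1, 1.0],
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, near 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

sim_puppy = cosine(embeddings["dog"], embeddings["puppy"])
sim_car = cosine(embeddings["dog"], embeddings["car"])
print(sim_puppy > sim_car)  # -> True: dog sits nearer to puppy than to car
```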