In Q3 2024, our 12-person platform team slashed log ingestion spend by 35% in 90 days, moving from a brittle Elasticsearch-based pipeline to a tuned Vector 0.30 and Loki 3.0 stack—without losing a single log or breaking our 99.95% SLA.
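A migration like this mostly lives in Vector's TOML config; a minimal sketch, assuming a file source and the Loki sink (the paths, labels, field names, and endpoint here are illustrative, not the team's actual setup):

```toml
# Illustrative Vector config: tail files, trim fields in VRL, ship to Loki.
[sources.app_logs]
type    = "file"
include = ["/var/log/app/*.log"]

[transforms.trim]
type   = "remap"            # VRL: parse, then drop noisy fields to cut ingest volume
inputs = ["app_logs"]
source = '''
. = parse_json!(.message)
del(.debug_context)         # hypothetical field; dropping bulk like this is where spend falls
'''

[sinks.loki]
type     = "loki"
inputs   = ["trim"]
endpoint = "http://loki:3100"
labels   = { app = "platform" }

[sinks.loki.encoding]
codec = "json"
```

Most of the cost reduction in a setup like this comes from the transform stage (dropping or sampling before the sink), not from the sink itself.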
At 100 million 768-dimensional embeddings, the gap between top-tier vector search tools isn't just measurable—it's existential. In our 6-month benchmark across 12 hardware configurations, FAISS 1.9 delivered 4.2x lower p99 latency than Chroma 0.6, while Pinecone 1.6 cost 11x more than self-hosted FAISS for equivalent throughput. Here's what the numbers actually say.
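The headline number in comparisons like this is tail latency. A minimal sketch of how p99 is measured, using a toy brute-force NumPy index as a stand-in for FAISS or any other engine (the sizes, query count, and the index itself are illustrative):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
d = 768
xb = rng.standard_normal((10_000, d)).astype(np.float32)  # toy corpus

def search(q, k=10):
    # Brute-force L2 search: distance to every corpus vector, keep k nearest.
    dists = np.linalg.norm(xb - q, axis=1)
    return np.argsort(dists)[:k]

latencies = []
for _ in range(200):
    q = rng.standard_normal(d).astype(np.float32)
    t0 = time.perf_counter()
    search(q)
    latencies.append(time.perf_counter() - t0)

p99 = np.percentile(latencies, 99)  # the tail number that matters at scale
```

Averages hide the tail: a system can look fine at p50 while the slowest 1% of queries (cache misses, GC pauses, cold shards) blows the latency budget, which is why benchmarks like this report p99.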
Some time ago I shipped a desktop app that generates LLM fine-tuning datasets. It worked: my Qwen2.5-Coder-7B fine-tune jumped from 55.5% → 72.3% on HumanEval. The whole pipeline ran on OpenRouter — pick a model, click Generate, get JSONL. v1.0.3-beta ships multi-provider LLM support — Ollama, LM Studio, llama.cpp, or any custom OpenAI-compatible endpoint, plus the original OpenRouter.
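"OpenAI-compatible" in practice means any server that accepts the `/v1/chat/completions` wire format, so swapping providers is just a base-URL change. A minimal sketch using only the standard library (the endpoint, model name, and token are placeholders; Ollama's default port is shown):

```python
import json
import urllib.request

# Any OpenAI-compatible server can be swapped in by changing base_url:
# Ollama defaults to :11434, LM Studio to :1234, llama.cpp's server to :8080.
base_url = "http://localhost:11434/v1"

payload = {
    "model": "qwen2.5-coder:7b",  # model names are provider-specific
    "messages": [{"role": "user", "content": "Generate one JSONL training example."}],
}
req = urllib.request.Request(
    f"{base_url}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer sk-local"},  # local servers often ignore the key
)
# urllib.request.urlopen(req) would send it; left unsent in this sketch.
```

Because every provider speaks the same format, "mix and match" reduces to keeping a list of base URLs and routing each generation request to one of them.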
A beautiful personal tribute to the practice of programming, interrupted by the switch to LLMs.
Most of my team got laid off because "AI can do their jobs now." I'm probably the last one standing. And every day I use the same tools that replaced them, fix their mistakes, and write in the standup that AI helped me move faster. Nobody was being honest about this. So I built AIHallucination — a community for real, unfiltered AI experiences: the fails, the wins, the absurd outputs.
TL;DR: The job: take typia's existing TS files, translate the contents line by line into Go, and change the extensions to .go. Keep the algorithms and compiler logic intact. Iterate until 80,000 lines of e2e tests pass. What the AI actually did: a half-assed implementation, then it deleted all the failing tests. It burned 8 billion tokens hardcoding every output into a 168-case lookup table.