What if your Kubernetes cluster simply refused to run unsigned images? I spent some time experimenting with enforcing image provenance in a small Kubernetes setup using MicroK8s. The idea was simple: only container images with valid cryptographic signatures are allowed to run in the cluster. For this I used:
GitLab CI/CD (build + signing pipeline)
Cosign / Sigstore (image signing)
Kyverno (admission control)
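The admission-control piece can be sketched as a Kyverno ClusterPolicy that rejects any Pod whose image lacks a valid Cosign signature. The registry pattern and the public key below are placeholders, not the setup's real values:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  # Enforce (not Audit): unsigned images are rejected at admission time
  validationFailureAction: Enforce
  webhookTimeoutSeconds: 30
  rules:
    - name: verify-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "registry.example.com/*"   # placeholder registry
          attestors:
            - entries:
                - keys:
                    # placeholder: the Cosign public key from the signing pipeline
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
```

With this in place, `kubectl run` against an unsigned image fails at admission rather than at runtime, which is exactly the "refuse to run" behavior described above.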
AI is being “regulated” on paper. But in reality? It is operating in the dark. Global AI frameworks from the OECD, UNESCO, and the World Economic Forum promise a future built on:
Transparency
Accountability
Human oversight
Risk-based regulation
It sounds solid and reassuring. But the uncomfortable truth is that these principles start to break the moment they hit environments like Nigeria. You are told
Most teams I have worked with have one auth test in their suite. It looks like this:

```javascript
test('valid token verifies', () => {
  const token = signSync({ sub: 'user-1', aud: 'api://backend' }, secret);
  const result = verify(token, options);
  expect(result.valid).toBe(true);
});
```

That test is fine. It is also a smoke test, not a regression suite. It catches the case where verification is completely broken
The on-call alert at 02:14 said auth_5xx_rate spiked from 0.01 to 31.4. Not a deploy window. Not a traffic spike. Just thirty-one percent of authenticated requests failing for ~four minutes, then back to baseline. The cause was a JWKS rotation on the issuer side. New keys came in. Old keys went out. Caches in our service didn't refresh fast enough. Tokens signed with the new key were rejected because our cached JWKS didn't contain them yet.
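The standard mitigation for this class of outage is a JWKS cache that refetches when it sees an unknown `kid`, rate-limited so a flood of bad tokens can't hammer the issuer. A minimal sketch, with the fetcher injected so it needs no network (names are illustrative, not our real service code):

```javascript
// JWKS cache that refreshes on an unknown key id. A token signed with a
// freshly rotated key hits the miss path, triggers one refetch, and then
// verifies -- instead of being rejected until the TTL expires.
class JwksCache {
  constructor(fetchJwks, { minRefreshIntervalMs = 30_000 } = {}) {
    this.fetchJwks = fetchJwks;               // async () => ({ keys: [...] })
    this.minRefreshIntervalMs = minRefreshIntervalMs;
    this.keys = new Map();
    this.lastRefresh = 0;
  }

  async refresh() {
    const { keys } = await this.fetchJwks();
    this.keys = new Map(keys.map((k) => [k.kid, k]));
    this.lastRefresh = Date.now();
  }

  async getKey(kid) {
    if (this.keys.has(kid)) return this.keys.get(kid);
    // Cache miss: refetch once, but no more often than the rate limit,
    // so a stream of garbage tokens cannot DoS the issuer's JWKS endpoint.
    if (Date.now() - this.lastRefresh >= this.minRefreshIntervalMs) {
      await this.refresh();
      if (this.keys.has(kid)) return this.keys.get(kid);
    }
    throw new Error(`unknown kid: ${kid}`);
  }
}
```

The rate limit is the important design choice: refresh-on-miss without it trades the rotation outage for an amplification vector.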
Some time ago, I was building a chat application using the AWS WebSocket API Gateway. Things were going smoothly. I created a WebSocket API Gateway, added $connect, $disconnect, and sendMessage/addGroup routes. From the frontend (React) side, everything was fire-and-forget. You send a message, and the onMessageHandler takes care of it 💪🏼 But then a new requirement came in: uploading files using S3 signed URLs.
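The snag with fire-and-forget messaging is that fetching a signed URL is a request/response interaction: you need to *await* the reply. A common pattern is to tag each request with an id and resolve a pending Promise when the matching response arrives. A sketch under those assumptions (`socket` is anything with `send()` and an `onmessage` hook; the names are illustrative):

```javascript
// Turn a fire-and-forget WebSocket into awaitable request/response by
// correlating replies to requests via a requestId.
function makeRequester(socket) {
  const pending = new Map();
  let nextId = 0;

  socket.onmessage = (event) => {
    const msg = JSON.parse(event.data);
    const entry = pending.get(msg.requestId);
    if (entry) {
      pending.delete(msg.requestId);
      entry.resolve(msg);
    }
  };

  return function request(action, payload, timeoutMs = 10_000) {
    const requestId = String(++nextId);
    return new Promise((resolve, reject) => {
      const timer = setTimeout(() => {
        pending.delete(requestId);
        reject(new Error(`timeout waiting for ${action}`));
      }, timeoutMs);
      pending.set(requestId, {
        resolve: (msg) => { clearTimeout(timer); resolve(msg); },
      });
      socket.send(JSON.stringify({ action, requestId, ...payload }));
    });
  };
}
```

The frontend can then do `const { url } = await request('getUploadUrl', { fileName })` and PUT the file to the returned signed URL, while chat messages keep flowing fire-and-forget on the same socket.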
The Problem AI agents are moving from answering questions to taking actions — calling APIs, querying databases, executing code, managing memory. The security surface has shifted from "what the model says" to "what the agent does." Most guardrail solutions address the first problem. They filter content. They detect prompt injection. They moderate output. These are necessary but insufficient. The
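Guarding "what the agent does" means putting a policy check between the model's chosen action and its execution, rather than filtering its text. A minimal sketch of that idea; the tool names and the policy rules here are hypothetical:

```javascript
// Action-level guardrail: every tool call passes through a policy gate
// before it executes. Policy entries and tool names are hypothetical.
const policy = {
  search_docs:   { allowed: true },
  // Allow SQL, but only read-only statements:
  run_sql:       { allowed: true, check: (args) => /^\s*select\b/i.test(args.query) },
  delete_memory: { allowed: false },
};

function executeToolCall(toolName, args, tools) {
  const rule = policy[toolName];
  if (!rule || !rule.allowed) {
    return { ok: false, error: `action '${toolName}' denied by policy` };
  }
  if (rule.check && !rule.check(args)) {
    return { ok: false, error: `arguments for '${toolName}' failed policy check` };
  }
  return { ok: true, result: tools[toolName](args) };
}
```

The point of the sketch is the placement: a prompt-injected model can still *ask* for `delete_memory`, but the gate means asking is no longer doing.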
By Micky Irons. Founder & sole inventor, Mickai. CEO, Trust-Agent.ai. When I started filing the patents that became Mickai, I didn't have a product brief. I had a question. Why is intelligence the only critical capability we lease? We don't lease our title deeds. We don't lease our identity documents. We don't lease the keys to our houses. We hold them. They're ours. They sit in our drawer, our safe.
As the global community converges on London and Washington to debate the existential risks of frontier AI, a dangerous assumption has taken root: that AI safety is a universal constant. The prevailing belief is that if a model is "aligned" in a lab in San Francisco or London, it is safe for the rest of the world. My experience as a Systems Engineer and AI governance practitioner in Nigeria suggests otherwise.