Blog

Latest insights, tutorials, and updates on LLM engineering, observability, and GenAI application development.

ISO 42001 Evidence You Already Have (and What’s Missing) for Agents

If you’re running agentic AI in production, you likely possess more ISO/IEC 42001 evidence than you think. ISO/IEC 42001:2023 is the first global AI management system standard, and it expects a repeatable way to govern AI—not magic paperwork you create the week before an audit. (ISO) At ABV, we build tooling that turns the artifacts your teams already produce—agent traces, prompts, evals, incidents—into auditable evidence you can show to a 42001 assessor…

4 min read
Incident Response for AI: Who’s on the Hook and What to Document in the First 24 Hours

Your AI assistant recommends a refund, calls an internal tool, and (because a product page hid a prompt) emails a customer’s PII to a third‑party inbox. Is that a model bug, a supply‑chain issue, or a data breach? For AI, incidents often straddle all three, and the first 24 hours are critical for containing them. What counts as an AI incident? An AI incident is more than just downtime; use a definition you can defend to counsel and regulators…

4 min read
ABV Raises $250K to Build the Control Panel for Safe, Compliant AI

We're excited to announce the close of our $250K pre-seed funding round led by Cogitent Ventures with participation from Glorium Ventures. This funding accelerates our mission to help enterprises and governments deploy AI safely and compliantly, meeting standards like the EU AI Act and ISO 42001. With our platform now live and first enterprise clients onboarded, we're positioned to scale across Europe and the US. Thank you to our investors, advisors, and early supporters who believe in making…

1 min read
Prompt Injection, Jailbreaks, and Data Exfiltration: 2025 Field Report

Agentic AI crossed a threshold this year. The most revealing incident wasn’t a benchmark; it was a live exploit in a shipping product. On August 20, 2025, Brave researchers disclosed an indirect prompt‑injection flaw in Perplexity’s Comet AI browser that let a malicious Reddit comment steer the agent to read a victim’s email OTP and exfiltrate it by replying to the same thread—a cross‑site, cross‑account takeover in one click, complete with a disclosure timeline. Tom’s Hardware and other outlets…

5 min read
EU AI Act compliance checklist (2025–2027)

Europe’s AI law is no longer theoretical. Key obligations already started on February 2, 2025, with more taking effect from August 2, 2025 and August 2, 2026; high‑risk systems embedded in regulated products get until August 2, 2027. If you build or use AI whose outputs will be used in the EU, the clock is running. A quick note about us: abv.dev works with teams shipping AI into regulated environments; you can connect governance workflows and evidence capture to your LLM…

5 min read