The Audit Trail Problem Nobody Talks About: Why AI Agents Need Cryptographic Signatures
AI agents are making decisions and executing tasks across your systems right now, but you probably can't prove what they did or why they did it. That's the problem Asqav, a new Python SDK released under the MIT license, aims to solve. By attaching a cryptographic signature to each agent action and linking the signatures into a tamper-evident chain, the tool creates an immutable audit trail for autonomous AI systems.
Why Can't We Just Trust AI Agents to Do the Right Thing?
The stakes are high when AI agents operate autonomously across multiple systems. A financial document processing error could cascade into regulatory violations. A misconfigured agent might delete data it shouldn't. A compromised system could tamper with records. Without a verifiable record of what happened and when, you're left guessing about causation and liability.
Asqav addresses this by using ML-DSA-65, a quantum-resistant signing algorithm standardized under FIPS 204. Each agent action gets signed with this algorithm and linked to the previous action in a hash chain. If someone tries to tamper with an entry or omit one from the record, the chain breaks and verification fails.
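The article doesn't show Asqav's internals, but the hash-chain mechanism itself is easy to sketch. The following is a conceptual illustration using the standard library, with SHA-256 hashing standing in for the ML-DSA-65 signatures the SDK actually applies; all function names here are hypothetical:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def chain_entries(actions):
    """Link each action to its predecessor by hashing (prev_hash + payload)."""
    entries, prev_hash = [], GENESIS
    for action in actions:
        payload = json.dumps(action, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        entries.append({"action": action, "prev": prev_hash, "hash": entry_hash})
        prev_hash = entry_hash
    return entries

def verify_chain(entries):
    """Recompute every link; any edited or omitted entry breaks verification."""
    prev_hash = GENESIS
    for entry in entries:
        payload = json.dumps(entry["action"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = chain_entries([{"op": "data:read:invoice"}, {"op": "report:export"}])
assert verify_chain(log)
log[0]["action"]["op"] = "data:delete:invoice"  # tamper with the record
assert not verify_chain(log)
```

Because each entry's hash depends on the previous entry's hash, deleting an entry is just as detectable as editing one: the next link no longer verifies.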
"Every agent action gets signed with a quantum-safe signature and hash-chained to the previous one. If someone tampers with an entry or tries to omit one, the chain breaks and verification fails," said João André Gomes Marques, author of the project.
Each signature also carries an RFC 3161 timestamp, creating a complete record of when each action occurred. This matters for compliance, debugging, and security investigations.
How Does Asqav Actually Work in Production?
The SDK integrates with five major agent frameworks, making adoption straightforward for teams already using these tools. Developers can add governance without rebuilding their entire stack.
- Framework Support: Asqav works with LangChain, CrewAI, LiteLLM, Haystack, and the OpenAI Agents SDK through a shared AsqavAdapter class
- Flexible Integration: A decorator (@asqav.sign) and context manager (asqav.session()) allow signing arbitrary functions or sequences of steps without restructuring code
- Policy Enforcement: Developers define patterns like blocking any action matching "data:delete:*", and the SDK evaluates those policies before execution
- Multi-Party Approval: Critical actions can require m-of-n threshold signatures, meaning a minimum number of approvals must happen before the action proceeds
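The article cites the "data:delete:*" pattern and m-of-n approval but not how they're implemented. A minimal sketch of what glob-style policy evaluation and a threshold-approval gate could look like, using hypothetical helper names and standard-library matching rather than Asqav's actual code:

```python
from fnmatch import fnmatch

# Illustrative block list; the article only cites the "data:delete:*" pattern.
BLOCKED_PATTERNS = ["data:delete:*"]

def is_allowed(action_name: str) -> bool:
    """Evaluate block-list patterns before an action executes."""
    return not any(fnmatch(action_name, pattern) for pattern in BLOCKED_PATTERNS)

def threshold_met(approvers: set, m: int) -> bool:
    """m-of-n approval: proceed only once m distinct approvers have signed off."""
    return len(approvers) >= m

assert not is_allowed("data:delete:customers")   # matches the blocked pattern
assert is_allowed("data:read:customers")         # reads pass through
assert threshold_met({"alice", "bob"}, m=2)      # two of n approvals collected
```

The key design point is that both checks run before execution, so a blocked or under-approved action never reaches the target system in the first place.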
The design philosophy matters here. Marques emphasized that compliance tooling is often painful to integrate, so he built Asqav to be something developers actually want to use, not something they're forced into by legal teams.
What About Teams Without Internet Access?
Asqav includes an offline signing mode for environments where its API is unreachable. Actions signed offline are queued locally and synced when connectivity returns using the asqav sync command. This matters for teams operating on restricted networks or dealing with intermittent connectivity.
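The article doesn't describe the queue format, but the queue-then-sync pattern itself is simple. A rough standard-library sketch of the idea, with the spool file name and both function names being hypothetical stand-ins rather than Asqav's API:

```python
import json
from pathlib import Path

QUEUE = Path("pending_actions.jsonl")  # hypothetical local spool file

def sign_offline(action: dict) -> None:
    """Append an action to the local queue while the API is unreachable."""
    with QUEUE.open("a") as f:
        f.write(json.dumps(action) + "\n")

def sync(send) -> int:
    """Flush queued actions through `send` once connectivity returns."""
    if not QUEUE.exists():
        return 0
    lines = QUEUE.read_text().splitlines()
    for line in lines:
        send(json.loads(line))
    QUEUE.unlink()  # clear the queue after a successful flush
    return len(lines)
```

An append-only file like this preserves ordering, which matters for a hash chain: entries must be replayed to the server in the sequence they were signed.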
A command-line interface, installed via pip install asqav[cli], supports signature verification with asqav verify, agent management with asqav agents, and manual sync operations. This means security teams and auditors can verify records without needing to integrate with the full SDK .
How to Get Started with Asqav for Your AI Agents
- Installation: Run pip install asqav to get the free tier, which covers agent creation, signed actions, audit export, and framework integrations
- Initialization: Call asqav.init() to set up the SDK, then asqav.Agent.create() to create an agent, and agent.sign() to sign individual actions
- Verification: Use the CLI tool asqav verify to check signatures and confirm the integrity of your audit trail at any time
- Policy Definition: Define action patterns you want to block or require approval for, and the SDK enforces them before execution
The free tier covers the essentials for most teams. Installation is a single pip command, and from there, initializing an agent and signing an action requires just three function calls.
What's Coming Next for AI Agent Governance?
Asqav's roadmap includes multi-agent audit trails, which would extend the hash chain across calls between agents. This means if Agent A calls Agent B, the entire chain of actions produces a single verifiable record spanning both agents. That's critical for complex workflows where multiple agents coordinate to complete a task.
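How the roadmap feature will be implemented isn't specified, but conceptually a cross-agent trail just keeps linking entries regardless of which agent produced them. A standard-library sketch (SHA-256 standing in for signatures, all names hypothetical):

```python
import hashlib
import json

def append_entry(chain, agent_id, action):
    """Any agent extends the same chain; links don't reset at agent boundaries."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"agent": agent_id, "action": action}, sort_keys=True)
    entry_hash = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"agent": agent_id, "action": action, "hash": entry_hash})

chain = []
append_entry(chain, "agent-a", "fetch:invoice")
append_entry(chain, "agent-b", "summarize:invoice")  # delegated call, same chain
```

Because Agent B's entry hashes over Agent A's last entry, a verifier can confirm the hand-off happened and in what order, without trusting either agent's own logs.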
An early Model Context Protocol (MCP) package called asqav-mcp is already listed in the project's ecosystem. Marques also described additional tool-level governance work as ongoing, and future versions will improve the compliance report generator to map output directly to specific EU AI Act articles. This signals that governance is becoming a first-class concern in agentic AI, not an afterthought.
The broader context matters here. As AI agents move from experimental prototypes to business-critical infrastructure, the ability to prove what happened and why is becoming table stakes. Asqav is available on GitHub and represents a shift toward treating agent governance as seriously as we treat model safety.