
This is the first post in our AI Agent Signing Infrastructure series, which covers what needs to exist - technically and legally - before AI agents can sign contracts in a way that is sound and auditable. Subnoto builds confidential e-signatures today; this series covers where we think the space is going. It is not legal advice; application varies by jurisdiction.
This post is most relevant for engineering teams building agentic systems with signing authority, legal teams reviewing AI deployment risk, and CTOs at companies where agents are already operating autonomously or will be soon.
When an AI agent signs a contract, the legal responsibility lies with the human or legal entity that authorized the agent to act. The agent is not a legal person and cannot bear liability. What matters is whether a valid chain of delegation exists - from a legally authorized principal down to the agent - and whether that chain is documented and enforceable. Without it, every contract your agent signs is legally contestable.
The question every legal team is now asking
AI agents are signing things. They are executing NDAs before due diligence calls, renewing SaaS contracts automatically, signing data access agreements before connecting to partner APIs, and committing to purchase orders within pre-approved budgets.
Most of the time, nobody has thought carefully about what happens when something goes wrong.
The question - who is legally responsible when an AI agent signs a contract? - does not yet have settled case law in most jurisdictions. But the existing legal frameworks for agency, mandate, and delegated authority provide a clear enough answer to act on.
The bigger problem is that nobody has built the infrastructure to make agent signing properly defensible. Most organizations deploying signing agents today are using API keys, standard e-signature platforms, and informal governance. That is not an architecture - it is accumulated legal risk. The three components below are what we think need to exist before agent signing is sound.
The legal framework: agency and mandate
AI agents act as mandataries, not principals
In legal terms, an agent (human or automated) who acts on behalf of another party is called a mandatary (in civil law systems, such as France and most of the EU) or an agent (in common law systems). The party who authorizes the agent to act is called the principal.
The core rule of agency law is straightforward:
A mandatary who acts within the scope of their mandate binds the principal, not themselves.
This means that if your AI agent signs a contract within the authority you have granted it, your company - as the principal - is bound by that contract. The AI agent has no legal personality and cannot be held liable.
This is actually good news for legal certainty. The problem is not the framework - it is the documentation.
What “scope of authority” means for an AI agent
In a human agency relationship, the scope of authority is defined by an employment contract, a power of attorney, or an explicit board resolution. When a sales director signs a commercial agreement, the counterparty can reasonably assume they have authority based on their title and role.
For an AI agent, this assumption does not exist. No counterparty can look up an AI agent’s “title” to assess whether it had authority to sign. This creates two distinct legal risks:
1. Unauthorized signature risk: If an agent signs a contract that exceeds its actual authority - a higher contract value than the principal approved, a counterparty that was not whitelisted, a document type outside its mandate - the principal can argue the signature is void. This protects principals but creates instability for counterparties.
2. Apparent authority risk: If a company deploys an agent without clearly defined limits, and the agent signs something the company did not intend, courts in many jurisdictions may still hold the company liable under the doctrine of apparent authority - because the company created the impression that the agent could act.
Both risks are managed by the same solution: explicit, documented, verifiable delegation.
What we think a legally sound agent signature requires
Based on existing legal frameworks for agency and mandate - and the direction of emerging regulation including DORA and the EU AI Act - here is what we believe a legally defensible AI agent signature requires. This is our framework, not an established industry standard, though the underlying legal theory is grounded in existing mandate doctrine.
1. A verified machine identity
The agent must have a verifiable identity tied to its cryptographic signing key. This is not an email address or a username - it is a machine identity credential, typically an X.509 certificate or a SPIFFE (Secure Production Identity Framework for Everyone) workload identity.
The identity must be:
- Issued by a trusted certificate authority
- Bound to a specific agent or workload (not a generic service account)
- Verifiable by the counterparty independently of the signing platform
Without a verified machine identity, you cannot prove which agent signed, only that some agent with access to a particular API key did.
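The attribution problem can be sketched in a few lines. The example below is a deliberately simplified illustration using per-agent HMAC keys from the Python standard library; real deployments bind an asymmetric key to an X.509 certificate or SPIFFE ID rather than using shared-secret HMAC, and all identifiers here are hypothetical.

```python
import hashlib
import hmac

document = b"mutual NDA, v3, 2025-06-01"
doc_hash = hashlib.sha256(document).digest()

# Scenario A from below: one shared API key used by every agent.
shared_api_key = b"shared-secret"

# Identity-bound alternative: one key per agent identity (illustrative IDs).
agent_keys = {
    "spiffe://example.org/agent/procurement": b"key-procurement",
    "spiffe://example.org/agent/legal-intake": b"key-legal-intake",
}

def sign(key: bytes, payload: bytes) -> bytes:
    return hmac.new(key, payload, hashlib.sha256).digest()

# With the shared key, two different agents produce the identical
# signature - there is nothing to attribute.
assert sign(shared_api_key, doc_hash) == sign(shared_api_key, doc_hash)

# With per-agent keys, a signature verifies against exactly one identity.
sig = sign(agent_keys["spiffe://example.org/agent/procurement"], doc_hash)
signers = [aid for aid, key in agent_keys.items()
           if hmac.compare_digest(sign(key, doc_hash), sig)]
assert signers == ["spiffe://example.org/agent/procurement"]
```

The design point is not the cryptographic primitive but the key-to-identity binding: one key, one agent, one attributable signature.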
2. A documented delegation scope
The principal - the human or legal entity authorizing the agent - must define and record the exact limits of the agent’s authority. This is the delegation scope.
A delegation scope should specify, at minimum:
- Document types authorized: e.g., NDAs, purchase orders, data access agreements
- Financial ceiling: the maximum contract value the agent can commit to
- Authorized counterparties: a whitelist of organizations or legal entity identifiers (LEIs) the agent can contract with
- Validity period: when the delegation expires and must be renewed
- Revocation conditions: what events automatically terminate the agent’s signing authority
This delegation scope must be stored in a tamper-evident, auditable record. It is the equivalent of a power of attorney - and it must be as precise as one.
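To make the fields above concrete, here is a minimal sketch of a machine-readable delegation scope with a policy check, assuming an illustrative schema (the field names, the SPIFFE ID, and the LEI value are all hypothetical, not a standard). A production record would additionally be signed by the principal and stored in a tamper-evident log.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DelegationScope:
    principal: str                     # legal entity granting authority
    agent_id: str                      # e.g. SPIFFE ID or certificate subject
    document_types: frozenset[str]     # authorized document types
    financial_ceiling_eur: int         # maximum contract value
    counterparty_leis: frozenset[str]  # whitelist of Legal Entity Identifiers
    valid_until: date                  # expiry of the delegation

    def permits(self, doc_type: str, value_eur: int,
                counterparty_lei: str, on: date) -> bool:
        """Return True only if every limit of the mandate is respected."""
        return (doc_type in self.document_types
                and value_eur <= self.financial_ceiling_eur
                and counterparty_lei in self.counterparty_leis
                and on <= self.valid_until)

scope = DelegationScope(
    principal="Example SAS",
    agent_id="spiffe://example.org/agent/procurement",
    document_types=frozenset({"NDA", "purchase_order"}),
    financial_ceiling_eur=250_000,
    counterparty_leis=frozenset({"5493001KJTIIGC8Y1R12"}),  # hypothetical LEI
    valid_until=date(2026, 1, 1),
)

assert scope.permits("NDA", 10_000, "5493001KJTIIGC8Y1R12", date(2025, 6, 1))
# Exceeds the financial ceiling - the mandate does not cover it:
assert not scope.permits("NDA", 400_000, "5493001KJTIIGC8Y1R12", date(2025, 6, 1))
```

Because the scope is structured data rather than prose, the signing infrastructure can enforce it mechanically before any key is used - the legal limit and the technical control become the same object.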
3. An immutable audit trail
Every signature must produce a Signature Evidence Package - a cryptographically secured record that proves:
- What was signed (document hash at the moment of signing)
- Who signed it (the agent’s verified identity)
- Under what authority (the active delegation scope at the time)
- When (a certified RFC 3161 timestamp)
- That the signing process was not tampered with (ideally hardware-attested, e.g., via a TEE)
This evidence package is what your legal team produces in a dispute. It is also what a DORA or NIS2 auditor will ask for when reviewing automated third-party access agreements.
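The structure of such a package can be sketched as follows. This is a hedged illustration, not a defined format: in production the timestamp would come from an RFC 3161 Time-Stamping Authority and the attestation field from the TEE, rather than being generated locally as they are here.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_evidence_package(document: bytes, agent_id: str,
                           scope_id: str, signature_hex: str) -> dict:
    """Assemble a sketch of a Signature Evidence Package (illustrative schema)."""
    package = {
        "document_sha256": hashlib.sha256(document).hexdigest(),  # what was signed
        "agent_id": agent_id,                                     # who signed
        "delegation_scope_id": scope_id,                          # under what authority
        "timestamp": datetime.now(timezone.utc).isoformat(),      # when (placeholder
                                                                  # for an RFC 3161 token)
        "signature": signature_hex,                               # the digital signature
        "attestation": None,                                      # would hold a TEE report
    }
    # A digest over the canonical serialization makes later tampering
    # with the package itself detectable.
    canonical = json.dumps(package, sort_keys=True).encode()
    package["package_sha256"] = hashlib.sha256(canonical).hexdigest()
    return package

pkg = build_evidence_package(
    document=b"mutual NDA, v3",
    agent_id="spiffe://example.org/agent/procurement",  # hypothetical ID
    scope_id="scope-2025-001",
    signature_hex="ab12...",
)
assert pkg["document_sha256"] == hashlib.sha256(b"mutual NDA, v3").hexdigest()
```

Each field maps directly to one of the five proof obligations listed above; anything your infrastructure cannot populate is a gap in what you can later prove.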
What happens when these three things are missing
Scenario A: No verified machine identity
An API key is used to trigger signatures. If the API key is compromised - by a malicious insider, a memory leak, or a cloud infrastructure breach - any number of contracts may have been signed by an attacker rather than your authorized agent. You cannot distinguish them. Every contract signed during the exposure window is potentially contestable.
Scenario B: No delegation scope
Your agent signs a €400,000 software agreement when your approval policy requires board sign-off above €250,000. The contract is valid under apparent authority doctrine in most jurisdictions. You are bound. Your internal control failed, and there is no documentation to show the agent exceeded its limits - because no limits were defined.
Scenario C: No audit trail
Your counterparty disputes the terms they signed. You have a signed PDF, but no record of what version of the document was presented to the agent at the moment of signing, no timestamp that a court would accept as independent, and no proof that your signing infrastructure was not compromised at the time. You are arguing your word against theirs.
The regulatory dimension: DORA and the EU AI Act
DORA (Digital Operational Resilience Act)
DORA has applied since January 2025 to financial institutions and their critical third-party ICT providers. Article 28 requires that financial entities document all contractual arrangements with third-party service providers, including automated access agreements.
Practical implication: If your AI agents access partner APIs, data rooms, or external systems, each access event should be covered by a signed, traceable contractual agreement. Manual processes cannot keep up with agent-speed access. The answer is automated, auditable agent signatures - but they must meet the documentation standard the regulation requires.
EU AI Act
The EU AI Act’s traceability requirements for high-risk AI systems - which include systems that make decisions with legal or financial consequences - mean that the decision-making process of an AI agent, including its signing actions, must be logged and explainable.
An agent that signs contracts must be able to produce a record showing: what it was authorized to do, what it did, and that it did not exceed its authority. This is structurally identical to the delegation scope + audit trail architecture described above.
Practical checklist: before you deploy a signing agent
Before your organization deploys an AI agent that can sign contractual documents, verify the following:
- The agent has a registered machine identity (X.509 certificate or SPIFFE workload ID)
- A delegation scope has been formally approved by an authorized human principal and stored in a tamper-evident system
- The delegation scope defines document types, financial ceiling, counterparty whitelist, and expiry
- Every signature produces a Signature Evidence Package with document hash, agent identity, active scope, and certified timestamp
- The signing infrastructure produces hardware attestation (TEE) or equivalent cryptographic proof that the signing process was not tampered with
- A revocation mechanism exists and has been tested
- Your legal team has reviewed the delegation scope and confirmed it constitutes a valid mandate under the laws of your operating jurisdiction
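The revocation item on the checklist deserves emphasis: revocation only helps if every signing request actually consults it before a key is used. A minimal sketch of such a gate, with hypothetical names and an in-memory registry standing in for whatever tamper-evident store a real system would use:

```python
class RevocationRegistry:
    """Illustrative in-memory registry; production systems would use a
    tamper-evident, shared store (and certificate revocation for the key)."""

    def __init__(self) -> None:
        self._revoked: dict[str, str] = {}  # agent_id -> reason

    def revoke(self, agent_id: str, reason: str) -> None:
        self._revoked[agent_id] = reason

    def is_revoked(self, agent_id: str) -> bool:
        return agent_id in self._revoked

def pre_sign_gate(registry: RevocationRegistry, agent_id: str) -> None:
    """Refuse to proceed with signing if the agent's authority is revoked."""
    if registry.is_revoked(agent_id):
        raise PermissionError(f"signing authority revoked for {agent_id}")

registry = RevocationRegistry()
agent = "spiffe://example.org/agent/procurement"  # hypothetical ID

pre_sign_gate(registry, agent)  # passes while authority is intact
registry.revoke(agent, "key rotation incident")
try:
    pre_sign_gate(registry, agent)
    blocked = False
except PermissionError:
    blocked = True
assert blocked
```

"Has been tested" on the checklist means exactly this kind of exercise: revoke a live agent and confirm the next signing attempt fails, rather than trusting that the path works.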
Frequently asked questions
Q: Can an AI agent be a legal signatory?
No. In all current EU and common law jurisdictions, legal personality - and therefore the capacity to be a signatory - is held by natural persons and legal entities (companies). An AI agent cannot be a signatory. It acts as a mandatary on behalf of a human or corporate principal, who bears the legal consequences of the signature.
Q: Who is liable if an AI agent signs a contract fraudulently?
Liability depends on the chain of causation. If the agent was compromised by an external attacker, liability may lie with the signing infrastructure provider if the breach resulted from negligence. If the agent exceeded its authorization due to poor scope design, liability typically lies with the company that deployed it. If the agent was deliberately misused by an insider, criminal liability may attach to that individual.
Q: Does an AI agent signature have the same legal weight as a human e-signature?
Under eIDAS, a simple or advanced electronic signature can be produced by an AI agent with the right implementation - a verified machine identity, a private key under the agent’s exclusive control, and a document hash included in the signature. Qualified electronic signatures are a different matter: they require a qualified trust service provider (QTSP) and a qualified signature creation device (QSCD), which is a higher bar that current agent signing architectures do not generally meet. For most business contracts - NDAs, vendor agreements, SaaS renewals - simple eIDAS-compliant signatures are legally sufficient. For transactions legally requiring qualified signatures, the infrastructure does not yet exist for agents to sign autonomously.
Q: What is the difference between an AI agent signature and a digital signature?
A digital signature is a cryptographic mechanism - a mathematical proof that a specific private key was used to sign specific data. An AI agent signature is a legal act - a contractual commitment - that uses a digital signature as its technical foundation, combined with a machine identity, a delegation framework, and an audit trail that gives it legal standing. Digital signature is the technical layer; AI agent signature is the legal layer built on top of it.
Q: What is a delegation scope in the context of AI agent signing?
A delegation scope is a formal, machine-readable record that defines the limits of an AI agent’s signing authority. It specifies what document types the agent can sign, up to what financial value, with which counterparties, and for how long. It is the AI-native equivalent of a limited power of attorney. See What Is a Delegation Scope?
Q: What regulation governs AI agent signatures in the EU?
Multiple regulations are relevant: eIDAS 2 governs electronic signatures; DORA governs automated contractual arrangements in the financial sector; the EU AI Act governs traceability for high-risk AI systems; and NIS2 governs security of automated processes in critical sectors. There is no single regulation specifically for AI agent signatures yet, which makes proactive implementation of best practices more important, not less.
Summary
When an AI agent signs a contract, legal responsibility lies with the principal who authorized it. The agent is a mandatary, not a legal person. What determines whether the signature is legally defensible is the quality of the delegation chain: a verified machine identity, a documented and limited delegation scope, and an immutable audit trail. Without these three elements, every agent signature in your organization carries legal risk. With them, agent signing is not only legally sound - it is more auditable than most human signing processes.
About Subnoto
Subnoto is a French e-signature company built on confidential computing - documents and signing sessions stay encrypted throughout server-side processing, using hardware-level isolation (Intel SGX / AMD SEV). We currently provide confidential e-signatures for teams and developers who handle sensitive contracts and can’t afford to expose document contents to a third-party platform.
We are working toward the broader trust infrastructure this series describes: machine identity registration, delegation scope management, and hardware-attested audit trails for AI agent signing. We don’t have that built yet. We are talking to the organizations that will need it first, to make sure we build it right.
We’re particularly interested in hearing from engineering or legal teams where AI agents are already operating with signing authority in production, or where that capability is being designed now. If that’s you, the governance questions in this post are probably already live problems - and we’d like to understand your specific constraints.