Install the SDK, register your agent, call gate(). Key management, policy evaluation, and the audit chain are handled. You handle shipping.
pip install agentlattice
import os
from agentlattice import AgentLattice
al = AgentLattice(api_key=os.environ["AL_API_KEY"])
result = await al.gate("deploy-to-prod", agent_id=agent_id, signature=sig)
Have an existing agent?
Already running a LangChain, CrewAI, or TypeScript agent? The instrumentation tools add identity, authorization, and a tamper-proof audit trail to every tool call — without rewriting your agent.
pip install al-instrument
al-instrument agent.py --apply --register
Run without --apply first to preview the diff. The diff is the tutorial.
// the four primitives
Stable across model upgrades and key rotations.
Each agent gets an ECDSA P-256 keypair. The public key is the identity — not the model version, not the deployment, not the API key. When Devin upgrades from v3 to v4, the identity persists. When you rotate keys, the audit trail stays intact.
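In practice, identity means the agent signs each request with its private key — the `sig` passed to `gate()` below. A minimal sketch of that flow using the `cryptography` package (an assumption for illustration; the SDK's own signing helper, if any, isn't shown on this page):

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes, serialization

# Generate a P-256 keypair (normally done once, at registration)
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# The public key bytes are the stable identity — not the model, not the API key
identity = public_key.public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

# Sign the action payload; this is what `signature=sig` would carry
payload = b"deploy-to-prod"
sig = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))

# Anyone holding the public key can verify — raises InvalidSignature on mismatch
public_key.verify(sig, payload, ec.ECDSA(hashes.SHA256()))
```

Rotating keys changes `sig` production but not the recorded identity history, which is why the audit trail survives rotation.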
al = AgentLattice(api_key=os.environ["AL_API_KEY"])
# agent_id is stable — survives model upgrades
agent = await al.identity.register(name="devin-v4")
Policy-as-code evaluated at every gate.
Write policies in a declarative DSL. Every agent action calls gate() — which evaluates your policy, logs the decision, and either allows or blocks. Approval flows, SoD enforcement, and time-bounded permissions are first-class primitives, not afterthoughts.
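The DSL itself isn't shown on this page, but the evaluation semantics are simple to sketch: first matching policy wins, and no match fails closed. The `Policy` fields below are hypothetical stand-ins, not the SDK's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    id: str
    action: str   # action this policy governs
    allow: bool

def evaluate(policies: list[Policy], action: str) -> tuple[bool, str]:
    # First matching policy decides; no match fails closed with policy_not_found
    for p in policies:
        if p.action == action:
            return p.allow, p.id
    return False, "policy_not_found"

policies = [Policy("prod-gate", "deploy.production", True)]
assert evaluate(policies, "deploy.production") == (True, "prod-gate")
assert evaluate(policies, "db.drop") == (False, "policy_not_found")
```

Failing closed is the point: an unmatched action is a policy decision, not a server error.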
# Fails closed if no policy matches — policy_not_found, not 500
result = await al.gate("deploy.production", agent_id=agent_id, signature=sig)
# result: GateResult(allowed=True, policy_id=..., audit_id=...)
Tamper-proof. Independently verifiable. Not our word — math.
Every gate() call appends to a hash chain. Each row's hash includes the previous row's hash — so modifying any historical record breaks all subsequent hashes. Your auditor can verify integrity without trusting AgentLattice. SOX, SOC2, HIPAA: this is the primitive they're asking for.
# Verify chain integrity — works offline, no trust in AgentLattice required
result = await al.audit.verify(org_id=org_id, from_=start, to=end)
# result.valid: True | False — if False, result.broken_at shows the row
Bounded scope tokens. Programmatically revocable.
Agents can delegate to sub-agents — but only within their own permission boundary. A coding agent can delegate read-only repo access to a review bot. It cannot grant write access it doesn't have. Every delegation is scope-narrowing. Revocation is instant and propagates to all children.
# Devin delegates read-only review scope to code-review-bot
token = await al.delegation.create(
    from_=devin_agent_id,
    to=review_bot_id,
    scope=["pr.read", "comment.write"],
    expires_in="4h",
)
// works with your stack
AgentLattice is a governance layer, not a framework. It slots into what you already use.
import os

from langchain.chains import LLMChain
from langchain_agentlattice import AgentLatticeCallback

chain = LLMChain(
    llm=llm,
    prompt=prompt,
    callbacks=[AgentLatticeCallback(api_key=os.environ["AL_API_KEY"])],
)
# Every tool call goes through gate() automatically
// delegation
A coding agent can hand off read access to a review bot — but it cannot grant permissions it doesn't have. Revoke the parent token and the entire subtree is instantly revoked.
Principal (human) → Coding Agent → Sub-agent
// scope narrowing
Each delegation can only grant a subset of the delegator's own permissions. You cannot escalate privileges through a chain.
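The invariant is a subset check at delegation time. A minimal sketch (illustrative names, not the SDK's internals):

```python
def narrow(parent_scope: set[str], requested: set[str]) -> set[str]:
    # A delegation may only carry scopes the delegator already holds
    if not requested <= parent_scope:
        raise PermissionError(f"cannot escalate: {requested - parent_scope}")
    return requested

coding_agent = {"pr.read", "pr.write", "comment.write"}
review_bot = narrow(coding_agent, {"pr.read", "comment.write"})  # fine: a subset
# narrow(coding_agent, {"repo.admin"}) would raise PermissionError
```

Applied at every hop, the check guarantees that no chain of delegations, however deep, ever widens beyond the principal's original grant.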
// instant revocation
Revoking a token invalidates the entire subtree — all children and grandchildren lose access simultaneously.
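Subtree revocation is a walk of the delegation tree. A sketch of the mechanics (the real system revokes server-side; names here are illustrative):

```python
from collections import defaultdict

children: dict[str, list[str]] = defaultdict(list)  # token -> direct delegations
revoked: set[str] = set()

def delegate(parent: str, child: str) -> None:
    children[parent].append(child)

def revoke(token: str) -> None:
    # Depth-first: revoking a token invalidates every descendant token
    revoked.add(token)
    for child in children[token]:
        revoke(child)

delegate("devin", "review-bot")
delegate("review-bot", "lint-bot")
revoke("devin")
assert {"devin", "review-bot", "lint-bot"} <= revoked
```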
// audit follows the chain
Every action taken under a delegation records both the acting agent and the delegation chain that authorized it.
// coverage map
We're not claiming a magic layer that governs every AI system. Here's what AgentLattice actually covers today — and how.
Full ECDSA path — identity, auth, audit, delegation
Webhook path — HMAC-verified, GitHub Checks enforcement
SDK integration — gate() wraps every tool call
GitHub App webhook — PR/deploy governance enforced
MCP gateway — tool restriction at the server level
// why 60% on model-native agents
GitHub Copilot and ChatGPT Enterprise operate inside their own runtimes — there's no SDK hook. We govern these via MCP: the MCP server is the trust boundary. Tool calls pass through AgentLattice before execution. It's the right architecture; coverage will improve as MCP adoption grows.
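The gateway pattern is straightforward: the MCP server calls `gate()` before executing any tool on the agent's behalf. A sketch of that wrapper — the stub client stands in for the real AgentLattice client, and the tool names are hypothetical:

```python
import asyncio

async def gated_tool(al, agent_id, sig, tool_name, tool_fn, *args, **kwargs):
    # Gate first: the MCP server only runs the tool if policy allows it
    result = await al.gate(f"tool.{tool_name}", agent_id=agent_id, signature=sig)
    if not result.allowed:
        raise PermissionError(f"blocked by policy {result.policy_id}")
    return await tool_fn(*args, **kwargs)

# Stand-in client for illustration; in production this is the AgentLattice client
class _StubClient:
    class _Result:
        def __init__(self, allowed, policy_id):
            self.allowed, self.policy_id = allowed, policy_id

    async def gate(self, action, agent_id, signature):
        # Pretend policy only allows read tools for this agent
        return self._Result(action == "tool.pr_read", "read-only-policy")

async def pr_read():
    return "diff contents"

al = _StubClient()
print(asyncio.run(gated_tool(al, "agent-1", "sig", "pr_read", pr_read)))
```

Because the model-native runtime never sees the tool directly — only the MCP server does — the server is the enforcement point even when no SDK hook exists.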