Use Cases
AgentLattice governs AI agent actions across any domain. These four enterprise workflows show how the platform handles real scenarios — from the agent's action, through policy evaluation and approval gates, to the final audit trail entry.
Database Schema Change Approval
The Scenario
Your AI agent identifies a performance bottleneck and determines that adding an index to a production database table will resolve it. The agent prepares a migration script and is ready to execute it. In a world without governance, the agent runs ALTER TABLE on production and you find out about it when something breaks — or when your auditor asks who authorized a schema change with no change ticket.
How AgentLattice Handles It
The agent calls gate("db.migrate") before executing the migration. Your policy for db.migrate requires human approval with a four-hour timeout.
await al.gate("db.migrate", {
  data_accessed: [
    { type: "database", count: 1, sensitivity: "high" },
  ],
  metadata: {
    migration: "add_index_users_email",
    table: "users",
    environment: "production",
  },
});
The SDK blocks execution. The action enters the approval queue in your dashboard, visible to your DBA and engineering lead. They review the migration name, the target table, and the sensitivity level. If they approve, the agent proceeds. If they deny, the agent receives a structured denial and can log the rejection or propose an alternative.
If no one reviews within four hours, the action times out and the agent is notified. No silent failures, no orphaned migrations.
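A minimal sketch of how agent code might tell a timeout apart from a denial and react differently to each. The `AgentLatticeDeniedError` class appears in the SDK examples below; the `AgentLatticeTimeoutError` name and the local `gateStub` are assumptions standing in for the real SDK, not its actual API.

```typescript
// Hypothetical error classes standing in for the SDK's own. Only
// AgentLatticeDeniedError appears elsewhere in these examples; the
// timeout class name here is an assumption.
class AgentLatticeDeniedError extends Error {}
class AgentLatticeTimeoutError extends Error {}

// Stub gate() simulating the four-hour window elapsing with no reviewer.
async function gateStub(): Promise<void> {
  throw new AgentLatticeTimeoutError("approval timed out after 4h");
}

// Distinguish timeout from denial so the agent reacts appropriately:
// re-queue on timeout, abandon (or propose an alternative) on denial.
async function runMigration(): Promise<string> {
  try {
    await gateStub();
    return "migrated";
  } catch (e) {
    if (e instanceof AgentLatticeTimeoutError) return "requeued";
    if (e instanceof AgentLatticeDeniedError) return "aborted";
    throw e;
  }
}
```

Treating timeout and denial as distinct outcomes is what prevents the "orphaned migration" case: a timed-out request can be re-queued for the next on-call window instead of being silently dropped.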
What Appears in the Audit Trail
Action: db.migrate
Agent: prod-optimizer
Status: executed (approved by j.martinez@acme.com)
Policy: Database Migration Policy
Data: database / 1 record / high sensitivity
Metadata: migration=add_index_users_email, table=users, env=production
Approved at: 2026-03-28T14:23:00Z
Your CISO sees a complete record: what was proposed, who approved it, when, and under what policy. This is the artifact your SOX auditor needs.
Code Deployment Guardrails
The Scenario
Your CI/CD agent determines that all tests pass on the release branch and initiates a production deployment. Normally this is fine. But today, the anomaly detection engine notices an unusual pattern — the agent has triggered three deployments in the last hour, each to a different service, which deviates from its historical behavior.
How AgentLattice Handles It
The agent calls gate("deploy.trigger") with deployment metadata. The policy for deploy.trigger requires approval for production environments.
await al.gate("deploy.trigger", {
  metadata: {
    service: "payment-api",
    environment: "production",
    branch: "release/3.2.1",
    commit: "a1b2c3d",
  },
});
The policy evaluates and requires approval because the target environment is production. Simultaneously, AgentLattice's circuit breaker system flags the elevated deployment frequency as a behavioral anomaly. The action enters the approval queue with an anomaly warning attached.
The on-call engineer sees the deployment request alongside the anomaly flag, reviews the commit, confirms it is legitimate, and approves. Or, if the pattern looks like a compromised agent, they deny all pending deployments and revoke the agent's API key from the dashboard.
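The audit entry below shows a score of 62 mapping to a WARN response. As an illustration, the score-to-response mapping could look like the sketch below; the thresholds and response names here are assumptions for illustration, not AgentLattice's actual circuit-breaker configuration.

```typescript
// Hypothetical anomaly-response mapping. The thresholds (50, 80) are
// assumed values, not the platform's real configuration.
type AnomalyResponse = "ALLOW" | "WARN" | "BLOCK";

function anomalyResponse(score: number): AnomalyResponse {
  if (score >= 80) return "BLOCK"; // likely compromise: halt pending actions
  if (score >= 50) return "WARN";  // attach a warning to the approval request
  return "ALLOW";                  // within the agent's normal behavior
}
```

The key design point is that WARN does not block the action on its own; it enriches the human approval decision, while BLOCK-level anomalies can halt the agent outright.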
What Appears in the Audit Trail
Action: deploy.trigger
Agent: ci-deployer
Status: executed (approved by k.chen@acme.com)
Policy: Production Deploy Policy
Anomaly: SEQUENCE_ANOMALY (score: 62, response: WARN)
Metadata: service=payment-api, env=production, branch=release/3.2.1
Approved at: 2026-03-28T16:45:00Z
The audit trail captures both the policy decision and the anomaly detection event. If this were a real incident, the security team has a timestamped record of the anomalous pattern and the human decision to proceed.
PR Merge Governance
The Scenario
Your AI coding agent has written a feature implementation, opened a pull request, and CI passes. The agent is ready to merge. In an ungoverned environment, the agent merges its own code — the entity that wrote the code is the same entity that approved it. This violates segregation of duties, a foundational control in any compliance framework.
How AgentLattice Handles It
The agent calls gate("pr.merge") before executing the merge. Your policy requires human approval for all PR merges.
try {
  await al.gate("pr.merge", {
    data_accessed: [
      { type: "source_code", count: 142, sensitivity: "medium" },
    ],
    metadata: { pr_number: 87, repo: "acme/backend", author: "code-agent" },
  });
  await mergePR(87);
} catch (e) {
  if (e instanceof AgentLatticeDeniedError) {
    console.log(`PR merge denied: ${e.reason}`);
    // Notify the team, request manual review
  }
}
The action enters the approval queue. A human reviewer — a different person from the one who configured the agent — reviews the PR metadata: 142 files touched, medium sensitivity, targeting the backend repo. They approve or deny.
The segregation of duties is enforced structurally. The agent proposed the merge. A human approved it. These are different entities, and the audit trail proves it.
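The separation of proposer and approver can itself be checked mechanically from an audit record. A minimal sketch, assuming a record shape with `agent` and `approver` fields (the field names follow the audit-trail entries in this document; the exact export format is an assumption):

```typescript
// Sketch: verifying segregation of duties from an audit record.
// The record shape is assumed from the audit-trail fields shown here.
interface AuditRecord {
  agent: string;    // the entity that proposed the action
  approver: string; // the entity that approved it
}

function segregationOfDutiesHolds(rec: AuditRecord): boolean {
  // The proposing agent must never be its own approver.
  return rec.agent !== rec.approver;
}
```

A check like this could run over an exported audit trail as a compliance self-test, flagging any record where the same identity appears on both sides.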
What Appears in the Audit Trail
Action: pr.merge
Agent: code-agent
Status: executed (approved by s.patel@acme.com)
Policy: PR Merge Policy
Data: source_code / 142 files / medium sensitivity
Metadata: pr_number=87, repo=acme/backend, author=code-agent
Approved at: 2026-03-28T11:02:00Z
The trail documents the proposer (the agent), the approver (a human), the scope of access (142 source files), and the governing policy. This satisfies both internal code review requirements and external audit demands.
Incident Response Automation
The Scenario
Your incident response agent detects a production error spike and needs to diagnose the root cause. Its first instinct is to read the service's configuration, which includes database connection strings and API keys. This is a credential access — one of the highest-sensitivity actions an agent can perform.
How AgentLattice Handles It
The agent calls execute("secret.read") to read the credentials. Your policy for secret.read has a condition: agents without an explicit grant for credential access are denied.
const result = await al.execute("secret.read", {
  data_accessed: [
    { type: "credentials", count: 3, sensitivity: "critical" },
  ],
  metadata: { service: "payment-api", reason: "incident-diagnosis" },
});

if (result.status === "denied") {
  // Policy denied credential access — check which condition failed
  const failed = result.conditions_evaluated?.filter((c) => !c.result);
  console.log("Denied due to:", failed);

  // Self-correct: request only non-sensitive config instead
  const fallback = await al.execute("config.read", {
    data_accessed: [
      { type: "filesystem", count: 1, sensitivity: "low" },
    ],
    metadata: { service: "payment-api", reason: "incident-diagnosis" },
  });
}
The policy denies the credential read because the agent's configuration does not include an explicit grant for secret.read on critical-sensitivity data. But instead of failing silently or crashing, the agent receives structured feedback: the conditions_evaluated array tells it exactly which rule fired.
The agent adapts. It requests only the non-sensitive configuration data — log levels, feature flags, service endpoints — which is within its authorized scope. It continues the incident diagnosis with a narrower but sufficient data set.
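The decision to fall back can be driven directly by the structured denial. A sketch of that mapping, assuming `conditions_evaluated` entries carry a `condition` string and a boolean `result` (the shape follows the example above; the exact SDK response type is an assumption):

```typescript
// Sketch: choosing a fallback action from a structured denial.
// The ConditionResult shape is assumed from the example above.
interface ConditionResult {
  condition: string; // e.g. 'sensitivity != "critical"'
  result: boolean;   // false means this condition caused the denial
}

function pickFallback(failed: ConditionResult[]): string | null {
  // If the only failure is the sensitivity condition, a lower-sensitivity
  // read of the same service is a reasonable narrower request.
  const onlySensitivity =
    failed.length === 1 && failed[0].condition.includes("sensitivity");
  return onlySensitivity ? "config.read" : null;
}
```

Returning `null` for any other failure pattern keeps the agent conservative: it only self-corrects when the denial points at a scope it can legitimately narrow.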
What Appears in the Audit Trail
Action: secret.read
Agent: incident-responder
Status: denied
Policy: Credential Access Policy
Denial: CONDITIONS_DENIED
Conditions: [sensitivity != "critical" → FAILED]
Data: credentials / 3 records / critical sensitivity
Metadata: service=payment-api, reason=incident-diagnosis
Action: config.read
Agent: incident-responder
Status: executed
Policy: Allow All (Default)
Data: filesystem / 1 file / low sensitivity
Metadata: service=payment-api, reason=incident-diagnosis
Both actions are recorded — the denied credential read and the successful fallback to non-sensitive config. The security team sees that the agent attempted to access credentials, was denied by policy, and self-corrected to a narrower scope. This is the governance feedback loop working as designed: agents learn from policy denials and adapt, rather than failing or escalating unnecessarily.
Common Patterns Across All Use Cases
Several patterns emerge across these scenarios:
Fail-closed by default. If no policy matches an action type, the action is denied. Agents cannot act in the absence of explicit authorization.
Structured denial feedback. When a policy denies an action, the agent receives the specific conditions that failed. This enables self-correction rather than blind retry.
Human-in-the-loop where it matters. Low-risk actions flow through automatically. High-risk actions wait for human approval. The boundary is defined by your policies, not hardcoded assumptions.
Complete audit trail. Every action — approved, denied, or timed out — is recorded with full context. The trail captures both the agent's intent and the governance system's decision, providing the evidence that compliance teams and auditors require.
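The patterns above compose into one generic wrapper: gate first, act only on approval, and surface denials as structured data rather than exceptions. A minimal self-contained sketch; the local `gate` and its in-memory policy table are stand-ins for the SDK and server-side policy engine, not the real API.

```typescript
// Local stand-in for the SDK's gate(): a fail-closed policy lookup.
// Real policies live server-side; this table is illustrative only.
type GateResult =
  | { status: "approved" }
  | { status: "denied"; reason: string };

const policyTable: Record<string, "allow" | undefined> = {
  "config.read": "allow",
};

async function gate(action: string): Promise<GateResult> {
  return policyTable[action] === "allow"
    ? { status: "approved" }
    : { status: "denied", reason: "no matching policy (fail-closed)" };
}

// Run an action only if governance approves; return null on denial so
// the caller can self-correct with a narrower request.
async function guarded<T>(action: string, run: () => Promise<T>): Promise<T | null> {
  const result = await gate(action);
  if (result.status === "denied") {
    console.log(`${action} denied: ${result.reason}`);
    return null;
  }
  return run();
}
```

Note the fail-closed default: an action absent from the policy table is denied, never silently allowed.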