Compliance
This document is written for CISOs, GRC teams, and auditors evaluating how AgentLattice maps to established compliance frameworks. It covers SOX control mappings, SOC 2 Type II evidence generation, audit trail integrity guarantees, and how to run a compliance-focused pilot.
The Problem AgentLattice Solves for Compliance
AI agents are a new category of actor in your environment. They read data, write code, merge pull requests, trigger deployments, and access sensitive systems. Your existing compliance controls were designed for human users and automated pipelines with predictable behavior. Agents are neither.
AgentLattice provides the governance layer that makes AI agent activity auditable, policy-governed, and compliant with the same frameworks you already report against. Every agent action flows through a policy engine, produces an immutable audit event, and can be traced back to the identity that authorized it.
SOX Compliance Mapping
AgentLattice maps its controls to the Trust Services Criteria -- the COSO-aligned criteria that also anchor SOC 2 -- which many organizations reference when scoping SOX IT general controls. Here is how each relevant criterion is addressed:
CC6.1 -- Logical Access Controls
Requirement: The organization implements logical access controls to restrict access to information assets.
How AgentLattice addresses it: Every AI agent has a registered identity with a unique API key fingerprint. Every action records which agent performed it, what trust source authenticated it (direct key or delegation token), and what data categories were accessed. The audit trail provides a complete record of who had access and what they did with it.
Evidence produced: Agent identity list, action-by-agent breakdown, key fingerprint mapping, trust source distribution.
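The key-fingerprint mechanism can be sketched as follows. This is an illustrative assumption about how a fingerprint might be derived (SHA-256 over the raw key, truncated to 16 hex characters); AgentLattice's actual derivation may differ.

```python
import hashlib

def key_fingerprint(api_key: str) -> str:
    """Return a short, non-reversible identifier for an API key.

    Hypothetical sketch: first 16 hex chars of SHA-256 over the raw key.
    The fingerprint appears in audit events instead of the key itself,
    so evidence exports never expose credentials.
    """
    return hashlib.sha256(api_key.encode("utf-8")).hexdigest()[:16]

fp = key_fingerprint("al_live_example_key")  # hypothetical key format
```

Because the same key always yields the same fingerprint, auditors can correlate every action back to one registered identity without ever handling the secret.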
CC6.2 -- System Operations and Authorization
Requirement: The organization implements controls to ensure that authorized users and processes are authenticated and their access is authorized.
How AgentLattice addresses it: Every action is evaluated against your workspace's policy rules. Policies can auto-approve, require human review, or auto-deny based on action type, metadata conditions, and data sensitivity. Every evaluation result is recorded: approved, denied, or timed out.
Evidence produced: Authorization rate, policy coverage percentage (what fraction of actions had a governing policy), denial records with reasons, approval records with reviewer identity.
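The evaluation model described above can be sketched in a few lines. The rule fields, effect names, and sensitivity escalation are illustrative assumptions, not the AgentLattice policy schema.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    action_type: str          # e.g. "merge_pr", "deploy" (assumed names)
    effect: str               # "auto_approve" | "require_review" | "auto_deny"
    max_sensitivity: int = 0  # escalate if data sensitivity exceeds this

def evaluate(rules: list[Rule], action_type: str, sensitivity: int) -> str:
    """Return the policy decision for an action, or flag it as uncovered."""
    for rule in rules:
        if rule.action_type == action_type:
            if rule.effect == "auto_approve" and sensitivity > rule.max_sensitivity:
                return "require_review"  # sensitive data escalates past auto-approve
            return rule.effect
    return "uncovered"  # counts against the policy-coverage metric

rules = [
    Rule("merge_pr", "auto_approve", max_sensitivity=1),
    Rule("deploy", "require_review"),
]
```

An action with no matching rule is reported as uncovered, which is exactly what the policy coverage percentage measures.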
CC6.3 -- Segregation of Duties
Requirement: The organization implements segregation of duties to prevent unauthorized or inappropriate actions.
How AgentLattice addresses it: AgentLattice enforces that the entity requesting an action cannot be the entity approving it. This is a hard constraint in the approval workflow. SOX evidence reports include a violations counter -- any nonzero value is a finding that warrants immediate investigation.
Evidence produced: SoD enforcement status (active/inactive), violation count for the reporting period.
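The hard constraint reduces to a single check at approval time. A minimal sketch, assuming a simple event shape; AgentLattice enforces this server-side:

```python
def approve(event: dict, approver_id: str) -> dict:
    """Segregation of duties: the requester may never self-approve.

    `requested_by` and the event shape are illustrative assumptions.
    """
    if approver_id == event["requested_by"]:
        raise PermissionError("SoD violation: requester cannot approve own action")
    return {**event, "status": "approved", "approved_by": approver_id}
```

Any rejected self-approval attempt would also increment the violations counter that appears in the SOX evidence export.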
CC7.2 -- System Monitoring
Requirement: The organization monitors system components and the operation of those components for anomalies.
How AgentLattice addresses it: Behavioral baselines are maintained for each agent. Anomaly detection scores every action against the baseline and flags deviations. Circuit breaker policies automate graduated responses (warn, throttle, halt, kill). All anomaly events and enforcement actions are recorded with tamper-evident hashing.
Evidence produced: Anomaly event log with scores and threat taxonomy technique tags, enforcement event log, circuit breaker policy configurations, agent health summaries.
SOC 2 Type II Evidence
SOC 2 Type II audits require evidence that controls operated effectively over a sustained period -- not just that they exist, but that they worked. AgentLattice produces structured JSON evidence artifacts mapped to each Trust Services Criterion:
| Criterion | Evidence Section | What It Contains |
|---|---|---|
| CC6.1 | Logical Access | Total agent actions, distinct identities, actions by trust source, actions by type |
| CC6.2 | Authorization | Total actions, approved/denied counts, approval rate, policy coverage percentage |
| CC6.3 | Segregation of Duties | SoD enforcement status, violation count |
| CC8.1 | Change Management | PR-scoped actions, distinct repositories, policy versions in use |
Each evidence export includes:
- Chain integrity attestation: Whether the hash chain verified cleanly through the reporting period, the cumulative hash at the end of the period, and the ECDSA signature of the latest checkpoint.
- Evidence sample: A representative set of audit events (up to 1,000 per export) for auditor review. The full dataset is available via raw JSON or CSV export.
- Period boundaries: Explicit start and end dates for the reporting window.
- Schema version: Ensures auditors can validate evidence format across reporting periods.
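Putting the pieces above together, an evidence artifact might look like the following. This is a hypothetical shape assembled from the fields described in this section; the real field names and layout may differ.

```python
# Hypothetical SOC 2 evidence export shape -- field names are assumptions.
evidence = {
    "schema_version": "1.0",
    "period": {"start": "2025-01-01", "end": "2025-03-31"},
    "chain_integrity": {
        "verified": True,
        "cumulative_hash": "...",      # hash at the end of the period
        "checkpoint_signature": "...", # ECDSA signature of latest checkpoint
    },
    "criteria": {
        "CC6.1": {"total_actions": 0, "distinct_identities": 0},
        "CC6.2": {"approved": 0, "denied": 0, "policy_coverage_pct": 0.0},
        "CC6.3": {"sod_enforced": True, "violations": 0},
        "CC8.1": {"pr_scoped_actions": 0, "distinct_repositories": 0},
    },
    "evidence_sample": [],  # up to 1,000 representative audit events
}
```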
The evidence is generated on demand via API or dashboard export. There is no manual assembly, no spreadsheet compilation, no "we'll get back to you in a week." Your auditor gets structured, machine-readable evidence with cryptographic integrity verification.
Audit Trail Integrity
Hash Chain: How It Works
Every audit event in AgentLattice is linked to the previous event through a cryptographic hash chain. Here is how it works in non-technical terms:
- When the first event in your workspace is created, it references a known starting value (the genesis).
- Each subsequent event takes the hash of the previous event and includes it as part of its own record.
- A new hash is computed from the event's data plus the previous hash, creating a fingerprint for this event.
- This fingerprint becomes the "previous hash" for the next event.
The result is a chain where every link depends on every link before it. If someone modifies event number 500 out of 10,000, the hash for event 500 changes, which means event 501's "previous hash" no longer matches, which means event 501's own hash is wrong, and so on through event 10,000. Tampering with any single event is detectable by verifying the chain.
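The steps above can be sketched in Python. This is a minimal illustration, not the production implementation; in particular, the canonical JSON encoding of an event is an assumption.

```python
import hashlib
import json

GENESIS = "0" * 64  # known starting value for the first event

def event_hash(event: dict, prev_hash: str) -> str:
    """Fingerprint = SHA-256 over the event's canonical JSON plus prev hash."""
    payload = json.dumps(event, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(events: list[dict]) -> list[dict]:
    """Link each event to its predecessor's hash, starting from genesis."""
    prev, chain = GENESIS, []
    for e in events:
        h = event_hash(e, prev)
        chain.append({"event": e, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify(chain: list[dict]) -> bool:
    """Recompute every link; editing any event breaks all later links."""
    prev = GENESIS
    for link in chain:
        if link["prev_hash"] != prev or event_hash(link["event"], prev) != link["hash"]:
            return False
        prev = link["hash"]
    return True
```

Modifying any event in a built chain and re-running `verify` returns `False`, which is the tamper-evidence property described above.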
ECDSA Checkpoint Signatures
Verifying a chain from the very beginning grows linearly with the number of events. To keep verification fast, AgentLattice creates signed checkpoints at regular intervals. Each checkpoint records the cumulative hash at that point and signs it with an ECDSA P-256 key.
Verification only needs to walk the chain from the most recent signed checkpoint. This means verification cost is proportional to recent activity, not total historical volume.
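Checkpoint-bounded verification can be sketched like this. Since the Python standard library has no ECDSA, the P-256 signature check is abstracted as an injected `signature_valid` callback; the checkpoint shape is an assumption.

```python
import hashlib
import json

def event_hash(event: dict, prev_hash: str) -> str:
    """Same construction as full-chain verification: SHA-256 over event + prev."""
    payload = json.dumps(event, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_from_checkpoint(checkpoint: dict, recent_links: list[dict],
                           signature_valid) -> bool:
    """Walk only the events after the last signed checkpoint.

    Cost is proportional to recent activity, not total history. The ECDSA
    P-256 check is delegated to `signature_valid`, since the signing key
    is server-managed.
    """
    if not signature_valid(checkpoint):
        return False
    prev = checkpoint["cumulative_hash"]
    for link in recent_links:
        if link["prev_hash"] != prev or event_hash(link["event"], prev) != link["hash"]:
            return False
        prev = link["hash"]
    return True
```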
What This Means for Your Auditor
- Tamper evidence, not tamper prevention. The hash chain does not prevent someone with database access from modifying records. It makes any modification detectable. This is the same model used by blockchain, certificate transparency logs, and other append-only audit systems.
- Non-repudiation. ECDSA signatures on checkpoints provide cryptographic proof that the chain state was attested at a specific time. The signing key is server-managed -- no client can forge a checkpoint signature.
- Independent verification. The hash chain can be verified by any party with read access to the audit events. The algorithm is deterministic: given the same events, any implementation will produce the same hashes.
Running a Compliance Pilot
If you are evaluating AgentLattice for compliance purposes, here is a suggested approach:
Setup (Week 1)
- Register 2-3 representative agents covering different risk profiles (e.g., a code review bot, a deployment agent, a data processing workflow).
- Configure policies for each agent's action types. Start with "require approval" for high-risk actions and "auto-approve" for low-risk actions.
- Enable anomaly detection with default thresholds. Set circuit breaker policies to notify-only mode initially.
Observation (Weeks 2-4)
Let the agents operate normally. During this period, measure:
- Actions audited: Total events recorded. Target: 100% of agent actions flow through AgentLattice.
- Policy coverage: Percentage of actions governed by a policy. Target: 100%.
- Approval response time: How long humans take to approve gated actions. This reveals bottlenecks in your approval workflow.
- False positive rate: How many anomaly alerts are false positives. Use this to tune thresholds before enabling auto-enforcement.
- Anomalies caught: Genuine behavioral deviations detected. Even one real catch during a pilot demonstrates value.
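The observation-phase metrics above can be computed mechanically from the audit events. A sketch, assuming illustrative event fields (`policy_id`, `decision`, `anomaly_flag`, `triage`) rather than the real schema:

```python
def pilot_metrics(events: list[dict]) -> dict:
    """Compute observation-phase metrics from a list of audit events."""
    total = len(events)
    covered = sum(1 for e in events if e.get("policy_id"))
    approved = sum(1 for e in events if e.get("decision") == "approved")
    alerts = [e for e in events if e.get("anomaly_flag")]
    false_pos = sum(1 for e in alerts if e.get("triage") == "false_positive")
    return {
        "actions_audited": total,
        "policy_coverage_pct": 100 * covered / total if total else 0.0,
        "approval_rate_pct": 100 * approved / total if total else 0.0,
        "false_positive_rate_pct": 100 * false_pos / len(alerts) if alerts else 0.0,
    }
```

Anything short of 100% policy coverage identifies exactly which action types still need a governing rule before the pilot can meet its success criteria.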
Validation (Week 4)
- Export a SOC 2 evidence report covering the pilot period. Review the structure with your auditor.
- Run a hash chain verification to confirm integrity across the full pilot period.
- Export a SOX evidence report and map it to your existing control matrix.
- Review circuit breaker effectiveness: were the right anomalies flagged? Were response levels appropriate?
Success Criteria
| Metric | Target |
|---|---|
| Action audit coverage | 100% of agent actions recorded |
| Policy coverage | 100% of actions governed by a policy |
| Hash chain integrity | Valid across entire pilot period |
| Approval response time | Under your SLA (varies by org) |
| Evidence export | Generates without manual intervention |
| Anomaly detection | Baselines calibrated, detection active |
Evidence Export
AgentLattice supports three export formats:
Structured JSON (SOC 2 / SOX)
Purpose-built compliance artifacts with evidence mapped to specific control criteria. Generated via the dashboard or API. Includes chain integrity attestation and evidence sampling.
GET /api/audit/export?format=json
Raw JSON
Complete audit event data with all fields. Suitable for loading into your own SIEM, data warehouse, or analysis tooling.
GET /api/audit/export?format=json&limit=10000
CSV
Tabular export for spreadsheet-based review workflows. Same data as raw JSON in a flat format.
GET /api/audit/export?format=csv
All exports are scoped to the authenticated user's workspace. Row-level security ensures no cross-workspace data leakage. Exports are capped at 10,000 rows per request -- for larger datasets, use pagination or the streaming API.
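For datasets beyond the 10,000-row cap, a paginated client loop suffices. A sketch with the HTTP call injected as a function so it stays testable; the `offset` parameter is an assumption about the pagination scheme.

```python
def export_all(fetch_page, page_size: int = 10_000) -> list:
    """Page through an export endpoint until a short page signals the end.

    `fetch_page(limit, offset)` wraps the actual HTTP call, e.g. a GET to
    /api/audit/export?format=json with limit/offset query parameters.
    """
    events, offset = [], 0
    while True:
        page = fetch_page(limit=page_size, offset=offset)
        events.extend(page)
        if len(page) < page_size:  # short page: no more data
            return events
        offset += page_size

# Usage with a fake fetcher standing in for the API:
data = list(range(25))
fake_fetch = lambda limit, offset: data[offset:offset + limit]
```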
Regulatory Landscape
AgentLattice's governance model aligns with emerging AI regulatory requirements:
- EU AI Act (most obligations apply from August 2026): Requires risk-based monitoring, human oversight mechanisms, and documented audit trails for high-risk AI systems. AgentLattice provides all three.
- NIST AI RMF: The governance, map, measure, and manage functions all benefit from the structured audit trail and policy enforcement AgentLattice provides.
- Industry-specific regulations: Financial services (OCC guidance on model risk), healthcare (HIPAA audit requirements), and government (FISMA) all require demonstrable controls over automated systems accessing sensitive data.
AgentLattice does not certify your organization as compliant with any framework. It provides the infrastructure and evidence that makes compliance demonstrable. Your auditor evaluates the controls; AgentLattice generates the evidence they need to do so.