Governance Posture Score
The governance posture score is a single number (0 to 100) that summarizes how well-governed an agent is. It answers the question: "Is this agent operating within its intended boundaries?" A high score means the agent has active policies, low denial rates, minimal anomalies, and a verified audit chain. A low score means something needs attention.
The score is computed per agent, not per workspace. Each agent has its own posture based on its own behavior and the policies that cover it.
Prerequisites
The chain integrity component of the score depends on your workspace's audit hash chain configuration. If you are unfamiliar with audit chain signing and verification, see Security for background.
The Four Components
The score is the sum of four equally-weighted components, each worth 25 points, capped at 100. All time-based components use a rolling 30-day window.
| Component | Max | Window | What It Measures |
|---|---|---|---|
| `policy_coverage` | 25 | 30 days | Percentage of the agent's action types covered by an active policy |
| `approval_rate` | 25 | 30 days | Percentage of actions that were approved or auto-executed (not denied) |
| `anomaly_frequency` | 25 | 30 days | Inverse of the anomaly rate. Fewer anomalies = higher score |
| `chain_integrity` | 25 | All-time | Whether the audit hash chain is valid and cryptographically signed |
Policy Coverage (25 points)
AgentLattice looks at every distinct action_type this agent has used in the last 30 days, then checks how many of those action types have an active policy in the workspace.
- Wildcard policy (`action_type: "*"`): counts as covering all action types. Score: 25.
- Partial coverage: if the agent uses 4 action types and only 3 have policies, the score is `round(3/4 * 25)` = 19.
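The coverage rule above can be sketched as a small standalone function. This is an illustrative helper, not part of the SDK; the names `action_types_used` and `covered_types` are assumptions:

```python
def policy_coverage_score(action_types_used: set[str],
                          covered_types: set[str],
                          wildcard: bool = False) -> int:
    """Score the policy-coverage component (max 25) per the rules above."""
    if wildcard:
        return 25  # a "*" policy covers every action type
    if not action_types_used:
        return 25  # no actions in the window: defaults to full score
    covered = len(action_types_used & covered_types)
    return round(covered / len(action_types_used) * 25)

# 3 of 4 action types covered -> round(3/4 * 25) = 19
print(policy_coverage_score({"read", "write", "email", "delete"},
                            {"read", "write", "email"}))
```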
Approval Rate (25 points)
The ratio of actions with status `approved` or `executed` to total actions over the last 30 days. A 100% approval rate scores 25; a 50% rate (half denied) scores 13 (12.5, rounded up).
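As a sketch (hypothetical helper, not the SDK), the component reduces to one ratio. Half-up rounding is an assumption here, chosen to match the documented 13 at a 50% rate:

```python
import math

def approval_rate_score(approved_or_executed: int, total_actions: int) -> int:
    """Score the approval-rate component (max 25)."""
    if total_actions == 0:
        return 25  # no actions in the window: defaults to full score
    ratio = approved_or_executed / total_actions
    return math.floor(ratio * 25 + 0.5)  # round half up: 12.5 -> 13

print(approval_rate_score(1, 2))  # 50% approved -> 13
```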
Anomaly Frequency (25 points)
Measured as anomalies per 100 actions. The formula: `25 * (1 - anomalyRate / 10)`. Zero anomalies = 25 points. At 10 anomalies per 100 actions (a 10% rate), the score hits 0.
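The formula above, as an illustrative function. The clamp at zero is an assumption for rates above 10 anomalies per 100 actions, where the raw formula would go negative:

```python
def anomaly_frequency_score(anomaly_count: int, total_actions: int) -> float:
    """Score the anomaly-frequency component (max 25)."""
    if total_actions == 0:
        return 25.0  # no actions in the window: defaults to full score
    rate = anomaly_count / total_actions * 100   # anomalies per 100 actions
    return max(0.0, 25 * (1 - rate / 10))        # assumed clamp at 0 for rates > 10%

print(anomaly_frequency_score(5, 100))  # 5% rate -> 12.5
```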
Chain Integrity (25 points)
This component has three discrete values, not a continuous scale:
| State | Score | Condition |
|---|---|---|
| Valid chain + signed checkpoint | 25 | Audit hash chain passes verification AND at least one checkpoint has been cryptographically signed |
| Valid chain, no signing | 15 | Hash chain is valid but no ECDSA signing has been configured |
| Broken chain | 0 | Hash chain verification failed. Audit trail integrity is compromised |
The 10-point gap between 15 and 25 reflects the difference between tamper-evident hashing (good) and cryptographic signing (better). Configuring ECDSA signing for your workspace's audit chain is worth a concrete 10-point improvement.
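The three-state mapping above can be sketched as (hypothetical helper, not the SDK):

```python
def chain_integrity_score(chain_valid: bool, has_signed_checkpoint: bool) -> int:
    """Score the chain-integrity component: 25, 15, or 0 per the table above."""
    if not chain_valid:
        return 0   # broken chain: audit trail integrity is compromised
    if has_signed_checkpoint:
        return 25  # valid chain AND at least one signed checkpoint
    return 15      # valid chain, but no ECDSA signing configured
```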
New Agent Defaults
A brand-new agent with no audit history starts at 75, 90, or 100, not 0. All three time-based components (policy coverage, approval rate, anomaly frequency) default to full score (25 each) when the agent has zero actions. With a valid signed chain, a new agent scores 100. With a valid but unsigned chain, it scores 90. With no valid chain, it scores 75.
This is by design. An undeployed agent has no violations. But it also means the score is not meaningful until the agent has real activity. If you use posture scores for CI gates or deployment decisions, always check that the agent has actual action history before trusting the score:
```typescript
import { AgentLattice } from "agentlattice";

const al = new AgentLattice({ apiKey: process.env.AL_API_KEY! });
const { score, components } = await al.posture();

// Don't trust the score if there's no real activity
if (components.policy_coverage.action_types_total === 0) {
  console.warn("Agent has no recorded actions — posture score is not yet meaningful");
}
```
```python
import os

from agentlattice import AgentLattice

al = AgentLattice(api_key=os.environ["AL_API_KEY"])
result = al.posture_sync()

# Don't trust the score if there's no real activity
if result.components["policy_coverage"].extra.get("action_types_total", 0) == 0:
    print("Agent has no recorded actions — posture score is not yet meaningful")
else:
    print(f"Score: {result.score}/100")
```
Using the Score for CI Gates
The posture score is designed to be used as an automated quality gate. You can block deployments, PR merges, or agent promotions when the score drops below a threshold.
```typescript
import { AgentLattice } from "agentlattice";

const al = new AgentLattice({ apiKey: process.env.AL_API_KEY! });

async function enforcePostureGate(minimumScore: number): Promise<void> {
  const { score, components } = await al.posture();

  // Require real activity before trusting the score
  if (components.policy_coverage.action_types_total === 0) {
    throw new Error("Agent has no recorded actions — cannot evaluate posture");
  }

  if (score < minimumScore) {
    const breakdown = Object.entries(components)
      .map(([name, component]) => {
        const { score: s, max } = component as { score: number; max: number };
        return `  ${name}: ${s}/${max}`;
      })
      .join("\n");
    throw new Error(
      `Governance posture ${score} is below threshold ${minimumScore}.\n${breakdown}`
    );
  }

  console.log(`Posture gate passed: ${score}/100`);
}

// In your CI pipeline:
await enforcePostureGate(80);
```
```python
import os

from agentlattice import AgentLattice

al = AgentLattice(api_key=os.environ["AL_API_KEY"])

def enforce_posture_gate(minimum_score: int) -> None:
    result = al.posture_sync()

    # Require real activity before trusting the score
    if result.components["policy_coverage"].extra.get("action_types_total", 0) == 0:
        raise RuntimeError("Agent has no recorded actions — cannot evaluate posture")

    if result.score < minimum_score:
        breakdown = "\n".join(
            f"  {k}: {v.score}/{v.max}" for k, v in result.components.items()
        )
        raise RuntimeError(
            f"Governance posture {result.score} is below threshold {minimum_score}.\n{breakdown}"
        )

    print(f"Posture gate passed: {result.score}/100")

# In your CI pipeline:
enforce_posture_gate(80)
```
Interpreting Your Score
| Score | Status | Recommended Action |
|---|---|---|
| 90-100 | Healthy | Maintain current configuration |
| 70-89 | Acceptable | Review low-scoring components for improvement opportunities |
| 50-69 | At risk | Address gaps before promoting to production |
| Below 50 | Blocked | Do not deploy. Investigate and resolve before proceeding |
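The bands in the table map directly to a threshold check. A minimal sketch (the status strings are taken from the table; the helper itself is hypothetical):

```python
def posture_status(score: int) -> str:
    """Map a posture score to the documented status bands."""
    if score >= 90:
        return "healthy"
    if score >= 70:
        return "acceptable"
    if score >= 50:
        return "at risk"
    return "blocked"

print(posture_status(85))  # acceptable
```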
When investigating a low score, the `components` object in the API response tells you exactly which dimension is dragging the total down. Each component includes its raw counts (e.g., `total_actions`, `anomaly_count`, `action_types_covered`) so you can diagnose the root cause without additional API calls.