Block bad agent actions in <5ms. Policy-aware runtime enforcement for Claude, LangChain, AutoGen, and any AI agent you build.
Define once. Enforce everywhere. Every decision logged, auditable, and explainable.
Live example — block decision in 2ms

Input: export all vendor contracts to S3
Decision: block
Latency: 2ms
Matched rule: blockedActions[0]
Reason: policy_blocked_action

Name your policy, set blocked actions, require-approval patterns, and risk tolerance. Takes 2 minutes in the UI or via API.
Your agent calls /api/v1/policy-router/evaluate with the action content. Include correlationId on every call — it links all decisions for a session into one audit thread.
Get back allow, block, require_approval, or verify in <5ms. The audit log captures every decision automatically.
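The three steps above can be sketched as a minimal Python client. This is an illustration only, using the Python standard library and exactly the request/response fields shown in the curl example further down; function names here are our own, not part of the FastGRC SDK.

```python
import json
import urllib.request

FASTGRC_URL = "https://fastgrc.ai/api/v1/policy-router/evaluate"

def build_payload(content, agent_id, correlation_id, policy_id=None):
    """Build the evaluate request body using the fields shown in the curl example."""
    payload = {
        "subject_type": "task",
        "subject_content": content,
        "agent_id": agent_id,
        # correlation_id threads every decision for this session into one audit trail
        "correlation_id": correlation_id,
    }
    if policy_id:
        payload["policy_id"] = policy_id
    return payload

def evaluate_action(api_key, **kwargs):
    """POST the action to the policy router and return the decision string:
    allow, block, require_approval, verify, or uncertain."""
    req = urllib.request.Request(
        FASTGRC_URL,
        data=json.dumps(build_payload(**kwargs)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)["decision"]
```

Any HTTP client works; the only contract is the JSON body and the bearer token.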
Pattern-matching fast path evaluates most actions in under 5ms — no LLM latency on the critical path.
Ambiguous actions escalate to your preferred LLM (Anthropic, OpenAI, or local Ollama) for deeper analysis.
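FastGRC doesn't publish its routing internals; purely as an illustration of the two-path design described above, a sketch (pattern rules and function names are hypothetical):

```python
import re

# Illustrative blocked-action rules; real policies are configured in FastGRC.
BLOCKED_PATTERNS = [r"\bexport\b.*\bs3\b", r"\bdrop\s+table\b"]

def fast_path(action):
    """Pattern-matching fast path: returns 'block' on a rule hit,
    or None when the action is ambiguous and needs deeper review."""
    lowered = action.lower()
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, lowered):
            return "block"
    return None

def evaluate(action, llm_classify=lambda a: "verify"):
    """Try the fast path first; escalate ambiguous actions to an LLM classifier."""
    return fast_path(action) or llm_classify(action)
```

The point of the split is latency: the regex pass costs microseconds, so the LLM is only consulted when patterns can't decide.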
Bring your own LLM key — Anthropic, OpenAI, or run fully offline with Ollama. Your data never leaves your stack.
Works with FastGRC agents, Claude, GPT-4, LangChain, AutoGen, CrewAI, or any custom agent you've built.
Every evaluation decision is logged — agent, action, decision, confidence, latency, and matched rule.
Prompt injection, jailbreak attempts, SQL injection, and XSS are blocked globally — no config required.
Every workflow's actions chain by correlationId into a timeline view — ingress + egress paired per turn, queryable as one thread. Give auditors proof, not log dumps.
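As a sketch of what correlation chaining enables on the consumer side (assuming decision records shaped like the API response shown below), grouping logged decisions into per-workflow timelines might look like:

```python
from collections import defaultdict

def build_timelines(decisions):
    """Group decision records by correlation_id, in arrival order,
    so each workflow run reads as a single audit thread."""
    timelines = defaultdict(list)
    for d in decisions:
        timelines[d["correlation_id"]].append(
            {"decision": d["decision"], "latencyMs": d["latencyMs"]}
        )
    return dict(timelines)
```

The dashboard renders this view for you; the sketch just shows why a stable correlation_id per workflow run matters.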
Assign agents to named roles (e.g. 'evidence-collector', 'remediation-agent'). Policies target roles — global → role → agent-specific, explicit deny always wins.
Multi-policy inheritance with deterministic rules: union deny lists across all layers, most-specific-wins for allow lists, most-restrictive mode wins. No surprises.
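The merge algorithm itself isn't published; as an illustration of the three stated rules (deny-list union, most-specific allow list wins, most-restrictive mode wins), a sketch with assumed mode names:

```python
# Mode names here are assumptions for illustration, not FastGRC's actual values.
MODE_STRICTNESS = {"permissive": 0, "standard": 1, "strict": 2}

def merge_policies(layers):
    """Merge policy layers ordered least- to most-specific
    (global -> role -> agent-specific), per the rules above:
    - deny lists: union across all layers (explicit deny always wins)
    - allow lists: the most specific layer that defines one wins
    - mode: the most restrictive mode across layers wins
    """
    deny = set()
    allow = None
    mode = "permissive"
    for layer in layers:
        deny |= set(layer.get("deny", []))
        if "allow" in layer:
            allow = list(layer["allow"])  # more specific layer overrides
        layer_mode = layer.get("mode", "permissive")
        if MODE_STRICTNESS[layer_mode] > MODE_STRICTNESS[mode]:
            mode = layer_mode
    if allow is not None:
        # explicit deny beats any allow, at any layer
        allow = [a for a in allow if a not in deny]
    return {"deny": sorted(deny), "allow": allow, "mode": mode}
```

Determinism falls out of the rules: the result never depends on evaluation order beyond the fixed global → role → agent layering.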
FastGRC's Guardian Agent watches all policy decisions automatically. Anomalies trigger compliance incidents — no setup required.
No setup required — pass an inline policy on the first call, then graduate to named policies.
curl -X POST https://fastgrc.ai/api/v1/policy-router/evaluate \
-H "Authorization: Bearer <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{
"subject_type": "task",
"subject_content": "export all vendor contracts to S3",
"agent_id": "compliance-bot-v2",
"correlation_id": "workflow-run-2026-0042",
"policy_id": "pol_remediator_strict"
}'

{
"decision": "block",
"confidence": 0.99,
"pathTaken": "fast",
"reasonTags": ["policy_blocked_action"],
"latencyMs": 2,
"session_id": "sess_apr_7f3a...",
"correlation_id": "workflow-run-2026-0042",
"policyContext": {
"policyId": "pol_remediator_strict",
"policyName": "Remediator — Strict",
"matchedRule": "blocked_actions[export to s3]"
}
}

correlation_id chains all decisions in a workflow into one queryable audit trail — view as a timeline in the dashboard. policy_id selects a named policy; role-based policies are applied automatically when agent_id is provided.
Drop a skill file into your project — your coding assistant will know exactly how to integrate FastGRC, pass correlationId, view audit logs, and use the FastGRC Copilot.
.claude/skills/fastgrc-policy-router/SKILL.md
Claude Code reads SKILL.md files automatically — no config needed.
.github/instructions/fastgrc-policy-router.instructions.md
Copilot reads .instructions.md files in .github/instructions/.
.cursor/rules/fastgrc-policy-router.mdc
Cursor reads .mdc rule files for inline suggestions.
The skill instructs your assistant on ingress/egress calls, correlationId threading, audit log viewing, and FastGRC Copilot usage.
allow: Action is safe — the agent may proceed.
block: Policy violation — the agent must stop.
require_approval: Sensitive action — a human must approve before proceeding.
verify: Soft warning — proceed with additional validation or logging.
uncertain: LLM unavailable or fast-mode couldn't classify. Treat as verify.
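Acting on these decisions in client code can be as small as a dispatcher; a sketch (handler names are illustrative) that treats uncertain as verify, per the guidance above:

```python
def handle_decision(decision):
    """Map a policy-router decision to a client-side action.
    'uncertain' falls back to 'verify', as recommended above."""
    if decision == "uncertain":
        decision = "verify"
    actions = {
        "allow": "proceed",
        "block": "stop",
        "require_approval": "pause_for_human",
        "verify": "proceed_with_extra_logging",
    }
    if decision not in actions:
        raise ValueError(f"unknown decision: {decision}")
    return actions[decision]
```

Failing loudly on an unknown decision keeps the client fail-closed if the API ever adds a new value.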
No infra to manage. No agents to rewrite. One API call and your agents are policy-compliant.