Pre-product security infrastructure

Agent identity and runtime security for AI coding agents.

Securie is building AgentGuard: a zero-trust control layer that sits between coding agents and secrets, terminals, Git, cloud, databases, CI/CD, and production systems.

Identity: agent, owner, repo, session
Enforcement: filesystem, command, Git, network
Data guard: secret detection and redaction
Audit: approval-ready evidence trail

Why this exists

AI agents moved from chat to action. Security models did not.

Developer machines now host autonomous actors that can read repositories, run shells, call tools, browse, install packages, push branches, and interact with production-adjacent systems.

Secret exposure

Agents can read `.env`, tokens, SSH keys, and database URLs.

AgentGuard is being built so secrets are blocked, redacted, or routed through safer local/private handling before they become prompt context or audit leakage.
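The redaction path can be pictured as a stream filter over agent output. The sketch below handles only one well-known secret shape (AWS access key IDs: `AKIA` followed by 16 uppercase letters or digits) and is a hypothetical illustration, not AgentGuard's actual detector.

```rust
// Minimal sketch of output redaction for one secret shape: AWS access key
// IDs ("AKIA" + 16 uppercase letters or digits). Illustrative only; a real
// detector would cover many token formats plus entropy heuristics.
fn redact_aws_keys(line: &str) -> String {
    let mut out = String::new();
    let mut rest = line;
    while let Some(pos) = rest.find("AKIA") {
        let (head, tail) = rest.split_at(pos);
        out.push_str(head);
        let body = &tail[4..];
        // Count how many of the next 16 chars fit the key alphabet.
        let run = body
            .chars()
            .take(16)
            .take_while(|c| c.is_ascii_uppercase() || c.is_ascii_digit())
            .count();
        if run == 16 {
            out.push_str("[REDACTED:aws-access-key]");
            rest = &body[16..]; // safe byte slice: the matched chars are ASCII
        } else {
            out.push_str("AKIA");
            rest = body;
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    println!("{}", redact_aws_keys("export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE"));
}
```

Redacting before text becomes prompt context or log output is what keeps the secret out of both the model and the audit trail.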

Unsafe actions

Agents can run destructive terminal, cloud, database, or Git commands.

Runtime policy should deny dangerous patterns, require approval for sensitive changes, and terminate sessions that repeat denied behavior.

No accountability

Teams need to know which agent did what and why.

First-class agent identity turns invisible automation into attributable sessions with owners, permissions, policy decisions, and compliance evidence.
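An attributable session can be sketched as a small record binding agent, owner, and permissions. Every field and method name below is illustrative, not a shipped AgentGuard schema.

```rust
// Hypothetical sketch of an attributable agent session record.
// Field names are illustrative, not AgentGuard's real data model.
struct AgentSession {
    agent_id: String,         // registered agent identity
    owner: String,            // human accountable for the agent
    session_id: String,       // unique per run, cited in audit evidence
    permissions: Vec<String>, // scoped actions this session may take
}

impl AgentSession {
    // A session is only permitted actions it was explicitly granted.
    fn is_permitted(&self, action: &str) -> bool {
        self.permissions.iter().any(|p| p == action)
    }
}

fn main() {
    let s = AgentSession {
        agent_id: "claude-code".into(),
        owner: "alice@example.com".into(),
        session_id: "ags_01".into(),
        permissions: vec!["file_read".into(), "git_commit".into()],
    };
    println!("{} may network_call: {}", s.session_id, s.is_permitted("network_call"));
}
```

The point of the record is that every policy decision can name an agent and an owner, not just a process ID.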

Planned vertical slice

A runtime that turns agent actions into policy decisions.

The current CLI is pre-alpha and passthrough only. The first product proof is intentionally small: block `.env` reads, block destructive commands, allow normal work, redact detected secrets, and render a session report.

agentguard session trace

$ agentguard run -- bash -c "cat .env && rm -rf /tmp/build"
deny    file_read     .env           policy=no_env_reads
deny    command_exec  rm -rf         policy=destructive_command
redact  stdout        AKIA...        marker=[REDACTED:aws-access-key]
allow   file_read     src/main.rs    policy=normal_repo_read
report  session       ags_01...      markdown + json audit evidence
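The deny/allow lines in a trace like this amount to a deterministic rule match. The sketch below borrows the policy names from the trace; the types and matching logic are hypothetical, not AgentGuard's real API.

```rust
// Hypothetical sketch of deterministic policy decisions for agent actions.
// Policy names (no_env_reads, destructive_command) mirror the trace above;
// everything else is illustrative.
#[derive(Debug, PartialEq)]
enum Decision {
    Allow,
    Deny(&'static str), // carries the policy name for the audit line
}

enum Action<'a> {
    FileRead(&'a str),
    CommandExec(&'a str),
}

fn evaluate(action: &Action) -> Decision {
    match action {
        Action::FileRead(path) if path.ends_with(".env") => Decision::Deny("no_env_reads"),
        Action::CommandExec(cmd) if cmd.starts_with("rm -rf") => {
            Decision::Deny("destructive_command")
        }
        _ => Decision::Allow,
    }
}

fn main() {
    println!("{:?}", evaluate(&Action::FileRead(".env")));
    println!("{:?}", evaluate(&Action::CommandExec("rm -rf /tmp/build")));
    println!("{:?}", evaluate(&Action::FileRead("src/main.rs")));
}
```

Determinism is the design choice: the same action under the same policy always yields the same decision, which is what makes the audit trail defensible.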

Control plane

A control plane for agent actions.

A wrapper alone does not answer who the agent is, what it can access, which data must be redacted, which actions require approval, or what happened during the session. Securie is designed to make those decisions explicit, enforceable, and auditable.

Layer (status): job

Identity (planned): agent registry, owner mapping, session IDs, lifecycle, revocation.
Runtime (planned): process wrapper, filesystem controls, command controls, network proxy, Git controls.
Policy (planned): allow, deny, redact, require approval, alert, terminate, and explain decisions.
Audit (planned): append-only local evidence, PR reports, compliance trail, retention later.
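The audit layer's append-only local evidence can be sketched as JSON lines appended to a local file. The event shape (session, decision, policy) and file path below are assumptions, not AgentGuard's real format.

```rust
// Sketch of append-only local audit evidence as JSON lines, assuming a
// hypothetical event shape. Not AgentGuard's shipped format.
use std::fs::OpenOptions;
use std::io::Write;

fn append_event(path: &str, session: &str, decision: &str, policy: &str) -> std::io::Result<()> {
    // append(true) makes every write go to the end of the file, preserving
    // the append-only property at the API level.
    let mut f = OpenOptions::new().create(true).append(true).open(path)?;
    writeln!(
        f,
        r#"{{"session":"{session}","decision":"{decision}","policy":"{policy}"}}"#
    )
}

fn main() -> std::io::Result<()> {
    let path = "/tmp/agentguard_audit.jsonl";
    append_event(path, "ags_01", "deny", "no_env_reads")?;
    append_event(path, "ags_01", "allow", "normal_repo_read")?;
    Ok(())
}
```

One JSON object per line keeps the log greppable and lets a session report be rendered by replaying the file.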
Threat actor: T1. Honest hallucinating agent in V1.

Core binary: Rust. Single security-critical runtime.

First wedge: CLI. Wrap AI coding agents locally.

Posture: deny. Secrets and production blocked by default.

SEO resource hub

Read the core pages.

Each page maps to a high-intent search cluster for teams evaluating AI coding-agent security, runtime enforcement, agent identity, approvals, audit logs, and compliance evidence.

AI coding agent runtime security: Secure runtime for AI coding agents. How AgentGuard is designed to watch files, commands, Git, network, secrets, and child processes.

Agent identity: First-class identity for autonomous agents. Why agent IDs, owners, session IDs, risk tiers, and revocation are becoming core security primitives.

Policy engine: Deterministic policy for agent actions. Allow, deny, redact, approval-gate, alert, terminate, and log-only decisions for real agent behavior.

Audit and compliance: AI agent audit logs and PR reports. Evidence for security reviews, SOC 2 readiness, incident response, and enterprise questionnaires.

Threat model: What V1 protects, and what it does not. A practical threat model for honest hallucinating agents, prompt injection, and future isolation tiers.

Checklist: AI coding agent security checklist. A buyer-ready checklist for secret blocking, runtime controls, approvals, audit logs, and revocation.

Claude Code security: Runtime guardrails for Claude Code. How teams should think about hooks, shell access, secrets, approvals, and audit logs around Claude Code.

Codex security: Secure OpenAI Codex agent workflows. Runtime controls for Codex-style coding agents that can inspect code, run commands, and modify repositories.

Cursor AI security: Protect repositories using Cursor-style agents. Guardrails for IDE-native coding assistants that can touch source code, secrets, terminals, and Git.

Questions buyers ask

What should security teams know?

Pre-product does not mean vague. The category, threat model, and first slice are intentionally explicit so design partners can pressure-test the right controls.

Is Securie just a wrapper around coding agents?

No. The product direction is identity, scoped permissions, deterministic runtime enforcement, data protection, approvals, audit logs, integrations, and revocation. Process isolation can become one enforcement primitive, but it is not the company category.

Can AgentGuard stop `.env` reads today?

Not yet. The current repository is pre-alpha and passthrough only. Blocking `.env` reads is one of the first required vertical-slice tests.

Who is the first customer?

Security-conscious engineering teams using Claude Code, Codex, Cursor-style agents, Cline, OpenHands, Devin-style agents, or internal coding agents.

Building security controls for teams already using AI coding agents.

Talk about design partnership