OpenAI Codex security

Security controls for Codex-style coding agents.

Codex-style agents can reason about code and take actions in a developer workflow. The security question is not only model quality; it is what the agent is allowed to see, access, execute, transmit, and change.

Codex workflows need runtime policy

When an AI coding agent can run commands or propose code changes, the boundary should not be a prompt instruction alone. The runtime should attach identity, classify the action, evaluate policy, and record a decision.
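That loop — attach identity, classify, evaluate, record — can be sketched in a few lines. This is a minimal illustration, not a real Codex API; names like `PolicyEngine` and the classification rules are assumptions.

```python
import fnmatch
import time

class PolicyEngine:
    """Attach identity, classify the action, evaluate policy, record a decision."""

    def __init__(self, agent_id):
        self.agent_id = agent_id   # identity attached to every decision
        self.audit_log = []        # decisions recorded for later review

    def classify(self, action):
        # Coarse, illustrative classification by action type and target.
        if action["type"] == "read" and fnmatch.fnmatch(action["target"], "*.env"):
            return "secret_read"
        if action["type"] == "exec" and "rm -rf" in action["target"]:
            return "destructive_command"
        return "routine"

    def evaluate(self, action):
        risk = self.classify(action)
        verdict = "deny" if risk in ("secret_read", "destructive_command") else "allow"
        self.audit_log.append({           # record the decision with identity and time
            "agent": self.agent_id,
            "action": action,
            "risk": risk,
            "verdict": verdict,
            "ts": time.time(),
        })
        return verdict

engine = PolicyEngine("codex-session-42")
print(engine.evaluate({"type": "read", "target": "deploy/.env"}))  # deny
print(engine.evaluate({"type": "exec", "target": "pytest -q"}))    # allow
```

The point is structural: the decision happens in the runtime, outside the model, and leaves a record regardless of what the prompt said.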

High-risk actions to control

Secret reads

Deny `.env`, API keys, SSH keys, cloud credentials, database URLs, and SaaS tokens.
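A deny rule like this can be as simple as a pattern list over file paths. The patterns below are illustrative assumptions, not a complete inventory of secret locations.

```python
import fnmatch

# Illustrative deny-list of secret-bearing paths (an assumption, not exhaustive).
SECRET_PATTERNS = [
    "*.env", "*.env.*",          # dotenv files
    "*id_rsa*", "*.pem",         # SSH and TLS private keys
    "*.aws/credentials",         # cloud credentials
    "*.ssh/*", "*secrets*.json", # key directories and token files
]

def is_secret_read(path):
    p = path.lower()
    return any(fnmatch.fnmatch(p, pat) for pat in SECRET_PATTERNS)

print(is_secret_read("services/api/.env"))  # True
print(is_secret_read("~/.ssh/id_rsa"))      # True
print(is_secret_read("src/main.py"))        # False
```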

Destructive commands

Block `rm -rf`, database drops, cloud deletion, backup deletion, and force pushes.
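One way to sketch this is pattern matching on the command string before it executes. The regexes below are illustrative and deliberately incomplete; a real blocklist would cover far more variants.

```python
import re

# Illustrative destructive-command patterns (an assumption, not exhaustive).
DESTRUCTIVE = [
    r"\brm\s+-\w*[rf]\w*[rf]",       # rm with recursive/force flags, either order
    r"\bdrop\s+(table|database)\b",  # database drops
    r"\baws\s+s3\s+rb\b",            # cloud bucket deletion
    r"\bgit\s+push\s+.*--force",     # force pushes
]

def is_destructive(cmd):
    c = cmd.lower()
    return any(re.search(p, c) for p in DESTRUCTIVE)

print(is_destructive("rm -rf /var/data"))              # True
print(is_destructive("git push origin main --force"))  # True
print(is_destructive("git status"))                    # False
```

String matching alone is bypassable (aliases, scripts, encodings), so it belongs behind a sandbox boundary, not in front of one.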

Sensitive edits

Require approval for auth, payment, infrastructure, CI/CD, production config, and migrations.
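An approval gate can route changes by path. The globs below assume a typical repo layout and are purely illustrative.

```python
import fnmatch

# Illustrative sensitive areas that route to human approval
# (path globs are assumptions about a typical repo layout).
APPROVAL_REQUIRED = [
    "*auth*", "*payment*",          # auth and payment code
    "terraform/*",                  # infrastructure
    ".github/workflows/*",          # CI/CD
    "config/production*",           # production config
    "*migrations/*",                # schema migrations
]

def review_decision(changed_path):
    p = changed_path.lower()
    if any(fnmatch.fnmatch(p, pat) for pat in APPROVAL_REQUIRED):
        return "needs_approval"
    return "auto_merge_ok"

print(review_decision(".github/workflows/deploy.yml"))  # needs_approval
print(review_decision("docs/README.md"))                # auto_merge_ok
```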

Network egress

Allow known endpoints, flag unknown domains, and detect upload-like behavior.
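A minimal egress policy can be sketched as an allow-list plus a crude upload heuristic. The hosts and the size threshold below are assumptions for illustration.

```python
from urllib.parse import urlparse

# Illustrative allow-list of known endpoints (an assumption).
ALLOWED_HOSTS = {"pypi.org", "files.pythonhosted.org", "github.com", "api.github.com"}

def egress_verdict(url, body_bytes=0):
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        return "flag_unknown_domain"
    if body_bytes > 1_000_000:         # crude upload-like heuristic
        return "flag_possible_upload"
    return "allow"

print(egress_verdict("https://pypi.org/simple/requests/"))          # allow
print(egress_verdict("https://paste.example.com/", body_bytes=5))   # flag_unknown_domain
print(egress_verdict("https://github.com/", body_bytes=5_000_000))  # flag_possible_upload
```

Real upload detection would look at request method, content type, and entropy of the payload, not size alone.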

Audit makes adoption easier

Security buyers will ask whether an agent can leak secrets, access production, delete data, or modify sensitive code. A runtime audit report should answer those questions with evidence from the actual session.
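If the runtime records every decision, that report is a summary over the log. The records below are hand-written for illustration; in practice they would come from the session's decision log.

```python
import json
from collections import Counter

# Hypothetical session log (illustrative records, not real output).
decisions = [
    {"action": "read deploy/.env", "risk": "secret_read", "verdict": "deny"},
    {"action": "run pytest", "risk": "routine", "verdict": "allow"},
    {"action": "edit auth/login.py", "risk": "sensitive_edit", "verdict": "needs_approval"},
]

def audit_report(log):
    """Summarize a session so a reviewer can see what the agent attempted."""
    return {
        "total_actions": len(log),
        "verdicts": dict(Counter(d["verdict"] for d in log)),
        "denied": [d["action"] for d in log if d["verdict"] == "deny"],
    }

print(json.dumps(audit_report(decisions), indent=2))
```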

Design Codex guardrails around actual agent behavior.
