). The user can override the model if needed (see Model Options below).
Select the sandbox mode required for the task; default to `--sandbox read-only` unless edits or network access are necessary.
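A minimal read-only invocation might look like the following sketch (the prompt is an illustrative placeholder, piped via stdin as described below):

```shell
# Read-only sandbox: Codex can inspect files but not modify them or touch the network.
echo "Explain the structure of this repository" \
  | codex exec --sandbox read-only --skip-git-repo-check 2>/dev/null
```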
Assemble the command with the appropriate options:
- `-m, --model`
- `--config model_reasoning_effort=""`
- `--sandbox`
- `--full-auto`
- `-C, --cd`
- `--skip-git-repo-check`

Always use `--skip-git-repo-check`.
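Put together, a fully assembled invocation might look like this sketch. The model name, effort value, sandbox mode (`workspace-write`, assumed here to permit edits), and project path are illustrative, not defaults:

```shell
# Example assembly: model, reasoning effort, sandbox, and working directory.
# The prompt arrives via stdin; stderr (thinking tokens) is suppressed.
echo "Refactor utils.py to remove dead code" \
  | codex exec \
      -m gpt-5.2 \
      --config model_reasoning_effort="high" \
      --sandbox workspace-write \
      -C /path/to/project \
      --skip-git-repo-check \
      2>/dev/null
```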
When continuing a previous session, use `codex exec --skip-git-repo-check resume --last`, passing the new prompt via stdin. When resuming, don't use any configuration flags unless the user explicitly requests them (e.g. if they specify the model or reasoning effort when asking to resume a session). All flags must be inserted between `exec` and `resume`.
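If the user does request a specific model or effort when resuming, the flags still sit between `exec` and `resume`, as in this sketch (prompt and model choice are illustrative):

```shell
# User-requested flags go between `exec` and `resume`, never after `resume`:
echo "Continue: also update the tests" \
  | codex exec -m gpt-5.2 --skip-git-repo-check resume --last 2>/dev/null
```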
IMPORTANT: By default, append `2>/dev/null` to all `codex exec` commands to suppress thinking tokens (stderr). Only show stderr if the user explicitly requests to see thinking tokens or if debugging is needed.
Run the command, capture stdout/stderr (filtered as appropriate), and summarize the outcome for the user.
After Codex completes, inform the user: "You can resume this Codex session at any time by saying 'codex resume' or asking me to continue with additional analysis or changes."
Prompt caching gives 90% off ($0.125/M tokens) for repeated context; the cache lasts up to 24 hours.
Following Up
After every `codex` command, immediately use AskUserQuestion to confirm next steps, collect clarifications, or decide whether to resume with `codex exec resume --last`.
When resuming, pipe the new prompt via stdin: `echo "new prompt" | codex exec resume --last 2>/dev/null`. The resumed session automatically uses the same model, reasoning effort, and sandbox mode from the original session.
Restate the chosen model, reasoning effort, and sandbox mode when proposing follow-up actions.
Error Handling
Stop and report failures whenever `codex --version` or a `codex exec` command exits non-zero; request direction before retrying.
Before using high-impact flags (`--full-auto`, `--sandbox danger-full-access`, `--skip-git-repo-check`), ask the user for permission via AskUserQuestion unless it has already been given.
When output includes warnings or partial results, summarize them and ask how to adjust using AskUserQuestion.
CLI Version
Requires Codex CLI v0.57.0 or later for GPT-5.2 model support. The CLI defaults to `gpt-5.2`. Check the version with `codex --version`.
Use the `/model` slash command within a Codex session to switch models, or configure the default in `~/.codex/config.toml`.
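A minimal `~/.codex/config.toml` sketch for setting a default model. The key names mirror the `-m, --model` and `model_reasoning_effort` options referenced above; verify them against your CLI version before relying on this:

```toml
# ~/.codex/config.toml — illustrative defaults, not shipped values
model = "gpt-5.2"
model_reasoning_effort = "high"
```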