- `HumanInTheLoopMiddleware` / `humanInTheLoopMiddleware`: Pause before dangerous tool calls for human approval
- Custom middleware: Intercept tool calls for error handling, logging, retry logic
- `Command` resume: Continue execution after human decisions (approve, edit, reject)
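The custom-middleware idea above, intercepting a tool call to add logging and retries, can be sketched framework-free. This is an illustrative pattern only: the `wrap_tool_call` function name echoes the LangChain hook of the same name but its signature here is invented, and `send_email` is a hypothetical flaky tool.

```python
import time

def wrap_tool_call(tool_fn, max_retries=3, delay=0.0):
    """Illustrative middleware-style wrapper (not LangChain's API):
    logs each attempt and retries the wrapped tool on failure."""
    def wrapped(*args, **kwargs):
        for attempt in range(1, max_retries + 1):
            try:
                print(f"[middleware] calling {tool_fn.__name__} (attempt {attempt})")
                return tool_fn(*args, **kwargs)
            except Exception as exc:
                print(f"[middleware] {tool_fn.__name__} failed: {exc}")
                if attempt == max_retries:
                    raise
                time.sleep(delay)
    return wrapped

# Hypothetical flaky tool: fails once, then succeeds
calls = {"n": 0}
def send_email(to: str) -> str:
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient network error")
    return f"Email sent to {to}"

safe_send = wrap_tool_call(send_email)
print(safe_send("john@example.com"))  # succeeds on the second attempt
```

The same shape (a callable that receives the tool call and decides whether to forward, retry, or short-circuit it) is what the middleware hooks discussed later expose.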
Requirements: a checkpointer and a `thread_id` in the config, for all HITL workflows.
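Why both are required can be sketched framework-free: the checkpointer persists the paused agent's state, and the `thread_id` selects which saved state a later invocation resumes. The `InMemoryCheckpointer` below is an illustrative stand-in for `MemorySaver`, not LangGraph's internals.

```python
class InMemoryCheckpointer:
    """Illustrative stand-in for MemorySaver: saves state per thread_id."""
    def __init__(self):
        self._store = {}

    def save(self, thread_id, state):
        self._store[thread_id] = state

    def load(self, thread_id):
        return self._store.get(thread_id)

cp = InMemoryCheckpointer()

# The first invocation pauses before the tool call and saves its state.
cp.save("session-1", {"pending_tool": "send_email", "status": "waiting_approval"})

# A later invocation with the same thread_id resumes exactly there;
# a different thread_id sees no saved state.
print(cp.load("session-1"))  # the saved pending-approval state
print(cp.load("session-2"))  # None
```

Without a checkpointer there is nowhere to park the paused state, and without a `thread_id` there is no way to find it again, which is why HITL refuses to run without both.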
Human-in-the-Loop

```python
from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email."""
    return f"Email sent to {to}"

agent = create_agent(
    model="gpt-4.1",
    tools=[send_email],
    checkpointer=MemorySaver(),  # Required for HITL
    middleware=[
        HumanInTheLoopMiddleware(
            interrupt_on={
                "send_email": {"allowed_decisions": ["approve", "edit", "reject"]},
            }
        )
    ],
)
```
Set up an agent with HITL that pauses before sending emails for human approval.

```typescript
import { createAgent, humanInTheLoopMiddleware } from "langchain";
import { MemorySaver } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const sendEmail = tool(
  async ({ to, subject, body }) => `Email sent to ${to}`,
  {
    name: "send_email",
    description: "Send an email",
    schema: z.object({ to: z.string(), subject: z.string(), body: z.string() }),
  }
);

const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [sendEmail],
  checkpointer: new MemorySaver(),
  middleware: [
    humanInTheLoopMiddleware({
      interruptOn: { send_email: { allowedDecisions: ["approve", "edit", "reject"] } },
    }),
  ],
});
```

Run the agent, detect an interrupt, then resume execution after human approval.

```python
from langgraph.types import Command

config = {"configurable": {"thread_id": "session-1"}}

# Step 1: Agent runs until it needs to call the tool
result1 = agent.invoke(
    {"messages": [{"role": "user", "content": "Send email to john@example.com"}]},
    config=config,
)

# Check for interrupt
if "__interrupt__" in result1:
    print(f"Waiting for approval: {result1['__interrupt__']}")

# Step 2: Human approves
result2 = agent.invoke(
    Command(resume={"decisions": [{"type": "approve"}]}),
    config=config,
)
```

```typescript
import { Command } from "@langchain/langgraph";

const config = { configurable: { thread_id: "session-1" } };

// Step 1: Agent runs until it needs to call the tool
const result1 = await agent.invoke(
  { messages: [{ role: "user", content: "Send email to john@example.com" }] },
  config
);

// Check for interrupt
if (result1.__interrupt__) {
  console.log(`Waiting for approval: ${result1.__interrupt__}`);
}

// Step 2: Human approves
const result2 = await agent.invoke(
  new Command({ resume: { decisions: [{ type: "approve" }] } }),
  config
);
```

What You CAN Configure

- Which tools require approval (per-tool policies)
- Allowed decisions per tool (approve, edit, reject)
- Custom middleware hooks: `before_model`, `after_model`, `wrap_tool_call`, `before_agent`, `after_agent`
- Tool-specific middleware (apply only to certain tools)

What You CANNOT Configure

- Interrupt after tool execution (interrupts fire before the tool runs)
- Skipping the checkpointer requirement for HITL

HITL requires a checkpointer to persist state.

```python
# CORRECT: checkpointer provided
agent = create_agent(
    model="gpt-4.1",
    tools=[send_email],
    checkpointer=MemorySaver(),  # Required
    middleware=[HumanInTheLoopMiddleware({...})],
)
```

```typescript
// WRONG: No checkpointer
const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [sendEmail],
  middleware: [humanInTheLoopMiddleware({ interruptOn: { send_email: true } })],
});

// CORRECT: Add checkpointer
const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [sendEmail],
  checkpointer: new MemorySaver(),
  middleware: [humanInTheLoopMiddleware({ interruptOn: { send_email: true } })],
});
```

Every invocation needs a `thread_id` in the config so the checkpointer can resume the right conversation.

```python
# CORRECT: thread_id supplied in config
agent.invoke(input, config={"configurable": {"thread_id": "user-123"}})
```

Use the `Command` class to resume execution after an interrupt.
```python
# WRONG: a plain dict is not a resume command
agent.invoke({"resume": {"decisions": [...]}})

# CORRECT
from langgraph.types import Command

agent.invoke(Command(resume={"decisions": [{"type": "approve"}]}), config=config)
```

```typescript
// CORRECT
import { Command } from "@langchain/langgraph";

await agent.invoke(new Command({ resume: { decisions: [{ type: "approve" }] } }), config);
```