# Dial Your Context

Help a user create the Instructions field content for their Sanity Agent Context MCP. The goal is a concise set of pure deltas — only information the agent can't figure out from the auto-generated schema.

## What you're building

The Agent Context MCP already provides the agent with:

- A compressed schema of all document types and fields
- A GROQ query tutorial (~194 lines)
- Response style guidance
- Tool descriptions for GROQ queries, semantic search, etc.

The Instructions field you're crafting gets injected as a **Custom instructions** section between **Response style** and **Tools** in the MCP's instructions blob. It should contain **only what the schema doesn't make obvious**:
- Counter-intuitive field names (e.g., `body` is actually a slug, `hero` is a reference to `mediaAsset`)
- Second-order reference chains the schema doesn't connect (e.g., "to find products with Dolby Atmos, chain `product → productFeature` and match on the feature's `id` field — the schema shows each hop but not the full path")
- Data quality issues the schema can't reveal (e.g., "the `product` type has a `features` array but it's always empty — use `support-product` instead")
- Required filters the agent must always apply (locale, draft status, etc.)
- Known data gaps confirmed by the user (e.g., "the `subtitle` field is unused — ignore it")
- Query patterns for common use cases that aren't obvious from the schema
- Fallback strategies when primary approaches fail

**Never duplicate** what the schema already communicates clearly.
## Prerequisites

You need one of these to run this session:

**Path A — Write access (recommended):** A Sanity write token or the general Sanity MCP (OAuth). This lets you create a draft context doc, write instructions + filter to it during the session, and promote it to production when done. Production is never touched until you're ready.

**Path B — URL params only:** If no write access is available, you can use `?instructions=` and `?groqFilter=` URL query params on the MCP endpoint to test everything. At the end, provide the final content for the user to enter manually in Sanity Studio.

Both paths are safe — neither modifies the production agent during the session.
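For Path B, the override values need percent-encoding before the URL goes into an MCP client. A minimal Python sketch, assuming the endpoint pattern shown in Step 7; the project, dataset, and slug here are placeholder values:

```python
from urllib.parse import urlencode

# Placeholder identifiers; substitute the user's real project, dataset, and slug.
base = "https://api.sanity.io/vX/agent-context/my-project/production/support"

params = {
    "instructions": "",  # empty string forces the blank slate described above
    "groqFilter": '_type in ["article", "author"]',
}

# urlencode percent-encodes the quotes, brackets, and spaces so the GROQ
# expression survives as a single query param.
test_url = f"{base}?{urlencode(params)}"
print(test_url)
```

The same pattern works for trying candidate instructions text later in the session, before anything is written to Sanity.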
## Critical rules

- **Pure deltas only.** If the schema makes it obvious, don't put it in Instructions.
- **Never generalize from small samples.** Querying 3 docs and concluding "field X is always null" is the #1 failure mode. Every claim must be verified with the user before inclusion.
- **The user knows their data.** Schema dialogue beats data exploration. Present the schema, ask questions, listen.
- **Verify every claim with evidence.** For each line in the draft Instructions, show the query + result that supports it. The user confirms or corrects.
- **Keep it concise and factual.** The compaction step (summarizing findings into Instructions) is where information gets lost or distorted. No creative interpretation. Short declarative sentences.
## Workflow

### Step 1: Connect & Clean Slate

**Goal:** Establish MCP access and set up a safe working environment.

Connect to the user's Sanity Agent Context MCP. Get the project ID, dataset, and slug from the user if not already known.

Set up your working environment:

**Path A (write access):** Create a new draft context doc by copying the existing one (if any) to a new slug like `tuning-draft`. All exploration and iteration happens against this draft — the production agent is untouched.

**Path B (no write access):** Use URL query params throughout the session:

- `?instructions=""` — forces a blank slate (ignores existing instructions)
- `?groqFilter=` — overrides the context doc's `groqFilter` field
| # | Question | Query | Result | Finding |
|---|----------|-------|--------|---------|
| 1 | "Recent articles" | `*[_type == "article"] \| order(publishedAt desc)[0..4]` | ✅ 5 results | Works with schema alone |
| 2 | "Articles by author" | `*[_type == "article" && references($authorId)]` | ⚠️ Empty | Authors linked via `contributors[].person`, not direct ref |
| 3 | "Published only" | `*[_type == "article" && status == "published"]` | ❌ No `status` field | User confirms: use `!(_id in path("drafts.**"))` instead |

**Adapt to scale:**

- Simple dataset (3-5 types, 5 questions): this step might take 10 minutes
- Complex dataset (50 types, 20 questions): group related questions, explore systematically, but still verify each finding

**Output:** A findings table with verified results for each expected question.

### Step 5: Draft Instructions

**Goal:** Distill findings into concise, factual Instructions content.

Review the findings table from Step 4. Include only items marked ⚠️ or ❌ — things that required non-obvious patterns or failed with the obvious approach. Write the Instructions as short, declarative statements organized by category:
**Rules**

- Always filter drafts: use `!(_id in path("drafts.**"))` — there is no `status` field
- Always include `[_lang == "en"]` for localized content unless user specifies otherwise

**Schema notes**

- `contributors` on `article` is an array of objects with a `person` reference to `author` — not a direct author reference
- `hero` on `article` is a reference to `mediaAsset`, not an image field
- `body` on `page` is a Portable Text array, not a string — use `pt::text(body)` for plain-text search

**Query patterns**

- Articles by author: `*[_type == "article" && contributors[].person._ref == $authorId]`
- Published articles by date: `*[_type == "article" && !(_id in path("drafts.**"))] | order(publishedAt desc)`

**Known limitations**

- `subtitle` field on `article` is unused — ignore it
- `relatedArticles` is manually curated and often empty for older content
**Keep it tight.** Each line should pass this test: "Would an agent with the schema alone get this wrong?" If no, cut it. If you're unsure, test it empirically — try answering 2-3 questions with `?instructions=""` and see what the model gets wrong on its own. That's your baseline for what actually needs to be here.
Do not include:

- General GROQ syntax (the tutorial covers this)
- Field lists or type descriptions (the schema covers this)
- Response formatting guidance (the response style section covers this)
- Anything the agent would figure out on its own

**Output:** A draft Instructions block, typically 10-40 lines depending on dataset complexity.
### Step 6: Verify Claims

**Goal:** Ensure every line in the draft is backed by evidence.

Go through the draft Instructions line by line. For each claim, show the user:

- **The claim:** e.g., "contributors on article is an array of objects with a person reference"
- **The evidence:** the GROQ query and result that demonstrates it
- **Ask for confirmation:** "Is this accurate? Anything to add or correct?"

Example:

> **Claim:** "Always filter drafts using `!(_id in path("drafts.**"))` — there is no status field"
>
> **Evidence:** `*[_type == "article" && defined(status)][0..2]` → 0 results. `*[_type == "article" && _id in path("drafts.**")][0..2]` → 3 draft documents found.
>
> Is this correct?

If the user corrects a claim, update the draft immediately. If the user adds new information ("oh, and you should also know that..."), add it to the draft and verify it the same way.

**Output:** A verified Instructions block where every claim has been confirmed by the user.
### Step 7: Deploy

**Goal:** Get the Instructions and filter into production safely.

Present the final Instructions content and filter to the user for one last review:

> Here's the final configuration:
>
> **Filter (GROQ expression):** `_type in ["article", "author", "category", "tag"]`
>
> **Instructions:** [final instructions block]
>
> Ready to deploy?

**Path A (write access):**

1. Write the `instructions` and `groqFilter` fields to the draft context doc
2. Verify by querying the draft MCP endpoint — confirm the instructions appear in **Custom instructions**
3. Promote to production: either update the production context doc's `instructions` and `groqFilter` fields to match, or update the production agent's MCP URL to point to the new slug
4. Verify the production endpoint serves the correct instructions

**Path B (no write access):**

Provide the final MCP URL with all params baked in:

`https://api.sanity.io/vX/agent-context/{project}/{dataset}/{slug}?instructions=`
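Because the verified Instructions block is multi-line text, it must be percent-encoded to survive as a single query param. A hedged sketch of assembling that final URL; the path segments and content below are illustrative placeholders, not real values:

```python
from urllib.parse import quote

# Placeholder path segments; substitute the user's real values.
project, dataset, slug = "my-project", "production", "support"
base = f"https://api.sanity.io/vX/agent-context/{project}/{dataset}/{slug}"

# The verified Instructions block, as plain multi-line text.
instructions = "\n".join([
    "Rules",
    '- Always filter drafts: use !(_id in path("drafts.**"))',
])
groq_filter = '_type in ["article", "author", "category", "tag"]'

# quote() percent-encodes newlines, quotes, and spaces, so the whole
# block rides along as one ?instructions= value.
url = f"{base}?instructions={quote(instructions)}&groqFilter={quote(groq_filter)}"
print(url)
```

Hand this URL to the user to paste into their agent's MCP configuration.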