Optionally collect full-text snippets to deepen evidence beyond abstracts.
This skill is intentionally conservative: in many survey runs, abstract/snippet mode is enough and avoids heavy downloads.
## Inputs

- `papers/core_set.csv` (expects `paper_id`, `title`, and ideally `pdf_url`/`arxiv_id`/`url`)
- Optional: `outline/mapping.tsv` (to prioritize mapped papers)
## Outputs

- `papers/fulltext_index.jsonl` (one record per attempted paper)
- Side artifacts:
  - `papers/pdfs/<paper_id>.pdf` (cached downloads)
  - `papers/fulltext/<paper_id>.txt` (extracted text)
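Each line of `papers/fulltext_index.jsonl` is a standalone JSON record, so it can be inspected with a few lines of Python. A minimal sketch, assuming each record carries a `status` field (the exact field names are an assumption, not confirmed by the script):

```python
import json
from collections import Counter
from pathlib import Path

def summarize_index(path):
    """Count records per status in a fulltext index JSONL file."""
    counts = Counter()
    for line in Path(path).read_text().splitlines():
        if line.strip():
            rec = json.loads(line)
            counts[rec.get("status", "unknown")] += 1
    return dict(counts)
```

This is handy for a quick sanity check after a run, e.g. to see how many papers ended up `ok` vs. `skipped`.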
## Decision: evidence mode

`queries.md` can set `evidence_mode: "abstract" | "fulltext"`.

- `abstract` (the template default): do not download; write an index that clearly records skipped papers.
- `fulltext`: download PDFs (when possible) and extract text to `papers/fulltext/`.
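The mode switch can be read out of `queries.md` with a simple pattern match. A minimal sketch, assuming the setting appears as a plain `evidence_mode: "abstract"` or `evidence_mode: "fulltext"` line (the precise `queries.md` syntax is an assumption):

```python
import re

def read_evidence_mode(text, default="abstract"):
    """Pull evidence_mode out of queries.md text; fall back to the default."""
    m = re.search(r'evidence_mode:\s*"?(abstract|fulltext)"?', text)
    return m.group(1) if m else default
```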
## Local PDFs mode

When you cannot or should not download PDFs (restricted network, rate limits, no permission), provide PDFs manually and run in "local PDFs only" mode.

- PDF naming convention: `papers/pdfs/<paper_id>.pdf`, where `<paper_id>` matches `papers/core_set.csv`.
- Set `evidence_mode: "fulltext"` in `queries.md`.
- Run: `python .codex/skills/pdf-text-extractor/scripts/run.py --workspace <ws> --local-pdfs-only`

If PDFs are missing, the script writes a to-do list:

- `output/MISSING_PDFS.md` (human-readable summary)
- `papers/missing_pdfs.csv` (machine-readable list)
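The missing-PDFs check amounts to comparing `papers/core_set.csv` against the files under `papers/pdfs/`. A sketch of that comparison, using the `paper_id`/`title` columns from the Inputs section (the helper itself is illustrative, not the script's actual code):

```python
import csv
from pathlib import Path

def find_missing_pdfs(core_set_csv, pdf_dir):
    """List papers from core_set.csv with no matching <paper_id>.pdf on disk."""
    missing = []
    with open(core_set_csv, newline="") as f:
        for row in csv.DictReader(f):
            pid = row["paper_id"]
            if not (Path(pdf_dir) / f"{pid}.pdf").exists():
                missing.append({"paper_id": pid, "title": row.get("title", "")})
    return missing
```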
## Workflow (heuristic)

- Read `papers/core_set.csv`.
- If `outline/mapping.tsv` exists, prioritize mapped papers first.
- For each selected paper (fulltext mode):
  - resolve the PDF URL (use `pdf_url`, else derive it from `arxiv_id`/`url` when possible)
  - download to `papers/pdfs/<paper_id>.pdf` if missing
  - extract a reasonable prefix of the text to `papers/fulltext/<paper_id>.txt`
  - append/update a JSONL record in `papers/fulltext_index.jsonl` with status + stats
- Never overwrite existing extracted text unless explicitly requested (delete the `.txt` to re-extract).
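The URL-resolution step can be sketched as a hypothetical helper. The arXiv PDF pattern (`https://arxiv.org/pdf/<id>`) is the standard scheme; the column names mirror `papers/core_set.csv`, but this is an illustration, not the script's actual implementation:

```python
def resolve_pdf_url(row):
    """Best-effort PDF URL for one core_set.csv row; None if unresolvable."""
    if row.get("pdf_url"):
        return row["pdf_url"]
    if row.get("arxiv_id"):
        return f"https://arxiv.org/pdf/{row['arxiv_id']}"
    url = row.get("url", "")
    if "arxiv.org/abs/" in url:
        # An arXiv abstract page maps directly to its PDF counterpart.
        return url.replace("/abs/", "/pdf/")
    return None
```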
## Quality checklist

- `papers/fulltext_index.jsonl` exists and is non-empty.
- If `evidence_mode: "fulltext"`: at least a small but non-trivial subset has extracted text (strict mode blocks if extraction coverage is near-zero).
- If `evidence_mode: "abstract"`: the index records clearly reflect skip status (no downloads attempted).
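The strict-mode coverage gate can be approximated like this; the 10% threshold and the `status` values (`ok`, `skipped`) are illustrative assumptions, not the script's documented behavior:

```python
def coverage_ok(records, min_fraction=0.1):
    """True if enough attempted papers yielded usable extracted text."""
    attempted = [r for r in records if r.get("status") != "skipped"]
    if not attempted:
        return False
    ok = sum(1 for r in attempted if r.get("status") == "ok")
    return ok / len(attempted) >= min_fraction
```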
## Script

### Quick start

- `python .codex/skills/pdf-text-extractor/scripts/run.py --help`
- `python .codex/skills/pdf-text-extractor/scripts/run.py --workspace <workspace_dir>`
### All options

- `--max-papers <n>`: cap the number of papers processed (can be overridden by `queries.md`)
- `--max-pages <n>`: extract at most N pages per PDF
- `--min-chars <n>`: minimum extracted characters to count as OK
- `--sleep <sec>`: delay between downloads
- `--local-pdfs-only`: do not download; only use `papers/pdfs/<paper_id>.pdf` if present

`queries.md` supports: `evidence_mode`, `fulltext_max_papers`, `fulltext_max_pages`, `fulltext_min_chars`.
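The three `fulltext_*` budget settings can be pulled from `queries.md` the same way as `evidence_mode`. A sketch, assuming each setting appears as a simple `key: <integer>` line (an assumption about the file's syntax):

```python
import re

def read_fulltext_budgets(text):
    """Parse optional fulltext_* integer settings from queries.md text."""
    budgets = {}
    for key in ("fulltext_max_papers", "fulltext_max_pages", "fulltext_min_chars"):
        m = re.search(rf"{key}:\s*(\d+)", text)
        if m:
            budgets[key] = int(m.group(1))
    return budgets
```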
### Examples

- Abstract mode (no downloads): set `evidence_mode: "abstract"` in `queries.md`, then run the script (it will emit `papers/fulltext_index.jsonl` with skip statuses).
- Fulltext mode with local PDFs only: set `evidence_mode: "fulltext"` in `queries.md`, put PDFs under `papers/pdfs/`, then run `python .codex/skills/pdf-text-extractor/scripts/run.py --workspace <ws> --local-pdfs-only`.
- Fulltext mode with a smaller budget: `python .codex/skills/pdf-text-extractor/scripts/run.py --workspace <ws> --max-papers 20 --max-pages 4 --min-chars 1200`
## Notes

- Downloads are cached under `papers/pdfs/`; extracted text is cached under `papers/fulltext/`.
- The script does not overwrite existing extracted text unless you delete the `.txt` file.
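The never-overwrite caching rule amounts to an existence check before extraction. A small sketch, where `extract_fn` is a stand-in for whatever extraction backend is in use (the wrapper itself is illustrative):

```python
from pathlib import Path

def extract_if_missing(pdf_path, txt_path, extract_fn):
    """Extract text only when no cached .txt exists; never overwrite."""
    txt = Path(txt_path)
    if txt.exists():
        return "cached"
    txt.write_text(extract_fn(pdf_path))
    return "extracted"
```

Deleting the `.txt` file is then the only way to force re-extraction, matching the behavior described above.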
## Troubleshooting

- Issue: no PDFs are available to download.
  Fix: use `evidence_mode: abstract` (the default), or provide local PDFs under `papers/pdfs/` and rerun with `--local-pdfs-only`.
- Issue: extracted text is empty or garbled.
  Fix: try a different extraction backend if supported; otherwise mark the paper at the `abstract` evidence level and avoid strong fulltext claims.
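One cheap way to flag empty or garbled extractions before downgrading a paper is a printable-character heuristic; the thresholds below are illustrative, not values the script uses:

```python
def looks_garbled(text, min_chars=200, min_alpha_ratio=0.5):
    """Heuristic: too short, or too few letters/spaces, suggests failed extraction."""
    if len(text) < min_chars:
        return True
    alpha = sum(c.isalpha() or c.isspace() for c in text)
    return alpha / len(text) < min_alpha_ratio
```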