Quick Start

Clone, open Claude Code, ask a question. Three steps.

Step 1: Clone

git clone https://github.com/kamilseghrouchni/vcro-sourcing.git && cd vcro-sourcing

The repo ships with a pre-built wiki of 335 entities (75 cohorts, 103 institutions, 101 investigators, 46 platforms, 10 bundles). You can query immediately.

Step 2: Open Claude Code

claude

Claude Code reads CLAUDE.md and the agent definitions in .claude/ on startup. The orchestrator (vcro-os) is now ready to route your questions.

Don't have Claude Code? Install it with npm install -g @anthropic-ai/claude-code (requires Node.js 18+). See the Claude Code docs.

Step 3: Ask a question

Use natural language or a slash command:

# Natural language
Find AD CSF DNA methylation cohorts with n>100 case-control

# Slash command
/source AD CSF DNA methylation cohorts with n>100 case-control

The orchestrator routes this to the query workflow. It reads the wiki index, scores matches on three axes (Scale, Cost, Quality), and writes a recommendation. If the wiki is thin for your domain, it searches PubMed, ingests papers, compiles new entities into the wiki, and then scores the expanded candidate set.
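The three-axis scoring can be sketched roughly as follows. This is a hypothetical illustration, not the actual scorer: the field names (`n`, `access`, `case_control`, `has_methylation`) and the scoring rules are assumptions about what a cohort entity might contain.

```python
# Illustrative three-axis scorer. Entity fields and weights are assumptions,
# not the repo's actual schema.
from statistics import mean

def score_candidate(cohort: dict, min_n: int = 100) -> dict:
    """Score a wiki cohort entity on Scale, Cost, and Quality (0-1 each)."""
    scale = min(cohort.get("n", 0) / max(min_n, 1), 1.0)   # bigger cohorts score higher
    cost = 1.0 if cohort.get("access") == "open" else 0.5  # open access assumed cheapest
    quality = mean([
        1.0 if cohort.get("case_control") else 0.0,        # matched design present?
        1.0 if cohort.get("has_methylation") else 0.0,     # requested modality present?
    ])
    return {"id": cohort["id"], "scale": scale, "cost": cost, "quality": quality}

print(score_candidate({"id": "COH-001", "n": 250, "access": "open",
                       "case_control": True, "has_methylation": True}))
```

Note the axes are scored independently, matching the workflow's design: a huge but expensive cohort is not penalized on Scale for its Cost.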

Output lands in a timestamped directory:

store/queries/2026-04-12_ad-csf-dna-methylation/
  request.json              # Parsed request with filters
  search_history.jsonl      # Every search query run
  candidates.json           # Wiki entities that matched
  scored_candidates.json    # Three-axis scores per candidate
  recommendation.md         # Human-readable verdict
  listings.jsonl            # Machine-readable cards for the web app
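Since `listings.jsonl` is JSON Lines (one JSON object per line), it is easy to consume programmatically. A minimal reader, assuming only the filename from the listing above (the fields inside each card are whatever the workflow emitted):

```python
# Minimal reader for a query output directory. Only the filename
# "listings.jsonl" comes from the docs; the card fields are untyped.
import json
from pathlib import Path

def load_listings(query_dir: str) -> list[dict]:
    """Parse listings.jsonl: one machine-readable card per non-empty line."""
    path = Path(query_dir) / "listings.jsonl"
    return [json.loads(line) for line in path.read_text().splitlines() if line.strip()]
```

The same pattern works for `search_history.jsonl`; the `.json` files can be read with a single `json.loads` instead.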

Other workflows

# Procure samples with a budget and outcome
/bounty 50 AD plasma samples, commercial use, under 80k EUR

# Onboard a biobank or institution
/onboard Sahlgrenska Biobank

# Scan the wiki for gaps and broken links
/lint

# Compile new papers into the wiki
/compile PMC10103184 PMC6922070

What happened under the hood

  1. query/understand parsed your request into structured filters (indication, sample type, modality, minimum N).
  2. query/discover scanned the wiki index for matching cohorts.
  3. If the wiki was thin, the orchestrator searched PubMed, ingested papers, and compiled new entities autonomously.
  4. query/score evaluated each candidate on Scale, Cost, and Quality independently.
  5. query/deliver assembled the recommendation and listings.
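The five steps above amount to a linear pipeline. Here is a toy end-to-end sketch, with stub implementations standing in for the real agents; the in-memory "wiki", the entity shape, and the `n>100` filter parsing are all illustrative assumptions (step 3, the PubMed fallback, is elided):

```python
# Toy pipeline mirroring the five steps. Stubs, not the real agents.
import re

WIKI = [{"id": "COH-001", "n": 250}, {"id": "COH-002", "n": 40}]  # fake index

def understand(request):                          # step 1: parse filters
    m = re.search(r"n>(\d+)", request)
    return {"min_n": int(m.group(1)) if m else 0}

def discover(filters):                            # step 2: scan the index
    return [c for c in WIKI if c["n"] >= filters["min_n"]]

def score(c):                                     # step 4: simplified Scale only
    return {**c, "scale": min(c["n"] / 100, 1.0)}

def deliver(scored):                              # step 5: pick a verdict
    best = max(scored, key=lambda c: c["scale"])
    return {"recommendation": best["id"], "candidates": scored}

filters = understand("AD CSF DNA methylation cohorts with n>100 case-control")
result = deliver([score(c) for c in discover(filters)])
print(result["recommendation"])  # COH-001
```

In the real workflow each stage also writes its intermediate file (`request.json`, `candidates.json`, and so on) before handing off to the next.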

Every step wrote files to disk. Every claim in the recommendation has a verbatim source quote and a source ID. The full audit trail is in the query directory.
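Because every claim carries a verbatim quote and a source ID, the audit trail can be checked mechanically. A hypothetical checker, assuming a simple claim shape (`id`, `quote`, `source_id`) that is not the actual recommendation format:

```python
# Hypothetical audit check: flag claims missing a verbatim quote or a
# source ID. The claim dict shape here is an assumption.
def audit_claims(claims):
    """Return the IDs of claims that lack a quote or a source ID."""
    return [c.get("id") for c in claims
            if not (c.get("quote") and c.get("source_id"))]

claims = [
    {"id": "c1", "quote": "n = 250 AD CSF samples", "source_id": "PMC10103184"},
    {"id": "c2", "quote": "", "source_id": "PMC6922070"},  # missing quote
]
print(audit_claims(claims))  # ['c2']
```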

See the wiki grow

After the query, check the wiki from the terminal:

python3 bin/vcro wiki browse methylation

Entities compiled during the query now live in store/wiki/. The next query -- different question, different angle -- finds them already there. Faster, richer, cheaper.