Cut out the middleman. Navigates papers, compiles studies, samples, and institutions — assesses fit-for-purpose, maps stakeholders, and gets better with every query.
Open source. Runs on Claude Code. Your subscription, your machine, your data.
You can't see where samples come from, whether they'll work for your assay, or how much they should cost. The information exists — it's just buried.
This is what opens. Your wiki, your workflows, one prompt away.
Not a search engine. A knowledge graph that compounds with every question.
8,000 words per paper. 21 intelligence dimensions extracted — sample counts, protocols, consent, PI contacts, platform specs. With verbatim quotes.
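One extracted dimension might be shaped like this — a minimal sketch, assuming a value-plus-verbatim-quote pairing; the field names and the sample paper are illustrative, not the tool's actual schema:

```python
import json

# One hypothetical extracted dimension: every value travels with the
# verbatim quote it was pulled from (all field names are illustrative).
fragment = {
    "dimension": "sample_count",
    "value": 412,
    "quote": "Plasma was obtained from 412 participants at baseline.",
    "paper": "paper-001.md",
}
print(json.dumps(fragment, indent=2))
```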
Every cohort linked to its institution, investigators, platforms, protocols. Ask about a PI — see every cohort they touch. Ask about a platform — see who validated it.
Scale, Cost, Quality — independently. No composite rank. You decide the weighting. Every score traces to a source quote.
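Independent per-axis scores with source traceability could be represented like this — a minimal sketch under assumed names, not the repo's actual data model:

```python
from dataclasses import dataclass

@dataclass
class AxisScore:
    axis: str          # "scale", "cost", or "quality"
    score: float       # 0.0-1.0, scored independently per axis
    confidence: str    # per-axis confidence, e.g. "high" / "medium" / "low"
    source_quote: str  # verbatim quote the score traces back to

# Hypothetical scores for one candidate cohort. No composite rank is
# computed -- the caller decides the weighting across axes.
scores = [
    AxisScore("scale", 0.8, "high", "n = 1,200 plasma samples collected"),
    AxisScore("cost", 0.5, "medium", "cost-recovery pricing on request"),
    AxisScore("quality", 0.9, "high", "validated on the same LC-MS platform"),
]

# Every score is traceable to evidence.
assert all(s.source_quote for s in scores)
```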
Lint scans the wiki for stale facts, missing links, duplicate entities. Findings feed back into compile. The graph heals itself.
Every entity is a markdown file. Click a node — see its type, connections, source.
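An entity file and its graph edges might look something like this — the file layout is hypothetical, and the wiki-style `[[link]]` convention is an assumption, not the documented format:

```python
import re

# Hypothetical entity file: type, connections, and source, all plain markdown.
entity_md = """\
# ADNI-Plasma-Metabolomics

- type: cohort
- links: [[Mayo Clinic]], [[Metabolon HD4]]
- source: paper-001.md
"""

# One way graph edges could be read back: extract the [[links]].
links = re.findall(r"\[\[(.+?)\]\]", entity_md)
print(links)  # ['Mayo Clinic', 'Metabolon HD4']
```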
Example: the AD plasma metabolomics query connected 4 cohorts to 8 institutions, 9 investigators, and 3 platforms. Your wiki starts empty and grows from here.
Natural language or structured commands. Every answer backed by evidence.
21 documented instruction sets under .claude/skills/.
Each one is a markdown file. Read it, fork it, swap it out.
One paper → structured intelligence fragments. 5–8 dimensions.
Fragments vs. wiki. New entity, merge, or ambiguous.
Plan in, wiki articles out. Idempotent. Hook-gated.
Natural language → structured request.json.
PubMed, EuropePMC, ctgov. Coverage scoring per round.
Wiki-first candidate search. Reports gaps.
Scale, Cost, Quality. Per-axis confidence. No composite.
Scored candidates → recommendation.md + listings.jsonl.
Procurement chain → wiki bundle entity. Per-link evidence.
Real gaps vs. structural. Worth recompiling or not.
Duplicate cohorts, distinct waves, or genuinely separate.
Needs re-verification vs. static facts that aged.
Latent links to apply. Papers worth ingesting.
Institution listing draft from wiki entities.
Consent, commercial-use, export, IRB scope.
Cost-recovery pricing from verified analogues.
Query API for external agents. Dynamic per run.
Batch runs, compile wedges, lint sweeps.
How to add a skill without breaking the lockfile.
Next.js + Vercel AI SDK + Claude bootstrap.
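The ingest → reconcile → compile loop those first three skills describe could be sketched as three pure steps — all names, shapes, and the sample paper text are hypothetical, not the repo's actual code:

```python
def ingest(paper_text: str) -> list[dict]:
    """One paper -> structured intelligence fragments (sketch)."""
    return [{"dimension": "sample_count", "value": "1,200",
             "quote": paper_text[:40]}]

def reconcile(fragments: list[dict], wiki: dict) -> list[dict]:
    """Fragments vs. wiki: tag each as new entity or merge."""
    for f in fragments:
        f["decision"] = "merge" if f["dimension"] in wiki else "new"
    return fragments

def compile_wiki(fragments: list[dict], wiki: dict) -> dict:
    """Plan in, wiki articles out. Idempotent: re-running changes nothing."""
    for f in fragments:
        wiki[f["dimension"]] = f["value"]
    return wiki

wiki: dict = {}
paper = "We collected 1,200 plasma samples at baseline ..."
wiki = compile_wiki(reconcile(ingest(paper), wiki), wiki)

snapshot = dict(wiki)
# Re-running the same plan leaves the wiki unchanged (idempotent).
wiki = compile_wiki(reconcile(ingest(paper), wiki), wiki)
assert wiki == snapshot
```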
viCRO is MIT-licensed. You need a Claude subscription for the LLM calls — that's it. The wiki, the graph, and the skills all live on your machine.
21 skills · 4 agents · 9 rules · MIT
Runtime. Your subscription powers the agents.
Interactive graph visualization.
Zero dependencies for core scripts.