What a Lab finding is, the discipline behind the corpus, and how to read the labels on every finding card. Read this once; the conventions are stable across every finding in the corpus.
A Lab finding is a citable claim with: a load-bearing thesis, a corpus of evidence (cases, data, source documents), a mechanism explanation, an honest exposure of how it could be wrong, and at least one pre-registered prediction with a public falsifier. Findings are versioned, lifecycle-tracked, and citable by handle. They are the unit of intellectual output the Lab produces.
Findings are not essays, opinions, or summaries of others’ work. They are claims the Lab is willing to be wrong about, structured so that being wrong is visible.
The Lab produces three distinct artifact types. They serve different functions and should not be confused with each other.
The distinction matters because epistemic status matters. Citing a brief as if it were a finding inflates its claim weight; citing a finding as if it were a brief deflates it. Each artifact carries the discipline appropriate to its cadence and form.
Every finding belongs to one of five narrative forms. The form determines the section structure — what the finding leads with, how the evidence is laid out, what the mechanism explanation looks like. Matching form to claim is itself a methodological discipline; a counterintuitive surprise is told differently than a paired-case comparison.
Counterintuitive findings; surprises that overturn conventional wisdom.
Most-similar / most-different paired-case findings.
Outcomes that resolve a pre-registered prediction or, for forward-looking work, predictions awaiting resolution.
Findings that emerge across 3+ analogs with a common mechanism.
Trajectory-tracing findings, deviation-point analyses.
Every finding carries one of five lifecycle states. The state determines what the reader can expect from the claim and from the version history.
Visibility is set per finding, separate from lifecycle. A pre-registered finding can be private (still maturing) or public-peek (open to scrutiny but not collaboration). The model is designed so that drafts can mature in private without the social pressure of an audience, but published findings carry their visibility on the page.
Every finding has a canonical citation handle of the form:
lab:finding/<domain>/<year>/<slug>/<version>

For example: lab:finding/popdec/2026/korea-hungary-divergence/v1. The handle resolves to a URL of the same shape under /findings/, so cross-references in finding bodies are clickable. Bare-slug URLs (without the version) resolve to the latest version with a <link rel="canonical"> pointing to the versioned URL, so external citations to a bare slug stay stable while internal navigation always reaches the current reading.
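As a sketch, the handle shape above can be parsed mechanically. The regex and the example.org base URL below are illustrative assumptions, not the Lab's actual tooling:

```python
import re

# Hypothetical validator for the handle shape described above.
HANDLE_RE = re.compile(
    r"^lab:finding/"
    r"(?P<domain>[a-z0-9-]+)/"
    r"(?P<year>\d{4})/"
    r"(?P<slug>[a-z0-9-]+)"
    r"(?:/(?P<version>v\d+))?$"  # version is optional: a bare slug resolves to latest
)

def parse_handle(handle: str) -> dict:
    """Split a citation handle into its components, or raise ValueError."""
    m = HANDLE_RE.match(handle)
    if not m:
        raise ValueError(f"not a valid finding handle: {handle!r}")
    return m.groupdict()

def handle_to_url(handle: str, base: str = "https://example.org") -> str:
    """Map a handle to the /findings/ URL of the same shape."""
    parts = parse_handle(handle)
    path = "/".join(
        p for p in (parts["domain"], parts["year"], parts["slug"], parts["version"]) if p
    )
    return f"{base}/findings/{path}"
```

The optional version group mirrors the bare-slug rule: a handle without a version is still valid and maps to the unversioned URL.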
Versioning is strict. v2 of a finding is a new artifact, not a quiet edit of v1. The original v1 stays at its versioned URL, lifecycle moves to revised, and a "what changed" banner on v2 explains the delta. Citations from outside the Lab should always reference the version, not the bare slug, so the cited reading is preserved exactly as it was at citation time.
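The bare-slug resolution rule can be sketched as a routing function: a versioned URL serves exactly that version, while a bare-slug URL serves the latest version and declares the versioned URL canonical. The function names and in-memory registry are illustrative assumptions:

```python
def latest_version(versions: list[int]) -> int:
    """Pick the current reading among published versions (1 for v1, 2 for v2, ...)."""
    return max(versions)

def resolve(path: str, registry: dict[str, list[int]]) -> tuple[str, str]:
    """Return (served_version_path, canonical_path) for a /findings/ request."""
    slug_path = path.rstrip("/")
    head, _, tail = slug_path.rpartition("/")
    # Versioned request: serve that exact version; it is its own canonical.
    if tail.startswith("v") and tail[1:].isdigit():
        return slug_path, slug_path
    # Bare-slug request: serve the latest version; <link rel="canonical">
    # points at the versioned URL, so the bare URL itself stays stable.
    v = latest_version(registry[slug_path])
    versioned = f"{slug_path}/v{v}"
    return versioned, versioned
```

Because v1 stays at its versioned URL after a revision, the versioned branch never consults the registry; only bare slugs track the moving "latest".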
Pre-registration is the load-bearing methodological move that separates Lab findings from written opinions. Every finding includes at least one prediction with: a central estimate, a predicted band, a falsifier band (the threshold past which the finding is wrong), and a resolution date. All four are committed before resolution data is available.
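The four committed fields can be sketched as a frozen record, so the commitment cannot be edited once created. Field names and the interval representation of the bands are assumptions for illustration, not the Lab's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # frozen: fields are fixed at commitment time
class Prediction:
    central_estimate: float
    predicted_band: tuple[float, float]   # (low, high) range expected at resolution
    falsifier_band: tuple[float, float]   # outside this range, the finding is wrong
    resolution_date: date

    def resolve(self, observed: float) -> str:
        """Classify an observed value at resolution time."""
        lo, hi = self.predicted_band
        f_lo, f_hi = self.falsifier_band
        if lo <= observed <= hi:
            return "as_forecast"
        if observed < f_lo or observed > f_hi:
            return "falsified"
        return "partial"  # outside the predicted band but short of the falsifier
```

The "partial" branch covers the middle ground that, per the lifecycle rules below, sends a finding to v2 rather than to retraction.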
The full register lives at the pre-registration calendar, sorted by resolution date. When a prediction’s resolution date arrives, the parent finding’s lifecycle moves accordingly:
Resolved as forecast: the finding stays published.
Resolved against the forecast: the finding moves to retracted; a retraction note explains what failed.
Resolved partially: the finding moves to revised; v2 narrows the original claim to the verified subset.
The most common failure mode of falsification claims in social science is moving the goalposts at resolution: quietly redefining the test once the data is in. Every Lab finding with a forward-looking prediction includes pre-registered anti-goalpost commitments: the specific saves we will refuse to allow ourselves at resolution.
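The lifecycle transitions above reduce to a small mapping from resolution outcome to lifecycle state. The state names come from the text; the table itself is an illustrative sketch, not the Lab's tooling:

```python
# Resolution outcome -> lifecycle state of the parent finding.
LIFECYCLE_ON_RESOLUTION = {
    "as_forecast": "published",  # predictions resolved as forecast
    "falsified": "retracted",    # retraction note explains what failed
    "partial": "revised",        # v2 narrows the claim to the verified subset
}

def next_lifecycle(outcome: str) -> str:
    """Look up the lifecycle state a resolution outcome triggers."""
    try:
        return LIFECYCLE_ON_RESOLUTION[outcome]
    except KeyError:
        raise ValueError(f"unknown resolution outcome: {outcome!r}")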
These commitments are made now, while we still don’t know the answer, so that future-us cannot save the finding by quietly shifting what counts as confirmation. They typically include: rejecting partial-data exemptions, refusing to expand the candidate set after resolution, requiring independent attribution of mechanism, and binding ourselves to the specific bar value committed at v1 even if a "more lenient" reading would let the finding survive.
Read the positive-case-search finding for a worked example of how this is structured in practice.
Some findings, if widely cited, can change the systems they describe. A finding about market structure changes how traders behave; a finding about fertility trajectories affects panic, capital flight, and individual decisions. The reflexivity tag (LOW / MEDIUM / HIGH) flags this on every finding so readers can calibrate.
HIGH reflexivity findings are framed as diagnostic + a menu of tested-intervention positions, not as prescriptive policy advice. The framing matters: a finding that says "policy X cannot durably lift fertility" is research; the same finding rewritten as "you should defund family policy" is advocacy, and weaponizes the research against the institutional support it depends on.
Mismatches show up in the writing — a counterintuitive finding crammed into the pair-comparison scaffold reads as flat; a paired-case finding squeezed into the detective scaffold buries the wedge. Match the scaffold to the claim before drafting.