IRIS
Interpretive Rotation on Invariant Signals
IRIS is a structured analytical method for making visible how different interpretive frameworks process the same evidence into divergent conclusions.
It holds the empirical input constant, rotates the interpretive grammar, and observes what changes in the output.
An interpretive grammar is a rule-bound processing system defined by five parameters:
G = (V, S, W, C, A)
- V: primary analytical variable
- S: signal selection rules
- W: weighting function
- C: causal chain model
- A: admissible closure set
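The rotation can be sketched in code. This is a minimal illustrative sketch, not part of the IRIS specification: the `Grammar` dataclass, the `rotate` function, and all field names are hypothetical stand-ins for the five parameters, showing how a fixed signal set can yield divergent outputs when only the grammar changes.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Grammar:
    """Hypothetical encoding of G = (V, S, W, C, A)."""
    variable: str                              # V: primary analytical variable
    select: Callable[[list[str]], list[str]]   # S: signal selection rules
    weight: Callable[[str], float]             # W: weighting function
    causal_model: str                          # C: causal chain model
    closures: set[str]                         # A: admissible closure set

def rotate(signals: list[str], grammars: dict[str, Grammar]) -> dict:
    """Hold the signal set invariant; apply each grammar in turn."""
    results = {}
    for name, g in grammars.items():
        selected = g.select(signals)                       # S filters the signals
        ranked = sorted(selected, key=g.weight, reverse=True)  # W orders them
        results[name] = {
            "variable": g.variable,
            "top_signals": ranked,
            "admissible_conclusions": sorted(g.closures),
        }
    return results
```

Running `rotate` with two grammars over the same signal list makes the divergence concrete: the input never changes, yet each grammar selects, weights, and closes on different material.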
IRIS does not adjudicate between interpretations. It makes the architecture of interpretation visible — the processing rules that produce conclusions — so that divergence can be diagnosed rather than merely experienced.
Developed by R. Jazinski. Explored across 20 case study applications, 15 independently prompted LLM instances, and approximately one million words of analytical output.
Method Essay
The IRIS method's core architecture is presented in a standalone preprint:
Jazinski, R. (2026). "Interpretive Rotation on Invariant Signals: A Method for Making Epistemic Architectures Visible." Working paper.
The paper specifies the five-parameter grammar notation, demonstrates the method through a fully worked example (Citizens United v. FEC), summarises the preliminary convergence programme, and positions IRIS relative to discourse analysis, frame analysis, Q-methodology, analytical pluralism, argumentation theory, and the sociology of knowledge.
Read on Zenodo (DOI: 10.5281/zenodo.19131632) →
The IRIS Toolkit
A self-contained methodological guide for applying IRIS to any domain where interpretive divergence persists despite shared evidence.
- The method (formal definition, six analytical steps)
- Grammar specification procedures
- Signal set construction
- Closure comparison (seven dimensions + ontological check)
- Guidance for using AI agents as survey instruments
- Worked examples from the validation programme
- Taxonomy of twelve common interpretive grammars
- Complete analysis checklist and templates
IRIS Case Studies
Analytical commentary on the five-round validation programme (Marks 1–5, 2026) that tested the IRIS method across:
- 20 case study applications
- 17 unique domains (politics, health, technology, economics, history, geopolitics, philosophy)
- 15 independent AI agents
- Approximately one million words of output
Contains grammar convergence tables, key findings per case, cross-mark synthesis, and ten cross-cutting findings including:
- Grammar convergence confirmed at scale
- Ontological pluralism (7+ cases)
- Emerging grammar detection
- Grammar aging and temporal formation
- Refusal boundary mapping
- The clean-slate test (zero-priming convergence)
IRIS Case Studies Archive
Every agent output from the five-round validation programme. All cases, all agents, all rounds — formatted with consistent framing but preserving each agent's own text and structure.
| Mark | Cases × agents | Word count |
| --- | --- | --- |
| Mark 1 | 7 cases × 5 agents | ~200,000 words |
| Mark 2 | 5 cases × 7 agents | ~258,000 words |
| Mark 3 | 3 cases × 9 agents | ~165,000 words |
| Mark 4 | 2 cases × 15 agents | ~175,000 words |
| Mark 5 | 3 cases × 12 agents | ~247,000 words |
| Total | 20 cases (150+ formatted entries) | ~1,045,000 words |
Includes all main outputs, extra outputs, refusals, method comparison materials, and case study briefings.
Download Archive Hub (Contents & Index) →
The complete archive is published in five volumes (one per Mark). Individual volumes are available on request.