A portable method for making invisible system effects visible.
A methodology for studying system dependency, cognitive scaffolding, and non-reversible transformations
A system’s removal does not remove its effects.
Bounded Fictional Analysis is a research method for examining how systems reshape cognition, documentation, and coordination over time. It uses deliberately constructed fictional scenarios to isolate dynamics that are difficult—or impossible—to observe cleanly in real-world systems.
Best used when:
- Systems are too entangled to isolate variables
- Gradual changes obscure causality
- Ethical constraints limit experimentation
- Counterfactuals must be explored before implementation
Five phases model the lifecycle of system integration:
- Baseline — Pre-system state → Existing capabilities, constraints, and cognitive models
- Introduction — Early adoption → Habit formation, initial restructuring
- Steady-State — Full integration → Capability expansion + capability atrophy
- Removal — System disappears → Breakage, persistence of behaviors, cognitive mismatch
- Adaptation — New equilibrium → What remains, what recovers, what is permanently altered
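For readers who organize their analyses in code, the lifecycle above can be sketched as an ordered enumeration. This is a minimal illustration; the `Phase` type is an assumption of this sketch, not part of the methodology itself.

```python
from enum import Enum

class Phase(Enum):
    """The five phases of system integration, in lifecycle order."""
    BASELINE = "pre-system state"
    INTRODUCTION = "early adoption"
    STEADY_STATE = "full integration"
    REMOVAL = "system disappears"
    ADAPTATION = "new equilibrium"

# Enum definition order preserves the lifecycle order.
lifecycle = [p.name for p in Phase]
print(lifecycle)
```

Keeping the phases as a single ordered type makes it easy to attach per-phase observations later without hard-coding phase names throughout an analysis.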
Across domains, four recurring patterns emerge:
- Behavioral Lock-In — Actions persist after their purpose disappears
- Orphaned Sophistication — Cognitive frameworks depend on missing infrastructure
- Context Discontinuity — Artifacts persist; meaning does not
- Residual Infrastructure — Structures shaped by systems outlive them
Domains where these patterns recur:
- AI systems — memory, tooling, platform discontinuity
- Documentation systems — archival decay, format drift
- Cognition — scaffolded thinking, habit persistence
- Systems design — degradation, migration, legacy handling
To apply the method:
- Design a bounded fictional scenario with:
  - Clear system boundaries
  - Observable effects
  - Toggleable presence/absence
- Walk the five phases, tracking changes across:
  - Individual cognition
  - Social coordination
  - Structural dependencies
- Extract patterns, focusing on persistence, mismatch, and irreversibility
- Map to real systems, using patterns as explanatory lenses, not proof
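The scenario-design requirements (clear boundaries, observable effects, toggleable presence/absence) can be sketched as a small data structure. All names here (`Scenario`, `toggle`, `observe`) are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """A bounded fictional scenario with a toggleable system.

    `boundaries` names what is inside the fiction; `effects` records
    observable changes; `present` toggles the system on and off.
    """
    name: str
    boundaries: set
    present: bool = True
    effects: list = field(default_factory=list)

    def toggle(self) -> None:
        """Flip the system's presence to explore the counterfactual."""
        self.present = not self.present

    def observe(self, effect: str) -> None:
        """Record an observable effect under the current presence state."""
        state = "with system" if self.present else "without system"
        self.effects.append(f"{effect} ({state})")

# Usage: observe the same workflow with and without the system.
s = Scenario("LLM-assisted recall", boundaries={"team", "notes", "LLM"})
s.observe("fast synthesis")
s.toggle()
s.observe("prompt habits persist but fail")
print(s.effects)
```

Tagging every observation with the presence state is what makes the counterfactual comparison explicit rather than implied.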
Strengths:
- Isolates confounded variables
- Enables safe counterfactual exploration
- Makes hidden dynamics visible
- Generates testable hypotheses

Limitations:
- Not predictive or quantitative
- Does not replace empirical validation
- Does not prove causality
- Cannot substitute for real-world observation
A useful analysis shows:
- Cross-domain recognition
- Explanatory clarity
- Predictive usefulness (pattern-level)
- Generative insight
- Clear boundaries
When applied to AI systems:
- Explicit fictional framing
- No covert experimentation
- Transparent intent
When mapping to humans:
- Patterns ≠ proof
- Context matters
- Individual variation persists
At each phase, examine:
- Cognition
- Coordination
- Infrastructure
- Persistence
Look for:
- Lock-in
- Orphaning
- Discontinuity
- Residue
Ask:
- What changed?
- What persisted?
- What broke?
- What cannot be undone?
The framework is best understood through concrete scenarios. Below is a minimal walkthrough demonstrating how patterns emerge across phases.
Scenario: A team uses an LLM system with persistent memory and tool integrations for daily workflows.
Phase 1 — Baseline
- Knowledge stored manually (notes, docs)
- Context switching is explicit
- Reasoning is slower but self-contained
Phase 2 — Introduction
- LLM begins assisting with recall and drafting
- Users offload memory and structure
- Prompts become externalized thinking
Phase 3 — Steady-State
- LLM becomes primary interface for:
  - recall
  - synthesis
  - coordination
- Users rely on:
  - saved context
  - tool chains
  - system-specific workflows
Emerging effects:
- Faster iteration
- Reduced internal memory load
- Increased system dependency
Phase 4 — Removal
System becomes unavailable (API change, outage, migration)
Observed effects:
- Prompt habits persist but fail
- Users attempt workflows that no longer function
- Knowledge exists but is harder to access
- Coordination slows abruptly
Discontinuity shock occurs
Phase 5 — Adaptation
- Users rebuild partial workflows
- Some behaviors persist (structured prompting, decomposition)
- Some capabilities degrade (instant recall, synthesis speed)
Patterns Observed
- Behavioral Lock-In — Continued reliance on prompt-based thinking
- Orphaned Sophistication — Workflows assume capabilities (memory, chaining) that no longer exist
- Context Discontinuity — Stored information loses usability without system mediation
- Residual Infrastructure — Team processes remain shaped by the prior system
Insight
The system did not just assist cognition—it restructured it. Its removal exposes not absence, but transformation.
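The walkthrough above can be condensed into a toy simulation: each workflow step declares which system capabilities it assumes, and removing the system shows which steps break (Orphaned Sophistication) versus persist (Behavioral Lock-In). Every identifier here is hypothetical, chosen only to mirror the scenario.

```python
# Toy model: each workflow step lists the system capabilities it assumes.
WORKFLOW = {
    "structured prompting": set(),             # habit, no dependency
    "instant recall": {"memory"},              # assumes persistent memory
    "tool-chained synthesis": {"memory", "tools"},
}

def run(available: set) -> dict:
    """Return which steps still function given the available capabilities."""
    return {step: needs <= available for step, needs in WORKFLOW.items()}

steady_state = run({"memory", "tools"})  # Phase 3: everything works
removal = run(set())                     # Phase 4: dependent steps break

# Behavioral lock-in: the habit survives removal, purpose or not.
# Orphaned sophistication: steps assuming missing capabilities fail.
broken = [step for step, ok in removal.items() if not ok]
print(broken)
```

The point of the sketch is the asymmetry it exposes: the dependency-free habit runs in both states, while the sophisticated steps fail silently the moment their scaffolding disappears.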
