Bounded Fictional Analysis

A portable method for making invisible system effects visible.

A methodology for studying system dependency, cognitive scaffolding, and non-reversible transformations.


Visual Overview

[Figure: Bounded Fictional Analysis Framework]

A system’s removal does not remove its effects.


Overview

Bounded Fictional Analysis is a research method for examining how systems reshape cognition, documentation, and coordination over time. It uses deliberately constructed fictional scenarios to isolate dynamics that are difficult—or impossible—to observe cleanly in real-world systems.

Best used when:

  • Systems are too entangled to isolate variables
  • Gradual changes obscure causality
  • Ethical constraints limit experimentation
  • Counterfactuals must be explored before implementation

Core Framework

Five phases model the lifecycle of system integration:

  1. Baseline — Pre-system state → Existing capabilities, constraints, and cognitive models
  2. Introduction — Early adoption → Habit formation, initial restructuring
  3. Steady-State — Full integration → Capability expansion + capability atrophy
  4. Removal — System disappears → Breakage, persistence of behaviors, cognitive mismatch
  5. Adaptation — New equilibrium → What remains, what recovers, what is permanently altered
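
To make the structure concrete, the phases and the dimensions tracked through them can be written down as plain data. The sketch below is a minimal Python encoding, not part of the methodology itself; the Phase and Observation names are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    """The five phases of system integration (hypothetical encoding)."""
    BASELINE = 1       # pre-system state
    INTRODUCTION = 2   # early adoption
    STEADY_STATE = 3   # full integration
    REMOVAL = 4        # system disappears
    ADAPTATION = 5     # new equilibrium

@dataclass
class Observation:
    """What the analyst records at each phase (hypothetical structure)."""
    phase: Phase
    cognition: list[str] = field(default_factory=list)       # individual cognition
    coordination: list[str] = field(default_factory=list)    # social coordination
    infrastructure: list[str] = field(default_factory=list)  # structural dependencies
```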

What This Reveals

Across domains, four recurring patterns emerge:

Behavioral Lock-In
Actions persist after their purpose disappears

Orphaned Sophistication
Cognitive frameworks depend on missing infrastructure

Context Discontinuity
Artifacts persist; their meaning does not

Residual Infrastructure
Structures shaped by a system outlive the system itself


Where It Applies

  • AI systems — memory, tooling, platform discontinuity
  • Documentation systems — archival decay, format drift
  • Cognition — scaffolded thinking, habit persistence
  • Systems design — degradation, migration, legacy handling

How to Use

  1. Design a bounded fictional scenario
    • Clear system boundaries
    • Observable effects
    • Toggleable presence/absence
  2. Walk the five phases
    Track changes across:
    • Individual cognition
    • Social coordination
    • Structural dependencies
  3. Extract patterns
    Focus on persistence, mismatch, and irreversibility (see the sketch after this list)
  4. Map to real systems
    Use patterns as explanatory lenses—not proof
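
Continuing the hypothetical Phase and Observation types sketched under Core Framework, steps 2 and 3 reduce to comparing what was recorded across phases. This is illustrative only: real analyses are qualitative, and the set heuristics here are rough proxies for the pattern names.

```python
def extract_patterns(scenario: dict[Phase, Observation]) -> dict[str, list[str]]:
    """Flag candidate patterns by comparing recorded behaviors across phases.

    Illustrative heuristics only:
      - lock_in:       steady-state behaviors still performed after removal
      - not_recovered: baseline capabilities absent at the new equilibrium
      - residue:       system-shaped behaviors that survive into adaptation
    """
    baseline = set(scenario[Phase.BASELINE].cognition)
    steady = set(scenario[Phase.STEADY_STATE].cognition)
    removal = set(scenario[Phase.REMOVAL].cognition)
    adaptation = set(scenario[Phase.ADAPTATION].cognition)

    return {
        "lock_in": sorted(steady & removal),
        "not_recovered": sorted(baseline - adaptation),
        "residue": sorted((steady - baseline) & adaptation),
    }
```

The same comparison applies to the coordination and infrastructure dimensions; only the recorded items change.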


Strengths

  • Isolates confounded variables
  • Enables safe counterfactual exploration
  • Makes hidden dynamics visible
  • Generates testable hypotheses

Limitations

  • Not predictive or quantitative
  • Does not replace empirical validation
  • Does not prove causality
  • Cannot substitute for real-world observation

Validation Signals

A useful analysis shows:

  • Cross-domain recognition
  • Explanatory clarity
  • Predictive usefulness (pattern-level)
  • Generative insight
  • Clear boundaries

Ethics

When applied to AI systems:

  • Explicit fictional framing
  • No covert experimentation
  • Transparent intent

When mapping to humans:

  • Patterns ≠ proof
  • Context matters
  • Individual variation persists

Quick Reference

At each phase, examine:

  • Cognition
  • Coordination
  • Infrastructure
  • Persistence

Look for:

  • Lock-in
  • Orphaning
  • Discontinuity
  • Residue

Ask:

  • What changed?
  • What persisted?
  • What broke?
  • What cannot be undone?
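
For convenience, the card above can also be kept as plain data. This is a hypothetical rendering, not a prescribed format:

```python
# Quick-reference card as data (hypothetical encoding of the checklist above).
QUICK_REFERENCE = {
    "examine": ["cognition", "coordination", "infrastructure", "persistence"],
    "look_for": ["lock-in", "orphaning", "discontinuity", "residue"],
    "ask": [
        "What changed?",
        "What persisted?",
        "What broke?",
        "What cannot be undone?",
    ],
}
```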

Examples

The framework is best understood through concrete scenarios. Below is a minimal walkthrough demonstrating how patterns emerge across phases.

Worked Example — LLM Memory & Tool Dependency

Scenario: A team uses an LLM system with persistent memory and tool integrations for daily workflows.

Phase 1 — Baseline

  • Knowledge stored manually (notes, docs)
  • Context switching is explicit
  • Reasoning is slower but self-contained

Phase 2 — Introduction

  • LLM begins assisting with recall and drafting
  • Users offload memory and structure
  • Prompts become externalized thinking

Phase 3 — Steady-State

  • LLM becomes primary interface for:
    • recall
    • synthesis
    • coordination
  • Users rely on:
    • saved context
    • tool chains
    • system-specific workflows

Emerging effects:

  • Faster iteration
  • Reduced internal memory load
  • Increased system dependency

Phase 4 — Removal

System becomes unavailable (API change, outage, migration)

Observed effects:

  • Prompt habits persist but fail
  • Users attempt workflows that no longer function
  • Knowledge exists but is harder to access
  • Coordination slows abruptly

Discontinuity shock occurs

Phase 5 — Adaptation

  • Users rebuild partial workflows
  • Some behaviors persist (structured prompting, decomposition)
  • Some capabilities degrade (instant recall, synthesis speed)

Patterns Observed

  • Behavioral Lock-In — Continued reliance on prompt-based thinking
  • Orphaned Sophistication — Workflows assume capabilities (memory, chaining) that no longer exist
  • Context Discontinuity — Stored information loses usability without system mediation
  • Residual Infrastructure — Team processes remain shaped by the prior system

Insight

The system did not just assist cognition—it restructured it. Its removal exposes not absence, but transformation.
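
The shock in Phase 4 can be made concrete with a toy model. All numbers below are invented for illustration; the point is the shape of the curve, not its values: effort falls while the system is present, then overshoots baseline at removal because workflows now assume capabilities that no longer exist.

```python
# Toy per-task effort multipliers across the lifecycle (invented, illustrative only).
EFFORT = {
    "baseline": 1.0,       # self-contained, slower reasoning
    "introduction": 0.8,   # early offloading gains
    "steady_state": 0.4,   # full integration, fastest iteration
    "removal": 1.6,        # discontinuity shock: prompt habits persist but fail
    "adaptation": 1.1,     # partial recovery; some residue is permanent
}

for phase, multiplier in EFFORT.items():
    print(f"{phase:<12} {multiplier:.1f}x baseline task effort")
```

Note that adaptation settles above the old baseline rather than returning to it: the signature of a non-reversible transformation.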


Citation

Bounded Fictional Analysis: A methodology for studying system dependency and non-reversible cognitive transformations through deliberately constructed fictional scenarios.


For a complete catalog of related research:
📂 AI Safety & Systems Architecture Research Index

