scope-context-forcing

A Daydream Scope plugin implementing the Context Forcing pipeline for consistent long-context autoregressive video generation.

Based on the paper: Context Forcing: Consistent Autoregressive Video Generation with Long Context (Chen et al., Feb 2026) from TIGER-AI-Lab (University of Waterloo).

Note: Official checkpoints and code have not yet been released by the authors. This plugin implements the Slow-Fast Memory architecture described in the paper on top of Causal Forcing weights as a compatible starting point. It will be updated when official weights become available.

Technique

Context Forcing solves the student-teacher mismatch in streaming video generation by training with a long-context teacher that sees the full generation history. The key innovation is a Slow-Fast Memory architecture that replaces the standard FIFO KV cache:

Memory = Sink ∪ SlowMemory ∪ FastMemory
| Component | Size | Role |
| --- | --- | --- |
| Attention Sink | N_s = 3 frames | Initial tokens preserved for attention stability |
| Slow Memory | N_c = 12 frames | Long-term buffer for high-entropy keyframes |
| Fast Memory | N_l = 6 frames | Rolling FIFO queue for immediate local context |
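As a rough sketch of the tri-partite layout, the cache can be modeled as three buffers with the default sizes from the Configuration section. The class and method names here are illustrative, not the plugin's actual API:

```python
from collections import deque

class SlowFastMemory:
    """Minimal sketch of Memory = Sink ∪ SlowMemory ∪ FastMemory."""

    def __init__(self, sink_size=3, slow_size=12, fast_size=6):
        self.sink = []                       # first N_s frames, never evicted
        self.slow = deque(maxlen=slow_size)  # long-term keyframe buffer
        self.fast = deque(maxlen=fast_size)  # rolling FIFO of recent frames
        self.sink_size = sink_size

    def append(self, frame):
        # The first N_s frames become the attention sink; everything
        # after that enters the fast (local-context) queue.
        if len(self.sink) < self.sink_size:
            self.sink.append(frame)
        else:
            self.fast.append(frame)

    def context(self):
        # Attention is computed over the union of all three buffers.
        return list(self.sink) + list(self.slow) + list(self.fast)
```

In this sketch, promotion from fast to slow memory would be driven by the surprisal-based consolidation rule described below; frames evicted from the fast queue without promotion are simply dropped.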

Surprisal-Based Consolidation

Tokens are promoted from Fast Memory to Slow Memory only when they represent significant temporal changes:

consolidate(x_t) if cos_sim(k_t, k_{t-1}) < τ   (τ = 0.95)

This prioritizes novelty over redundancy, enabling consistent generation beyond 20 seconds (vs ~5s for standard Causal Forcing).
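The consolidation gate above can be sketched directly from the formula. The key vectors `k_t`, `k_prev` stand in for the model's per-frame attention keys, and `should_consolidate` is a hypothetical helper name:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two key vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def should_consolidate(k_t, k_prev, tau=0.95):
    # Low similarity to the previous frame's keys means a significant
    # temporal change ("surprisal"), so the frame is worth keeping long-term.
    return cosine_sim(k_t, k_prev) < tau
```

A near-duplicate frame (similarity ≈ 1) is left in fast memory to age out, while a scene change (similarity below τ) is promoted to slow memory.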

Bounded Positional Encoding

All tokens are mapped to positions in [0, N_s + N_c + N_l - 1] regardless of actual generation step, preventing out-of-distribution positional embeddings during long generation.
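With the defaults (N_s = 3, N_c = 12, N_l = 6), every cached token therefore receives a position in [0, 20]. A minimal sketch of such a mapping, assuming positions are assigned by cache slot rather than by generation step (the exact scheme is not specified here):

```python
def bounded_position(segment, offset, n_s=3, n_c=12, n_l=6):
    """Map a (segment, offset) cache slot to a bounded position id.

    segment is 'sink', 'slow', or 'fast'; offset is the slot index within
    that segment. The result always lies in [0, n_s + n_c + n_l - 1],
    regardless of how many frames have actually been generated.
    """
    base = {"sink": 0, "slow": n_s, "fast": n_s + n_c}[segment]
    pos = base + offset
    assert 0 <= pos < n_s + n_c + n_l
    return pos
```

Because positions never exceed this fixed window, the positional embeddings stay in-distribution no matter how long the stream runs.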

Comparison with Causal Forcing

| Aspect | Causal Forcing | Context Forcing |
| --- | --- | --- |
| Context Length | ~5 seconds | >20 seconds |
| KV Cache | Rolling FIFO | Slow-Fast tri-partite |
| Key Innovation | AR teacher for ODE init | Long-context teacher + memory |

Installation

uv add scope-context-forcing

Or install from source:

git clone https://github.com/livepeer/scope-context-forcing.git
cd scope-context-forcing
uv sync --group dev
uv pip install -e .

Configuration

| Parameter | Default | Description |
| --- | --- | --- |
| sink_size | 3 | Attention Sink size (N_s) |
| slow_memory_size | 12 | Slow Memory capacity (N_c) |
| fast_memory_size | 6 | Fast Memory capacity (N_l) |
| consolidation_threshold | 0.95 | Cosine similarity threshold (τ) |
| consolidation_interval | 2 | Consolidation frequency in chunks |
| height | 480 | Output height |
| width | 832 | Output width |
| denoising_steps | [1000, 750, 500, 250] | Denoising schedule |
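The defaults above can be collected into a plain configuration mapping. How the plugin actually consumes these values (constructor kwargs, a config file, or Scope's plugin settings) is an assumption here, so this is just the shape of the data:

```python
# Default configuration values; the key names mirror the parameter table.
config = {
    "sink_size": 3,
    "slow_memory_size": 12,
    "fast_memory_size": 6,
    "consolidation_threshold": 0.95,
    "consolidation_interval": 2,
    "height": 480,
    "width": 832,
    "denoising_steps": [1000, 750, 500, 250],
}

# Total bounded context window: N_s + N_c + N_l frames.
window = config["sink_size"] + config["slow_memory_size"] + config["fast_memory_size"]
```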

Development

uv sync --group dev

References

- Chen et al., "Context Forcing: Consistent Autoregressive Video Generation with Long Context", TIGER-AI-Lab, University of Waterloo, Feb 2026.

License

See LICENSE for details.
