Record: 11L XSA+EMA+TTT, sliding val_bpb=1.1254 (3-seed mean 1.1256)#338
Open
alertcat wants to merge 9 commits intoopenai:mainfrom
Open
Record: 11L XSA+EMA+TTT, sliding val_bpb=1.1254 (3-seed mean 1.1256)#338alertcat wants to merge 9 commits intoopenai:mainfrom
alertcat wants to merge 9 commits intoopenai:mainfrom
Conversation
Innovation over PR openai#198 (SOTA 1.1318): - 12 transformer layers (was 11): +2.2M params, better representation - Int5 quantization for MLP weights [-16,15]: 3 zero high bits - zstd compression 1.88x vs int6 1.51x, saves ~1.8MB - Funds the 12th layer within 16MB budget - Int6 kept for attention weights (precision-sensitive) - FA3 fallback for older PyTorch - LR=0.025 (validated as optimal in A/B testing) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
RyanLisse
added a commit
to RyanLisse/parameter-golf
that referenced
this pull request
Mar 21, 2026
New CUDA presets: - pr332_12l_xsa: 12L/2xMLP, seq2048, momentum 0.99 (from PR openai#332) - pr338_11l_ttt: 11L/2xMLP, seq2048, momentum 0.99 (from PR openai#338) - bft_ensemble: 9L/3xMLP Byzantine fault tolerant checkpoint config - difficulty_adjusted: 10L/2xMLP adaptive search with tight LR - partial_rope_headtemp: baseline arch with novel attention params Expanded search: NUM_LAYERS includes 11, TRAIN_SEQ_LEN includes 4096. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
DigitalSword99
pushed a commit
to DigitalSword99/parameter-golf
that referenced
this pull request
Mar 21, 2026
- Move EMA shadow weights to GPU (CPU transfers cost ~32% throughput) - Increase train seq_len from 1024 to 2048 (matches record PR openai#338) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
11L XSA + EMA + TTT + Int6 MLP3x
val_bpb = 1.1254 (sliding window stride=64, best seed 42) | 15.55 MB artifact | 8xH100 SXM, 600s
Key Innovation: TTT on XSA+EMA baseline
First submission combining XSA (Exclusive Self Attention) + EMA + Test-Time Training. After training and quantization, TTT performs 3 epochs of SGD fine-tuning on the validation token stream, adapting the model to the test distribution.
Results (3-seed, 8xH100 SXM)
Mean: 1.1256 | Std: 0.0002
TTT Details
Architecture (from PR #315)
Eval Timing
Training: 600s | TTT: 47s | Sliding eval: 73s | Total eval: ~120s
Reproduction
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: zstandard in e:�naconda\lib\site-packages (0.23.0)
Built on PR #315 (XSA, EMA, SmearGate, BigramHash, OrthoInit, sliding window eval).