Record: 11L Backout + Int6 + SWA (val_bpb: 1.1364) #339
Open
sheeki03 wants to merge 1 commit into openai:main
Adds Backout Connection — a learned residual subtraction from a mid-network hidden state. Improves val_bpb by 0.0071 over the PR openai#198 baseline with zero additional matrix parameters (one learned scalar).

val_bpb: 1.1364 (sliding window, stride=64)
Artifact: 16,170,051 bytes (170KB over cap, fixable with `INT5_MLP=1`)
Hardware: 8xH100 SXM, 600s wallclock

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
val_bpb: 1.1364 (sliding window, stride=64) | 16.17 MB | 8xH100 SXM, 600s
Known Issue
Artifact is 16,170,051 bytes — 170KB over the 16,000,000-byte cap. The code supports `INT5_MLP=1`, which switches MLP quantization from int6 to int5, saving 1-2MB. A follow-up run is planned to bring the artifact under the cap.

Progress from prior submissions
Note: Our baseline replication of PR #198's config yielded 1.1435 (vs their reported 1.1318), likely due to hardware/driver differences (RunPod community cloud vs dedicated). Relative to our own baseline, Backout improves val_bpb by 0.0071 (1.1435 → 1.1364).
What's new
Backout Connection — a learned residual subtraction from a mid-network hidden state. After the U-Net encoder-decoder forward pass, the model subtracts `lambda * h_mid` from the final representation, where `lambda` is a learned scalar (initialized at 0.2) and `h_mid` is the hidden state at layer `num_layers // 2`. This acts as a learned negative residual that removes redundant mid-network information, sharpening the final representation for the language modeling head. Zero additional matrix parameters — only one learned scalar.
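The mechanism above can be sketched in PyTorch roughly as follows. This is an illustrative sketch, not the submission's code: class and attribute names are made up, and plain residual linear blocks stand in for the actual U-Net encoder-decoder layers.

```python
import torch
import torch.nn as nn

class BackoutTransformer(nn.Module):
    """Toy layer stack with the Backout subtraction applied at the end."""

    def __init__(self, num_layers=11, dim=512):
        super().__init__()
        # Stand-in blocks; the real model uses its own U-Net layers.
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_layers))
        self.backout_index = num_layers // 2                    # layer 5 for 11 layers
        self.backout_lambda = nn.Parameter(torch.tensor(0.2))   # learned scalar, init 0.2

    def forward(self, x):
        h_mid = x
        for i, layer in enumerate(self.layers):
            x = x + torch.relu(layer(x)) ** 2                   # relu-squared block
            if i == self.backout_index:
                h_mid = x                                       # capture mid-network state
        # Learned negative residual: subtract redundant mid-network
        # information before the representation reaches the LM head.
        return x - self.backout_lambda * h_mid
```

Since `backout_lambda` is registered as an `nn.Parameter`, the optimizer is free to scale, zero, or even sign-flip the subtraction during training, at the cost of exactly one extra parameter in the artifact.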
Controlled comparison (same hardware, same run)
Results
Architecture
11 layers, 512 dim, 8 heads / 4 KV heads, MLP 3x, relu-squared, SmearGate, BigramHash(4096), OrthoInit, Muon + AdamW with WD=0.04, SWA, int6 mixed quant + zstd, FA3, seq 2048, sliding window eval stride=64.
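The sliding-window evaluation mentioned above (stride=64) can be sketched as pure index bookkeeping. This is an illustrative sketch, not the repo's eval code: each window scores only the tokens not already covered by the previous window, so every token after the first window is predicted with close to a full 2048-token left context.

```python
def sliding_eval_spans(n_tokens, seq_len=2048, stride=64):
    """Yield (window_start, window_end, n_scored) triples: score only the
    last n_scored tokens of each window; earlier tokens are context only."""
    prev_end = 0
    for begin in range(0, n_tokens, stride):
        end = min(begin + seq_len, n_tokens)
        yield begin, end, end - prev_end
        prev_end = end
        if end == n_tokens:
            break

# Every token is scored exactly once across all windows:
spans = list(sliding_eval_spans(10_000))
assert sum(n for _, _, n in spans) == 10_000
```

A smaller stride means more forward passes but more context per scored token, which is why strided sliding-window val_bpb numbers are only comparable at the same stride.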
Backout layer: `num_layers // 2` (layer 5). Lambda: learned scalar, initialized at 0.2.

Run command
Hardware
8xH100 SXM 80GB HBM3 (RunPod, EUR-IS-3)
Next steps
`INT5_MLP=1` to bring the artifact under 16MB
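As a back-of-envelope check on the expected saving (parameter counts below are inferred from the architecture summary, not taken from the repo): moving the MLP weights from 6-bit to 5-bit storage saves one bit per MLP parameter.

```python
# 11 layers, up- and down-projection, dim 512, MLP expansion 3x (inferred).
mlp_params = 11 * 2 * (512 * 3 * 512)   # 17,301,504 parameters
saving_bytes = mlp_params // 8          # one bit saved per parameter
print(f"{saving_bytes / 1e6:.2f} MB")   # prints "2.16 MB"
```

That is in the same ballpark as the stated 1-2MB; the exact figure depends on bit-packing granularity and how well the int5 payload compresses under zstd, so it should comfortably cover the 170KB overage.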