
Experiment Log

A running summary documenting some experiments and findings. Started ~Jan 7 2026.


2026-01-10: Muon Optimizer Upgrades & Cautious Weight Decay

Cherry-picked improvements from NorMuon (modded-nanogpt) into our simpler Muon implementation. Decided against using NorMuon directly due to hard-coded architecture assumptions (expects 32 params split 10 attn + 22 mlp), parameter labeling requirements, and complexity.

Changes Made

1. Polar Express Orthogonalization

  • Replaced Newton-Schulz iteration with "Polar Express Sign Method" from arxiv.org/pdf/2505.16932
  • Uses 5 different coefficient tuples (one per iteration) instead of fixed coefficients
  • Both methods kept in code for easy comparison (zeropower_via_polar_express vs zeropower_via_newtonschulz5)
  • Result: No dramatic or noticeable difference in training, but keeping the new Polar Express as the default (structure sketched below).
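
For reference, a minimal structural sketch of the change, assuming the usual Newton-Schulz-style quintic iteration skeleton; the function name and the coefficient values here are placeholders (the real per-iteration tuples come from the paper), not the actual nanochat implementation:

```python
import torch

# Placeholder coefficient tuples, one per iteration; the real values come from
# the Polar Express paper and are not reproduced here.
POLAR_EXPRESS_COEFFS = [(3.0, -3.2, 1.2)] * 5

@torch.no_grad()
def zeropower_via_polar_express_sketch(G: torch.Tensor) -> torch.Tensor:
    # Same skeleton as zeropower_via_newtonschulz5, except each iteration uses
    # its own (a, b, c) tuple instead of one fixed tuple for all iterations.
    X = G.bfloat16()
    transposed = X.size(-2) > X.size(-1)
    if transposed:
        X = X.mT
    X = X / (X.norm(dim=(-2, -1), keepdim=True) + 1e-7)  # scale so the norm is ~1
    for a, b, c in POLAR_EXPRESS_COEFFS:
        A = X @ X.mT
        B = b * A + c * (A @ A)
        X = a * X + B @ X
    if transposed:
        X = X.mT
    return X
```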

2. Variance Reduction (NorMuon-style)

  • Added low-rank variance estimator similar to Adafactor (arxiv.org/pdf/2510.05491)
  • Maintains second_momentum_buffer with shape [rows, 1] or [1, cols] (whichever is smaller)
  • Normalizes updates based on running per-row/col variance estimate (beta2=0.95)
  • Memory overhead: ~1/max(rows, cols) per param, negligible
  • Result: Led to a very small improvement; kept and enabled by default (sketched below).
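
A rough sketch of the normalization step, assuming a 2D update u of shape [rows, cols] and a persistent second_momentum_buffer held in optimizer state; the function and variable names are illustrative, not the actual code:

```python
import torch

def variance_reduce(u: torch.Tensor, second_momentum_buffer: torch.Tensor,
                    beta2: float = 0.95, eps: float = 1e-8) -> torch.Tensor:
    # The buffer lives on the smaller side, i.e. shape [rows, 1] or [1, cols].
    rows, cols = u.shape
    reduce_dim = 1 if rows <= cols else 0
    v = u.pow(2).mean(dim=reduce_dim, keepdim=True)              # per-row/col second moment
    second_momentum_buffer.mul_(beta2).add_(v, alpha=1 - beta2)  # EMA with beta2
    return u / (second_momentum_buffer.sqrt() + eps)             # normalize the update
```

The extra memory is one small vector per 2D parameter, which is where the ~1/max(rows, cols) overhead figure above comes from.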

3. Cautious Weight Decay

  • Only decays weights where update * weight >= 0 (same sign) from arxiv.org/abs/2411.16085
  • Standard WD always pulls toward zero; cautious WD skips decay when gradient is pushing weight away from zero
  • Implementation note: Had to inline the logic rather than use a separate @torch.compile function. Passing changing float values (like weight_decay during scheduling) as function arguments triggers recompilation. Reading from group["weight_decay"] inside the step avoids this.
  • Result: Solid improvements; the cautious version in particular outperformed standard weight decay.
  • Now defaults to ON for Muon via the weight_decay param. AdamW is still hardcoded to 0 weight decay; might try to re-tune this later. A toy sketch of the masking rule is below.
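
A toy demonstration on dummy tensors, following the sign convention in the bullets above (entries with update * weight >= 0 get decayed); names and values are illustrative, not the actual optimizer code:

```python
import torch

p = torch.randn(4, 4)          # stand-in parameter
update = torch.randn(4, 4)     # stand-in for the orthogonalized Muon update
lr, wd = 0.02, 0.2             # in the real step, wd is read from group["weight_decay"]
                               # inside the step to avoid torch.compile recompilation

mask = (update * p) >= 0       # decay only where update and weight share a sign
p.mul_(1 - lr * wd * mask)     # cautious weight decay
p.add_(update, alpha=-lr)      # then apply the update
```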

4. Weight decay schedule

  • Added a linear schedule for weight decay, on by default, that ramps a multiplier from 1.0 to 0.0 (i.e. start with the full weight decay at the beginning of training, then ramp to 0 by the end); see the sketch below. This worked better than a static setting in experiments. (modded-nanogpt has the same schedule but implements it in a more confusing way by multiplying twice by the learning rate, which is already wired up to a decay schedule.)
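
A sketch of the schedule logic as described (hypothetical helper name; the real code wires this into the training loop):

```python
def wd_multiplier(step: int, total_steps: int) -> float:
    # Linearly ramp the weight decay multiplier from 1.0 at step 0 down to 0.0
    # at the final step.
    return 1.0 - step / max(total_steps, 1)

# e.g. group["weight_decay"] = base_weight_decay * wd_multiplier(step, total_steps)
```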

Weight Decay Scaling Experiments

Swept weight decay values at d8, d12, d16, d20 to find optimal values and scaling law.

Optimal Values Found:

| Depth | Width (channels) | Optimal WD |
|-------|------------------|------------|
| d8    | 512              | ~0.40      |
| d12   | 768              | ~0.22      |
| d16   | 1024             | ~0.10      |
| d20   | 1280             | ~0.08      |

Scaling Law:

  • Fit power law: WD = k / channels^α in log-log space
  • Found α ≈ 1.97 (approximately 2), meaning WD ∝ 1/width² (fit reproduced below)
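
For concreteness, the fit can be approximately reproduced from the rounded table values with a log-log linear regression (the exact exponent depends on the unrounded sweep results):

```python
import numpy as np

channels = np.array([512, 768, 1024, 1280])      # widths at d8 / d12 / d16 / d20
optimal_wd = np.array([0.40, 0.22, 0.10, 0.08])  # rounded optima from the table

# Fit log(WD) = log(k) - alpha * log(channels)
slope, intercept = np.polyfit(np.log(channels), np.log(optimal_wd), 1)
alpha, k = -slope, np.exp(intercept)
print(alpha, k)  # alpha comes out close to 2 on these rounded values
```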

Practical Formula:

WD_target = WD_reference × (d_reference / d_target)²

Example: if the d12 optimum is 0.22, then the d20 optimum ≈ 0.22 × (12/20)² ≈ 0.08 (see the helper below)
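
The same rule as a small helper (hypothetical name):

```python
def scale_weight_decay(wd_ref: float, d_ref: int, d_target: int) -> float:
    # WD_target = WD_reference * (d_reference / d_target) ** 2
    return wd_ref * (d_ref / d_target) ** 2

print(scale_weight_decay(0.22, 12, 20))  # ≈ 0.079, i.e. the ~0.08 quoted above
```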

Reference: the Moonlight paper uses a fixed WD=0.1 for its 15B MoE model. Our experiments indicated a scaling law where the optimal WD changes with depth, so we go with the empirical scaling law instead.

Summary

Muon now uses Polar Express orthogonalization, Adafactor-style variance reduction, and cautious weight decay with a schedule that ramps linearly to zero. All of these changes follow the modded-nanogpt repo, and each was validated piece by piece to yield improvements in nanochat, with the exception of the Polar Express change, which was within noise. Weight decay is on by default and configurable with --weight_decay, using simply 0.2 together with the ∝ 1/width² scaling. The meaning of the --weight_decay kwarg therefore changes as of this change: it used to configure standard weight decay for AdamW; now it is used exclusively by Muon (AdamW is hardcoded to 0.0) and is scaled based on depth.


2026-01-08: exp_grad_clip - Gradient Clipping

Hypothesis: Gradient clipping may be unnecessary overhead. Tested L2 norm clipping at various thresholds (0.25, 0.5, 1.0, 2.0) and elementwise clipping.
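
For reference, the kind of L2 norm clipping that was swept, sketched with PyTorch's built-in utility on a stand-in model (not the codebase's own implementation):

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 16)                         # stand-in for the real model
loss = model(torch.randn(8, 16)).square().mean()
loss.backward()

# Rescales gradients in place so their global L2 norm is at most max_norm and
# returns the pre-clip norm; 0.25 / 0.5 / 1.0 / 2.0 were the thresholds swept.
total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
print(float(total_norm))
```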

Results:

  • No benefit at any scale tested (d12, d20)
  • All variants within noise (~0.9827 val_bpb)
  • Grad norm never exceeds 1.0 naturally, so clipping is always inactive
  • Clipping adds ~2% time overhead from the all-reduce

Bug Found: Original implementation clipped local gradients before sync. Since this codebase doesn't use DDP (gradient sync is in the optimizers), each rank was clipping based on its own local norm. Fixed on the branch with proper distributed all-reduce.

Observation: modded-nanogpt does not appear to clip either right now.

Summary: Deleted all grad-clip code paths. The code naturally produces well-behaved gradients, and removing the clipping improves MFU a bit because we no longer have to compute and sync grad norms.