mirror of
https://github.com/karpathy/nanochat.git
synced 2026-03-10 19:25:31 +00:00
tune logit softcap?
This commit is contained in:
parent 83dccc20ae
commit aba30cb037
@@ -4,6 +4,10 @@ A running summary documenting some experiments and findings. Started ~Jan 7 2026

---

## 2026-03-02: SoftCap tuning
Quick experiment to tune the logit softcap at d24 scale, sweeping values from 5 to 30. 5 was terrible; the rest were all about equal except 20, which was the best. A minor but solid improvement: val loss improved by ~1e-3 (0.716 -> 0.715). Setting 20 as the default.
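Assuming the standard tanh formulation of a logit softcap (the code comment "smoothly cap the logits to the range [-softcap, softcap]" suggests this form), a minimal sketch of the operation in plain Python:

```python
import math

def softcap(logit: float, cap: float = 20.0) -> float:
    """Smoothly squash a logit into [-cap, cap].

    Near zero, tanh(x) ~= x, so small logits pass through almost
    unchanged; large logits saturate at +/- cap instead of growing
    without bound, which keeps the loss well-behaved.
    """
    return cap * math.tanh(logit / cap)
```

In the model this is applied elementwise to the full `(B, T, vocab_size)` logit tensor (in fp32), with `cap` being the hyperparameter tuned here from 15 to 20.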
## 2026-02-19: Mixture of Experts (negative)
Implemented a DeepSeekV3-style Mixture of Experts layer as a drop-in replacement for the dense MLP. The MoE branch works and improves per-step validation loss, but it is not a net improvement in wall-clock time due to MoE overhead (at least at our scale of interest, approximately GPT-2 capability).
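The full DeepSeekV3-style details (shared experts, sigmoid gating, load-balancing bias) are beyond a few lines, but the core routing idea, sending each token to its top-k experts and mixing their outputs by normalized gate weights, can be sketched in plain Python. The function name and the softmax-over-selected-experts gating below are illustrative assumptions, not the exact nanochat implementation:

```python
import math

def route_topk(scores, k=2):
    """Token-choice top-k routing (minimal illustrative sketch).

    Given one token's router scores over all experts, select the k
    highest-scoring experts and return (expert_index, gate_weight)
    pairs. Weights are a softmax restricted to the selected experts,
    so they sum to 1; the layer output is then
    sum(weight * expert(x)) over the selected experts.
    """
    idx = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    exps = [math.exp(scores[i]) for i in idx]
    z = sum(exps)
    return [(i, e / z) for i, e in zip(idx, exps)]
```

The wall-clock overhead comes largely from this routing step: gathering and scattering tokens across experts adds memory traffic and kernel launches that a single dense MLP avoids.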
@@ -407,7 +407,7 @@ class GPT(nn.Module):
         x = norm(x)
 
         # Forward the lm_head (compute logits)
-        softcap = 15 # smoothly cap the logits to the range [-softcap, softcap]
+        softcap = 20 # smoothly cap the logits to the range [-softcap, softcap]
         logits = self.lm_head(x) # (B, T, padded_vocab_size) <- very big tensor, large amount of memory
         logits = logits[..., :self.config.vocab_size] # slice to remove padding
         logits = logits.float() # switch to fp32 for logit softcap and loss computation