nanochat/scripts
Kaiyue Wen 29487517ed Revert to torchao for FP8 training to fix MFU regression
The custom fp8 module had a performance issue in reparam_linear:
it was doing reshape→matmul→reshape on every linear layer, and
torch.compile couldn't fuse these operations because _Float8Matmul
was marked @allow_in_graph (opaque to the compiler).

torchao's matmul_with_hp_or_float8_args handles N-D tensors directly
without external reshaping, allowing better fusion opportunities and
higher MFU.
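The difference between the two paths can be sketched as follows. This is a minimal illustration, not the actual module code: `linear_via_reshape` and `linear_nd` are hypothetical names standing in for the custom fp8 path and torchao's N-D-aware matmul, respectively.

```python
import torch

def linear_via_reshape(x: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    # The pattern described above: flatten leading dims, run a 2-D matmul,
    # then restore the original shape. The two reshapes are separate graph
    # ops that the compiler must fuse across; with the matmul opaque
    # (allow_in_graph), that fusion cannot happen.
    lead_shape = x.shape[:-1]
    y2d = x.reshape(-1, x.shape[-1]) @ weight.t()
    return y2d.reshape(*lead_shape, weight.shape[0])

def linear_nd(x: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    # An N-D-aware matmul batches over leading dims internally,
    # so no external reshape ops are emitted around it.
    return torch.matmul(x, weight.t())

x = torch.randn(2, 5, 8)   # (batch, seq, in_features)
w = torch.randn(16, 8)     # (out_features, in_features)
assert torch.allclose(linear_via_reshape(x, w), linear_nd(x, w), atol=1e-5)
```

Both functions compute the same linear layer; the point is the graph shape around the matmul, which is what determines what torch.compile can fuse.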

Co-Authored-By: Claude Sonnet 4.5 (1M context) <noreply@anthropic.com>
2026-02-12 16:58:05 -08:00
base_eval.py small touchups to the eval script, re-order items etc, cosmetic 2026-02-03 21:03:42 +00:00
base_train.py Revert to torchao for FP8 training to fix MFU regression 2026-02-12 16:58:05 -08:00
chat_cli.py remove leftover mid references (#491) 2026-02-02 08:33:46 -08:00
chat_eval.py remove leftover mid references (#491) 2026-02-02 08:33:46 -08:00
chat_rl.py remove leftover mid references (#491) 2026-02-02 08:33:46 -08:00
chat_sft.py fix bug in chat_sft, the attention window must be preserved sigh 2026-02-01 20:58:44 +00:00
chat_web.py remove leftover mid references (#491) 2026-02-02 08:33:46 -08:00
tok_eval.py initial commit 2025-10-13 06:49:24 -07:00
tok_train.py quick fix to not OOM main speedrun script 2026-01-26 22:31:42 +00:00