From cc40ccc51502a24531e7dac1c8b37e5e45a04d29 Mon Sep 17 00:00:00 2001
From: svlandeg
Date: Wed, 14 Jan 2026 15:08:50 +0100
Subject: [PATCH] fix commands in readme, using new arg format

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index acb9111..9de2884 100644
--- a/README.md
+++ b/README.md
@@ -82,10 +82,10 @@ That said, to give a sense, the example changes needed for the [speedrun.sh](spe
 python -m nanochat.dataset -n 450 &
 ...
 # use --depth to increase model size. to not oom, halve device batch size 32 -> 16:
-torchrun --standalone --nproc_per_node=8 -m scripts.base_train -- --depth=26 --device_batch_size=16
+torchrun --standalone --nproc_per_node=8 -m scripts.base_train -- --depth=26 --device-batch-size=16
 ...
 # make sure to use the same later during midtraining:
-torchrun --standalone --nproc_per_node=8 -m scripts.mid_train -- --device_batch_size=16
+torchrun --standalone --nproc_per_node=8 -m scripts.mid_train -- --device-batch-size=16
 ```
 
 That's it! The biggest thing to pay attention to is making sure you have enough data shards to train on (the code will loop and do more epochs over the same training set otherwise, decreasing learning speed a bit), and managing your memory/VRAM, primarily by decreasing the `device_batch_size` until things fit (the scripts automatically compensate by increasing the number of gradient accumulation loops, simply turning parallel compute to sequential compute).
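
The final context line of the hunk describes how the training scripts compensate for a smaller `device_batch_size` by running more gradient accumulation loops. A minimal sketch of that bookkeeping (the function and parameter names here are illustrative, not taken from the nanochat scripts):

```python
# Hypothetical sketch of the compensation logic described above:
# when device_batch_size shrinks, the number of sequential micro-batches
# per optimizer step grows, so the effective batch size stays constant.

def grad_accum_steps(total_batch_size: int, device_batch_size: int, num_gpus: int) -> int:
    """Micro-batches accumulated sequentially before each optimizer step."""
    world_batch = device_batch_size * num_gpus  # samples per forward pass across all GPUs
    assert total_batch_size % world_batch == 0, "total batch must divide evenly"
    return total_batch_size // world_batch

# Halving the device batch size 32 -> 16 (as in the patched commands)
# doubles the accumulation steps, trading parallel for sequential compute:
print(grad_accum_steps(total_batch_size=512, device_batch_size=32, num_gpus=8))  # -> 2
print(grad_accum_steps(total_batch_size=512, device_batch_size=16, num_gpus=8))  # -> 4
```

The optimizer therefore sees the same number of tokens per step either way; only wall-clock time per step changes.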