From f8ff0439b9b9192399deb1ed8a09874152b4a407 Mon Sep 17 00:00:00 2001
From: svlandeg
Date: Fri, 6 Mar 2026 11:03:00 +0100
Subject: [PATCH 1/3] two more small typos

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 077fd9c..6be1109 100644
--- a/README.md
+++ b/README.md
@@ -71,7 +71,7 @@ OMP_NUM_THREADS=1 torchrun --standalone --nproc_per_node=8 -m scripts.base_train
 This uses wandb (run name "d12"), only runs the CORE metric on last step, and it doesn't sample and save intermediate checkpoints. I like to change something in the code, re-run a d12 (or a d16 etc) and see if it helped, in an iteration loop. To see if a run helps, I like to monitor the wandb plots for:
 
 1. `val_bpb` (validation loss in vocab-size-invariant units of bits per byte) as a function of `step`, `total_training_time` and `total_training_flops`.
-2. `core_metric` (the DCLM CORE socre)
+2. `core_metric` (the DCLM CORE score)
 3. VRAM utilization, `train/mfu` (Model FLOPS utilization), `train/tok_per_sec` (training throughput)
 
 See an example [here](https://github.com/karpathy/nanochat/pull/498#issuecomment-3850720044).
@@ -101,7 +101,7 @@ NANOCHAT_DTYPE=bfloat16 torchrun --nproc_per_node=8 -m scripts.base_train # for
 
 How it works: model weights are stored in fp32 (for optimizer precision), but our custom `Linear` layer casts them to `COMPUTE_DTYPE` during the forward pass. Embeddings are stored directly in `COMPUTE_DTYPE` to save memory. This gives us the same mixed-precision benefit as autocast but with full explicit control over what runs in which precision.
 
-Note: `float16` training automatically enables a `GradScaler` in `base_train.py` to prevent gradient underflow. SFT suppors this too but RL currently does not. Inference in fp16 works fine everywhere.
+Note: `float16` training automatically enables a `GradScaler` in `base_train.py` to prevent gradient underflow. SFT supports this too but RL currently does not. Inference in fp16 works fine everywhere.
 
 ## Guides
 

From d96558bcb0dc11b546bebff79bc0f56fa944c362 Mon Sep 17 00:00:00 2001
From: svlandeg
Date: Tue, 10 Mar 2026 09:57:30 +0100
Subject: [PATCH 2/3] fix heading, cf #622

---
 .claude/skills/read-arxiv-paper/SKILL.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.claude/skills/read-arxiv-paper/SKILL.md b/.claude/skills/read-arxiv-paper/SKILL.md
index 6a9cda7..0a1b131 100644
--- a/.claude/skills/read-arxiv-paper/SKILL.md
+++ b/.claude/skills/read-arxiv-paper/SKILL.md
@@ -33,7 +33,7 @@ Every latex source usually has an entrypoint, such as `main.tex` or something li
 
 Once you've found the entrypoint, Read the contents and then recurse through all other relevant source files to read the paper.
 
-#### Part 6: Report
+### Part 6: Report
 
 Once you've read the paper, produce a summary of the paper into a markdown file at `./knowledge/summary_{tag}.md`. Notice that 1) use the local knowledge directory here (it's easier for me to open and reference here), not in `~/.cache`, and 2) generate some reasonable `tag` like e.g. `conditional_memory` or whatever seems appropriate given the paper. Probably make sure that the tag doesn't exist yet so you're not overwriting files.
 

From 1052d25d454847a4bbf2cb85cbee250471535814 Mon Sep 17 00:00:00 2001
From: svlandeg
Date: Fri, 13 Mar 2026 13:46:16 +0100
Subject: [PATCH 3/3] we only need to wait 2h now!

---
 dev/LEADERBOARD.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dev/LEADERBOARD.md b/dev/LEADERBOARD.md
index 556ec3c..6fdeaa3 100644
--- a/dev/LEADERBOARD.md
+++ b/dev/LEADERBOARD.md
@@ -36,7 +36,7 @@ Note that:
 - `target-param-data-ratio=8.25` controls the training horizon, which is determined in the script by taking the number of non-embedding model parameters and simply multiplying by this number. The current optimal Tokens:Params ratio can be seen in the defaults of the `base_train.py` script (it is 10.5). 10.5 would produce the *compute optimal* model given the currently measured scaling laws. However, GPT-2 capability is currently somewhere in between a d24 and d26. So to reach it exactly, we want to either overtrain d24 or undertrain d26. In this particular example, I am choosing to slightly undertrain a d26. Note that odd depths (e.g. d25) are not super recommended to use because the math around the transformer sizing and its head dimensions doesn't come out neatly.
 - `--fp8` turns on fp8 training. If your GPU does not support fp8, you can leave this out and the code will simply train in bf16. bf16 is higher precision than fp8, so you can actually expect that you might be able to do fewer steps (lower the `target-param-data-ratio`) to achieve the same capability.
 
-Once you kick off the run, you wait ~3 hours and then at the end you'll see something like:
+Once you kick off the run, you wait ~2 hours and then at the end you'll see something like:
 
 ```
 wandb: Run summary:
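
A side note on the mixed-precision scheme described in the first patch's README hunk (fp32 master weights, a custom `Linear` that casts to `COMPUTE_DTYPE` in the forward pass, embeddings stored directly in `COMPUTE_DTYPE`): the idea is easy to sketch in plain PyTorch. The sketch below is illustrative only; `CastedLinear` and the `COMPUTE_DTYPE` wiring are assumptions for exposition, not nanochat's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumption: stands in for whatever NANOCHAT_DTYPE selects (bfloat16 or float16).
COMPUTE_DTYPE = torch.bfloat16

class CastedLinear(nn.Module):
    """Hypothetical sketch: fp32 master weights, cast to COMPUTE_DTYPE each forward."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Master weights stay in fp32 so optimizer updates keep full precision.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * in_features**-0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Cast on the fly; the matmul itself runs in the compute dtype.
        return F.linear(x.to(COMPUTE_DTYPE), self.weight.to(COMPUTE_DTYPE))

# Embeddings, by contrast, can live directly in COMPUTE_DTYPE to save memory:
embed = nn.Embedding(50304, 768).to(COMPUTE_DTYPE)
```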
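Likewise, the training-horizon rule in the third patch's LEADERBOARD.md hunk is a one-liner; a minimal sketch, assuming the script simply multiplies the non-embedding parameter count by the ratio (the 2.0B parameter count below is a made-up example, not a measured one):

```python
def training_horizon_tokens(non_embedding_params: int, target_param_data_ratio: float) -> int:
    # Tokens to train on = non-embedding params * ratio, per dev/LEADERBOARD.md.
    return int(non_embedding_params * target_param_data_ratio)

# Hypothetical model with 2.0B non-embedding params:
# ratio 10.5 -> 21.0B tokens (compute optimal); ratio 8.25 -> 16.5B tokens (undertrained).
print(training_horizon_tokens(2_000_000_000, 8.25))  # 16500000000
```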