diff --git a/dev/LOG.md b/dev/LOG.md
index 5f6e1d7..2a94daa 100644
--- a/dev/LOG.md
+++ b/dev/LOG.md
@@ -4,6 +4,14 @@ A running summary documenting some experiments and findings. Started ~Jan 7 2026
 
 ---
 
+## 2026-01-15: Olmo pretraining mix (Negative Result)
+
+I attempted to train on the Olmo 3 pretraining dataset [allenai/dolma3_mix-6T](https://huggingface.co/datasets/allenai/dolma3_mix-6T) instead of FineWeb-edu. I ran into a number of [errors and issues](https://huggingface.co/datasets/allenai/dolma3_mix-6T/discussions/2) while downloading and processing the dataset, and then noticed some quality issues (e.g. some documents are extremely short, like just "5"). I managed to work around these with some sensible hacks (e.g. rejecting documents shorter than 100 characters), otherwise processed the dataset exactly as FineWeb, re-trained the tokenizer, and trained a d16 model. The CORE score decreased from 15.5 to 13.8, i.e. the result is quite a bit worse.
+
+I am still looking to try the [DCLM dataset](https://arxiv.org/abs/2406.11794), which according to the paper should be better than FineWeb-edu. I do have some concerns that the same group both prepared the DCLM dataset *and* introduced the CORE score, so I'm a bit hesitant in case there was some overfitting to a CORE-score-adjacent data distribution.
+
+Classifying as a negative result and reverting back to FineWeb-edu for now.
+
 ## 2026-01-13: Varlen Attention (Negative Result)
 
 Attempted to prevent attention from "leaking" across document boundaries using Flash Attention's `flash_attn_varlen_func`, similar to modded-nanogpt's approach.