Joined on 2024-05-31
tacit synced commits to refs/pull/59/merge at tacit/nanochat from mirror 2025-11-14 01:22:14 +00:00
9a71d13688 typo oops
7b7fd0fe71 thank you Sophie for your help with nanochat
c6abcdfe3a big change: add pretraining resumption logic so that checkpoints can now be approximately resumed and training can continue. This is useful for very long runs where you don't want the anxiety of the run crashing for some reason; alternatively, it's a way to recover training in the event of a loss spike. This should have been there in v0, but it's ok. The resumption is approximate to keep complexity and bloat under control, though we may want to change that in the future. To use it, set --save_every to the step interval at which checkpoints are written, then pass --resume_from_step to resume optimization from a given step (usage sketch below). Only base model training (pretraining) supports this at the moment, which is ok because midtraining is comparatively much faster.
91f09ccd0d minor fix comment in engine
Compare 6 commits »
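A minimal usage sketch for the resumption flags described in c6abcdfe3a. The torchrun launcher and the scripts.base_train module path are assumptions about the repo layout; only the --save_every and --resume_from_step flags come from the commit message itself.

  # Pretrain, writing an approximate checkpoint every 1000 optimization steps
  # (the launcher and module path here are assumed, not taken from the commit).
  torchrun --standalone --nproc_per_node=8 -m scripts.base_train -- --save_every=1000

  # After a crash or a loss spike, resume optimization from the checkpoint written at step 4000.
  torchrun --standalone --nproc_per_node=8 -m scripts.base_train -- --save_every=1000 --resume_from_step=4000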
tacit synced commits to refs/pull/93/merge at tacit/nanochat from mirror 2025-11-14 01:22:14 +00:00
tacit synced commits to refs/pull/3/merge at tacit/nanochat from mirror 2025-11-14 01:22:14 +00:00
tacit synced commits to refs/pull/258/merge at tacit/nanochat from mirror 2025-11-14 01:22:14 +00:00
tacit synced commits to refs/pull/40/merge at tacit/nanochat from mirror 2025-11-14 01:22:14 +00:00
tacit synced commits to refs/pull/275/merge at tacit/nanochat from mirror 2025-11-14 01:22:14 +00:00
tacit synced commits to refs/pull/53/merge at tacit/nanochat from mirror 2025-11-14 01:22:14 +00:00
tacit synced commits to refs/pull/159/merge at tacit/nanochat from mirror 2025-11-14 01:22:13 +00:00
tacit synced commits to refs/pull/151/merge at tacit/nanochat from mirror 2025-11-14 01:22:13 +00:00
tacit synced commits to refs/pull/161/merge at tacit/nanochat from mirror 2025-11-14 01:22:13 +00:00
tacit synced commits to refs/pull/204/merge at tacit/nanochat from mirror 2025-11-14 01:22:13 +00:00
tacit synced commits to refs/pull/15/merge at tacit/nanochat from mirror 2025-11-14 01:22:13 +00:00
tacit synced commits to refs/pull/256/merge at tacit/nanochat from mirror 2025-11-13 17:12:14 +00:00
tacit synced commits to refs/pull/32/merge at tacit/nanochat from mirror 2025-11-13 17:12:14 +00:00
tacit synced commits to refs/pull/90/merge at tacit/nanochat from mirror 2025-11-13 17:12:14 +00:00
tacit synced commits to refs/pull/255/merge at tacit/nanochat from mirror 2025-11-13 17:12:14 +00:00
tacit synced and deleted reference refs/tags/refs/pull/286/merge at tacit/nanochat from mirror 2025-11-13 17:12:13 +00:00
tacit synced commits to master at tacit/nanochat from mirror 2025-11-13 17:12:13 +00:00
9a71d13688 typo oops
7b7fd0fe71 thank you Sophie for your help with nanochat
c6abcdfe3a big change: add pretraining resumption logic so that checkpoints can now be approximately resumed and training can continue (full commit message above)
91f09ccd0d minor fix comment in engine
adb5d4a16c uv.lock has to change since we removed numpy in the other commit (lockfile sketch below)
Compare 5 commits »
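A small sketch of how uv.lock would typically be regenerated after dropping a dependency, as in adb5d4a16c. These are standard uv commands; only the numpy removal itself comes from the commit message.

  # Remove the dependency from pyproject.toml and update uv.lock in one step ...
  uv remove numpy
  # ... or, if pyproject.toml was already edited by hand, just re-resolve the lockfile.
  uv lock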
tacit synced commits to refs/pull/201/merge at tacit/nanochat from mirror 2025-11-13 17:12:13 +00:00
tacit synced commits to refs/pull/252/merge at tacit/nanochat from mirror 2025-11-13 17:12:13 +00:00