## Problem
When running SFT with a small device batch size (≤ 8), a fully-masked micro-batch
produces a NaN loss from step 1, permanently corrupting the gradients. This happens
when a micro-batch contains only 'User' tokens (all targets = -1), which is
especially common with small batch sizes on consumer GPUs.
Root cause: `torch.nn.functional.cross_entropy` with `reduction='mean'` and
`ignore_index=-1` returns NaN when every label is -1, because the mean divides by
the number of non-ignored targets, which is zero.
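A minimal reproduction of the failure mode (tensor shapes here are illustrative, not taken from the training code):

```python
import torch
import torch.nn.functional as F

# A micro-batch where every target is the ignore_index.
logits = torch.randn(4, 10)               # (tokens, vocab)
targets = torch.full((4,), -1)            # all positions masked out

loss = F.cross_entropy(logits, targets, ignore_index=-1, reduction="mean")
print(loss)  # nan: zero valid targets means the mean divides by zero
```

Once this NaN reaches `backward()`, every parameter gradient becomes NaN and the model cannot recover.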
## Solution
Added validation in the training loop to detect and skip fully-masked batches:
- Check `(y != -1).any()` before computing loss
- Skip `backward()` for batches with no valid targets (zero gradient contribution)
- Track skipped batches and warn user if >5% in first 100 steps
- Log skipped batches as loss=0 for transparency
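The guard can be sketched as follows (variable names `y` and `logits` and the helper name are illustrative, not the exact training-loop code):

```python
import torch
import torch.nn.functional as F

def sft_micro_batch_loss(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Loss for one micro-batch, skipping fully-masked ones.

    logits: (tokens, vocab) model outputs; y: (tokens,) targets, -1 = masked.
    """
    if not (y != -1).any():
        # No valid targets: returning a detached zero keeps the logged loss
        # finite, and the caller can skip backward() for this micro-batch.
        return torch.zeros((), device=logits.device)
    return F.cross_entropy(logits, y, ignore_index=-1, reduction="mean")
```

Because the skipped micro-batch contributes no gradient, it is equivalent to dropping it from the effective batch.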
## Testing
- Added comprehensive test suite (test_sft_masked_batches.py)
- Tests cover: fully masked, partially masked, and unmasked batches
- Documents `cross_entropy` behavior with `ignore_index=-1`
- Validates the fix logic
## Impact
- Fixes #590: NaN loss with small batch sizes
- No performance impact for normal batches
- Helps users on consumer GPUs (RTX 3060, etc.)
- Prevents silent gradient corruption
Resolves #590
SDPA fallback now respects sliding window during single-token KV-cache
decode by slicing K/V to the last (window + 1) tokens.
Also simplifies the mask building for chunked inference so that it properly
applies the sliding window in that path as well.
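A sketch of the slicing idea, assuming K/V caches of shape `(batch, heads, seq, head_dim)` and a hypothetical `window` parameter (names are not taken from the actual code):

```python
import torch
import torch.nn.functional as F

def sdpa_decode_step(q, k_cache, v_cache, window: int):
    """Single-token decode under a sliding window via the SDPA fallback.

    q: (B, H, 1, D); k_cache/v_cache: (B, H, T, D). The new token may attend
    to itself plus the previous `window` positions, so slicing the cache to
    the last (window + 1) entries is equivalent to applying the full
    sliding-window mask.
    """
    k = k_cache[:, :, -(window + 1):, :]
    v = v_cache[:, :, -(window + 1):, :]
    # A single query token attends to every sliced key, so no mask is needed.
    return F.scaled_dot_product_attention(q, k, v)
```

Slicing instead of masking also shrinks the attention computation to `window + 1` keys per step.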
Fixes #452
Co-Authored-By: Kartik Vashishta <kartikv776@gmail.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* test: add engine generation tests for expected invariants
- test_seed_reproducibility
- test_temperature_zero_determinism
- test_max_tokens_respected
- test_num_samples_count
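The seed-reproducibility invariant can be illustrated with a toy sampler (the real tests exercise the engine's generation API; this standalone sketch only shows the shape of the check):

```python
import torch

def sample_tokens(logits, num_tokens: int, seed: int, temperature: float = 1.0):
    """Toy stand-in for engine generation: draw num_tokens ids from fixed logits."""
    gen = torch.Generator().manual_seed(seed)
    probs = torch.softmax(logits / temperature, dim=-1)
    return [torch.multinomial(probs, 1, generator=gen).item()
            for _ in range(num_tokens)]

logits = torch.randn(32)
# Same seed -> identical token sequence; exactly num_tokens tokens come back.
assert sample_tokens(logits, 8, seed=123) == sample_tokens(logits, 8, seed=123)
```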
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* Fix temperature test
* add test for seed variation in sampling
Add test for seed variation in sampling with temperature > 0.
* Rename test for clarity
* Shorten assert msg
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
Previously, when generating multiple samples (num_samples > 1), the first
token after prefill was sampled once and broadcast to all rows, causing
all samples to start identically. Now the prefill logits are expanded to
num_samples and sampled independently for each row.
Also simplified the generation loop by moving the forward pass to the end
of the loop, eliminating the first_iteration flag and if/else branching.
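The fix can be sketched as follows (shapes and the helper name are illustrative; the real code samples inside the generation loop):

```python
import torch

def sample_first_tokens(prefill_logits: torch.Tensor, num_samples: int,
                        temperature: float = 1.0, seed: int = 0) -> torch.Tensor:
    """Sample the first post-prefill token independently for each row.

    prefill_logits: (1, vocab) logits at the last prefill position. The old
    behavior sampled once and broadcast the token to all rows; expanding to
    (num_samples, vocab) first gives each row its own independent draw.
    """
    gen = torch.Generator().manual_seed(seed)
    logits = prefill_logits.expand(num_samples, -1)     # (num_samples, vocab)
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, 1, generator=gen)   # one token per row
```

With the expansion in place, rows diverge from the very first sampled token instead of only after later steps.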
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Performance varies by machine and load, making hard assertions flaky.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>