keep original cpu/gpu extra

svlandeg 2026-03-27 17:46:01 +01:00
parent 98d3a26e45
commit c6381cabc2
4 changed files with 12 additions and 11 deletions

@@ -37,18 +37,20 @@ uv sync --extra cpu # (or) Use for CPU-only / MPS
 source .venv/bin/activate
 ```
+If you plan on running `scripts.chat_web` to chat with your model via a web UI, add the extra "web":
+```bash
+uv sync --extra gpu --extra web # Use for CUDA (A100/H100/etc.)
+uv sync --extra cpu --extra web # (or) Use for CPU-only / MPS
+source .venv/bin/activate
+```
 For development (adds pytest, matplotlib, ipykernel, transformers, etc.):
 ```bash
 uv sync --extra gpu --group dev
 ```
-If you plan on running `scripts.chat_web`:
-```bash
-uv sync --extra web
-```
 ### Reproduce and talk to GPT-2
 The most fun you can have is to train your own GPT-2 and talk to it. The entire pipeline to do so is contained in the single file [runs/speedrun.sh](runs/speedrun.sh), which is designed to be run on an 8XH100 GPU node. Boot up a new 8XH100 GPU box from your favorite provider (e.g. I use and like [Lambda](https://lambda.ai/service/gpu-cloud)), and kick off the training script:
@@ -57,10 +59,9 @@ The most fun you can have is to train your own GPT-2 and talk to it. The entire
 bash runs/speedrun.sh
 ```
-You may wish to do so in a screen session as this will take ~3 hours to run. Once it's done, you can talk to it via the ChatGPT-like web UI. Make sure again that your local uv virtual environment is active (run `source .venv/bin/activate`) and has the `web` extra installed, and then serve it:
+You may wish to do so in a screen session as this will take ~3 hours to run. Once it's done, you can talk to it via the ChatGPT-like web UI. Make sure again that your local uv virtual environment (with the "web" extra) is active (run `source .venv/bin/activate`), and serve it:
 ```bash
-uv sync --extra web
 python -m scripts.chat_web
 ```

@@ -62,5 +62,5 @@ python -m scripts.chat_sft \
 # python -m scripts.chat_cli -p "What is the capital of France?"
 # Chat with the model over a pretty WebUI ChatGPT style
-# uv sync --extra web
+# uv sync --extra cpu --extra web
 # python -m scripts.chat_web

@@ -89,7 +89,7 @@ torchrun --standalone --nproc_per_node=8 -m scripts.chat_eval -- -i sft
 # python -m scripts.chat_cli -p "Why is the sky blue?"
 # even better, chat with your model over a pretty WebUI ChatGPT style
-# uv sync --extra web
+# uv sync --extra gpu --extra web
 # python -m scripts.chat_web
 # -----------------------------------------------------------------------------

@@ -43,7 +43,7 @@ try:
     from fastapi.middleware.cors import CORSMiddleware
     from fastapi.responses import StreamingResponse, HTMLResponse, FileResponse
 except ImportError as exc:
-    raise SystemExit("Missing web dependencies, install with: uv sync --extra web") from exc
+    raise SystemExit("Missing web dependencies, install the extra 'web'") from exc
 from pydantic import BaseModel
 from typing import List, Optional, AsyncGenerator
 from dataclasses import dataclass
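The guarded import changed in the last hunk follows a common optional-dependency pattern: attempt the import, and on failure exit with an actionable message naming the extra to install. A minimal, generalized sketch of that pattern (the `require_extra` helper is hypothetical, not part of the repo; the stdlib `json` module just stands in for a real optional dependency):

```python
import importlib


def require_extra(module_name: str, extra: str):
    """Import an optional dependency, exiting with a hint naming the uv extra."""
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        # Surface a short, actionable message instead of a raw traceback.
        raise SystemExit(f"Missing dependencies, install the extra '{extra}'") from exc


# A module that is present imports normally:
json_mod = require_extra("json", "web")
print(json_mod.dumps({"ok": True}))
```

If the module is missing, the user sees only the one-line hint (e.g. "Missing dependencies, install the extra 'web'") rather than an `ImportError` traceback.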