Commit Graph

34 Commits

Manmohan Sharma
2b6b7186d3
feat(ui): cleaner input layout + sanitize model-output artifacts
ChatInput: textarea on top, inline tool pills (Think, Search) on the left and send button on the right — single rounded pod, no more bolted-on feel. Smaller pill buttons with subtle ring instead of heavy borders.

MessageBubble: add sanitizeModelOutput() that strips training-artifact leaks: <b>/<i>/<strong>/<em> HTML tags, stray standalone '<' markers, leading 'Answer:'/'Response:' labels, and placeholder image markdown. Applied before tool-marker parsing so cleaned text also feeds the <think> card renderer.
2026-04-22 15:31:00 -07:00
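The sanitizeModelOutput() described above lives in TypeScript (MessageBubble.tsx); a minimal sketch of the same stripping rules, illustrated here in Python — the function name matches the commit, but the exact patterns are assumptions:

```python
import re

def sanitize_model_output(text: str) -> str:
    """Strip common training-artifact leaks from raw model text.

    Illustrative port of the TypeScript sanitizeModelOutput();
    the actual patterns in MessageBubble.tsx may differ.
    """
    # Drop simple inline-formatting HTML tags the model sometimes emits.
    text = re.sub(r"</?(?:b|i|strong|em)>", "", text)
    # Remove placeholder image markdown like ![alt](url).
    text = re.sub(r"!\[[^\]]*\]\([^)]*\)", "", text)
    # Drop a leading 'Answer:' / 'Response:' label.
    text = re.sub(r"^\s*(?:Answer|Response):\s*", "", text)
    # Remove stray standalone '<' markers sitting on their own line.
    text = re.sub(r"(?m)^\s*<\s*$", "", text)
    return text.strip()
```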
Manmohan Sharma
43ad35f73b
feat(ui): remove model selector dropdown - single model only
There's only one deployed model (samosaChaat). Drop the 'nanochat · base' select dropdown from the Sidebar and replace the header model badge with a static 'samosaChaat' label. Removes unused MODEL_OPTIONS / setModel / ChevronDown imports.
2026-04-22 15:27:38 -07:00
Manmohan Sharma
215e8bd8c3
feat(ui): add Search toggle that forces web_search every message
New Globe/'Search' toggle next to the Brain/'Think' button. When ON, every message sent pushes force_web_search=true through: frontend -> chat-api -> Modal. Modal bypasses the heuristic classifier and always pre-seeds the assistant turn with a real Tavily-grounded tool call + result. Toggle is independent of Think — use either or both. Classifier still runs when toggle is OFF, so auto-detection of 'current president' / 'latest weather' / etc still works without any user action.
2026-04-22 15:20:45 -07:00
Manmohan Sharma
4628d53d67
fix(tools): force web_search on tool-worthy queries + strip orphan markers in UI
Adds modal/_query_classifier.py with regex patterns covering time-sensitive queries (current/present/latest/today/weather/CEO/president/stock/news/sports/etc). Modal serve.py classifies each user message and, when it matches, pre-seeds the assistant turn with a real Tavily-backed tool call + result — so 'whos the present president' now triggers web_search the same as 'current president'. Also tightens the post-injection break to fire on any leaked tool marker.

UI: MessageBubble.tsx now strips orphan close-tags (<|output_end|> without an open), dedupes consecutive identical tool-result blocks, and removes fragment markers from text segments so they don't leak into the message body.
2026-04-22 15:01:07 -07:00
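The classifier above is heuristic and regex-based; a hedged sketch of the idea — the pattern list and function name are illustrative assumptions, not the contents of the actual modal/_query_classifier.py:

```python
import re

# Illustrative patterns for time-sensitive queries; the real list is
# broader (weather, CEO, president, stock, news, sports, ...).
_TIME_SENSITIVE = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"\b(current|present|latest|today|now)\b",
        r"\b(weather|forecast)\b",
        r"\b(president|prime minister|ceo)\b",
        r"\b(stock|price|news|score)\b",
    )
]

def needs_web_search(message: str) -> bool:
    """Return True when the user message looks time-sensitive.

    Matching any one pattern is enough — which is why the misspelled
    'whos the present president' still routes to web_search.
    """
    return any(p.search(message) for p in _TIME_SENSITIVE)
```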
Manmohan Sharma
f70be25212
fix(tools): enable Tavily include_answer and fix UI overflow
2026-04-22 14:20:47 -07:00
Manmohan
3ab89e7890
feat: deploy d24-sft-r6 with full reasoning mode + live tool use (Tavily)
Model R6 (97% pass rate on 33-probe eval, val_bpb 0.2635):
- modal/serve.py + modal/_tools.py: tool-aware streaming with
  TavilySearchBackend auto-detect, python_start/end state machine,
  output_start/end forcing; mount tavily secret
- modal/serve.py: MODEL_TAG=d24-sft-r6, model path points at new SFT r6
- services/chat-api/routes/messages.py: accept thinking_mode flag,
  inject samosaChaat system prompt (direct or <think> variant) into
  first user message before streaming to Modal
- services/frontend/components/chat/ChatInput.tsx: Brain toggle
  'Think' button next to send; when active, model uses think mode
- services/frontend/components/chat/ChatWindow.tsx: track
  thinkingMode state, pass through to API body as thinking_mode
- services/frontend/components/chat/MessageBubble.tsx: parse and
  render <think>...</think> as collapsible italic blocks;
  <|python_start|>...<|python_end|> as tool-call cards with icons
  per tool name; <|output_start|>...<|output_end|> as result cards
  with expandable JSON
- nanochat/tools.py: TavilySearchBackend class + auto-detect
- nanochat/ui.html: legacy UI reasoning toggle (kept for parity)

Tool execution verified live: query -> web_search via Tavily ->
Macron returned with grounded answer.
2026-04-22 13:43:43 -07:00
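MessageBubble's marker rendering described above is TypeScript; the segmentation idea behind it can be sketched in Python — segment kind names are assumptions:

```python
import re

# One regex over the three marker pairs the commit above renders
# as collapsible think blocks, tool-call cards, and result cards.
_MARKERS = re.compile(
    r"<think>(.*?)</think>"
    r"|<\|python_start\|>(.*?)<\|python_end\|>"
    r"|<\|output_start\|>(.*?)<\|output_end\|>",
    re.DOTALL,
)

def split_segments(text: str):
    """Yield (kind, body) pairs: 'text', 'think', 'tool_call', 'tool_result'."""
    pos = 0
    for m in _MARKERS.finditer(text):
        if m.start() > pos:
            yield ("text", text[pos:m.start()])
        if m.group(1) is not None:
            yield ("think", m.group(1))
        elif m.group(2) is not None:
            yield ("tool_call", m.group(2))
        else:
            yield ("tool_result", m.group(3))
        pos = m.end()
    if pos < len(text):
        yield ("text", text[pos:])
```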
Manmohan
94bec5f2a0
fix(frontend): assistant messages fill the chat column (#42)
Assistant responses were capped at max-w-[75%] of the column, so long
replies broke into a narrow block with dead space on the right. Cap
only applies to user bubbles now; assistant messages use w-full of the
max-w-3xl content column, matching how ChatGPT/Claude render replies.
Also bumps message vertical spacing from mb-3 to mb-5.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-16 20:23:56 -04:00
Manmohan
748d2e561c
fix(frontend): widen nav pill, default to dark theme (#41)
LandingNav was max-w-3xl which forced "How it works" and "Try
samosaChaat" to wrap on two lines. Bumps the pill to 1100px,
tightens the link padding, demotes the @ handle to lg+, and adds
whitespace-nowrap to every chip so nothing wraps again. Default
theme is now dark — the no-flash init script adds .dark unless the
user has explicitly stored 'light', and the useTheme hook seeds
from the same logic.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-16 20:08:55 -04:00
Manmohan
1d2a76eec4
feat: deploy d24 SFT + polished UI redesign with dark mode (#39)
* feat(inference): deploy d24 SFT weights to Modal

Repoint Modal inference app from the broken d20 checkpoint to our own
ManmohanSharma/nanochat-d24 SFT step 484. Rewrites the standalone model
as an inference-only port of nanochat/gpt.py so the modern architecture
(smear gate, per-layer value embeddings, ve_gate, backout, sliding
window attention via SDPA, rotary base 100000, padded vocab, logit
softcap) loads cleanly from the checkpoint. Tokenizer loads the pickled
tiktoken encoding directly so special tokens end up at their true IDs
(32759-32767), and the stop check uses that set instead of hardcoded
0-8. GPU bumped to L4 for headroom. HF token sourced from the
'huggingface' Modal secret.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(frontend): polished redesign with serif display + dark mode

Lifts the craft level of the landing and chat UI without changing the
desi identity. Adds Fraunces for display headlines, a floating pill
LandingNav, a saffron-glow hero with a large serif headline and black
pill CTAs, and three gradient-tiled feature cards with inline SVG
glyphs replacing the emoji cards. The chat empty state is now a serif
greeting with pill-chip prompt starters, and ChatInput is a single
rounded pod so the send button sits inside the input (fixes the
misaligned floating button). Adds a class-based dark mode across the
chat surfaces with a sun/moon toggle in the sidebar footer, powered by
a small useTheme hook and a no-flash init script in the root layout.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(frontend): add ESLint config so CI lint step passes

next lint was failing with an interactive prompt because the repo had
no ESLint config. Adds a minimal next/core-web-vitals extends and
drops the now-unloadable @typescript-eslint/no-explicit-any disable
directive in the stream proxy by narrowing the body type to unknown.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-16 19:55:16 -04:00
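The stop check mentioned in the d24 deploy above (a special-token set instead of hardcoded IDs 0-8) reduces to something like this sketch — names are assumptions, and the real code derives the set from the pickled tiktoken encoding rather than hardcoding the range:

```python
# Hypothetical illustration: stop on the tokenizer's true special-token
# IDs (32759-32767 in the d24 vocab) instead of assuming low IDs 0-8.
SPECIAL_TOKEN_IDS = set(range(32759, 32768))

def should_stop(token_id: int) -> bool:
    """Stop generation on any special token, wherever it lives in the vocab."""
    return token_id in SPECIAL_TOKEN_IDS
```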
Manmohan Sharma
16f40ceb54
fix(frontend): pass assistantMsgId directly to fix stale closure bug
2026-04-16 15:15:53 -07:00
Manmohan Sharma
a873b6ad46
fix: stream directly from chat-api, bypass Next.js proxy
Replaced the double-proxy (browser→Next.js→chat-api→Modal) with
direct streaming (browser→nginx→chat-api→Modal). Added nginx route
for /api/conversations → chat-api. Inlined SSE parsing in ChatWindow
instead of useSSE hook going through /api/chat/stream.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 15:08:46 -07:00
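The inlined SSE parsing mentioned above is TypeScript in ChatWindow; the core of consuming an SSE stream looks roughly like this Python sketch (the '[DONE]' sentinel is an assumption borrowed from common SSE chat APIs):

```python
def parse_sse(chunk: str):
    """Yield data payloads from a raw SSE text chunk.

    Simplified sketch: real SSE parsing must also handle multi-line
    data fields and events split across network chunks.
    """
    for line in chunk.splitlines():
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload and payload != "[DONE]":
                yield payload
```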
Manmohan Sharma
df0584b861
fix(chat-api): detect Modal URL by domain not path suffix
2026-04-16 14:59:20 -07:00
Manmohan
2dd914a69d
Merge pull request #35 from manmohan659/fix/stream-body-format
fix(frontend): type fix for proxyUpstream
2026-04-16 17:53:02 -04:00
Manmohan Sharma
7ecd8a928c
fix(frontend): use any type for proxyUpstream body param
2026-04-16 14:52:50 -07:00
Manmohan
15bb2324e2
Merge pull request #34 from manmohan659/fix/stream-body-format
fix(frontend): add maxTokens to StreamBody type
2026-04-16 17:51:15 -04:00
Manmohan Sharma
fe34250900
fix(frontend): add maxTokens to StreamBody interface
2026-04-16 14:51:03 -07:00
Manmohan
c5d4d17650
Merge pull request #33 from manmohan659/fix/stream-body-format
fix(frontend): correct body format for chat-api messages
2026-04-16 17:49:33 -04:00
Manmohan Sharma
faf4810696
fix(frontend): send correct body format to chat-api messages endpoint
Chat-api expects {content, temperature, max_tokens, top_k} but frontend
was sending {messages: [...]}. Now extracts last user message as content
when proxying to /api/conversations/:id/messages.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 14:49:22 -07:00
Manmohan
129553b215
Merge pull request #31 from manmohan659/fix/chat-api-fk
fix(chat-api): defer users FK to avoid startup crash
2026-04-16 17:41:05 -04:00
Manmohan Sharma
e8222011d9
fix(chat-api): use_alter on users FK to avoid metadata resolution error
Chat-api doesn't define the users model (owned by auth service), so
SQLAlchemy can't resolve the FK. use_alter=True defers the constraint
to ALTER TABLE, avoiding the NoReferencedTableError at startup.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 14:40:45 -07:00
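The use_alter fix above can be illustrated with a minimal SQLAlchemy table definition — table and constraint names here are hypothetical, not the actual chat-api models:

```python
from sqlalchemy import Column, ForeignKey, MetaData, String, Table

metadata = MetaData()

# chat-api does not define the users table (it is owned by the auth
# service), so defer the FK to an ALTER TABLE instead of resolving it
# at metadata time — avoiding NoReferencedTableError at startup.
conversations = Table(
    "conversations",
    metadata,
    Column("id", String, primary_key=True),
    Column(
        "user_id",
        String,
        ForeignKey("users.id", use_alter=True, name="fk_conversations_user"),
    ),
)
```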
Manmohan Sharma
6d3e1f0afd
fix(chat-api): support Modal inference URL in inference client
The inference client now auto-detects if the URL already ends with
/generate (Modal's endpoint URL pattern) and skips appending the path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 14:36:36 -07:00
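The URL auto-detection above (later revised to domain-based detection in a follow-up commit) amounts to a small helper like this sketch — the function name is hypothetical:

```python
def resolve_generate_url(base_url: str) -> str:
    """Return the full /generate endpoint, tolerating Modal-style URLs
    that already include the path."""
    trimmed = base_url.rstrip("/")
    if trimmed.endswith("/generate"):
        return trimmed
    return trimmed + "/generate"
```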
Manmohan Sharma
36debd8502
fix(frontend): redesign landing and chat pages for warm, premium look
Landing page: warm gradient background, illustrations flanking hero text
(180-220px), new tagline, features section with 3 cards, footer updated
to "Built by Manmohan", gold CTA and nav buttons, toran moved to hero.

Chat page: removed "Chat Completions" header, added samosa logo and
bigger suggestion cards to empty state, sidebar empty state message,
input area top border/shadow, more prominent new chat button.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 14:03:55 -07:00
Manmohan
b5fbebb63f
Merge pull request #26 from manmohan659/fix/missing-models
fix: add missing SQLAlchemy models to auth and chat-api
2026-04-16 16:50:22 -04:00
Manmohan Sharma
8a95a76522
fix: add missing models/ dirs to auth and chat-api services
Root .gitignore had `models/` which matched both ML weights AND
SQLAlchemy model dirs. Changed to `/models/` (root only).
Added auth/src/models/ (User) and chat-api/src/models/ (Conversation, Message).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 13:50:08 -07:00
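The .gitignore fix above hinges on anchoring: an unanchored `models/` pattern matches a `models/` directory at any depth, while a leading slash pins it to the repo root. Roughly:

```
# Before: ignored BOTH root-level ML weights and every
# services/*/src/models/ SQLAlchemy package
models/

# After: only the root-level weights directory is ignored
/models/
```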
Manmohan Sharma
2061f8848b
fix(docker): add structlog + prometheus deps to auth and chat-api Dockerfiles
Auth service was crash-looping with ModuleNotFoundError for
prometheus_fastapi_instrumentator. Chat-api was also missing it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 13:46:53 -07:00
Manmohan Sharma
aa7a907063
feat(frontend): wire frontend to real backend auth + chat-api services
Remove NextAuth and replace with token-based auth against the backend
auth service (OAuth + JWT). The frontend now redirects login to
/api/auth/google and /api/auth/github (proxied by nginx to the auth
service), captures the JWT from the redirect query param, and uses it
for all API calls.

Key changes:
- Remove next-auth dependency and all NextAuth config/routes
- Add lib/auth-client.ts (JWT token storage + auth headers)
- Add hooks/useAuth.ts (client-side auth state + token capture)
- Rewrite middleware.ts to pass-through (client-side auth only)
- Login page uses plain <a> links to /api/auth/{provider}
- Chat page captures access_token from OAuth redirect
- Zustand store fetches conversations from real chat-api via JWT
- API routes proxy /api/conversations/* to chat-api with auth
- chat/stream route supports conversationId + auth header forwarding
- useSSE hook accepts auth headers for authenticated streaming
- Sidebar loads conversations from API, supports delete
- Landing page (Hero, LandingNav) uses useAuth instead of useSession
- Add .env.production.example and scripts/generate-jwt-keys.sh

Mock echo fallback preserved when CHAT_API_URL is not set.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 13:21:38 -07:00
Manmohan Sharma
07892c0f00
fix(inference): regenerate uv.lock after structlog/prometheus deps added
The observability PR added structlog and prometheus-fastapi-instrumentator
to inference pyproject.toml but did not regenerate uv.lock, causing
Docker build to fail with --locked flag.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 12:49:05 -07:00
Manmohan Sharma
aa0818aae2
feat(observability): Prometheus + Grafana + Loki stack for samosaChaat (#9)
Replaces the helm/observability scaffold with a real monitoring stack
wired into the samosaChaat platform.

Helm chart (helm/observability/)
- Chart.yaml declares kube-prometheus-stack (~62.0) and loki-stack
  (~2.10) as subchart dependencies.
- values.yaml configures Prometheus (15d retention, 50Gi PVC,
  ServiceMonitor + rule selector on app.kubernetes.io/part-of:
  samosachaat), Alertmanager (10Gi PVC), Grafana (OAuth-only via
  GitHub + Google, local login disabled, Prometheus + Loki datasources,
  dashboards auto-provisioned from a ConfigMap, email + Slack contact
  points with a critical route to Slack), Loki (50Gi, 30d retention,
  tsdb schema), and Promtail (JSON pipeline that lifts level / service
  / trace_id / user_id into labels, scrape config with pod labels).
- Alert rules: HighCPU, HighMemory, DiskSpaceLow, High5xxRate,
  InferenceServiceDown, HighP99Latency.
- templates/grafana-dashboards-configmap.yaml renders every file under
  dashboards/ into a single grafana_dashboard=1 ConfigMap.
- dashboards/node-health.json, app-performance.json, inference.json -
  fully-formed Grafana dashboards with Prometheus datasource variable,
  templated app selector, thresholded gauges, and LogQL-ready labels.

Scraping (helm/samosachaat/templates/servicemonitor.yaml)
- ServiceMonitor CRs for auth / chat-api / inference that Prometheus
  picks up via the part-of=samosachaat selector; scrapes /metrics
  every 15s and replaces the app label so dashboards line up.

Application instrumentation
- services/{auth,chat-api,inference} each depend on
  prometheus-fastapi-instrumentator and expose /metrics (request count,
  latency histograms, in-progress gauges).
- services/auth/src/logging_setup.py and
  services/inference/src/logging_setup.py mirror the canonical
  chat-api implementation - structlog JSON with service, trace_id,
  user_id context injection.
- configure_logging() is called at create_app() in auth and inference;
  inference's main.py now uses structlog via get_logger() instead of
  logging.getLogger.
- log_level setting added to auth + inference config (LOG_LEVEL env).

Docs
- contracts/logging-standard.md defines the required JSON fields,
  Python (structlog) + Node.js (pino) implementations, LogQL examples
  for cross-service queries, and the x-trace-id propagation contract.

Closes #9

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-16 12:29:16 -07:00
Manmohan
1e2fc09ca6
Merge pull request #17 from manmohan659/feat/chat-api-service
feat(chat-api): conversation orchestration + SSE streaming proxy (#6)
2026-04-16 14:57:10 -04:00
Manmohan Sharma
8153a4fadf
feat(chat-api): conversation orchestration + SSE streaming proxy (#6)
- FastAPI service that manages conversations and messages in PostgreSQL
  (SQLAlchemy 2.0 async + asyncpg) and streams assistant responses back
  to the client via sse-starlette, forwarding the inference service SSE
  contract unchanged.
- Auth guard validates every request against the auth service
  /auth/validate endpoint (X-Internal-API-Key) and caches results in an
  in-process TTL cache (5 min, 1024 entries) to absorb request bursts.
- Every query filters by authenticated user_id; cross-user access
  returns 404. Message send flow auto-titles the first message,
  persists the streamed assistant response after the client disconnects,
  and records token_count + inference_time_ms.
- /api/models{,/swap} proxies the inference admin surface; swap
  requires is_admin on the validated user.
- Structured JSON logging via structlog with trace_id + user_id
  ContextVars attached to every log line.
- Test suite (pytest + aiosqlite + respx) covers CRUD, user scoping,
  streaming SSE persistence, regenerate, model proxy admin gate,
  and the stream proxy error path. 16/16 passing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-16 11:49:51 -07:00
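The auth-guard cache described above (5-minute TTL, 1024 entries, in-process) can be sketched with the standard library — this is an illustration of the idea, not the actual chat-api code:

```python
import time
from collections import OrderedDict

class TTLCache:
    """Tiny in-process TTL cache with insertion-order eviction."""

    def __init__(self, maxsize: int = 1024, ttl: float = 300.0):
        self.maxsize = maxsize
        self.ttl = ttl
        self._data = OrderedDict()  # key -> (expiry, value)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        expires, value = item
        if expires < time.monotonic():
            del self._data[key]  # lazily drop expired entries
            return None
        return value

    def set(self, key, value):
        if key in self._data:
            del self._data[key]
        elif len(self._data) >= self.maxsize:
            self._data.popitem(last=False)  # evict the oldest insertion
        self._data[key] = (time.monotonic() + self.ttl, value)
```

Caching validated tokens this way absorbs request bursts without a round-trip to the auth service's /auth/validate endpoint on every call.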
Manmohan Sharma
4b4aca642a
feat(auth): OAuth2 + JWT auth service with Alembic migrations (#5 #7)
- Alembic async migrations: users, conversations, messages, is_favorited
- FastAPI auth service: Google + GitHub OAuth, RS256 JWT, refresh cookie
- /auth/me, /auth/refresh, /auth/validate (service-to-service)
- rate limiting 10/min on OAuth routes, CORS locked to FRONTEND_URL

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-16 11:47:00 -07:00
Manmohan Sharma
634be4080b
feat(frontend): Next.js 14 frontend service for samosaChaat (#2)
Build services/frontend/ replacing the legacy nanochat/ui.html single-file UI.
Landing, login, and chat pages ported with full design system: Devanagari +
Great Vibes hero, samosa/chai/toran SVG animations, gold/cream palette.

- App Router pages: / (hero + floating illustrations), /login (split-screen
  OAuth with mandala motif), /chat (260px collapsible sidebar, suggestion
  chips, markdown + code-copy, auto-expanding input, slash commands)
- SSE streaming via useSSE hook and /api/chat/stream BFF route (proxies to
  CHAT_API_URL when set, falls back to mock echo for local dev)
- NextAuth.js v5 with Google + GitHub providers; middleware gates /chat/*
- Zustand store with localStorage persistence for conversations/settings
- Tailwind theme carries all ui.html tokens + keyframes (pendulum, float,
  wobble, steamFloat, steamType); SVG assets componentized under components/svg
- Multi-stage node:20-alpine Dockerfile with Next standalone output

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-16 11:26:57 -07:00
Manmohan Sharma
577771b890
extract standalone inference service
2026-04-16 11:19:18 -07:00
Manmohan Sharma
957f66181d
scaffold monorepo platform layout
2026-04-16 11:06:29 -07:00