Add Ollama, LM Studio providers and Tauri desktop app #16
kpulik wants to merge 22 commits into mskayyali:main
Conversation
- Add Ollama and LM Studio as local AI providers (no API key needed)
- Auto-detect installed models via dropdown (fetched from the local server)
- Proxy local requests through Next.js API routes to bypass CORS
- Add Tauri v2 scaffold for native desktop builds (.dmg/.msi/.AppImage)
- Conditional static export for Tauri builds (browser mode unchanged); see the sketch after this list
- Update README with setup instructions for browser, desktop, and local providers

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
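The conditional static export could look roughly like the sketch below, assuming a `TAURI_BUILD` environment flag; the flag name and config shape are assumptions, not necessarily what this PR does:

```ts
// next.config.mjs (sketch): static export only for Tauri bundles, so the
// browser build keeps its API routes untouched.
// TAURI_BUILD is an assumed flag name; the PR may key off something else.
const isTauri = process.env.TAURI_BUILD === "1"

/** @type {import('next').NextConfig} */
const nextConfig = {
  ...(isTauri ? { output: "export" } : {}),
}

export default nextConfig
```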
Pull request overview
Adds first-class support for local AI providers (Ollama, LM Studio) and introduces a Tauri v2 desktop scaffold so nodepad can run as a native app, alongside new proxy API routes intended to work around browser CORS limitations.
Changes:
- Added `ollama`/`lmstudio` provider types with dynamic local model discovery and local-provider header behavior.
- Updated settings UI to support local providers (server URL, refreshable model dropdown, hidden API key field).
- Added Tauri v2 scaffold, conditional Next static export, and CSP updates, plus new `/api/local-models` and `/api/local-chat` proxy routes (sketched below).
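A proxy route along these lines might look like the following sketch. This is a hedged illustration rather than the PR's actual code; the `url` query parameter and the Ollama `/api/tags` endpoint choice are assumptions:

```ts
// app/api/local-models/route.ts (sketch): server-side proxy so the browser
// never calls the local AI server cross-origin.
import { NextResponse } from "next/server"

export async function GET(request: Request) {
  // "url" is an assumed query parameter carrying the local server address,
  // e.g. http://localhost:11434 for Ollama.
  const target = new URL(request.url).searchParams.get("url")
  if (!target) {
    return NextResponse.json({ error: "missing url" }, { status: 400 })
  }
  const res = await fetch(`${target}/api/tags`) // Ollama's model-list endpoint
  return NextResponse.json(await res.json())
}
```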
Reviewed changes
Copilot reviewed 16 out of 34 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| src-tauri/tauri.conf.json | Adds Tauri app configuration (build commands, CSP, bundle icons). |
| src-tauri/src/main.rs | Tauri entrypoint delegating to app_lib::run(). |
| src-tauri/src/lib.rs | Tauri builder setup (log plugin in debug) and app run. |
| src-tauri/icons/StoreLogo.png | Adds Tauri icon asset. |
| src-tauri/icons/Square89x89Logo.png | Adds Tauri icon asset. |
| src-tauri/icons/Square71x71Logo.png | Adds Tauri icon asset. |
| src-tauri/icons/Square44x44Logo.png | Adds Tauri icon asset. |
| src-tauri/icons/Square30x30Logo.png | Adds Tauri icon asset. |
| src-tauri/icons/Square150x150Logo.png | Adds Tauri icon asset. |
| src-tauri/icons/Square142x142Logo.png | Adds Tauri icon asset. |
| src-tauri/icons/Square107x107Logo.png | Adds Tauri icon asset. |
| src-tauri/icons/32x32.png | Adds Tauri icon asset. |
| src-tauri/icons/128x128@2x.png | Adds Tauri icon asset. |
| src-tauri/icons/128x128.png | Adds Tauri icon asset. |
| src-tauri/Cargo.toml | Adds Rust crate config and Tauri dependencies. |
| src-tauri/capabilities/default.json | Adds default Tauri capabilities configuration. |
| src-tauri/build.rs | Adds Tauri build script. |
| README.md | Documents browser vs desktop setup and local provider usage. |
| package.json | Adds Tauri scripts + dependencies. |
| package-lock.json | Locks new Tauri dependencies. |
| next.config.mjs | Adds conditional output: "export" for Tauri builds + CSP updates. |
| lib/ai-settings.ts | Adds provider types, local model fetcher, local-provider helpers, header/baseUrl logic. |
| lib/ai-ghost.ts | Routes local-provider ghost completions via /api/local-chat. |
| lib/ai-enrich.ts | Routes local-provider enrichment completions via /api/local-chat. |
| components/project-sidebar.tsx | Adds local provider UI (server URL, dynamic models, refresh, API key hiding). |
| app/opengraph-image.tsx | Forces static generation (helpful for export builds). |
| app/api/local-models/route.ts | Adds proxy route to fetch model lists from local servers. |
| app/api/local-chat/route.ts | Adds proxy route to send chat completions to local servers. |
| .gitignore | Ignores out/ and src-tauri/target/. |
- Fix IPv6 loopback: [::1] → ::1 in ALLOWED_HOSTS (the URL parser strips brackets)
- Detect the Tauri runtime via __TAURI_INTERNALS__; bypass proxy routes and call local servers directly (a static export has no API routes)
- Remove the sentinel "local" API key; use an empty string plus isLocalProvider()
- Add SSRF guards: proxy routes check that the request Host is localhost and restrict target ports to 11434/1234 only
- Scope CSP localhost entries to dev only; production uses the /api/* proxy (covered by 'self')

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
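The runtime detection mentioned above can be as small as this sketch; in Tauri v2 a `__TAURI_INTERNALS__` object is injected into the webview's `window`, so its presence distinguishes the desktop app from a browser tab (the helper name is mine, not the PR's):

```ts
// True when running inside the Tauri webview (desktop build). A static
// export ships no /api/* routes, so Tauri callers must hit the local AI
// server directly instead of going through the proxy.
export function isTauriRuntime(): boolean {
  return typeof window !== "undefined" && "__TAURI_INTERNALS__" in window
}

// Usage sketch: direct URL in Tauri, proxy route in the browser.
// const endpoint = isTauriRuntime() ? "http://localhost:11434/api/tags" : "/api/local-models"
```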
Pull request overview
Copilot reviewed 16 out of 34 changed files in this pull request and generated 8 comments.
- Replace the Host-header SSRF guard with an explicit NODE_ENV === "production" check (Host is client-controlled and spoofable)
- Remove isLocalHost entirely — the production guard is sufficient
- Fix stale loading state: clear localModelsLoading in .finally() and when switching away from local providers
- Trim customBaseUrl in getBaseUrl() and on save so whitespace-only values fall back to the preset
- Remove unsafe-eval from the Tauri production CSP (only needed in dev)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
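The replacement guard can be a one-liner, as in this sketch; unlike the Host header, `NODE_ENV` is set by the server's own environment and cannot be spoofed by a client (the function name is an assumption):

```ts
// Refuse to proxy on hosted deployments: the local-provider proxy only makes
// sense when the Next.js server runs on the user's own machine.
export function assertLocalProxyAllowed(): void {
  if (process.env.NODE_ENV === "production") {
    throw new Error("Local-provider proxy routes are disabled in production")
  }
}
```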
Pull request overview
Copilot reviewed 16 out of 34 changed files in this pull request and generated 6 comments.
- Debounce the customBaseUrl effect (500 ms) to avoid per-keystroke fetches
- Clear modelId when switching to a local provider (it is populated by the fetch)
- Clarify in the README and CSP comments that local providers work in dev and the Tauri desktop app, not in hosted production deployments

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
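Wired into the settings component, the debounce might look like this sketch; `fetchLocalModels()` and `isLocalProvider()` are named in the PR, but their signatures, the import path, and the hook shape here are assumptions:

```ts
import { useEffect, useState } from "react"
// Assumed import path and signatures for the helpers named in the PR.
import { isLocalProvider, fetchLocalModels, type AIProvider } from "@/lib/ai-settings"

// Fetch the model list 500 ms after the user stops typing the server URL,
// instead of once per keystroke.
function useDebouncedLocalModels(provider: AIProvider, customBaseUrl: string): string[] {
  const [models, setModels] = useState<string[]>([])
  useEffect(() => {
    if (!isLocalProvider(provider)) return
    const id = setTimeout(() => {
      fetchLocalModels(provider, customBaseUrl).then(setModels).catch(() => setModels([]))
    }, 500)
    return () => clearTimeout(id) // cancel the pending fetch on each change
  }, [provider, customBaseUrl])
  return models
}
```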
Pull request overview
Copilot reviewed 16 out of 34 changed files in this pull request and generated 1 comment.
Comments suppressed due to low confidence (1)
lib/ai-settings.ts:307
`loadAIConfig()` now allows local providers without an API key, but it can still return a config with an empty `modelId` (e.g., if the user saves immediately after switching to a local provider, before the model fetch completes, or leaves it blank). Downstream calls then send `model: ""`, which many OpenAI-compatible servers reject. Consider returning `null` (or a more explicit error) when `modelId` is falsy, or enforcing a non-empty `modelId` at save time for local providers.

```ts
export function loadAIConfig(): AIConfig | null {
  const s = loadSettings()
  // Local providers don't need an API key
  if (!s.apiKey && !isLocalProvider(s.provider)) return null
  const models = getModelsForProvider(s.provider)
  const model = models.find(m => m.id === s.modelId)
  const modelId = model?.id ?? models[0]?.id ?? s.modelId ?? DEFAULT_MODEL_ID
  const supportsGrounding =
    (s.provider === "openrouter" || s.provider === "openai") &&
    s.webGrounding &&
    (model?.supportsGrounding ?? false)
  return { apiKey: s.apiKey, modelId, supportsGrounding, provider: s.provider, customBaseUrl: s.customBaseUrl }
}
```
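A minimal sketch of the guard suggested here (not the PR's actual fix): local providers have no static model list, so an empty `modelId` means the user never picked one, and bailing out beats sending `model: ""`:

```ts
// Inside loadAIConfig(), after loadSettings():
if (isLocalProvider(s.provider) && !s.modelId) return null
```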
Pull request overview
Copilot reviewed 17 out of 36 changed files in this pull request and generated 5 comments.
Comments suppressed due to low confidence (1)
lib/ai-settings.ts:307
For local providers, `getModelsForProvider()` returns an empty array. If the user saves settings with an empty/unknown `modelId` (e.g. they just switched to Ollama/LM Studio and the fetch hasn't populated yet), `loadAIConfig()` falls back to `DEFAULT_MODEL_ID` (`openai/gpt-4o`). That value will then be sent to the local server and will reliably fail. Consider special-casing local providers so `modelId` never falls back to a cloud default (e.g. return `null` / throw a clear error when `s.modelId` is empty, or keep `modelId` as-is for locals and let the UI enforce selection).

```ts
export function loadAIConfig(): AIConfig | null {
  const s = loadSettings()
  // Local providers don't need an API key
  if (!s.apiKey && !isLocalProvider(s.provider)) return null
  const models = getModelsForProvider(s.provider)
  const model = models.find(m => m.id === s.modelId)
  const modelId = model?.id ?? models[0]?.id ?? s.modelId ?? DEFAULT_MODEL_ID
  const supportsGrounding =
    (s.provider === "openrouter" || s.provider === "openai") &&
    s.webGrounding &&
    (model?.supportsGrounding ?? false)
  return { apiKey: s.apiKey, modelId, supportsGrounding, provider: s.provider, customBaseUrl: s.customBaseUrl }
}
```
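And a sketch of the other option mentioned: keep `modelId` as-is for local providers so it can never fall back to a cloud default the local server would reject (assumes the surrounding `loadAIConfig()` shown above):

```ts
// Local providers: trust the saved modelId and let the settings UI enforce
// a selection; cloud providers keep the existing fallback chain.
const modelId = isLocalProvider(s.provider)
  ? s.modelId
  : model?.id ?? models[0]?.id ?? s.modelId ?? DEFAULT_MODEL_ID
```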
Pull request overview
Copilot reviewed 17 out of 36 changed files in this pull request and generated 5 comments.
Pull request overview
Copilot reviewed 18 out of 37 changed files in this pull request and generated 4 comments.
Pull request overview
Copilot reviewed 18 out of 37 changed files in this pull request and generated 3 comments.
Pull request overview
Copilot reviewed 18 out of 37 changed files in this pull request and generated 2 comments.
Pull request overview
Copilot reviewed 18 out of 37 changed files in this pull request and generated 1 comment.
Pull request overview
Copilot reviewed 18 out of 37 changed files in this pull request and generated 1 comment.
Pull request overview
Copilot reviewed 18 out of 37 changed files in this pull request and generated 3 comments.
Pull request overview
Copilot reviewed 18 out of 37 changed files in this pull request and generated no new comments.
Ready for final review/merge.
- Add getSafeContentTypeConfig helper to lib/content-types.ts
- Update TileCard, GraphArea, and other UI components with robust fallbacks
- Add proactive contentType validation to the AI enrichment parser
- Document the crash fix in BACKLOG.md as issue mskayyali#16
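The helper named in this commit could look roughly like the sketch below; the `CONTENT_TYPES` registry, the `ContentTypeConfig` type, and the `"note"` fallback key are all assumptions for illustration:

```ts
// Assumed exports from lib/content-types.ts.
import { CONTENT_TYPES, type ContentTypeConfig } from "@/lib/content-types"

// Never crash on an unknown or missing contentType from AI enrichment:
// fall back to a known default config instead.
export function getSafeContentTypeConfig(type: string | undefined): ContentTypeConfig {
  return (type && CONTENT_TYPES[type]) || CONTENT_TYPES["note"] // fallback key assumed
}
```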
Summary
- Native desktop builds via Tauri (`npm run tauri:dev` / `npm run tauri:build`), with no CORS restrictions for local providers
- New API routes (`/api/local-models`, `/api/local-chat`) proxy requests to local servers so everything works in the browser without extra configuration

Changed files
- `lib/ai-settings.ts`: `ollama`/`lmstudio` provider types, dynamic model fetching via `fetchLocalModels()`, `isLocalProvider()` helper, skip auth headers for local providers
- `components/project-sidebar.tsx`: local provider UI (server URL, dynamic model dropdown, refresh, API key hiding)
- `lib/ai-enrich.ts`: routes local-provider requests via the `/api/local-chat` proxy
- `lib/ai-ghost.ts`: routes local-provider requests via the `/api/local-chat` proxy
- `app/api/local-models/route.ts`: proxy route for fetching model lists from local servers
- `app/api/local-chat/route.ts`: proxy route for chat completions against local servers
- `next.config.mjs`: conditional `output: "export"` for Tauri builds
- `src-tauri/`: Tauri v2 scaffold
- `package.json`: `@tauri-apps/cli`, `@tauri-apps/api`, `tauri:dev`/`tauri:build` scripts
- `README.md`: setup instructions for browser, desktop, and local providers

How to use
- Browser (unchanged): `npm run dev` → localhost:3000
- Desktop app: `npm run tauri:dev` (dev) or `npm run tauri:build` (distributable)
- Local providers: Settings → select Ollama or LM Studio → models appear automatically

Test plan
- `npm run tauri:dev` opens a native window with the app working
- `npm run tauri:build` produces a distributable binary