From 013c5a2816d71e5f7e37842d1e0b85921507c380 Mon Sep 17 00:00:00 2001
From: Miyoung Choi
Date: Tue, 12 May 2026 15:12:23 -0700
Subject: [PATCH 1/3] docs: style fixes

---
 docs/get-started/quickstart.mdx               |  2 +-
 .../tutorials/first-network-policy.mdx        |  4 ++--
 docs/get-started/tutorials/github-sandbox.mdx | 12 +++++-----
 docs/get-started/tutorials/index.mdx          |  2 +-
 .../tutorials/inference-ollama.mdx            | 18 +++++++--------
 .../tutorials/local-inference-lmstudio.mdx    |  8 +++----
 docs/index.mdx                                |  4 ++--
 docs/kubernetes/access-control.mdx            |  6 ++---
 docs/kubernetes/ingress.mdx                   |  2 +-
 docs/kubernetes/managing-certificates.mdx     |  4 ++--
 docs/kubernetes/openshift.mdx                 |  6 ++---
 docs/kubernetes/setup.mdx                     | 22 +++++++++----------
 docs/observability/accessing-logs.mdx         |  2 +-
 docs/observability/logging.mdx                |  4 ++--
 docs/observability/ocsf-json-export.mdx       | 14 +++++++-----
 docs/observability/overview.mdx               |  8 +++----
 docs/reference/default-policy.mdx             |  2 +-
 docs/reference/gateway-auth.mdx               |  2 +-
 docs/reference/policy-schema.mdx              |  6 ++---
 docs/reference/sandbox-compute-drivers.mdx    |  2 +-
 docs/sandboxes/inference-routing.mdx          |  2 +-
 docs/sandboxes/manage-gateways.mdx            |  2 +-
 docs/sandboxes/manage-providers.mdx           |  2 +-
 docs/sandboxes/policies.mdx                   |  4 ++--
 docs/security/best-practices.mdx              | 12 +++++-----
 fern/fern.config.json                         |  2 +-
 26 files changed, 79 insertions(+), 75 deletions(-)

diff --git a/docs/get-started/quickstart.mdx b/docs/get-started/quickstart.mdx
index fcfbc945b..9c40eb024 100644
--- a/docs/get-started/quickstart.mdx
+++ b/docs/get-started/quickstart.mdx
@@ -36,7 +36,7 @@ If you prefer [uv](https://docs.astral.sh/uv/):
 uv tool install -U openshell
 ```
 
-After installing the CLI, run `openshell --help` in your terminal to see the full CLI reference.
+After installing the CLI, run `openshell --help` in your terminal to view the full CLI reference.
 
 You can also clone the [NVIDIA OpenShell GitHub repository](https://github.com/NVIDIA/OpenShell) and use the `/openshell-cli` skill to load the CLI reference into your agent.
 
diff --git a/docs/get-started/tutorials/first-network-policy.mdx b/docs/get-started/tutorials/first-network-policy.mdx
index effe7c445..3e4593308 100644
--- a/docs/get-started/tutorials/first-network-policy.mdx
+++ b/docs/get-started/tutorials/first-network-policy.mdx
@@ -4,7 +4,7 @@ title: "Write Your First Sandbox Network Policy"
 sidebar-title: "First Network Policy"
 slug: "get-started/tutorials/first-network-policy"
 
-description: "See how OpenShell network policies work by creating a sandbox, observing default-deny in action, and applying a fine-grained L7 read-only rule."
+description: "Learn how OpenShell network policies work by creating a sandbox, observing default-deny in action, and applying a fine-grained L7 read-only rule."
 keywords: "Generative AI, Cybersecurity, Tutorial, Policy, Network Policy, Sandbox, Security"
 ---
 
@@ -117,7 +117,7 @@ network_policies:
       - { path: /usr/bin/curl }
 ```
 
-The `filesystem_policy`, `landlock`, and `process` sections preserve the default sandbox settings. This is required because `policy set` replaces the entire policy. The `network_policies` section is the key part: `curl` may make GET, HEAD, and OPTIONS requests to `api.github.com` over HTTPS. Everything else is denied. The proxy auto-detects TLS on HTTPS endpoints and terminates it to inspect each HTTP request and enforce the `read-only` access preset at the method level.
+The `filesystem_policy`, `landlock`, and `process` sections preserve the default sandbox settings. This is required because `policy set` replaces the entire policy. The `network_policies` section is the key part: `curl` can make GET, HEAD, and OPTIONS requests to `api.github.com` over HTTPS. Everything else is denied. The proxy auto-detects TLS on HTTPS endpoints and terminates it to inspect each HTTP request and enforce the `read-only` access preset at the method level.
 
 Apply it:
 
diff --git a/docs/get-started/tutorials/github-sandbox.mdx b/docs/get-started/tutorials/github-sandbox.mdx
index 1f225fb60..63c895147 100644
--- a/docs/get-started/tutorials/github-sandbox.mdx
+++ b/docs/get-started/tutorials/github-sandbox.mdx
@@ -11,7 +11,7 @@ keywords: "Generative AI, Cybersecurity, Tutorial, GitHub, Sandbox, Policy, Clau
 
 This tutorial walks through an iterative sandbox policy workflow. You launch a sandbox, ask Claude Code to push code to GitHub, and observe the default network policy denying the request. You then diagnose the denial from your machine and from inside the sandbox, apply a policy update, and verify that the policy update to the sandbox takes effect.
 
-After completing this tutorial, you will have:
+After completing this tutorial, you have:
 
 - A running sandbox with Claude Code that can push to a GitHub repository.
 - A custom network policy that grants GitHub access for a specific repository.
@@ -33,8 +33,10 @@ This tutorial requires the following:
 
 This tutorial uses two terminals to demonstrate the iterative policy workflow:
 
-- **Terminal 1**: The sandbox terminal. You create the sandbox in this terminal by running `openshell sandbox create` and interact with Claude Code inside it.
-- **Terminal 2**: A terminal outside the sandbox on your machine. You use this terminal for viewing the sandbox logs with `openshell term` and applying an updated policy with `openshell policy set`.
+| Terminal | Purpose |
+|---|---|
+| Terminal 1 | The sandbox terminal. Create the sandbox with `openshell sandbox create` and interact with Claude Code inside it. |
+| Terminal 2 | A terminal outside the sandbox on your machine. View sandbox logs with `openshell term` and apply an updated policy with `openshell policy set`. |
 
 Each section below indicates which terminal to use.
 
@@ -131,7 +133,7 @@ The sandbox runs a proxy that enforces policies on outbound traffic. The
 `github_rest_api` policy allows GET requests (used to read the file) but blocks
 PUT/write requests to GitHub. This is a sandbox-level restriction, not a token
 issue. No matter what token you provide, pushes through the API
-will be blocked until the policy is updated.
+are blocked until you update the policy.
 
 Both perspectives confirm the same thing: the proxy is doing its job. The default policy is designed to be restrictive. To allow GitHub pushes, you need to update the network policy.
 
@@ -162,7 +164,7 @@ Refer to the following policy example to compare with the generated policy befor
 
 The following YAML shows a complete policy that extends the [default policy](/reference/default-policy) with GitHub access for a single repository. Replace `<OWNER>` with your GitHub organization or username and `<REPO>` with your repository name.
 
-The `filesystem_policy`, `landlock`, and `process` sections are static. They are read once at sandbox creation and cannot be changed by a hot-reload. They are included here for completeness so the file is self-contained, but only the `network_policies` section takes effect when you apply this to a running sandbox.
+The `filesystem_policy`, `landlock`, and `process` sections are static. OpenShell reads them at sandbox creation, and a hot reload cannot change them. They are included here for completeness so the file is self-contained, but only the `network_policies` section takes effect when you apply this to a running sandbox.
 
 ```yaml
 version: 1
diff --git a/docs/get-started/tutorials/index.mdx b/docs/get-started/tutorials/index.mdx
index f5a79b543..c03e924f7 100644
--- a/docs/get-started/tutorials/index.mdx
+++ b/docs/get-started/tutorials/index.mdx
@@ -29,6 +29,6 @@ Route inference through Ollama using cloud-hosted or local models, and verify it
 
 
 
-Route inference to a local LM Studio server via the OpenAI or Anthropic compatible APIs.
+Route inference to a local LM Studio server using the OpenAI-compatible or Anthropic-compatible APIs.
 
 
diff --git a/docs/get-started/tutorials/inference-ollama.mdx b/docs/get-started/tutorials/inference-ollama.mdx
index 4a16eee01..4f46b847e 100644
--- a/docs/get-started/tutorials/inference-ollama.mdx
+++ b/docs/get-started/tutorials/inference-ollama.mdx
@@ -8,12 +8,12 @@ description: "Run local and cloud models inside an OpenShell sandbox using the O
 keywords: "Generative AI, Cybersecurity, Tutorial, Inference Routing, Ollama, Local Inference, Sandbox"
 ---
 
-This tutorial covers two ways to use Ollama with OpenShell:
+This tutorial covers two ways of running Ollama with OpenShell:
 
-1. **Ollama sandbox (recommended)** — a self-contained sandbox with Ollama, Claude Code, and Codex pre-installed. One command to start.
-2. **Host-level Ollama** — run Ollama on the gateway host and route sandbox inference to it. Useful when you want a single Ollama instance shared across multiple sandboxes.
+1. Ollama sandbox. This is the recommended way to run Ollama: a self-contained sandbox with Ollama, Claude Code, and Codex pre-installed. One command starts it.
+2. Host-level Ollama. Run Ollama on the gateway host and route sandbox inference to it. Use this option when you want a single Ollama instance shared across multiple sandboxes.
 
-After completing this tutorial, you will know how to:
+After completing this tutorial, you know how to:
 
 - Launch the Ollama community sandbox for a batteries-included experience.
 - Use `ollama launch` to start coding agents inside a sandbox.
@@ -190,11 +190,11 @@ The response should be JSON from the model.
 
 Common issues and fixes:
 
-- **Ollama not reachable from sandbox** — Ollama must be bound to `0.0.0.0`, not `127.0.0.1`. This applies to host-level Ollama only; the community sandbox handles this automatically.
-- **`OPENAI_BASE_URL` wrong** — Use `http://host.openshell.internal:11434/v1`, not `localhost` or `127.0.0.1`.
-- **Model not found** — Run `ollama ps` to confirm the model is loaded. Run `ollama pull <model>` if needed.
-- **HTTPS vs HTTP** — Code inside sandboxes must call `https://inference.local`, not `http://`.
-- **AMD GPU driver issues** — Ollama v0.18+ requires ROCm 7 drivers for AMD GPUs. Update your drivers if you see GPU detection failures.
+- **Ollama not reachable from sandbox:** Ollama must be bound to `0.0.0.0`, not `127.0.0.1`. This applies to host-level Ollama only; the community sandbox handles this automatically.
+- **`OPENAI_BASE_URL` wrong:** Use `http://host.openshell.internal:11434/v1`, not `localhost` or `127.0.0.1`.
+- **Model not found:** Run `ollama ps` to confirm the model is loaded. Run `ollama pull <model>` if needed.
+- **HTTPS instead of HTTP:** Code inside sandboxes must call `https://inference.local`, not `http://`.
+- **AMD GPU driver issues:** Ollama v0.18+ requires ROCm 7 drivers for AMD GPUs. Update your drivers if you see GPU detection failures.
 
 Useful commands:
 
diff --git a/docs/get-started/tutorials/local-inference-lmstudio.mdx b/docs/get-started/tutorials/local-inference-lmstudio.mdx
index 2d1246459..7ce604009 100644
--- a/docs/get-started/tutorials/local-inference-lmstudio.mdx
+++ b/docs/get-started/tutorials/local-inference-lmstudio.mdx
@@ -15,7 +15,7 @@ The LM Studio server provides easy setup with both OpenAI and Anthropic compatib
 
 
 
-This tutorial will cover:
+This tutorial shows how to:
 
 - Expose a local inference server to OpenShell sandboxes.
 - Verify end-to-end inference from inside a sandbox.
@@ -54,11 +54,11 @@ lms daemon up
 ```
 
 Start the LM Studio local server from the Developer tab, and verify the OpenAI-compatible endpoint is enabled.
 
-LM Studio will listen to `127.0.0.1:1234` by default. For use with OpenShell, you'll need to configure LM Studio to listen on all interfaces (`0.0.0.0`).
+LM Studio listens on `127.0.0.1:1234` by default. For use with OpenShell, configure LM Studio to listen on all interfaces (`0.0.0.0`).
 
-If you're using the GUI, go to the Developer Tab, select Server Settings, then enable Serve on Local Network.
+If you use the GUI, go to the Developer tab, select Server Settings, then enable Serve on Local Network.
 
-If you're using llmster in headless mode, run `lms server start --bind 0.0.0.0`.
+If you use llmster in headless mode, run `lms server start --bind 0.0.0.0`.
 
 ## Test with a small model
 
diff --git a/docs/index.mdx b/docs/index.mdx
index 4d358a827..91aba803d 100644
--- a/docs/index.mdx
+++ b/docs/index.mdx
@@ -38,7 +38,7 @@ uncontrolled network activity.
 
 Install OpenShell and create your first sandbox in two commands.
 
{/*Terminal demo styles live in fern/main.css — inline
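
The `OPENAI_BASE_URL` guidance above is easy to sanity-check from inside a sandbox. The following is a minimal sketch, assuming the `host.openshell.internal:11434/v1` base URL named in the inference-ollama tutorial; the model name is a placeholder, so substitute whatever `ollama ps` reports as loaded:

```bash
# Minimal check that host-level Ollama is reachable from inside a sandbox.
# Assumes the base URL documented in the inference-ollama tutorial above.
export OPENAI_BASE_URL="http://host.openshell.internal:11434/v1"

# A JSON model list here means routing to the gateway host works.
curl -s "$OPENAI_BASE_URL/models"

# One round trip through the OpenAI-compatible chat endpoint.
# "qwen2.5:7b" is a placeholder model name; use a model you have pulled.
curl -s "$OPENAI_BASE_URL/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5:7b", "messages": [{"role": "user", "content": "Say hello."}]}'
```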