An experiment in on-device, in-browser AI agents using WebMCP tools and the Chrome Prompt API's on-device AI models. Search, research, and create answers/artifacts around Nearform's articles and case studies entirely in the browser!
Three AI agents collaborate entirely in the browser — no server-side LLM calls:
- Coordinator: triages user questions and decides whether new research is needed
- Researcher: searches a knowledge base via a cross-origin WebMCP tool
- Writer: synthesises research into a streamed response with citations
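The coordinator → researcher → writer flow above can be sketched as a small pipeline. This is an illustrative sketch, not the project's actual code: the `prompt` callback stands in for a Chrome Prompt API session (e.g. `session.prompt`), and `search` stands in for the WebMCP search tool, both injected so the flow is visible without a browser.

```typescript
// Minimal sketch of the coordinator -> researcher -> writer pipeline.
// `Prompt` and `SearchTool` are illustrative stand-ins for an on-device
// LanguageModel session and the remote knowledge-base search tool.
type Prompt = (text: string) => Promise<string>;
type SearchTool = (query: string) => Promise<string[]>;

async function answer(question: string, prompt: Prompt, search: SearchTool): Promise<string> {
  // Coordinator: triage whether the question needs fresh research.
  const triage = await prompt(
    `Does answering "${question}" need a knowledge-base search? Reply YES or NO.`
  );
  let notes: string[] = [];
  if (triage.trim().toUpperCase().startsWith("YES")) {
    // Researcher: fetch supporting passages via the search tool.
    notes = await search(question);
  }
  // Writer: synthesise a cited answer from the research notes.
  const sources = notes.map((n, i) => `[${i + 1}] ${n}`).join("\n");
  return prompt(`Answer "${question}" with citations, using these sources:\n${sources}`);
}
```

In the real app each role runs against its own on-device model session and the writer streams its output; the sketch collapses that to plain promises.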
Tools are shared across origins using an iframe-based transport bridge from @mcp-b/transports -- an early way to experiment, since WebMCP doesn't have real cross-origin browser transport support yet. A hidden iframe hosts a remote tool server; the parent page discovers and calls tools over JSON-RPC. Local tools (notepad) and remote tools (search) are unified in a single registry.
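The "single registry" idea can be sketched roughly as below. This is a hedged sketch, not @mcp-b/transports' real API: the registry, the `send` callback (which would wrap `postMessage` to the hidden iframe in the real app), and the request shape are illustrative, though `tools/call` is the standard MCP JSON-RPC method for invoking a tool.

```typescript
// Sketch: one registry that unifies local tools (plain functions) with
// remote tools reached over a JSON-RPC bridge. Names are illustrative.
type ToolFn = (args: Record<string, unknown>) => Promise<unknown>;

interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: unknown;
}

let nextId = 0;
// Build an MCP-style "tools/call" request for the remote side.
function makeToolCall(name: string, args: Record<string, unknown>): JsonRpcRequest {
  return { jsonrpc: "2.0", id: ++nextId, method: "tools/call", params: { name, arguments: args } };
}

class ToolRegistry {
  private tools = new Map<string, ToolFn>();

  // Local tools (e.g. notepad) register a plain async function.
  registerLocal(name: string, fn: ToolFn): void {
    this.tools.set(name, fn);
  }

  // Remote tools (e.g. search) register a sender that ships a JSON-RPC
  // request over the bridge -- postMessage to the iframe in the real app.
  registerRemote(name: string, send: (req: JsonRpcRequest) => Promise<unknown>): void {
    this.tools.set(name, (args) => send(makeToolCall(name, args)));
  }

  // Callers don't care whether a tool is local or remote.
  call(name: string, args: Record<string, unknown>): Promise<unknown> {
    const fn = this.tools.get(name);
    if (!fn) throw new Error(`unknown tool: ${name}`);
    return fn(args);
  }
}
```

The point of the design is the last method: agents invoke every tool through one `call` interface, and only the registration step knows whether the work happens in-page or across the iframe boundary.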
Some other neat things:
- Use the tools: in the Tools area, you can click on any registered tool and directly invoke it.
- See agent data: in the Agent Activity pane, click into agent actions to see full data, prompts, etc.
- Chrome with the Prompt API / Language Model API enabled (provides the on-device LLM)
- vector-search-web provides the search_nearform_knowledge tool (runs on http://localhost:4600 in local dev and https://nearform.github.io/vector-search-web/ in production)
Run the development server:

```sh
npm run dev
# Navigate to http://127.0.0.1:4610/public
```

All code in this project is licensed under the MIT License — see the LICENSE file for details.