
Qdrant Vector Store Knowledge Base (n8n)

An n8n workflow that ingests documents into Qdrant and exposes a retrieval-augmented generation (RAG) chat agent powered by OpenAI embeddings and chat models. The included example configures a hotel concierge that answers only from the knowledge base.

Repository: github.com/coder-msk/Qdrant-Vector-Store-Knowledge-Base

What this workflow does

  1. Ingestion branch — A Form Trigger collects uploaded files (configured for CSV). Text is split with a Recursive Character Text Splitter, loaded via the Default Data Loader, embedded with OpenAI Embeddings, and inserted into a Qdrant collection (qdrant_database in the export).
  2. Chat branch — A Chat Trigger sends user messages to an AI Agent that uses an OpenAI chat model (gpt-4.1-mini in the export) and a second Qdrant Vector Store node in retrieve-as-tool mode (topK 3), so answers are grounded in the stored vectors.
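To build intuition for the ingestion branch, the splitter's behavior can be approximated in plain Python. This is an illustrative sketch only, not the n8n node's actual implementation; the chunk size and separator order are assumptions you would tune in the node's settings:

```python
def recursive_split(text, chunk_size=400, separators=("\n\n", "\n", " ")):
    """Greedy recursive character splitting, similar in spirit to the
    Recursive Character Text Splitter node (illustrative only)."""
    if len(text) <= chunk_size:
        return [text] if text else []
    for sep in separators:
        if sep not in text:
            continue
        chunks, current = [], ""
        for part in text.split(sep):
            candidate = current + sep + part if current else part
            if len(candidate) <= chunk_size:
                current = candidate
            else:
                if current:
                    chunks.append(current)
                current = ""
                # a single piece may still be too large: recurse on it
                if len(part) > chunk_size:
                    chunks.extend(recursive_split(part, chunk_size, separators))
                else:
                    current = part
        if current:
            chunks.append(current)
        return chunks
    # no separator present at all: hard cut at chunk_size
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
```

Each resulting chunk is what gets embedded and written to Qdrant as one point, so chunk size directly affects retrieval granularity.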

Requirements

  • n8n (Cloud or self-hosted) with LangChain / AI nodes enabled
  • Qdrant instance (e.g. Qdrant Cloud) and API credentials
  • OpenAI API key for embeddings and chat
  • A Qdrant collection compatible with your embedding size (the demo collection uses 1536-dimensional vectors with cosine distance, typical for OpenAI embeddings)
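If your cluster does not yet have a compatible collection, one can be created via Qdrant's REST API. A minimal sketch; the cluster URL is a placeholder and the collection name matches the export's qdrant_database:

```shell
# Create a 1536-dimensional, cosine-distance collection
# (replace YOUR-CLUSTER-URL and set QDRANT_API_KEY in your environment)
curl -X PUT "https://YOUR-CLUSTER-URL:6333/collections/qdrant_database" \
  -H "api-key: $QDRANT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"vectors": {"size": 1536, "distance": "Cosine"}}'
```

The vector size must match your embedding model's output dimension, or inserts from the workflow will fail.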

Import the workflow

  1. In n8n, use Import workflow and select Qdrant Vectore Store Knowledge Base.json.
  2. Create or attach credentials: Qdrant API, OpenAI API.
  3. Confirm the collection name on both Qdrant Vector Store nodes matches your cluster.
  4. Activate the workflow and open the form and chat URLs from the respective trigger nodes.

Screenshots

Full RAG workflow (editor and successful executions)

The canvas shows both branches: ingestion (form → splitter → loader → embeddings → Qdrant insert) and chat (chat trigger → AI Agent with OpenAI model + Qdrant as retrieval tool). The Executions panel shows completed runs.


Ingestion: form UI and indexing pipeline

The “Upload file to database” form (CSV) next to the n8n nodes that embed and write into Qdrant.


Qdrant Cloud: collection and vector count

Collections view for the cluster, showing the qdrant_database collection status, approximate points, and vector config (1536, cosine).


Chat: concierge grounded in the knowledge base

The embedded chat UI answering a hotel booking question using the agent wired to Qdrant retrieval.


Customization

  • Change the AI Agent system prompt and tool description in n8n for domains other than hotels.
  • Adjust topK, text splitter, and loader settings for your file types and chunking strategy.
  • Replace or extend triggers (e.g. webhook, schedule) while keeping the same Qdrant + embeddings wiring.
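For intuition when tuning topK: retrieval is cosine-similarity ranking over the stored vectors, and topK simply caps how many matches the agent sees. A toy pure-Python sketch (2-dimensional vectors and payload strings are invented for readability; the real collection uses 1536 dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, points, k=3):
    """Return payloads of the k points most similar to the query vector,
    mimicking what the retrieve-as-tool node hands to the AI Agent."""
    ranked = sorted(points, key=lambda p: cosine_similarity(query, p[1]),
                    reverse=True)
    return [payload for payload, _ in ranked[:k]]
```

A larger topK gives the agent more context per question at the cost of more tokens and potentially noisier grounding.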

License

Add a LICENSE file in the repository if you want to specify terms for reuse of the workflow and assets.

