Meridian - Graph-Powered Conversational AI


✨ Introduction

Meridian is an open-source, graph-based platform for building, visualizing, and interacting with complex AI workflows. Instead of traditional linear chats, Meridian uses a visual canvas where you can connect different AI models, data sources, structured tools, and logic blocks to create powerful and dynamic conversational agents.

This graph-based approach allows for sophisticated context management, branching conversations, advanced execution patterns like parallel model querying and conditional routing, and richer tool-assisted workflows. It provides both a powerful visual graph for building workflows and a clean, feature-rich chat interface for interacting with them.

main-canvas-view

main-chat-view

🌟 Key Features

  • Visual Graph Canvas: At its core, Meridian provides an interactive canvas where you can build, manage, and visualize AI workflows as interconnected nodes.

  • Modular Node System:

    • Input Nodes: Provide context from various sources, including plain text (Prompt), local files (Attachment), and connected repositories (GitHub and GitLab).

    key-features-input-nodes

    • Generator Nodes: The processing units of the graph.
      • Text-to-Text: A standard Large Language Model (LLM) call.
      • Parallelization: Executes a prompt against multiple LLMs simultaneously and uses an aggregator model to synthesize the results into a single, comprehensive answer.
      • Routing: Dynamically selects the next node or model based on the input, enabling conditional logic in your workflows.

    key-features-generator-nodes

  • Integrated Chat & Graph Experience:

    • A feature-rich chat interface that serves as a user-friendly view of the graph's execution.
    • The ability to create complex branching conversations that are naturally represented and manageable in the graph.
    • Rich tool call rendering, artifact embeds, structured follow-up questions, and clearer waiting states when a workflow needs user input.
  • Rich Content & Tooling:

    • Full Markdown support for text formatting.
    • LaTeX rendering for mathematical and scientific notation.
    • Syntax highlighting for over 200 languages in code blocks.
    • AI-powered Mermaid.js diagram generation with validation and retry support for visualizing data and processes.
    • Sandboxed Python code execution with persisted artifacts rendered inline in chat.
    • A dedicated Visualise tool for Mermaid, SVG, and HTML outputs.
    • Structured Ask User tool support so runs can pause for clarification and resume cleanly.
    • Deep GitHub and GitLab integration to use code from repositories as context for the AI.

    key-features-rich-content-formatting

  • Execution & Orchestration Engine:

    • Run entire graphs or specific sub-sections (e.g., all nodes upstream or downstream from a selected point).
    • A visual execution plan that shows the sequence of node processing in real-time.
    • Safer execution with persistent tool call history, stronger lifecycle handling, and sandbox-backed code runs.
  • Prompt Improver Workflow:

    • Audit prompt nodes across structured quality dimensions.
    • Generate targeted rewrites, review changes individually, and apply approved edits back to the graph.
    • Ask clarifying questions during optimization when the model needs more context.
  • Flexible Model Backend:

    • Powered by OpenRouter.ai, providing access to a vast array of proprietary and open-source models (from OpenAI, Anthropic, Google, Mistral, and more).
    • Granular control over model parameters on both global and per-canvas levels.
    • Dedicated model controls for image generation, prompt improvement, and visual generation workflows.
  • Enterprise-Ready Foundation:

    • Secure authentication with support for OAuth (GitHub, Google) and standard username/password.
    • Persistent and robust data storage using PostgreSQL for structured data, Neo4j for the graph engine, and Redis for caching and runtime support.
    • Cost and token usage tracking for each model call, providing full transparency.
    • Stronger provider and repository safety, including safer auth/header handling and stricter repository path protections.
    • Monitoring and Error Tracking: Optional integration with Sentry for real-time performance monitoring and error tracking in both frontend and backend services.

Tip

A detailed overview of the features is available in the Features.md file. The latest release notes are in docs/changelogs/Update-1.4.0.md.

🛠️ Technologies Used

πŸ—οΈ Production Deployment

Meridian offers multiple deployment options to suit different needs and environments. Choose the approach that best fits your infrastructure and requirements.

Prerequisites

  • Docker and Docker Compose installed on your machine
  • yq (from Mike Farah) for TOML configuration processing
  • Git (for cloning the repository)
  • Docker daemon access for the sandbox execution service if you want sandboxed code execution enabled
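
Before running the deployment scripts, it can help to confirm these tools are on your PATH. A minimal sketch (the tool names match the list above; `check_tool` is just a local helper for this example, not part of the repository):

```shell
# Report which of the required tools are installed.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}
check_tool docker
check_tool git
check_tool yq
```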

Deployment Options

Option 1: Quick Start with Pre-built Images (Recommended)

Use pre-built images from GitHub Container Registry for the fastest deployment.

  1. Clone the repository:

    git clone https://github.com/MathisVerstrepen/Meridian.git
    cd Meridian/docker
  2. Create your configuration:

    cp config.example.toml config.toml

    Edit config.toml with your production settings. See Configuration Guide for details.

  3. Deploy with pre-built images:

    chmod +x run.sh
    ./run.sh prod -d
  4. Access the application: Open http://localhost:3000 (or your configured port) in your web browser.

./run.sh prod now also prepares the sandbox worker image used by code execution and generated artifact workflows.

Option 2: Build from Source

Build images locally for customization or when pre-built images aren't suitable.

  1. Clone and configure:

    git clone https://github.com/MathisVerstrepen/Meridian.git
    cd Meridian/docker
    cp config.example.toml config.toml
    # Edit config.toml with your settings
  2. Deploy with local builds:

    chmod +x run.sh
    ./run.sh build -d
  3. Force rebuild (if needed):

    ./run.sh build --force-rebuild -d

Build mode also builds the dedicated sandbox worker image.

Essential Configuration

Before deploying, you must configure these critical settings in your config.toml:

Required Settings

[api]
# Get your API key from https://openrouter.ai/
MASTER_OPEN_ROUTER_API_KEY = "sk-or-v1-your-api-key-here"

# Generate secure secrets with: python -c "import os; print(os.urandom(32).hex())"
BACKEND_SECRET = "your-64-character-hex-secret"
JWT_SECRET_KEY = "your-64-character-hex-secret"

[ui]
NUXT_SESSION_PASSWORD = "your-64-character-hex-secret"

[database]
POSTGRES_PASSWORD = "your-secure-database-password"

[neo4j]
NEO4J_PASSWORD = "your-secure-neo4j-password"

config.example.toml also includes a [sandbox] section with safe defaults for the sandbox manager, execution limits, and artifact limits. Keep that section present if you want code execution features enabled.
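
The secret fields above can be filled in one pass. A minimal sketch, assuming python3 is on your PATH (`gen_secret` is a local helper for this example, not part of the repository); paste the output into the matching sections of config.toml:

```shell
# Print a fresh 64-character hex secret for each field that requires one.
gen_secret() {
  python3 -c "import os; print(os.urandom(32).hex())"
}
echo "BACKEND_SECRET = \"$(gen_secret)\""
echo "JWT_SECRET_KEY = \"$(gen_secret)\""
echo "NUXT_SESSION_PASSWORD = \"$(gen_secret)\""
```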

Optional: Sentry for Monitoring

To enable performance monitoring and error tracking, provide your Sentry DSN. If left empty, Sentry will be disabled.

[sentry]
SENTRY_DSN = "your-sentry-dsn-here"

📚 Detailed Configuration Guide: See Config.md for complete configuration options and OAuth setup instructions.

Management Commands

Starting Services

# Production mode with pre-built images
./run.sh prod -d

# Build mode (compile locally)
./run.sh build -d

# Force rebuild without cache
./run.sh build --force-rebuild -d

After upgrading between releases that add migrations, run the backend migrations before or during startup. Version 1.4.0 introduced schema updates for tool calls and Prompt Improver state.

Stopping Services

# Stop services (preserve data)
./run.sh prod down

# Stop services and remove volumes (⚠️ deletes all data)
./run.sh prod down -v

Monitoring and Maintenance

# View logs
docker compose -f docker-compose.prod.yml logs -f

# Check service status
docker compose -f docker-compose.prod.yml ps

# Update to latest images
docker compose -f docker-compose.prod.yml pull
./run.sh prod down
./run.sh prod -d

🧑‍💻 Local Development

Set up Meridian for local development with hot reloading, debugging capabilities, and direct access to logs. This setup runs the databases in Docker while keeping the application services local for an optimal development experience.

Prerequisites

  • Docker and Docker Compose installed on your machine
  • yq (from Mike Farah) for TOML configuration processing
  • Python 3.11 or higher for the backend
  • Node.js 20+ and pnpm/npm for the frontend
  • Git (for cloning the repository)

Development Setup

1. Clone and Configure

# Clone the repository
git clone https://github.com/MathisVerstrepen/Meridian.git
cd Meridian/docker

# Create local development configuration
cp config.local.example.toml config.local.toml

2. Configure for Development

Edit config.local.toml with your development settings:

[general]
ENV = "dev"
NAME = "meridian_dev"

[ui]
NITRO_PORT = 3000
NUXT_PUBLIC_API_BASE_URL = "http://localhost:8000"
NUXT_API_INTERNAL_BASE_URL = "http://localhost:8000"  # Direct to local backend
NUXT_PUBLIC_IS_OAUTH_DISABLED = "true"  # Simplify development
# Generate with: python -c "import os; print(os.urandom(32).hex())"
NUXT_SESSION_PASSWORD = "your-development-session-secret"

[api]
API_PORT = 8000
ALLOW_CORS_ORIGINS = "http://localhost:3000"
# Get your API key from https://openrouter.ai/
MASTER_OPEN_ROUTER_API_KEY = "sk-or-v1-your-api-key-here"
# Generate secrets with: python -c "import os; print(os.urandom(32).hex())"
BACKEND_SECRET = "your-development-backend-secret"
JWT_SECRET_KEY = "your-development-jwt-secret"

[sandbox]
SANDBOX_MANAGER_PORT = 5000
SANDBOX_MANAGER_URL = "http://localhost:5000"

[database]
POSTGRES_HOST = "localhost"  # Connect to Docker database from host
POSTGRES_PORT = 5432
POSTGRES_PASSWORD = "dev-postgres-password"

[neo4j]
NEO4J_HOST = "localhost"  # Connect to Docker Neo4j from host
NEO4J_BOLT_PORT = 7687
NEO4J_HTTP_PORT = 7474
NEO4J_PASSWORD = "dev-neo4j-password"

📚 Configuration Reference: See Config.md for all available options.

3. Start Development Databases

# Start only PostgreSQL and Neo4j in Docker
chmod +x run.sh
./run.sh dev -d

This command starts only the database containers, leaving the application services for manual startup.

If you want local sandboxed code execution and artifact generation, start the sandbox service too:

./run.sh dev --sandbox-manager -d

4. Set Up and Start Backend

Open a new terminal for the backend:

cd Meridian/api

# Create Python virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run database migrations (required before first launch and after updates)
alembic upgrade head

# Start the backend with hot reloading
./run-dev.sh

Backend will be available at: http://localhost:8000
API documentation: http://localhost:8000/docs

5. Set Up and Start Frontend

Open another terminal for the frontend:

cd Meridian/ui

# Install dependencies (choose your preferred package manager)
pnpm install
# OR
npm install

# Start development server with hot reloading
pnpm dev
# OR
npm run dev

Frontend will be available at: http://localhost:3000

Management Commands

Database Management

# Start development databases
./run.sh dev -d

# Stop databases (preserve data)
./run.sh dev down

# Stop and remove all data (⚠️ destroys development data)
./run.sh dev down -v

# View database logs
docker compose logs -f db neo4j

# Access database directly
docker compose exec db psql -U postgres -d postgres

Application Management

# Backend commands (in api/ directory with venv activated)
alembic upgrade head             # Run database migrations (required before first launch and after updates)
./run-dev.sh                     # Development server with reload exclusions for runtime files

# Frontend commands (in ui/ directory)
pnpm dev                        # Development server
pnpm build                      # Build for production
pnpm preview                    # Preview production build

📄 API Documentation

The backend API documentation (powered by FastAPI's Swagger UI) is available at http://localhost:8000/docs while the backend is running.
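
As a quick smoke test, the raw OpenAPI schema can be fetched as well (openapi.json is FastAPI's default schema path; the MERIDIAN_API_URL variable and fallback message below are only illustrative):

```shell
# Fetch the OpenAPI schema from a locally running backend; fall back to a
# message if the service is not reachable.
base_url="${MERIDIAN_API_URL:-http://localhost:8000}"
curl -fsS "$base_url/openapi.json" || echo "backend not reachable at $base_url"
```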

🗺️ Project Structure

Meridian/
├── docker/          # Docker-related files and configuration
├── api/             # Backend API code
├── sandbox_manager/ # Sandboxed Python execution service
├── ui/              # Frontend code
├── docs/            # Documentation files
└── README.md        # Project overview and setup instructions

🤝 Contributing

We welcome contributions to Meridian! Whether it's adding new features, improving existing ones, or fixing bugs, your help is appreciated.

πŸ› Issues and Bug Reports

Found a bug or have a feature request? Please open an issue on our GitHub Issues page.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


Made with ❤️ by Mathis Verstrepen
