A lightweight text embedding API designed as a drop-in replacement for the OpenAI embeddings endpoint.
Built with FastAPI and fastembed, LocalEmbed is optimized for running local document processing and vector pipelines securely on your own infrastructure.
- OpenAI SDK Compatible: Natively mirrors the `/v1/embeddings` schema. Point your existing OpenAI client to `localhost` and it just works.
- Privacy First: 100% local execution. No data ever leaves your network.
- Zero-Latency Starts: Automatically pre-loads your default model into memory on server boot.
- Container-Native: Multi-stage Docker build utilizing `uv` for a minimal, highly optimized runtime footprint.
- Docker (Recommended)
- Python 3.12+ (for local development)
LocalEmbed uses optional environment variables for configuration. Create a `.env` file in the root directory:

- Copy the sample environment file:

  ```shell
  cp .env.sample .env
  ```

- Open the `.env` file and set your desired configurations (like `DEFAULT_EMBEDDING_MODEL` or `HF_TOKEN`).
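For reference, a filled-in `.env` might look like the following. The variable names come from the steps above, but the example values and the comments describing them are assumptions; check `.env.sample` for the authoritative list:

```
# Model preloaded into memory on server boot (example value, assumed)
DEFAULT_EMBEDDING_MODEL=BAAI/bge-small-en-v1.5

# Hugging Face token, if your chosen model requires one (assumption)
HF_TOKEN=
```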
The easiest and recommended way to run LocalEmbed is using the pre-built Docker image from Docker Hub.
```shell
docker run -d --name localembed --env-file .env -p 8000:8000 heshinth/localembed:latest
```

The compose file includes environment variables directly within it.
Download the `docker-compose.yml` file from the repository. You can edit the file to configure it, then simply run:
```shell
docker compose up -d
```

The API will be available at: http://localhost:8000.
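If you prefer to write the compose file yourself, a minimal sketch might look like this. The image tag and port mapping come from the `docker run` command above; the service name and the example environment value are assumptions:

```yaml
services:
  localembed:
    image: heshinth/localembed:latest
    container_name: localembed
    ports:
      - "8000:8000"
    environment:
      # Assumed example value; see the configuration section above
      DEFAULT_EMBEDDING_MODEL: BAAI/bge-small-en-v1.5
```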
If you want to run the application natively without Docker:
- Install the dependencies using `uv` (recommended):

  ```shell
  uv sync
  ```

- Run the FastAPI development server:

  ```shell
  fastapi dev app/main.py
  ```
- `GET /v1/health` — Health check
- `POST /v1/embeddings` — Generate text embeddings using local models (OpenAI API compatible)
- `GET /v1/models` — List supported and ready-to-use embedding models
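Since `POST /v1/embeddings` mirrors the OpenAI schema, a request body can be built as plain JSON. A minimal sketch is shown below; only the `model` and `input` fields are used here, and whether LocalEmbed accepts other optional OpenAI fields is not assumed:

```python
import json

# Minimal OpenAI-style embeddings request body
payload = {
    "model": "BAAI/bge-small-en-v1.5",  # any model reported by GET /v1/models
    "input": ["Hello, world!"],         # one or more strings to embed
}

# Serialize for an HTTP POST to /v1/embeddings
body = json.dumps(payload)
print(body)
```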
LocalEmbed supports all dense text embedding models provided by fastembed.
You can view the full list of supported models in the FastEmbed Documentation, or programmatically query your running instance via the API:
```
GET http://localhost:8000/v1/models
```

Since the `/v1/embeddings` endpoint is OpenAI API compatible, you can easily use the official `openai` Python package to interact with it just like the real OpenAI API:
```python
from openai import OpenAI

# Initialize the client pointing to the local base URL
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="sk-no-key-required"
)

# Generate an embedding
response = client.embeddings.create(
    input=["Hello, world!"],
    model="BAAI/bge-small-en-v1.5"  # Replace with any supported model
)

print(response.data[0].embedding)
```

This project is licensed under the MIT License - see the LICENSE file for details.
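Once you have embeddings, a common next step is comparing them with cosine similarity. A standard-library-only sketch using toy vectors (in practice the vectors would come from the response above):

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embedding output
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]

# Identical vectors give similarity of approximately 1.0
print(cosine_similarity(v1, v2))
```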