
Feat/add litellm provider #2909

Open

RheagalFire wants to merge 3 commits into Chainlit:main from RheagalFire:feat/add-litellm-provider

Conversation

@RheagalFire RheagalFire commented Apr 24, 2026

Summary

  • Adds instrument_litellm() for automatic LLM step logging when using LiteLLM (100+ providers: Anthropic, AWS Bedrock, Google Vertex AI, Cohere, Mistral, Groq, Together AI, etc.).
  • Follows the same pattern as instrument_openai() and instrument_mistralai(). Users call cl.instrument_litellm() once, and every litellm.completion() / litellm.acompletion() call appears as an LLM step in the Chainlit UI with model name, messages, response, token usage, and timing.

Changes

  • backend/chainlit/litellm/__init__.py - instrument_litellm() function + ChainlitLogger(CustomLogger) that creates Chainlit Steps on each litellm completion
  • backend/chainlit/__init__.py - exported instrument_litellm via __getattr__, TYPE_CHECKING, and __all__
  • backend/pyproject.toml - added litellm>=1.55,<2.0 to test dependencies
  • backend/tests/litellm/__init__.py + test_litellm.py - 13 unit tests
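
The model-string parsing exercised by test_provider_extraction and test_provider_without_slash can be sketched roughly as follows. This is illustrative only: the helper name and the no-slash fallback are assumptions, not the PR's actual code.

```python
from typing import Optional, Tuple

def extract_provider(model: str) -> Tuple[Optional[str], str]:
    """Split a LiteLLM model string such as 'anthropic/claude-sonnet-4'
    into (provider, model_name)."""
    provider, sep, name = model.partition("/")
    if sep:
        return provider, name
    # No slash (e.g. plain 'gpt-4'): leave the provider unset and
    # pass the model string through unchanged (assumed fallback).
    return None, model
```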

Usage

import chainlit as cl
import litellm

cl.instrument_litellm()


@cl.on_message
async def on_message(message: cl.Message):
    response = await litellm.acompletion(
        model="anthropic/claude-sonnet-4-20250514",
        messages=[{"role": "user", "content": message.content}],
    )
    await cl.Message(content=response.choices[0].message.content).send()

Set your provider's env var (ANTHROPIC_API_KEY, OPENAI_API_KEY, GEMINI_API_KEY, etc.) and every litellm.completion() / litellm.acompletion() call automatically appears as an LLM step in the Chainlit UI.
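
For example, with an Anthropic model the setup might look like this (the key value and app filename are placeholders):

```shell
# Placeholder credential: substitute your real provider key
export ANTHROPIC_API_KEY="sk-ant-..."
chainlit run app.py
```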

Testing

  13 unit tests:
  test_basic_generation ................ ok
  test_messages_converted .............. ok
  test_completion_extracted ............ ok                                                                                                                                                                          
  test_token_counts .................... ok
  test_provider_extraction ............. ok                                                                                                                                                                          
  test_provider_without_slash .......... ok
  test_none_response ................... ok                                                                                                                                                                          
  test_empty_choices ................... ok
  test_settings_strip_none ............. ok
  test_registers_callback .............. ok                                                                                                                                                                          
  test_idempotent ...................... ok
  test_callback_creates_step ........... ok                                                                                                                                                                          
  test_callback_handles_no_context ..... ok
  13 passed in 1.05s                                                                                                                                                                                                 

Live E2E against Azure AI Foundry via litellm.acompletion():
Content: 4
Model: claude-sonnet-4-6
Messages: 2
Completion: {'role': 'assistant', 'content': '4'}
Tokens: 28 input, 5 output
Duration: 3.07s
ALL E2E PASSED

All pre-commit hooks pass (lint, format, type check).

Risk / Compatibility

  • Additive only. Existing integrations untouched.
  • litellm is only needed at runtime if user calls instrument_litellm(). No import-time dependency.
  • Same asyncio.create_task(step.send()) pattern as instrument_openai() and instrument_mistralai().
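
The opt-in, idempotent registration implied by these bullets (and by test_idempotent / test_registers_callback) can be sketched as below. The callbacks list stands in for LiteLLM's callback registry, and ChainlitLogger is a placeholder for the PR's CustomLogger subclass; names are illustrative.

```python
# Stand-in for litellm's callback registry; the real integration
# registers a ChainlitLogger(CustomLogger) instance with LiteLLM.
callbacks: list = []

class ChainlitLogger:
    """Illustrative placeholder for the PR's CustomLogger subclass."""

def instrument_litellm() -> None:
    # Idempotent: repeated calls must not register duplicate loggers,
    # which would double-log every completion as an LLM step.
    if any(isinstance(cb, ChainlitLogger) for cb in callbacks):
        return
    callbacks.append(ChainlitLogger())
```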

Summary by cubic

Adds instrument_litellm() to automatically log litellm completions as LLM steps in Chainlit. Captures model, messages, tokens, and timing, and safely handles datetime timestamps and sync contexts without a running event loop.

  • New Features

    • Integrates with litellm via CustomLogger to create LLM steps.
    • Exposed as chainlit.instrument_litellm; opt-in and idempotent.
    • Adds tests for generation building and callback behavior.
  • Dependencies

    • Adds litellm>=1.55,<2.0 to test dependencies.

Written for commit 900263c. Summary will update on new commits.

@dosubot dosubot (bot) added the size:L (This PR changes 100-499 lines, ignoring generated files), backend (Pertains to the Python backend), enhancement (New feature or request), and unit-tests (Has unit tests) labels on Apr 24, 2026
Contributor

@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 5 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="backend/chainlit/litellm/__init__.py">

<violation number="1" location="backend/chainlit/litellm/__init__.py:145">
P2: Synchronous LiteLLM callback unconditionally uses `asyncio.create_task`, which can fail with no running event loop and silently drop step logging.</violation>
</file>


Comment thread on backend/chainlit/litellm/__init__.py (outdated): Guard asyncio.create_task with a get_running_loop check and fall back to asyncio.run when no event loop is running.
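
The guard described in this thread can be sketched like so. The function name and the coroutine-factory parameter are illustrative; in the real callback the coroutine would be step.send().

```python
import asyncio
from typing import Awaitable, Callable

def dispatch(coro_factory: Callable[[], Awaitable[None]]) -> None:
    """Run a coroutine from a sync LiteLLM callback, whether or not an
    event loop is already running in this thread."""
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        # No running loop (plain sync context): run to completion here
        # instead of silently dropping the step.
        asyncio.run(coro_factory())
    else:
        # Inside a running loop: schedule without blocking the callback.
        loop.create_task(coro_factory())
```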
