OpenLLMetry is an open-source project that lets you monitor and debug the execution of your LLM app with minimal setup. Tracing is done in a non-intrusive way, built on top of OpenTelemetry. You can choose to export the traces to Traceloop or to your existing observability stack.
You can use OpenLLMetry whether you use a supported LLM framework or interact directly with a foundation model API.
import os

from openai import OpenAI
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

# Initialize the SDK once at startup; app_name labels all traces from this service.
Traceloop.init(app_name="joke_generation_service")

# The @workflow decorator groups the spans emitted inside this function
# (including the auto-instrumented OpenAI call) under a single named workflow.
@workflow(name="joke_creation")
def create_joke():
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Tell me a joke about opentelemetry"}],
    )

    return completion.choices[0].message.content
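Because OpenLLMetry is built on OpenTelemetry, the same instrumented code can export its traces to an existing observability stack instead of Traceloop. A minimal sketch, assuming the SDK reads a `TRACELOOP_BASE_URL` environment variable pointing at an OTLP-compatible endpoint (the script name is illustrative; see the Integrations guide for the exact settings your backend expects):

```shell
# Point the SDK at an existing OTLP-compatible collector instead of Traceloop.
# TRACELOOP_BASE_URL and the endpoint below are assumptions for this sketch.
export TRACELOOP_BASE_URL="http://localhost:4318"

# Run the instrumented app unchanged; spans are exported to the collector.
python joke_generation_service.py
```

No application code changes are needed for this; the export destination is a deployment-time concern.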

Getting Started

Select from the following guides to learn more about how to use OpenLLMetry:

- Start with Python: set up the Traceloop Python SDK in your project
- Start with JavaScript / TypeScript: set up the Traceloop JavaScript SDK in your project
- Start with Go: set up the Traceloop Go SDK in your project
- Workflows, Tasks, Agents and Tools: learn how to annotate your code to enrich your traces
- Integrations: learn how to connect to your existing observability stack
- Privacy: how we secure your data