Get LLM Observability with {platform}
OpenLLMetry lets you connect metrics, traces, and logs from all your LLM foundation models and vector DBs to {platform}. With 5 minutes of work, you can get a complete view of your system directly in {platform}.
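Getting started is a quick install and a one-line initialization. Here is a minimal sketch, assuming the Python traceloop-sdk package and an exporter configured through environment variables (the app name is illustrative):

```python
# Install first: pip install traceloop-sdk
from traceloop.sdk import Traceloop

# Initialize once at application startup. Exporting to {platform} is assumed
# to be configured via environment variables (e.g. an OTLP endpoint and API key).
Traceloop.init(app_name="my_llm_app")
```

Once initialized, OpenLLMetry automatically instruments the supported LLM and vector DB libraries used by your application.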
Discover use cases

Trace a LlamaIndex-based Agent
Build an agent with LlamaIndex. See everything your agent does, including calls to tools, HTTP requests, and database calls, as full traces.
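A minimal sketch of an instrumented agent, assuming llama-index with the OpenAI LLM integration (the tool function and model name are illustrative):

```python
from traceloop.sdk import Traceloop
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

Traceloop.init(app_name="llamaindex_agent")  # spans are exported automatically

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# Wrap a plain Python function as a tool the agent can call.
multiply_tool = FunctionTool.from_defaults(fn=multiply)

# A ReAct agent backed by an OpenAI model; each tool call and LLM call
# shows up as a span in the agent's trace.
agent = ReActAgent.from_tools([multiply_tool], llm=OpenAI(model="gpt-4o-mini"), verbose=True)
print(agent.chat("What is 21 times 2?"))
```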

Trace a LlamaIndex-based RAG pipeline
Build a RAG pipeline with Chroma and LlamaIndex. See the vectors returned from Chroma, the full prompts sent to OpenAI, and the responses, all as traces.
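A minimal sketch of such a pipeline, assuming a local Chroma collection, a ./data folder of documents, and the default OpenAI-backed LlamaIndex settings (names and paths are illustrative):

```python
import chromadb
from traceloop.sdk import Traceloop
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore

Traceloop.init(app_name="llamaindex_rag")

# Illustrative local Chroma collection and document folder.
chroma_client = chromadb.PersistentClient(path="./chroma_db")
collection = chroma_client.get_or_create_collection("docs")

vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Retrieval from Chroma and the OpenAI completion both appear in the trace.
response = index.as_query_engine().query("What is OpenLLMetry?")
print(response)
```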

Trace chains built with LangChain
Trace any chain built with LangChain, whether you're using LCEL or the original SequentialChain API.
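A minimal LCEL sketch (the prompt and model name are illustrative):

```python
from traceloop.sdk import Traceloop
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

Traceloop.init(app_name="langchain_chain")

prompt = ChatPromptTemplate.from_template("Write a one-line joke about {topic}.")
llm = ChatOpenAI(model="gpt-4o-mini")  # illustrative model name

# LCEL pipeline; each step is captured as a span in the chain's trace.
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"topic": "observability"}))
```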

Trace indexing pipelines with Chroma
Build an indexing pipeline and save documents to Chroma. See how your documents are indexed and detect errors.
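A minimal indexing sketch, assuming a local Chroma collection and the workflow decorator from the traceloop-sdk (names and documents are illustrative):

```python
import chromadb
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

Traceloop.init(app_name="chroma_indexing")

client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("docs")

@workflow(name="index_documents")  # groups the Chroma calls under one trace
def index_documents(docs: list[str]) -> None:
    # Chroma embeds the documents with its default embedding function.
    collection.add(
        ids=[f"doc-{i}" for i in range(len(docs))],
        documents=docs,
    )

index_documents(["OpenLLMetry exports LLM traces.", "Chroma stores embeddings."])
```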

Trace prompts and completions
Call OpenAI and see prompts, completions, and token usage for your call.
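A minimal sketch, assuming the openai Python SDK with OPENAI_API_KEY set (the model name is illustrative):

```python
from openai import OpenAI
from traceloop.sdk import Traceloop

Traceloop.init(app_name="openai_completions")
client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The prompt, completion, and token usage are recorded on the resulting span.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Explain tracing in one sentence."}],
)
print(response.choices[0].message.content)
```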

Trace prompts and completions in Anthropic
Call any Claude model from Anthropic and see prompts, completions, and token usage for your call. Both Claude Instant and Claude 2 are supported.
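A minimal sketch, assuming the anthropic Python SDK with ANTHROPIC_API_KEY set (the model name is illustrative):

```python
import anthropic
from traceloop.sdk import Traceloop

Traceloop.init(app_name="anthropic_completions")
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The prompt, completion, and token usage are recorded on the span.
message = client.messages.create(
    model="claude-2.1",  # illustrative; any Claude model works
    max_tokens=256,
    messages=[{"role": "user", "content": "Explain tracing in one sentence."}],
)
print(message.content[0].text)
```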

Trace prompts and completions in Bedrock
Call Bedrock and see prompts, completions, and token usage for your call. All Bedrock models are supported.
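A minimal sketch using boto3, assuming AWS credentials are configured (the region, model ID, and request body follow the Anthropic-on-Bedrock format and are illustrative):

```python
import json

import boto3
from traceloop.sdk import Traceloop

Traceloop.init(app_name="bedrock_completions")
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # illustrative region

# The request and response payloads, including token usage, are captured on the span.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Explain tracing in one sentence."}],
})
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    body=body,
)
print(json.loads(response["body"].read())["content"][0]["text"])
```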

Trace prompts and completions in Cohere
Call Cohere and see prompts, completions, and token usage for your call.
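A minimal sketch, assuming the cohere Python SDK and an API key in an environment variable (the variable name is illustrative):

```python
import os

import cohere
from traceloop.sdk import Traceloop

Traceloop.init(app_name="cohere_completions")
co = cohere.Client(api_key=os.environ["COHERE_API_KEY"])  # illustrative env var name

# The prompt, completion, and token usage are recorded on the span.
response = co.chat(message="Explain tracing in one sentence.")
print(response.text)
```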

Trace prompts and completions in Watsonx
Call Watsonx and see prompts, completions, and token usage for your call.
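A rough sketch, assuming the ibm-watsonx-ai Python SDK; the endpoint URL, credentials, project ID, and model ID are illustrative placeholders:

```python
import os

from traceloop.sdk import Traceloop
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

Traceloop.init(app_name="watsonx_completions")

# Credentials, project ID, and model ID are illustrative placeholders.
model = ModelInference(
    model_id="ibm/granite-13b-chat-v2",
    credentials=Credentials(
        url="https://us-south.ml.cloud.ibm.com",
        api_key=os.environ["WATSONX_API_KEY"],
    ),
    project_id=os.environ["WATSONX_PROJECT_ID"],
)

# The prompt, generated text, and token usage are recorded on the span.
print(model.generate_text(prompt="Explain tracing in one sentence."))
```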

Trace your RAG retrieval pipeline
Build a RAG pipeline with Chroma and OpenAI. See the vectors returned from Chroma, the full prompt sent to OpenAI, and the responses as traces.
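A minimal sketch of the retrieval-plus-generation flow, assuming a populated local Chroma collection and OPENAI_API_KEY set (names and the model are illustrative):

```python
import chromadb
from openai import OpenAI
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

Traceloop.init(app_name="rag_retrieval")

chroma = chromadb.PersistentClient(path="./chroma_db")
collection = chroma.get_or_create_collection("docs")
openai_client = OpenAI()

@workflow(name="rag_query")  # groups retrieval and generation into one trace
def answer(question: str) -> str:
    # Retrieve the closest documents; the query and returned documents are traced.
    results = collection.query(query_texts=[question], n_results=3)
    context = "\n".join(results["documents"][0])

    # Generate an answer grounded in the retrieved context; the full prompt
    # and completion appear on the OpenAI span.
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}],
    )
    return response.choices[0].message.content

print(answer("What does OpenLLMetry trace?"))
```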