Hub Configuration
How to configure Traceloop Hub and connect it to different LLM providers
The hub is configured through a config.yaml file placed in the root directory of the hub. Here’s an example of the configuration file:
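(The snippet below is an illustrative sketch assembled from the sections described on this page; exact field names and plugin options may differ between hub versions, so treat them as assumptions rather than a reference.)

```yaml
providers:
  - key: openai
    type: openai
    api_key: "<your-openai-api-key>"   # assumption: credentials are supplied per provider

models:
  - key: gpt-4o-openai
    type: gpt-4o
    provider: openai                   # references the provider key above

pipelines:
  - name: default
    plugins:                           # executed in order
      - logging
      - tracing
      - model-router:
          models:
            - gpt-4o-openai            # references the model key above
```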
Providers
This is where you list the LLM providers that you want to use with the hub. You can have multiple providers of the same type; just give them different keys.
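For example, the same provider type can appear more than once under different keys (a sketch; the api_key field is an assumption about how credentials are supplied):

```yaml
providers:
  - key: openai-prod
    type: openai
    api_key: "<prod-api-key>"
  - key: openai-experiments
    type: openai
    api_key: "<experiments-api-key>"
```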
Models
This is where you list the models that you want to use with the hub. Each model should be associated with a provider.
You can have multiple models of the same type with different providers - for example, you can use GPT-4o on Azure and on OpenAI.
Then, you can define a pipeline (see below) that switches between them according to availability.
Each model has a type, which is how the hub understands that two model specifications are actually the same “model”.
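For example, GPT-4o can be registered once per provider under distinct keys, with the shared type telling the hub that both entries refer to the same underlying model (a sketch; the deployment field is an assumption about how Azure deployments are referenced):

```yaml
models:
  - key: gpt-4o-openai
    type: gpt-4o                          # same type on both entries...
    provider: openai
  - key: gpt-4o-azure
    type: gpt-4o                          # ...so the hub treats them as the same model
    provider: azure-openai
    deployment: "<your-azure-deployment>"
```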
Pipelines
A pipeline is a flow that you can execute when calling the hub. It contains a list of plugins that are executed in order. Here are the available plugins (a configuration sketch follows the list):
logging
: Logs the request and response.
tracing
: Enables OpenTelemetry tracing for requests going through the pipeline.
model-router
: Routes the request to a model, according to the list specified in the models section.
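Putting it together, a pipeline that routes between the two GPT-4o entries from the models example might look like this (a sketch; exactly how the router chooses from the list is an assumption beyond the availability-based switching described above):

```yaml
pipelines:
  - name: default
    plugins:
      - logging
      - tracing
      - model-router:
          models:                 # assumption: the router picks from this list according to availability
            - gpt-4o-openai
            - gpt-4o-azure
```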