You can use Traceloop to manage your prompts and model configurations. That way you can easily experiment with different prompts, and roll out changes gradually and safely.

Make sure you’ve created an API key and set it as an environment variable TRACELOOP_API_KEY before you start. Check out the SDK’s getting started guide for more information.

1. Create a new prompt

Click New Prompt to create a new prompt. Give it a name, which will be used to retrieve it in your code later.

2. Define it in the Prompt Registry

Set the system and/or user prompt. You can use variables in your prompt with the Jinja syntax {{ variable_name }}. The values of these variables will be passed in when you retrieve the prompt in your code.

For more information see the Registry Documentation.

This screen is also a prompt playground. Give the prompt a try by clicking Test at the bottom.
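To see what the {{ variable_name }} substitution does, here is a minimal sketch of Jinja-style rendering. The registry performs this for you server-side when you retrieve the prompt; the template below is a made-up example:

```python
import re

def render(template: str, variables: dict) -> str:
    """Minimal stand-in for Jinja's {{ var }} substitution (illustration only)."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

system_prompt = "You are a {{ persona }} telling short jokes."
print(render(system_prompt, {"persona": "pirate"}))
# -> You are a pirate telling short jokes.
```

Any variable you reference in the template must be supplied when the prompt is fetched, or the placeholder cannot be filled.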

3. Deploy the prompt to your development environment

Click Deploy to Dev to deploy the prompt to your development environment.

4. Use the prompt in your code

If you haven’t done so, make sure to generate an API key and set it as an environment variable TRACELOOP_API_KEY.

Make sure to initialize the SDK. In TypeScript/JavaScript, you should also wait for the initialization to complete.

from traceloop.sdk import Traceloop

Traceloop.init()

Retrieve your prompt by using the get_prompt function. For example, if you’ve created a prompt with the key joke_generator and a single variable persona:

import os

from openai import OpenAI
from traceloop.sdk.prompts import get_prompt

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

prompt_args = get_prompt(key="joke_generator", variables={"persona": "pirate"})
completion = client.chat.completions.create(**prompt_args)

The returned variable prompt_args is compatible with the API used by the foundation model SDKs (OpenAI, Anthropic, etc.), so you can pass it directly to the appropriate API call.
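As a rough illustration, prompt_args is a plain dict of keyword arguments. The exact keys depend on the model configuration you set in the registry; for an OpenAI chat prompt it would be shaped roughly like the sketch below (the model name and rendered message here are made up, not what the SDK actually returns):

```python
# Hypothetical shape of prompt_args for the joke_generator example above;
# the real model and rendered content come from the Prompt Registry.
prompt_args = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "Tell me a joke as a pirate."},
    ],
}

# Because these are just keyword arguments, they unpack directly into the
# chat completions call: client.chat.completions.create(**prompt_args)
```

This is why changing the prompt or model in the registry requires no code changes: your code keeps unpacking whatever the registry returns.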

For more information see the SDK Usage Documentation.