Traceloop SDK supports several ways to annotate workflows, tasks, agents and tools in your code to get a more complete picture of your app structure.

If you’re using a framework like Langchain, Haystack or LlamaIndex, there’s no need to do anything! OpenLLMetry will automatically detect the framework and annotate your traces.

Workflows and Tasks

A workflow, sometimes called a “chain”, is a multi-step process that can be traced as a single unit; each step within it is a task.

Use it as @workflow(name="my_workflow") or @task(name="my_task").

The name argument is optional. If you don’t provide it, we will use the function name as the workflow or task name.

You can version your workflows and tasks. Just provide the version argument to the decorator: @workflow(name="my_workflow", version=2)
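
For example, a task with no name argument is reported under its function name, and a version is attached like this (a minimal sketch; the function bodies are placeholders):

from traceloop.sdk.decorators import workflow, task

@task()  # no name given, so the task is reported as "fetch_topic"
def fetch_topic():
    return "opentelemetry"

@workflow(name="my_workflow", version=2)  # version=2 is recorded on the trace
def my_workflow():
    return fetch_topic()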

import os

from openai import OpenAI
from traceloop.sdk.decorators import workflow, task

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@task(name="joke_creation")
def create_joke():
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Tell me a joke about opentelemetry"}],
    )

    return completion.choices[0].message.content

@task(name="signature_generation")
def generate_signature(joke: str):
    completion = client.completions.create(
        model="davinci-002",
        prompt="add a signature to the joke:\n\n" + joke,
    )

    return completion.choices[0].text


@workflow(name="pirate_joke_generator")
def joke_workflow():
    eng_joke = create_joke()
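    # translate_joke_to_pirate is the agent defined in the Agents and Tools section below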
    pirate_joke = translate_joke_to_pirate(eng_joke)
    signature = generate_signature(pirate_joke)
    print(pirate_joke + "\n\n" + signature)
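
Note that the decorators only emit spans once the SDK is initialized. A minimal sketch, assuming the standard Traceloop.init() entry point and an app name of your choosing:

from traceloop.sdk import Traceloop

Traceloop.init(app_name="pirate_joke_app")  # initialize once, at startup

joke_workflow()  # the workflow and its tasks are now reported as spans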

Agents and Tools

Similarly, if you use autonomous agents, you can use the @agent decorator to trace them as a single unit. Each tool they call should be marked with @tool.

import os

from openai import OpenAI
from traceloop.sdk.decorators import agent, tool

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@agent(name="joke_translation")
def translate_joke_to_pirate(joke: str):
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Translate the below joke to pirate-like English:\n\n{joke}"}],
    )

    history_jokes_tool()

    return completion.choices[0].message.content


@tool(name="history_jokes")
def history_jokes_tool():
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Get some history jokes"}],
    )

    return completion.choices[0].message.content

Async methods

In TypeScript, you can use the same syntax for async methods.

In Python, you’ll need to switch to the equivalent async decorators: if you’re decorating an async method, use @aworkflow, @atask, and so forth.
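
For example, here is the joke task rewritten as an async method (a minimal sketch, assuming the async decorators are importable from traceloop.sdk.decorators alongside their sync counterparts):

import os

from openai import AsyncOpenAI
from traceloop.sdk.decorators import aworkflow, atask

aclient = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

@atask(name="joke_creation")
async def create_joke_async():
    completion = await aclient.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Tell me a joke about opentelemetry"}],
    )
    return completion.choices[0].message.content

@aworkflow(name="pirate_joke_generator")
async def joke_workflow_async():
    return await create_joke_async()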

See also a separate section on using threads in Python with OpenLLMetry.

Decorating Classes (Python only)

While the examples above show how to decorate functions, you can also decorate classes. In this case, you also need to provide the name of the method that runs the workflow, task, agent, or tool.

import os

from openai import OpenAI
from traceloop.sdk.decorators import agent

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@agent(name="base_joke_generator", method_name="generate_joke")
class JokeAgent:
    def generate_joke(self):
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Tell me a joke about Traceloop"}],
        )

        return completion.choices[0].message.content
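
The decorated class is then used like any other; calling the named method produces the agent span (a minimal sketch):

agent = JokeAgent()
print(agent.generate_joke())  # traced as the "base_joke_generator" agent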