TruLens

TruLens is an open-source package that provides instrumentation and evaluation tools for large language model (LLM) based applications.

This page covers how to use TruLens to evaluate and track LLM apps built with LangChain.

Installation and Setup

Install the trulens-eval Python package.

pip install trulens-eval
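
The feedback functions used below call the OpenAI and Hugging Face APIs, so you will need keys for both services. A minimal sketch, assuming the providers read the standard environment variable names shown here (check the TruLens quickstart for the exact names used by your version):

import os

# Placeholder keys; the OpenAI and Huggingface feedback providers are assumed
# to read these environment variables.
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["HUGGINGFACE_API_KEY"] = "hf_..."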

Quickstart

See the integration details in the TruLens documentation.

Tracking

Once you've created your LLM chain, you can use TruLens for evaluation and tracking. TruLens has a number of out-of-the-box Feedback Functions, and is also an extensible framework for LLM evaluation.

Create the feedback functions:

from trulens_eval.feedback import Feedback, Huggingface, OpenAI

# Initialize HuggingFace-based feedback function collection class:
hugs = Huggingface()

# Initialize OpenAI-based feedback function collection class:
openai = OpenAI()

# Define a language match feedback function using HuggingFace.
lang_match = Feedback(hugs.language_match).on_input_output()
# By default this will check language match on the main app input and main app
# output.

# Question/answer relevance between overall question and answer.
qa_relevance = Feedback(openai.relevance).on_input_output()
# By default this will evaluate feedback on main app input and main app output.

# Toxicity of input
toxicity = Feedback(openai.toxicity).on_input()

Chains

After you've set up Feedback Function(s) for evaluating your LLM, you can wrap your application with TruChain to get detailed tracing, logging and evaluation of your LLM app.

Note: The code for creating the chain can be found in the TruLens documentation.
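
For reference, here is a minimal sketch of the kind of chain that could be wrapped, assuming a simple LLMChain built from a PromptTemplate and ChatOpenAI (the actual quickstart chain lives in the TruLens documentation):

from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

# A simple question-answering chain; any LangChain chain can be wrapped by TruChain.
prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer the following question: {question}",
)
llm = ChatOpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=prompt)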

from trulens_eval import TruChain

# wrap your chain with TruChain
truchain = TruChain(
    chain,
    app_id='Chain1_ChatApplication',
    feedbacks=[lang_match, qa_relevance, toxicity]
)
# Note: any `feedbacks` specified here will be evaluated and logged whenever the chain is used.
truchain("que hora es?")

Evaluation

Now you can explore your LLM-based application!

Doing so will help you understand how your LLM application is performing at a glance. As you iterate new versions of your LLM application, you can compare their performance across all of the different quality metrics you've set up. You'll also be able to view evaluations at a record level, and explore the chain metadata for each record.

from trulens_eval import Tru

tru = Tru()
tru.run_dashboard() # open a Streamlit app to explore
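
Records and feedback results can also be inspected programmatically, for example to pull them into a pandas DataFrame. A minimal sketch, assuming trulens_eval's get_records_and_feedback helper (see the TruLens documentation for the exact API in your version):

# Retrieve logged records and the names of the feedback result columns.
records_df, feedback_columns = tru.get_records_and_feedback(
    app_ids=["Chain1_ChatApplication"]
)
print(records_df[feedback_columns].head())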

For more information on TruLens, visit trulens.org

