Getting Started#

LLMs are powerful but can be hard to steer and prone to errors when deployed. At the same time, new models and techniques are being developed all the time. We want to make it easy for you to experiment with different techniques, understand their tradeoffs, and make informed decisions for your specific use case.

The package is organized to make it easy to test different architectures for higher-level “functionality”, including:

  • Retrieval-augmented generation

  • Agent tool use

  • Extraction

These all share the same “Task” interface, which provides common abstractions for creating and evaluating different architectures, including task-specific “environments” and shared evaluators.

This notebook shows how to get started with the package. For any given task, the main steps are:

  1. Install the package

  2. Select a task

  3. Download the dataset

  4. Define your architecture

  5. Run the evaluation

Setup#

The evaluations use LangSmith (see the LangSmith docs) to host the benchmark datasets and to track your architecture’s traces and evaluation metrics.

Create a LangSmith account and set your API key below.

import os

# Get from https://smith.langchain.com/settings
os.environ["LANGCHAIN_API_KEY"] = "sk-..."

Installation#

Next, install the required packages.

# %pip install -U --quiet langchain_benchmarks langchain langsmith
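
If you’d like to sanity-check your LangSmith credentials after installing, one option (a minimal sketch, not required for the rest of the notebook) is to instantiate the LangSmith client and make a simple request:

from langsmith import Client

# The client reads LANGCHAIN_API_KEY from the environment variable set above.
client = Client()

# Listing datasets is a lightweight way to confirm the key works;
# a fresh account may simply have no datasets yet.
print(next(client.list_datasets(), "No datasets yet"))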

Select a task#

Each benchmark has a corresponding description, dataset, and other “environment” information. You can view the available tasks by checking the registry.

from langchain_benchmarks import registry

registry
Name: Tool Usage - Typewriter (1 tool)
Type: ToolUsageTask
Dataset ID: 59577193-8938-4ccf-92a7-e8a96bcf4f86
Description: Environment with a single tool that accepts a single letter as input, and prints it on a piece of virtual paper. The objective of this task is to evaluate the ability of the model to use the provided tools to repeat a given input string. For example, if the string is 'abc', the tools 'a', 'b', and 'c' must be invoked in that order. The dataset includes examples of varying difficulty. The difficulty is measured by the length of the string.

Name: Tool Usage - Typewriter (26 tools)
Type: ToolUsageTask
Dataset ID: 128af05e-aa00-4e3b-a958-d166dd450581
Description: Environment with 26 tools; each tool represents a letter of the alphabet. The objective of this task is to evaluate the model's ability to use tools for a simple repetition task. For example, if the string is 'abc', the tools 'a', 'b', and 'c' must be invoked in that order. The dataset includes examples of varying difficulty. The difficulty is measured by the length of the string. This is a variation of the typewriter task, where 26 parameterless tools are given instead of a single tool that takes a letter as an argument.

Name: Tool Usage - Relational Data
Type: ToolUsageTask
Dataset ID: 1d89f4b3-5f73-48cf-a127-2fdeb22f6d84
Description: Environment with fake data about users and their locations and favorite foods. The environment provides a set of tools that can be used to query the data. The objective of this task is to evaluate the ability to use the provided tools to answer questions about relational data. The dataset contains 21 examples of varying difficulty. The difficulty is measured by the number of tools that need to be used to answer the question. Each example is composed of a question, a reference answer, and information about the sequence in which tools should be used to answer the question. Success is measured by the ability to answer the question correctly and efficiently.

Name: Multiverse Math
Type: ToolUsageTask
Dataset ID: 47ed57bc-e852-4f84-a23e-cce4793864e9
Description: An environment that contains a few basic math operations, but with altered results. For example, multiplication of 5*3 will be re-interpreted as 5*3*1.1. The basic operations retain some basic properties, such as commutativity, associativity, and distributivity; however, the results are different than expected. The objective of this task is to evaluate the ability to use the provided tools to solve simple math questions and ignore any innate knowledge about math. This task is associated with 20 test examples.

Name: Multiverse Math (Tiny)
Type: ToolUsageTask
Dataset ID: 594f9f60-30a0-49bf-b075-f44beabf546a
Description: An environment that contains a few basic math operations, but with altered results. For example, multiplication of 5*3 will be re-interpreted as 5*3*1.1. The basic operations retain some basic properties, such as commutativity, associativity, and distributivity; however, the results are different than expected. The objective of this task is to evaluate the ability to use the provided tools to solve simple math questions and ignore any innate knowledge about math. This is a tiny version of the Multiverse Math task, with only 10 examples.

Name: Email Extraction
Type: ExtractionTask
Dataset ID: a1742786-bde5-4f51-a1d8-e148e5251ddb
Description: A dataset of 42 real emails deduped from a spam folder, with semantic HTML tags removed, as well as a script for initial extraction and formatting of other emails from an arbitrary .mbox file like the one exported by Gmail. Some additional cleanup of the data was done by hand after the initial pass. See https://github.com/jacoblee93/oss-model-extraction-evals.

Name: Chat Extraction
Type: ExtractionTask
Dataset ID: 00f4444c-9460-4a82-b87a-f50096f1cfef
Description: A dataset meant to test the ability of an LLM to extract and infer structured information from a dialogue. The dialogue is between a user and a support engineer. Outputs should be structured as a JSON object and test both the ability of the LLM to correctly structure the information and its ability to perform simple classification tasks.

Name: LangChain Docs Q&A
Type: RetrievalTask
Dataset ID: 452ccafc-18e1-4314-885b-edd735f17b9d
Description: Questions and answers based on a snapshot of the LangChain python docs. The environment provides the documents and the retriever information. Each example is composed of a question and reference answer. Success is measured based on the accuracy of the answer relative to the reference answer. We also measure the faithfulness of the model's response relative to the retrieved documents (if any).

Name: Semi-structured Reports
Type: RetrievalTask
Dataset ID: c47d9617-ab99-4d6e-a6e6-92b8daf85a7d
Description: Questions and answers based on PDFs containing tables and charts. The task provides the raw documents as well as factory methods to easily index them and create a retriever. Each example is composed of a question and reference answer. Success is measured based on the accuracy of the answer relative to the reference answer. We also measure the faithfulness of the model's response relative to the retrieved documents (if any).

Name: Multi-modal slide decks
Type: RetrievalTask
Dataset ID: 40afc8e7-9d7e-44ed-8971-2cae1eb59731
Description: This public dataset is a work-in-progress and will be extended over time. Questions and answers based on slide decks containing visual tables and charts. Each example is composed of a question and reference answer. Success is measured based on the accuracy of the answer relative to the reference answer.

Name: Name Correction
Type: ExtractionTask
Description: A dataset of 23 misspelled full names and their correct spellings.
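
Each entry in the registry is a task object whose attributes mirror the fields above. As a quick sketch, the attributes shown below (name, dataset_id, and description) correspond to the table columns; individual tasks may expose additional task-specific fields:

example_task = registry["Multiverse Math"]

# Basic metadata shared by every task in the registry
print(example_task.name)
print(example_task.dataset_id)
print(example_task.description)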

Download the dataset#

Each benchmark task has a corresponding dataset. To run evaluations on a benchmark, first download its dataset using the clone_public_dataset function. For more details on working with datasets within the LangChain Benchmarks package, check out the datasets notebook.

from langchain_benchmarks import clone_public_dataset

task = registry["Tool Usage - Relational Data"]

clone_public_dataset(task.dataset_id)
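
Once the dataset has been cloned into your LangSmith account, you can peek at its examples with the LangSmith client. The sketch below assumes the cloned dataset was saved under the task's name; if your copy was saved under a different name, pass that name instead:

from langsmith import Client

client = Client()

# Fetch a single example to inspect the input/output schema for this task.
example = next(client.list_examples(dataset_name=task.name))
print(example.inputs)
print(example.outputs)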

Define architecture and evaluate#

After fetching the dataset, you can define your architecture and evaluate it using the task’s evaluation configuration. The details differ by task; for more information, check the examples for your task.
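
As a rough sketch of the overall flow, the snippet below uses LangSmith's run_on_dataset helper. Here create_my_architecture is a hypothetical factory you would replace with your own chain or agent, and the evaluation config is a generic placeholder; each task's documentation describes the evaluators it actually ships with:

from langchain.smith import RunEvalConfig, run_on_dataset
from langsmith import Client

# Hypothetical factory: return a fresh chain/agent for each example so that
# runs do not share state. Replace with your own architecture.
def create_my_architecture():
    ...

client = Client()

# Generic placeholder config; task-specific evaluators differ (see the task docs).
eval_config = RunEvalConfig(evaluators=["qa"])

run_on_dataset(
    client=client,
    dataset_name=task.name,  # assumes the cloned dataset kept the task's name
    llm_or_chain_factory=create_my_architecture,
    evaluation=eval_config,
)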