Baseten

Baseten is a provider of all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently.

As a model inference platform, Baseten is a Provider in the LangChain ecosystem. The Baseten integration currently implements a single Component, LLMs, but more are planned!

Baseten lets you run both open-source models like Llama 2 or Mistral and proprietary or fine-tuned models on dedicated GPUs. If you're used to a provider like OpenAI, using Baseten has a few differences:

  • Rather than paying per token, you pay per minute of GPU used.
  • Every model on Baseten uses Truss, our open-source model packaging framework, for maximum customizability.
  • While we have some OpenAI ChatCompletions-compatible models, you can define your own I/O spec with Truss.
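As a rough illustration of the per-minute billing model above, here is a back-of-envelope comparison against per-token pricing. Every price and throughput figure below is an invented placeholder for illustration, not Baseten's actual rates:

```python
# Back-of-envelope comparison of per-minute GPU billing vs per-token pricing.
# All numbers below are assumed placeholders, not real prices.
GPU_PRICE_PER_MINUTE = 0.03   # assumed $/minute for a dedicated GPU
TOKENS_PER_SECOND = 50        # assumed throughput of the deployed model
PER_TOKEN_PRICE = 0.00002     # assumed $/token from a per-token provider

tokens_generated = 100_000

# Per-minute billing: cost scales with GPU time, not token count.
gpu_minutes = tokens_generated / TOKENS_PER_SECOND / 60
per_minute_cost = gpu_minutes * GPU_PRICE_PER_MINUTE

# Per-token billing: cost scales directly with token count.
per_token_cost = tokens_generated * PER_TOKEN_PRICE

print(f"GPU minutes used:   {gpu_minutes:.1f}")
print(f"Per-minute billing: ${per_minute_cost:.2f}")
print(f"Per-token billing:  ${per_token_cost:.2f}")
```

The takeaway: under per-minute billing, a faster model or better batching directly lowers your cost per token, since you pay for time rather than tokens.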

Learn more about model IDs and deployments.

Learn more about Baseten in the Baseten docs.

Installation and Setup

You'll need two things to use Baseten models with LangChain: a Baseten account and an API key.

Export your API key as an environment variable called BASETEN_API_KEY:

export BASETEN_API_KEY="paste_your_api_key_here"

LLMs

See a usage example.

from langchain_community.llms import Baseten
API Reference: Baseten
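A minimal usage sketch follows. The "MODEL_ID" string is a placeholder for a model ID from your Baseten workspace, and the `deployment` argument and prompt are illustrative assumptions; see the linked usage example for the exact parameters:

```python
import os

prompt = "Summarize what Baseten does in one sentence."

# Only call the API if credentials are configured.
if os.environ.get("BASETEN_API_KEY"):
    from langchain_community.llms import Baseten

    # "MODEL_ID" is a placeholder: substitute a model ID from your workspace.
    llm = Baseten(model="MODEL_ID", deployment="production")
    print(llm.invoke(prompt))
else:
    print("Set BASETEN_API_KEY to run this example.")
```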
