ChatMistralAI

Mistral AI is a research organization and hosting platform for LLMs. They are best known for their models mistral7b (served via the API as mistral-tiny) and mixtral8x7b (served as mistral-small).

The LangChain implementation of Mistral's models uses their hosted generation API, making it easier to access their models without needing to run them locally.

Models

Mistral's API offers access to two of their models, mistral7b and mixtral8x7b, under the following API model names:

  • mistral7b -> mistral-tiny
  • mixtral8x7b -> mistral-small

Setup

In order to use the Mistral API you'll need an API key. You can sign up for a Mistral account and create an API key here.

You'll first need to install the @langchain/mistralai package:

npm install @langchain/mistralai
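The examples below read the key from the MISTRAL_API_KEY environment variable, so export it before running them (the value shown is a placeholder):

```shell
# Make your Mistral API key available to the current shell session.
# Replace the placeholder with the key from your Mistral account.
export MISTRAL_API_KEY="your-api-key"
```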

Usage

When sending chat messages to Mistral, there are a few requirements to follow:

  • The first message cannot be an assistant (ai) message.
  • Messages must alternate between user and assistant (ai) messages.
  • Messages cannot end with an assistant (ai) or system message.

import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY,
  modelName: "mistral-small",
});
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
]);
const chain = prompt.pipe(model);
const response = await chain.invoke({
  input: "Hello",
});
console.log("response", response);
/**
response AIMessage {
  lc_namespace: [ 'langchain_core', 'messages' ],
  content: "Hello! I'm here to help answer any questions you might have or provide information on a variety of topics. How can I assist you today?\n" +
    '\n' +
    'Here are some common tasks I can help with:\n' +
    '\n' +
    '* Setting alarms or reminders\n' +
    '* Sending emails or messages\n' +
    '* Making phone calls\n' +
    '* Providing weather information\n' +
    '* Creating to-do lists\n' +
    '* Offering suggestions for restaurants, movies, or other local activities\n' +
    '* Providing definitions and explanations for words or concepts\n' +
    '* Translating text into different languages\n' +
    '* Playing music or podcasts\n' +
    '* Setting timers\n' +
    '* Providing directions or traffic information\n' +
    '* And much more!\n' +
    '\n' +
    "Let me know how I can help you specifically, and I'll do my best to make your day easier and more productive!\n" +
    '\n' +
    'Best regards,\n' +
    'Your helpful assistant.',
  name: undefined,
  additional_kwargs: {}
}
*/
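The message-ordering rules above can also be checked client-side before calling the API. The isValidMessageOrder helper below is our own illustrative sketch, not part of LangChain or the Mistral SDK:

```typescript
// Mistral's chat endpoint enforces an ordering on message roles.
// This helper checks the three rules locally, so mistakes surface
// before a request is sent rather than as an API error.
type Role = "system" | "user" | "assistant";

function isValidMessageOrder(roles: Role[]): boolean {
  if (roles.length === 0) return false;
  // Rule 1: the first message may not be an assistant message.
  if (roles[0] === "assistant") return false;
  // Rule 3: the conversation may not end with an assistant or system message.
  const last = roles[roles.length - 1];
  if (last === "assistant" || last === "system") return false;
  // Rule 2: ignoring system messages, user and assistant turns must alternate.
  const turns = roles.filter((r) => r !== "system");
  for (let i = 1; i < turns.length; i++) {
    if (turns[i] === turns[i - 1]) return false;
  }
  return true;
}

console.log(isValidMessageOrder(["system", "user"])); // valid
console.log(isValidMessageOrder(["assistant", "user"])); // invalid: assistant first
```

For example, a history of ["system", "user", "assistant", "user"] passes, while ["user", "user"] fails the alternation rule.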


info

You can see a LangSmith trace of this example here.

Streaming

Mistral's API also supports streaming token responses. The example below demonstrates how to use this feature.

import { ChatMistralAI } from "@langchain/mistralai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY,
  modelName: "mistral-small",
});
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
]);
const outputParser = new StringOutputParser();
const chain = prompt.pipe(model).pipe(outputParser);
const response = await chain.stream({
  input: "Hello",
});
for await (const item of response) {
  console.log("stream item:", item);
}
/**
stream item:
stream item: Hello! I'm here to help answer any questions you
stream item: might have or assist you with any task you'd like to
stream item: accomplish. I can provide information
stream item: on a wide range of topics
stream item: , from math and science to history and literature. I can
stream item: also help you manage your schedule, set reminders, and
stream item: much more. Is there something specific you need help with? Let
stream item: me know!
stream item:
*/


info

You can see a LangSmith trace of this example here.