# Python interpreter tool
:::danger
This tool executes arbitrary code and can potentially perform destructive actions. Make sure you trust any code passed to it!
:::
LangChain offers an experimental tool for executing arbitrary Python code. This can be useful in combination with an LLM that can generate code to perform more powerful computations.
## Usage
```bash
# npm
npm install @langchain/openai

# yarn
yarn add @langchain/openai

# pnpm
pnpm add @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";
import { PythonInterpreterTool } from "langchain/experimental/tools/pyinterpreter";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const prompt = ChatPromptTemplate.fromTemplate(
  `Generate python code that does {input}. Do not generate anything else.`
);

const model = new OpenAI({});

const interpreter = await PythonInterpreterTool.initialize({
  indexURL: "../node_modules/pyodide",
});

const chain = prompt
  .pipe(model)
  .pipe(new StringOutputParser())
  .pipe(interpreter);

const result = await chain.invoke({
  input: `prints "Hello LangChain"`,
});

console.log(JSON.parse(result).stdout);
```
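The interpreter returns its result as a JSON string, which is why the example above calls `JSON.parse` before reading `stdout`. A minimal sketch of unpacking that result, assuming a payload shaped like the example's (a `stdout` field, plus a hypothetical `stderr` field for captured errors):

```typescript
// Simulated raw return value from the interpreter tool: a JSON string.
// The `stderr` field here is an assumption for illustration; the example
// in this guide only relies on `stdout`.
const rawResult = '{"stdout":"Hello LangChain\\n","stderr":""}';

// Parse once, then read the captured Python output from the object.
const parsed = JSON.parse(rawResult) as { stdout: string; stderr?: string };

console.log(parsed.stdout.trim()); // → Hello LangChain
```

Parsing the string once and keeping the typed object around avoids repeated `JSON.parse` calls if you need both the printed output and any error text downstream.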
## API Reference

- `OpenAI` from `@langchain/openai`
- `PythonInterpreterTool` from `langchain/experimental/tools/pyinterpreter`
- `ChatPromptTemplate` from `@langchain/core/prompts`
- `StringOutputParser` from `@langchain/core/output_parsers`